Science.gov

Sample records for population-based threshold model

  1. Development of a population-based threshold model of conidial germination for analysing the effects of physiological manipulation on the stress tolerance and infectivity of insect pathogenic fungi.

    PubMed

    Andersen, M; Magan, N; Mead, A; Chandler, D

    2006-09-01

Entomopathogenic fungi are being used as biocontrol agents of insect pests, but their efficacy can be poor in environments where water availability is reduced. In this study, the potential to improve biocontrol by physiologically manipulating fungal inoculum was investigated. Cultures of Beauveria bassiana, Lecanicillium muscarium, Lecanicillium longisporum, Metarhizium anisopliae and Paecilomyces fumosoroseus were manipulated by growing them under conditions of water stress, which produced conidia with increased concentrations of erythritol. The time-course of germination of conidia at different water activities (aw) was described using a generalized linear model, and in most cases reducing the water activity of the germination medium delayed the onset of germination without affecting the distribution of germination times. The germination of M. anisopliae, L. muscarium, L. longisporum and P. fumosoroseus was accelerated over a range of aw levels as a result of physiological manipulation. However, the relationship between the effect of physiological manipulation on germination and the osmolyte content of conidia varied according to fungal species. There was a linear relationship between germination rate, expressed as the reciprocal of germination time, and aw of the germination medium, but there was no significant effect of fungal species or physiological manipulation on the aw threshold for germination. In bioassays with M. anisopliae, physiologically manipulated conidia germinated more rapidly on the surface of an insect host, the melon cotton aphid Aphis gossypii, and fungal virulence was increased even when relative humidity was reduced after an initial high period. It is concluded that physiological manipulation may lead to improvements in biocontrol in the field, but choice of fungal species/isolate will be critical. In addition, the population-based threshold model used in this study, which considered germination in terms of physiological
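The reported linear relationship between germination rate (the reciprocal of germination time) and aw implies that an aw threshold can be estimated by extrapolating the fitted line to a rate of zero. A minimal sketch of that calculation, using made-up germination times rather than the study's data:

```python
# Hedged sketch: estimating a water-activity (aw) germination threshold by
# fitting the reported linear relationship rate = 1/t_g = b0 + b1 * aw and
# extrapolating to the aw at which the rate reaches zero.
# The data points below are illustrative, not from the study.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = b0 + b1 * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b1 = sxy / sxx
    return my - b1 * mx, b1

# Illustrative germination times (h) at four water activities.
aw = [0.90, 0.93, 0.96, 0.99]
t_g = [40.0, 20.0, 13.3, 10.0]        # time to 50% germination
rate = [1.0 / t for t in t_g]          # germination rate, 1/h

b0, b1 = fit_line(aw, rate)
aw_threshold = -b0 / b1                # rate extrapolates to zero here
print(round(aw_threshold, 3))          # ≈ 0.87 for these toy numbers
```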

  2. Threshold models in radiation carcinogenesis

    SciTech Connect

    Hoel, D.G.; Li, P.

    1998-09-01

Cancer incidence and mortality data from the atomic bomb survivors cohort have been analyzed to allow for the possibility of a threshold dose response. The same dose-response models as used in the original papers were fit to the data. The estimated cancer incidence from the fitted models over-predicted the observed cancer incidence in the lowest exposure group. This is consistent with a threshold or nonlinear dose response at low doses. Thresholds were added to the dose-response models, and the range of possible thresholds is shown for solid tumor cancers as well as for the different leukemia types. This analysis suggests that the A-bomb cancer incidence data agree more with a threshold or nonlinear dose-response model than with a purely linear model, although the linear model is statistically equivalent. This observation is not found with the mortality data. For both the incidence and the mortality data, the addition of a threshold term significantly improves the fit of the linear or linear-quadratic dose response for total leukemias and for the leukemia subtypes ALL, AML, and CML.
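One common way to add a threshold to a linear dose-response model, in the spirit the abstract describes, is to profile the threshold: fix a candidate threshold tau, fit the remaining linear parameters, and keep the tau with the smallest residual error. A hedged sketch on synthetic data (not the A-bomb cohort), with a known true threshold of 0.2:

```python
# Synthetic illustration of a threshold dose-response fit: excess risk is
# flat below an unknown threshold tau and linear above it,
#   risk(d) = b0 + b1 * max(0, d - tau),
# with tau chosen by grid search on the residual sum of squares.

def fit_line(xs, ys):
    """Ordinary least squares for y = b0 + b1 * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b1 = sxy / sxx
    return my - b1 * mx, b1

doses = [i / 10 for i in range(11)]                      # 0.0 .. 1.0 (e.g. Sv)
risks = [0.1 + 0.5 * max(0.0, d - 0.2) for d in doses]   # true tau = 0.2

def sse(tau):
    xs = [max(0.0, d - tau) for d in doses]
    b0, b1 = fit_line(xs, risks)
    return sum((b0 + b1 * x - y) ** 2 for x, y in zip(xs, risks))

grid = [i * 0.05 for i in range(11)]                     # candidate thresholds
best_tau = min(grid, key=sse)
print(best_tau)                                          # recovers tau = 0.2
```

With noisy real data the profile of the residual error over tau is what defines the "range of possible thresholds" rather than a single exact minimum.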

  3. Universal Screening for Emotional and Behavioral Problems: Fitting a Population-Based Model

    ERIC Educational Resources Information Center

    Schanding, G. Thomas, Jr.; Nowell, Kerri P.

    2013-01-01

    Schools have begun to adopt a population-based method to conceptualizing assessment and intervention of students; however, little empirical evidence has been gathered to support this shift in service delivery. The present study examined the fit of a population-based model in identifying students' behavioral and emotional functioning using a…

  4. Population based models of cortical drug response: insights from anaesthesia

    PubMed Central

    Bojak, Ingo; Liley, David T. J.

    2008-01-01

    A great explanatory gap lies between the molecular pharmacology of psychoactive agents and the neurophysiological changes they induce, as recorded by neuroimaging modalities. Causally relating the cellular actions of psychoactive compounds to their influence on population activity is experimentally challenging. Recent developments in the dynamical modelling of neural tissue have attempted to span this explanatory gap between microscopic targets and their macroscopic neurophysiological effects via a range of biologically plausible dynamical models of cortical tissue. Such theoretical models allow exploration of neural dynamics, in particular their modification by drug action. The ability to theoretically bridge scales is due to a biologically plausible averaging of cortical tissue properties. In the resulting macroscopic neural field, individual neurons need not be explicitly represented (as in neural networks). The following paper aims to provide a non-technical introduction to the mean field population modelling of drug action and its recent successes in modelling anaesthesia. PMID:19003456

  5. Estimating and modeling the cure fraction in population-based cancer survival analysis.

    PubMed

    Lambert, Paul C; Thompson, John R; Weston, Claire L; Dickman, Paul W

    2007-07-01

    In population-based cancer studies, cure is said to occur when the mortality (hazard) rate in the diseased group of individuals returns to the same level as that expected in the general population. The cure fraction (the proportion of patients cured of disease) is of interest to patients and is a useful measure to monitor trends in survival of curable disease. There are 2 main types of cure fraction model, the mixture cure fraction model and the non-mixture cure fraction model, with most previous work concentrating on the mixture cure fraction model. In this paper, we extend the parametric non-mixture cure fraction model to incorporate background mortality, thus providing estimates of the cure fraction in population-based cancer studies. We compare the estimates of relative survival and the cure fraction between the 2 types of model and also investigate the importance of modeling the ancillary parameters in the selected parametric distribution for both types of model.
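The two model families named in the abstract have simple closed forms. A hedged sketch using a Weibull distribution for the uncured part; the parameter values are illustrative, not estimates from the paper:

```python
import math

# Hedged sketch of the two cure-model forms discussed in the abstract,
# with a Weibull distribution for the uncured/latency component:
#   mixture:     R(t) = pi + (1 - pi) * S_u(t)
#   non-mixture: R(t) = pi ** F(t),   where F = 1 - S_u
# Here R(t) plays the role of relative survival and pi is the cure fraction.

pi = 0.4                       # cure fraction (assumed)
lam, gamma = 0.5, 1.2          # Weibull rate and shape (assumed)

def weibull_surv(t):
    return math.exp(-lam * t ** gamma)

def mixture_rel_surv(t):
    return pi + (1 - pi) * weibull_surv(t)

def non_mixture_rel_surv(t):
    return pi ** (1 - weibull_surv(t))

# Both curves start at 1 and plateau at the cure fraction for large t.
print(round(mixture_rel_surv(50), 3), round(non_mixture_rel_surv(50), 3))
```

The plateau at pi is what makes the cure fraction directly readable from either model once background mortality has been accounted for.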

  6. Probabilistic estimation of residential air exchange rates for population-based human exposure modeling

    EPA Science Inventory

    Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER meas...

  7. Validation of population-based disease simulation models: a review of concepts and methods

    PubMed Central

    2010-01-01

    Background Computer simulation models are used increasingly to support public health research and policy, but questions about their quality persist. The purpose of this article is to review the principles and methods for validation of population-based disease simulation models. Methods We developed a comprehensive framework for validating population-based chronic disease simulation models and used this framework in a review of published model validation guidelines. Based on the review, we formulated a set of recommendations for gathering evidence of model credibility. Results Evidence of model credibility derives from examining: 1) the process of model development, 2) the performance of a model, and 3) the quality of decisions based on the model. Many important issues in model validation are insufficiently addressed by current guidelines. These issues include a detailed evaluation of different data sources, graphical representation of models, computer programming, model calibration, between-model comparisons, sensitivity analysis, and predictive validity. The role of external data in model validation depends on the purpose of the model (e.g., decision analysis versus prediction). More research is needed on the methods of comparing the quality of decisions based on different models. Conclusion As the role of simulation modeling in population health is increasing and models are becoming more complex, there is a need for further improvements in model validation methodology and common standards for evaluating model credibility. PMID:21087466

  8. Models of population-based analyses for data collected from large extended families.

    PubMed

    Wang, Wenyu; Lee, Elisa T; Howard, Barbara V; Fabsitz, Richard R; Devereux, Richard B; MacCluer, Jean W; Laston, Sandra; Comuzzie, Anthony G; Shara, Nawar M; Welty, Thomas K

    2010-12-01

Large studies of extended families usually collect valuable phenotypic data that may have scientific value for purposes other than testing genetic hypotheses if the families were not selected in a biased manner. These purposes include assessing population-based associations of diseases with risk factors/covariates and estimating population characteristics such as disease prevalence and incidence. Relatedness among participants, however, violates the traditional assumption of independent observations in these classic analyses. The commonly used adjustment method for relatedness in population-based analyses is to use marginal models, in which clusters (families) are assumed to be independent (unrelated) with a simple and identical covariance (family) structure such as the independent, exchangeable and unstructured covariance structures. However, using these simple covariance structures may not be optimally appropriate for outcomes collected from large extended families, and may under- or over-estimate the variances of estimators and thus lead to uncertainty in inferences. Moreover, the assumption that families are unrelated with an identical family structure in a marginal model may not be satisfied for family studies with large extended families. The aim of this paper is to propose models that combine marginal-model approaches with a covariance structure for assessing population-based associations of diseases with their risk factors/covariates and estimating population characteristics for epidemiological studies, while adjusting for the complicated relatedness among outcomes (continuous/categorical, normally/non-normally distributed) collected from large extended families. We also discuss theoretical issues of the proposed models and show that the proposed models and covariance structure are appropriate for and capable of achieving the aim.
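The simple working covariance structures named in the abstract are easy to write down explicitly. A hedged illustration (values are arbitrary, and this is generic GEE-style bookkeeping, not the paper's proposed structure):

```python
# Two of the simple working covariance structures used in marginal (GEE-type)
# analyses of clustered family data. Values are illustrative.

def exchangeable_cov(n, sigma2, rho):
    """n x n exchangeable covariance: sigma2 on the diagonal,
    sigma2 * rho off-diagonal (all family members equally correlated)."""
    return [[sigma2 if i == j else sigma2 * rho for j in range(n)]
            for i in range(n)]

def independence_cov(n, sigma2):
    """Working independence: diagonal only."""
    return exchangeable_cov(n, sigma2, 0.0)

V = exchangeable_cov(4, 2.0, 0.3)   # a family of 4, variance 2, correlation 0.3
print(V[0])                          # [2.0, 0.6, 0.6, 0.6]
```

The abstract's point is that forcing every family into one such identical structure can misstate estimator variances for large extended pedigrees, which motivates covariance structures built from actual relatedness instead.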

  9. Differential equation models for sharp threshold dynamics.

    PubMed

    Schramm, Harrison C; Dimitrov, Nedialko B

    2014-01-01

    We develop an extension to differential equation models of dynamical systems to allow us to analyze probabilistic threshold dynamics that fundamentally and globally change system behavior. We apply our novel modeling approach to two cases of interest: a model of infectious disease modified for malware where a detection event drastically changes dynamics by introducing a new class in competition with the original infection; and the Lanchester model of armed conflict, where the loss of a key capability drastically changes the effectiveness of one of the sides. We derive and demonstrate a step-by-step, repeatable method for applying our novel modeling approach to an arbitrary system, and we compare the resulting differential equations to simulations of the system's random progression. Our work leads to a simple and easily implemented method for analyzing probabilistic threshold dynamics using differential equations.
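The malware example can be caricatured in a few lines: an SIR-type system in which crossing a threshold level of infection triggers detection, introducing a new removal class that competes with the infection. The equations and parameters below are an assumed toy version for illustration, not the authors' exact system:

```python
# Hedged toy version of the paper's first example: SIR-type malware dynamics
# where a detection event at a threshold infected fraction introduces a new
# countermeasure class C. Forward-Euler integration; all rates are assumed.

beta, gamma = 0.6, 0.1     # infection and recovery rates (assumed)
delta = 0.8                # removal rate by the countermeasure (assumed)
threshold = 0.05           # infected fraction that triggers detection

S, I, R, C = 0.999, 0.001, 0.0, 0.0
detected = False
dt = 0.01
for step in range(20000):
    if I >= threshold:
        detected = True    # the threshold event globally changes the dynamics
    dS = -beta * S * I
    dI = beta * S * I - gamma * I - (delta * I if detected else 0.0)
    dR = gamma * I
    dC = delta * I if detected else 0.0
    S += dS * dt; I += dI * dt; R += dR * dt; C += dC * dt

print(detected, round(S + I + R + C, 6))   # mass is conserved across classes
```

Once detection fires, infections drain into C at rate delta, which is the kind of global, qualitative change in system behavior the threshold event is meant to capture.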

  10. Scalable Entity-Based Modeling of Population-Based Systems, Final LDRD Report

    SciTech Connect

    Cleary, A J; Smith, S G; Vassilevska, T K; Jefferson, D R

    2005-01-27

    The goal of this project has been to develop tools, capabilities and expertise in the modeling of complex population-based systems via scalable entity-based modeling (EBM). Our initial focal application domain has been the dynamics of large populations exposed to disease-causing agents, a topic of interest to the Department of Homeland Security in the context of bioterrorism. In the academic community, discrete simulation technology based on individual entities has shown initial success, but the technology has not been scaled to the problem sizes or computational resources of LLNL. Our developmental emphasis has been on the extension of this technology to parallel computers and maturation of the technology from an academic to a lab setting.

  11. Comprehensive, Population-Based Sensitivity Analysis of a Two-Mass Vocal Fold Model

    PubMed Central

    Robertson, Daniel; Zañartu, Matías; Cook, Douglas

    2016-01-01

    Previous vocal fold modeling studies have generally focused on generating detailed data regarding a narrow subset of possible model configurations. These studies can be interpreted to be the investigation of a single subject under one or more vocal conditions. In this study, a broad population-based sensitivity analysis is employed to examine the behavior of a virtual population of subjects and to identify trends between virtual individuals as opposed to investigating a single subject or model instance. Four different sensitivity analysis techniques were used in accomplishing this task. Influential relationships between model input parameters and model outputs were identified, and an exploration of the model’s parameter space was conducted. Results indicate that the behavior of the selected two-mass model is largely dominated by complex interactions, and that few input-output pairs have a consistent effect on the model. Results from the analysis can be used to increase the efficiency of optimization routines of reduced-order models used to investigate voice abnormalities. Results also demonstrate the types of challenges and difficulties to be expected when applying sensitivity analyses to more complex vocal fold models. Such challenges are discussed and recommendations are made for future studies. PMID:26845452
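One of the simpler techniques in this family is a one-at-a-time (elementary-effects) screen averaged over a virtual population of random parameter draws. A hedged, generic sketch; the model function is a toy stand-in, not the two-mass vocal fold model from the study:

```python
import random

# Morris-style one-at-a-time sensitivity screen over a "virtual population"
# of random parameter draws. The toy model and bounds below are assumptions
# made purely for illustration.

def toy_model(k1, k2, m):
    # toy resonance-like output of two stiffnesses and a mass
    return (k1 * k2) ** 0.5 / m

def elementary_effects(f, bounds, n_draws=200, step=0.05, seed=1):
    """Mean absolute finite-difference effect of each parameter,
    averaged over random draws from the parameter space."""
    rng = random.Random(seed)
    names = list(bounds)
    effects = {p: [] for p in names}
    for _ in range(n_draws):
        x = {p: rng.uniform(*bounds[p]) for p in names}
        base = f(**x)
        for p in names:
            lo, hi = bounds[p]
            xp = dict(x)
            xp[p] = min(x[p] + step * (hi - lo), hi)   # perturb one parameter
            dx = xp[p] - x[p]
            effects[p].append(abs(f(**xp) - base) / (dx + 1e-12))
    return {p: sum(v) / len(v) for p, v in effects.items()}

bounds = {"k1": (0.5, 2.0), "k2": (0.5, 2.0), "m": (0.1, 0.5)}
eff = elementary_effects(toy_model, bounds)
print({p: round(e, 2) for p, e in eff.items()})
```

Population-level summaries like these rank parameters by average influence; the study's finding that few input-output pairs behave consistently corresponds to these averages hiding strong interactions between draws.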

  12. An Activation Threshold Model for Response Inhibition

    PubMed Central

    MacDonald, Hayley J.; McMorland, Angus J. C.; Stinear, Cathy M.; Coxon, James P.; Byblow, Winston D.

    2017-01-01

    Reactive response inhibition (RI) is the cancellation of a prepared response when it is no longer appropriate. Selectivity of RI can be examined by cueing the cancellation of one component of a prepared multi-component response. This substantially delays execution of other components. There is debate regarding whether this response delay is due to a selective neural mechanism. Here we propose a computational activation threshold model (ATM) and test it against a classical “horse-race” model using behavioural and neurophysiological data from partial RI experiments. The models comprise both facilitatory and inhibitory processes that compete upstream of motor output regions. Summary statistics (means and standard deviations) of predicted muscular and neurophysiological data were fit in both models to equivalent experimental measures by minimizing a Pearson Chi-square statistic. The ATM best captured behavioural and neurophysiological dynamics of partial RI. The ATM demonstrated that the observed modulation of corticomotor excitability during partial RI can be explained by nonselective inhibition of the prepared response. The inhibition raised the activation threshold to a level that could not be reached by the original response. This was necessarily followed by an additional phase of facilitation representing a secondary activation process in order to reach the new inhibition threshold and initiate the executed component of the response. The ATM offers a mechanistic description of the neural events underlying RI, in which partial movement cancellation results from a nonselective inhibitory event followed by subsequent initiation of a new response. The ATM provides a framework for considering and exploring the neuroanatomical constraints that underlie RI. PMID:28085907

  13. Toxicogenetics: population-based testing of drug and chemical safety in mouse models.

    PubMed

Rusyn, Ivan; Gatti, Daniel M; Wiltshire, Timothy; Kleeberger, Steven R; Threadgill, David W

    2010-08-01

    The rapid decline in the cost of dense genotyping is paving the way for new DNA sequence-based laboratory tests to move quickly into clinical practice, and to ultimately help realize the promise of 'personalized' therapies. These advances are based on the growing appreciation of genetics as an important dimension in science and the practice of investigative pharmacology and toxicology. On the clinical side, both the regulators and the pharmaceutical industry hope that the early identification of individuals prone to adverse drug effects will keep advantageous medicines on the market for the benefit of the vast majority of prospective patients. On the environmental health protection side, there is a clear need for better science to define the range and causes of susceptibility to adverse effects of chemicals in the population, so that the appropriate regulatory limits are established. In both cases, most of the research effort is focused on genome-wide association studies in humans where de novo genotyping of each subject is required. At the same time, the power of population-based preclinical safety testing in rodent models (e.g., mouse) remains to be fully exploited. Here, we highlight the approaches available to utilize the knowledge of DNA sequence and genetic diversity of the mouse as a species in mechanistic toxicology research. We posit that appropriate genetically defined mouse models may be combined with the limited data from human studies to not only discover the genetic determinants of susceptibility, but to also understand the molecular underpinnings of toxicity.

  14. Dynamic model of the threshold displacement energy

    NASA Astrophysics Data System (ADS)

    Kupchishin, A. I.; Kupchishin, A. A.

    2017-01-01

A dynamic (cascade-probability) model for calculating the threshold displacement energy Ed of knocked-out atoms was proposed, taking into account the influence of the instability zone (spontaneous recombination). A general expression for Ed was derived as a function of the formation energies of interstitial atoms (Ef) and vacancies (Ei), the energy transfer coefficient α, and the number of interactions i needed to move the atom out of the instability zone. The parameters of primary particles were calculated. Comparison of the calculations with experimental data shows satisfactory agreement.
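For context, the energy transfer coefficient α that appears in the abstract is, for an elastic two-body collision between a projectile of mass m and a lattice atom of mass M, the standard kinematic factor (the specific form of the authors' Ed expression is not reproduced here):

```latex
% maximal fraction of the projectile energy E transferable to a lattice atom
\alpha = \frac{4\,m\,M}{(m + M)^{2}}, \qquad T_{\max} = \alpha\,E .
```

Displacement requires the transferred energy to exceed Ed, which in the paper's model additionally depends on Ef, Ei, and the number of interactions i needed to leave the instability zone.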

  15. Solution of an infection model near threshold

    NASA Astrophysics Data System (ADS)

    Kessler, David A.; Shnerb, Nadav M.

    2007-07-01

We study the susceptible-infected-recovered model of epidemics in the vicinity of the threshold infectivity. We derive the distribution of total outbreak size in the limit of large population size N. This is accomplished by mapping the problem to the first passage time of a random walker subject to a drift that increases linearly with time. We recover the scaling results of Ben-Naim and Krapivsky that the effective maximal size of the outbreak scales as N^(2/3), with the average scaling as N^(1/3), with an explicit form for the scaling function.
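The setting is easy to probe numerically, though the paper's contribution is analytic. A hedged Monte Carlo toy (not the authors' first-passage analysis): a critical stochastic SIR outbreak (R0 = 1) in a population of size N, recording total outbreak sizes over many runs.

```python
import random

# Critical stochastic SIR: at each event, either an infection (rate beta*S*I/N
# with beta = r0 * gamma) or a recovery (rate gamma*I) occurs; we track only
# which event happens, so gamma drops out of the event probabilities.

def outbreak_size(N, r0, rng):
    s, i, total = N - 1, 1, 1
    while i > 0:
        infect = r0 * i * s / N   # relative rate of infection events
        recover = i               # relative rate of recovery events
        if rng.random() < infect / (infect + recover):
            s -= 1; i += 1; total += 1
        else:
            i -= 1
    return total

rng = random.Random(42)
N = 10000
sizes = sorted(outbreak_size(N, 1.0, rng) for _ in range(500))
print(sizes[len(sizes) // 2], sizes[-1])  # median stays small; max is far larger
```

Near threshold, typical outbreaks die out quickly while the rare large ones grow with N, which is the heavy-tailed behavior the N^(1/3) average versus N^(2/3) cutoff scaling describes.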

  16. Population based model of human embryonic stem cell (hESC) differentiation during endoderm induction.

    PubMed

    Task, Keith; Jaramillo, Maria; Banerjee, Ipsita

    2012-01-01

The mechanisms by which human embryonic stem cells (hESC) differentiate to endodermal lineage have not been extensively studied. Mathematical models can aid in the identification of mechanistic information. In this work we use a population-based modeling approach to understand the mechanism of endoderm induction in hESC, performed experimentally with exposure to Activin A and Activin A supplemented with growth factors (basic fibroblast growth factor (FGF2) and bone morphogenetic protein 4 (BMP4)). The differentiating cell population is analyzed daily for cellular growth, cell death, and expression of the endoderm proteins Sox17 and CXCR4. The stochastic model starts with a population of undifferentiated cells, wherefrom it evolves in time by assigning each cell a propensity to proliferate, die and differentiate using certain user-defined rules. Twelve alternate mechanisms which might describe the observed dynamics were simulated, and an ensemble parameter estimation was performed on each mechanism. A comparison of the quality of agreement of experimental data with simulations for several competing mechanisms led to the identification of one which adequately describes the observed dynamics under both induction conditions. The results indicate that hESC commitment to endoderm occurs through an intermediate mesendoderm germ layer which further differentiates into mesoderm and endoderm, and that during induction proliferation of the endoderm germ layer is promoted. Furthermore, our model suggests that CXCR4 is expressed in mesendoderm and endoderm, but is not expressed in mesoderm. Comparison between the two induction conditions indicates that supplementing FGF2 and BMP4 to Activin A enhances the kinetics of differentiation compared with Activin A alone. This mechanistic information can aid in the derivation of functional, mature cells from their progenitors.
While applied to initial endoderm commitment of hESC, the model is general enough to be applicable either to a system of
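The per-cell propensity scheme the abstract describes can be sketched in a few lines. The transition graph hESC → mesendoderm → {mesoderm, endoderm} follows the mechanism the paper selects, but all rate values below are illustrative, not the fitted parameters:

```python
import random

# Population-based stochastic sketch: each cell carries a lineage state and,
# per time step, may die, proliferate, or differentiate according to
# user-defined propensities. Rates here are assumptions for illustration.

TRANSITIONS = {"hESC": ["mesendoderm"], "mesendoderm": ["mesoderm", "endoderm"]}
P = {  # per-step propensities: (die, proliferate, differentiate)
    "hESC":        (0.01, 0.02, 0.10),
    "mesendoderm": (0.01, 0.02, 0.15),
    "mesoderm":    (0.01, 0.02, 0.00),
    "endoderm":    (0.01, 0.05, 0.00),  # endoderm proliferation promoted
}

def step(cells, rng):
    out = []
    for state in cells:
        p_die, p_prolif, p_diff = P[state]
        u = rng.random()
        if u < p_die:
            continue                                     # cell death
        elif u < p_die + p_prolif:
            out += [state, state]                        # division
        elif u < p_die + p_prolif + p_diff:
            out.append(rng.choice(TRANSITIONS[state]))   # lineage transition
        else:
            out.append(state)                            # unchanged
    return out

rng = random.Random(0)
cells = ["hESC"] * 500
for day in range(90):
    cells = step(cells, rng)
fracs = {s: cells.count(s) / len(cells) for s in P}
print({s: round(f, 2) for s, f in fracs.items()})
```

Fitting such a model means tuning the propensities until the simulated state fractions match the measured Sox17/CXCR4 time courses under each induction condition.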

  17. A threshold model of investor psychology

    NASA Astrophysics Data System (ADS)

    Cross, Rod; Grinfeld, Michael; Lamba, Harbir; Seaman, Tim

    2005-08-01

    We introduce a class of agent-based market models founded upon simple descriptions of investor psychology. Agents are subject to various psychological tensions induced by market conditions and endowed with a minimal ‘personality’. This personality consists of a threshold level for each of the tensions being modeled, and the agent reacts whenever a tension threshold is reached. This paper considers an elementary model including just two such tensions. The first is ‘cowardice’, which is the stress caused by remaining in a minority position with respect to overall market sentiment and leads to herding-type behavior. The second is ‘inaction’, which is the increasing desire to act or re-evaluate one's investment position. There is no inductive learning by agents and they are only coupled via the global market price and overall market sentiment. Even incorporating just these two psychological tensions, important stylized facts of real market data, including fat-tails, excess kurtosis, uncorrelated price returns and clustered volatility over the timescale of a few days are reproduced. By then introducing an additional parameter that amplifies the effect of externally generated market noise during times of extreme market sentiment, long-time volatility correlations can also be recovered.
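A stripped-down version of the two-tension mechanism can be written directly from the description: each agent holds a ±1 position and switches when either its cowardice threshold (accumulated minority stress) or its inaction threshold (time since last decision) is reached. Thresholds and the price response below are illustrative choices, not the paper's calibration:

```python
import random

# Hedged miniature of the two-tension threshold model of investor psychology.
# All numerical choices (thresholds, price impact) are assumptions.

random.seed(3)
N = 100
pos = [random.choice([-1, 1]) for _ in range(N)]             # +/-1 positions
coward_thr = [random.uniform(2.0, 6.0) for _ in range(N)]    # tolerated minority stress
inact_thr = [random.randint(20, 60) for _ in range(N)]       # steps before re-evaluation
stress = [0.0] * N
idle = [0] * N

prices = [100.0]
for t in range(500):
    sentiment = sum(pos) / N                  # global coupling via sentiment
    for i in range(N):
        idle[i] += 1
        if pos[i] * sentiment < 0:            # stress only while in the minority
            stress[i] += abs(sentiment)
        if stress[i] >= coward_thr[i]:
            pos[i] = 1 if sentiment > 0 else -1   # herd with the majority
            stress[i], idle[i] = 0.0, 0
        elif idle[i] >= inact_thr[i]:
            pos[i] = -pos[i]                      # re-evaluate: switch position
            stress[i], idle[i] = 0.0, 0
    prices.append(prices[-1] * (1 + 0.001 * sum(pos) / N))   # sentiment moves price

print(round(prices[-1], 2))
```

Agents are coupled only through the global price and sentiment, as in the paper; the published model adds a noise-amplification parameter during extreme sentiment to recover long-time volatility correlations.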

  18. Theoretical model for FCGR near the threshold

    NASA Astrophysics Data System (ADS)

    Lanteigne, Jacques; Baïlon, Jean-Paul

    1981-03-01

A theoretical model for fatigue crack growth rate at low and near-threshold stress intensity factor is developed. The crack tip is assumed to be a semicircular notch of radius ρ and incremental crack growth occurs along a distance 4ρ ahead of the crack tip. After analysis of the stress and strain distribution ahead of the crack tip, a relationship between the strain range and the stress intensity range is proposed. It is then assumed that the Manson-Coffin cumulative damage rule can be applied to a region of length 4ρ from the crack tip, where strain reversal occurs. Finally, a theoretical equation giving the fatigue crack growth rate is obtained and applied to several materials (316L stainless steel, 300M alloy steel, 70-30 α brass, 2618A and 7025 aluminum alloys). It is found that the model can be used to correlate fatigue crack growth rates with the mechanical properties of the materials, and to determine the threshold stress intensity factor, once the crack tip radius ρ is obtained from the previous data.

  19. Quantitative high-throughput screening for chemical toxicity in a population-based in vitro model.

    PubMed

    Lock, Eric F; Abdo, Nour; Huang, Ruili; Xia, Menghang; Kosyk, Oksana; O'Shea, Shannon H; Zhou, Yi-Hui; Sedykh, Alexander; Tropsha, Alexander; Austin, Christopher P; Tice, Raymond R; Wright, Fred A; Rusyn, Ivan

    2012-04-01

    A shift in toxicity testing from in vivo to in vitro may efficiently prioritize compounds, reveal new mechanisms, and enable predictive modeling. Quantitative high-throughput screening (qHTS) is a major source of data for computational toxicology, and our goal in this study was to aid in the development of predictive in vitro models of chemical-induced toxicity, anchored on interindividual genetic variability. Eighty-one human lymphoblast cell lines from 27 Centre d'Etude du Polymorphisme Humain trios were exposed to 240 chemical substances (12 concentrations, 0.26nM-46.0μM) and evaluated for cytotoxicity and apoptosis. qHTS screening in the genetically defined population produced robust and reproducible results, which allowed for cross-compound, cross-assay, and cross-individual comparisons. Some compounds were cytotoxic to all cell types at similar concentrations, whereas others exhibited interindividual differences in cytotoxicity. Specifically, the qHTS in a population-based human in vitro model system has several unique aspects that are of utility for toxicity testing, chemical prioritization, and high-throughput risk assessment. First, standardized and high-quality concentration-response profiling, with reproducibility confirmed by comparison with previous experiments, enables prioritization of chemicals for variability in interindividual range in cytotoxicity. Second, genome-wide association analysis of cytotoxicity phenotypes allows exploration of the potential genetic determinants of interindividual variability in toxicity. Furthermore, highly significant associations identified through the analysis of population-level correlations between basal gene expression variability and chemical-induced toxicity suggest plausible mode of action hypotheses for follow-up analyses. We conclude that as the improved resolution of genetic profiling can now be matched with high-quality in vitro screening data, the evaluation of the toxicity pathways and the effects of

  20. Toxicogenetics: population-based testing of drug and chemical safety in mouse models

    PubMed Central

    Rusyn, Ivan; Gatti, Daniel M; Wiltshire, Timothy; Kleeberger, Steven R; Threadgill, David W

    2011-01-01

    The rapid decline in the cost of dense genotyping is paving the way for new DNA sequence-based laboratory tests to move quickly into clinical practice, and to ultimately help realize the promise of ‘personalized’ therapies. These advances are based on the growing appreciation of genetics as an important dimension in science and the practice of investigative pharmacology and toxicology. On the clinical side, both the regulators and the pharmaceutical industry hope that the early identification of individuals prone to adverse drug effects will keep advantageous medicines on the market for the benefit of the vast majority of prospective patients. On the environmental health protection side, there is a clear need for better science to define the range and causes of susceptibility to adverse effects of chemicals in the population, so that the appropriate regulatory limits are established. In both cases, most of the research effort is focused on genome-wide association studies in humans where de novo genotyping of each subject is required. At the same time, the power of population-based preclinical safety testing in rodent models (e.g., mouse) remains to be fully exploited. Here, we highlight the approaches available to utilize the knowledge of DNA sequence and genetic diversity of the mouse as a species in mechanistic toxicology research. We posit that appropriate genetically defined mouse models may be combined with the limited data from human studies to not only discover the genetic determinants of susceptibility, but to also understand the molecular underpinnings of toxicity. PMID:20704464

  1. Analytical threshold voltage model for strained silicon GAA-TFET

    NASA Astrophysics Data System (ADS)

    Kang, Hai-Yan; Hu, Hui-Yong; Wang, Bin

    2016-11-01

Tunnel field effect transistors (TFETs) are promising devices for low power applications. An analytical threshold voltage model for strained silicon gate-all-around (GAA) TFETs is proposed, based on the channel surface potential and electric field obtained by solving the 2D Poisson’s equation. The variation of the threshold voltage with device parameters, such as the strain (Ge mole fraction x), gate oxide thickness, gate oxide permittivity, and channel length, has also been investigated. The threshold voltage is extracted using the peak transconductance method, and the model is verified by good agreement with results obtained from TCAD simulation. Project supported by the National Natural Science Foundation of China (Grant No. 61474085).

  2. Simulation of Population-Based Commuter Exposure to NO2 Using Different Air Pollution Models

    PubMed Central

    Ragettli, Martina S.; Tsai, Ming-Yi; Braun-Fahrländer, Charlotte; de Nazelle, Audrey; Schindler, Christian; Ineichen, Alex; Ducret-Stich, Regina E.; Perez, Laura; Probst-Hensch, Nicole; Künzli, Nino; Phuleria, Harish C.

    2014-01-01

We simulated commuter routes and long-term exposure to traffic-related air pollution during commute in a representative population sample in Basel (Switzerland), and evaluated three air pollution models with different spatial resolution for estimating commute exposures to nitrogen dioxide (NO2) as a marker of long-term exposure to traffic-related air pollution. Our approach includes spatially and temporally resolved data on actual commuter routes, travel modes and three air pollution models. Annual mean NO2 commuter exposures were similar between models. However, we found more within-city and within-subject variability in annual mean (±SD) NO2 commuter exposure with a high resolution dispersion model (40 ± 7 µg m^(-3), range: 21–61) than with a dispersion model with a lower resolution (39 ± 5 µg m^(-3); range: 24–51), and a land use regression model (41 ± 5 µg m^(-3); range: 24–54). Highest median cumulative exposures were calculated along motorized transport and bicycle routes, and the lowest for walking. For estimating commuter exposure within a city and being interested also in small-scale variability between roads, a model with a high resolution is recommended. For larger scale epidemiological health assessment studies, models with a coarser spatial resolution are likely sufficient, especially when study areas include suburban and rural areas. PMID:24823664

  3. Simulation of population-based commuter exposure to NO₂ using different air pollution models.

    PubMed

    Ragettli, Martina S; Tsai, Ming-Yi; Braun-Fahrländer, Charlotte; de Nazelle, Audrey; Schindler, Christian; Ineichen, Alex; Ducret-Stich, Regina E; Perez, Laura; Probst-Hensch, Nicole; Künzli, Nino; Phuleria, Harish C

    2014-05-12

    We simulated commuter routes and long-term exposure to traffic-related air pollution during commute in a representative population sample in Basel (Switzerland), and evaluated three air pollution models with different spatial resolution for estimating commute exposures to nitrogen dioxide (NO2) as a marker of long-term exposure to traffic-related air pollution. Our approach includes spatially and temporally resolved data on actual commuter routes, travel modes and three air pollution models. Annual mean NO2 commuter exposures were similar between models. However, we found more within-city and within-subject variability in annual mean (±SD) NO2 commuter exposure with a high resolution dispersion model (40 ± 7 µg m(-3), range: 21-61) than with a dispersion model with a lower resolution (39 ± 5 µg m(-3); range: 24-51), and a land use regression model (41 ± 5 µg m(-3); range: 24-54). Highest median cumulative exposures were calculated along motorized transport and bicycle routes, and the lowest for walking. For estimating commuter exposure within a city and being interested also in small-scale variability between roads, a model with a high resolution is recommended. For larger scale epidemiological health assessment studies, models with a coarser spatial resolution are likely sufficient, especially when study areas include suburban and rural areas.
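The exposure bookkeeping such a study implies is a time-weighted average of concentrations along each route segment. A hedged sketch with made-up segment values (not the Basel models):

```python
# Time-weighted commute exposure along a route: each segment contributes its
# concentration weighted by the minutes spent on it. Values are illustrative.

route = [  # (NO2 concentration ug/m3 on segment, minutes on segment)
    (48.0, 5.0),   # main road by bicycle
    (35.0, 10.0),  # side streets
    (55.0, 3.0),   # intersection queue
]

total_time = sum(minutes for _, minutes in route)
mean_exposure = sum(c * minutes for c, minutes in route) / total_time
cumulative = sum(c * minutes for c, minutes in route)   # ug/m3 * min
print(round(mean_exposure, 1), cumulative)              # 41.9 755.0
```

The model comparison in the paper amounts to swapping in concentration fields of different spatial resolution for the per-segment values and seeing how much the resulting means and ranges shift.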

  4. Uncertainties in the Modelled CO2 Threshold for Antarctic Glaciation

    NASA Technical Reports Server (NTRS)

    Gasson, E.; Lunt, D. J.; DeConto, R.; Goldner, A.; Heinemann, M.; Huber, M.; LeGrande, A. N.; Pollard, D.; Sagoo, N.; Siddall, M.; Winguth, A.; Valdes, P. J.

    2014-01-01

    The frequently cited atmospheric CO2 threshold for the onset of Antarctic glaciation of approximately 780 parts per million by volume is based on the study of DeConto and Pollard (2003) using an ice sheet model and the GENESIS climate model. Proxy records suggest that atmospheric CO2 concentrations passed through this threshold across the Eocene-Oligocene transition approximately 34 million years ago. However, atmospheric CO2 concentrations may have been close to this threshold earlier than this transition, which is used by some to suggest the possibility of Antarctic ice sheets during the Eocene. Here we investigate the climate model dependency of the threshold for Antarctic glaciation by performing offline ice sheet model simulations using the climate from 7 different climate models with Eocene boundary conditions (HadCM3L, CCSM3, CESM1.0, GENESIS, FAMOUS, ECHAM5 and GISS_ER). These climate simulations are sourced from a number of independent studies, and as such the boundary conditions, which are poorly constrained during the Eocene, are not identical between simulations. The results of this study suggest that the atmospheric CO2 threshold for Antarctic glaciation is highly dependent on the climate model used and the climate model configuration. A large discrepancy between the climate model and ice sheet model grids for some simulations leads to a strong sensitivity to the lapse rate parameter.

  5. Evaluation of a spatially resolved forest fire smoke model for population-based epidemiologic exposure assessment.

    PubMed

    Yao, Jiayun; Eyamie, Jeff; Henderson, Sarah B

    2016-01-01

    Exposure to forest fire smoke (FFS) is associated with multiple adverse health effects, mostly respiratory. Findings for cardiovascular effects have been inconsistent, possibly related to the limitations of conventional methods to assess FFS exposure. In previous work, we developed an empirical model to estimate smoke-related fine particulate matter (PM2.5) for all populated areas in British Columbia (BC), Canada. Here, we evaluate the utility of our model by comparing epidemiologic associations between modeled and measured PM2.5. For each local health area (LHA), we used Poisson regression to estimate the effects of PM2.5 estimates and measurements on counts of medication dispensations and outpatient physician visits. We then used meta-regression to estimate the overall effects. A 10 μg/m(3) increase in modeled PM2.5 was associated with increased salbutamol dispensations (RR=1.04, 95% CI 1.03-1.06), and physician visits for asthma (1.06, 1.04-1.08), COPD (1.02, 1.00-1.03), lower respiratory infections (1.03, 1.00-1.05), and otitis media (1.05, 1.03-1.07), all comparable to measured PM2.5. Effects on cardiovascular outcomes were only significant using model estimates in all LHAs during extreme fire days. This suggests that the exposure model is a promising tool for increasing the power of epidemiologic studies to detect the health effects of FFS via improved spatial coverage and resolution.
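
    On the effect scale used above, a Poisson log-rate coefficient converts to a rate ratio per 10 μg/m³ increment as RR = exp(10β). A minimal sketch; the coefficient value below is a hypothetical one chosen to land near the reported RR of 1.04, not a figure from the study:

    ```python
    import math

    def rate_ratio(beta, increment=10.0):
        """Rate ratio for an exposure increment from a Poisson log-rate coefficient."""
        return math.exp(beta * increment)

    # Hypothetical coefficient: +0.0039 log-rate per 1 ug/m3 of PM2.5, i.e. roughly
    # the RR = 1.04 per 10 ug/m3 scale reported for salbutamol dispensations.
    rr = rate_ratio(0.0039)
    ```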

  6. Octave-Band Thresholds for Modeled Reverberant Fields

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Wenzel, Elizabeth M.; Tran, Laura L.; Anderson, Mark R.; Trejo, Leonard J. (Technical Monitor)

    1998-01-01

    Auditory thresholds for 10 subjects were obtained for reverberation of speech stimuli. The reverberation was produced and manipulated by 3-D audio modeling based on an actual room. The independent variables were octave-band filtering (bypassed, 0.25–2.0 kHz Fc) and reverberation time (0.2–1.1 sec). An ANOVA revealed significant effects (threshold range: -19 to -35 dB re 60 dB SRL).

  7. Identifying genetic loci associated with antidepressant drug response with drug-gene interaction models in a population-based study.

    PubMed

    Noordam, Raymond; Direk, Nese; Sitlani, Colleen M; Aarts, Nikkie; Tiemeier, Henning; Hofman, Albert; Uitterlinden, André G; Psaty, Bruce M; Stricker, Bruno H; Visser, Loes E

    2015-03-01

    It has been difficult to identify genes affecting drug response to Selective Serotonin Reuptake Inhibitors (SSRIs). We used multiple cross-sectional assessments of depressive symptoms in a population-based study to identify potential genetic interactions with SSRIs as a model to study genetic variants associated with SSRI response. This study, embedded in the prospective Rotterdam Study, included all successfully genotyped participants with data on depressive symptoms (CES-D scores). We used repeated measurement models to test multiplicative interaction between genetic variants and use of SSRIs on repeated CES-D scores. Besides a genome-wide analysis, we also performed an analysis restricted to genes related to the serotonergic signaling pathway. In 273 of the 14,937 assessments of depressive symptoms in 6443 participants, use of an SSRI was recorded. After correction for multiple testing, no plausible loci were identified in the genome-wide analysis. However, among the top 10 independent loci with the lowest p-values, findings within two genes (FSHR and HMGB4) might be of interest. Among 26 genes related to the serotonergic signaling pathway, the rs6108160 polymorphism in the PLCB1 gene reached statistical significance after Bonferroni correction (p-value = 8.1e-5). Also, the widely replicated 102C > T polymorphism in the HTR2A gene showed a statistically significant drug-gene interaction with SSRI use. Therefore, the present study suggests that drug-gene interaction models on (repeated) cross-sectional assessments of depressive symptoms in a population-based study can identify potential loci that may influence SSRI response.

  8. Mathematical model for adaptive evolution of populations based on a complex domain

    PubMed Central

    Ibrahim, Rabha W.; Ahmad, M.Z.; Al-Janaby, Hiba F.

    2015-01-01

    Mutation is ultimately essential for adaptive evolution in all populations. It arises continually, but most mutations are repaired by enzymes. Further, evolution is widely considered to proceed by natural selection acting on variation among organisms arising from random changes in their DNA, and the evidence for this is overwhelming. A mutation is an alteration of the structure of a gene, producing a variant form that may be transmitted to succeeding generations, caused by the modification of single base units in DNA, or by the deletion, insertion, or rearrangement of larger units of chromosomes or genes. In this paper, a mathematical model of this process is introduced. The model describes the time and space of the evolution; the spatial component is based on a complex domain. We show that the evolution is distributed according to the hypergeometric function. The boundedness of the evolution is imposed by utilizing the Koebe function. PMID:26858564

  9. Analysis of amyotrophic lateral sclerosis as a multistep process: a population-based modelling study

    PubMed Central

    Al-Chalabi, Ammar; Calvo, Andrea; Chio, Adriano; Colville, Shuna; Ellis, Cathy M; Hardiman, Orla; Heverin, Mark; Howard, Robin S; Huisman, Mark H B; Keren, Noa; Leigh, P Nigel; Mazzini, Letizia; Mora, Gabriele; Orrell, Richard W; Rooney, James; Scott, Kirsten M; Scotton, William J; Seelen, Meinie; Shaw, Christopher E; Sidle, Katie S; Swingler, Robert; Tsuda, Miho; Veldink, Jan H; Visser, Anne E; van den Berg, Leonard H; Pearce, Neil

    2014-01-01

    Summary Background Amyotrophic lateral sclerosis shares characteristics with some cancers, such as onset being more common in later life, progression usually being rapid, the disease affecting a particular cell type, and showing complex inheritance. We used a model originally applied to cancer epidemiology to investigate the hypothesis that amyotrophic lateral sclerosis is a multistep process. Methods We generated incidence data by age and sex from amyotrophic lateral sclerosis population registers in Ireland (registration dates 1995–2012), the Netherlands (2006–12), Italy (1995–2004), Scotland (1989–98), and England (2002–09), and calculated age and sex-adjusted incidences for each register. We regressed the log of age-specific incidence against the log of age with least squares regression. We did the analyses within each register, and also did a combined analysis, adjusting for register. Findings We identified 6274 cases of amyotrophic lateral sclerosis from a catchment population of about 34 million people. We noted a linear relationship between log incidence and log age in all five registers: England r2=0·95, Ireland r2=0·99, Italy r2=0·95, the Netherlands r2=0·99, and Scotland r2=0·97; overall r2=0·99. All five registers gave similar estimates of the linear slope ranging from 4·5 to 5·1, with overlapping confidence intervals. The combination of all five registers gave an overall slope of 4·8 (95% CI 4·5–5·0), with similar estimates for men (4·6, 4·3–4·9) and women (5·0, 4·5–5·5). Interpretation A linear relationship between the log incidence and log age of onset of amyotrophic lateral sclerosis is consistent with a multistage model of disease. The slope estimate suggests that amyotrophic lateral sclerosis is a six-step process. Identification of these steps could lead to preventive and therapeutic avenues. Funding UK Medical Research Council; UK Economic and Social Research Council; Ireland Health Research Board; The
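
    Under a multistage (Armitage-Doll-type) model, incidence rises as age^(k−1) for a k-step process, so the slope of log incidence on log age estimates k − 1; a slope near 4.8 therefore points to about six steps. A sketch of the regression with synthetic data (illustrative values, not registry data):

    ```python
    import math

    def loglog_slope(ages, rates):
        """Least-squares slope of log(incidence rate) against log(age)."""
        xs = [math.log(a) for a in ages]
        ys = [math.log(r) for r in rates]
        mx = sum(xs) / len(xs)
        my = sum(ys) / len(ys)
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = sum((x - mx) ** 2 for x in xs)
        return num / den

    # Synthetic incidence generated from a six-step process: rate ~ age**5.
    ages = [45, 50, 55, 60, 65, 70]
    rates = [1e-9 * a ** 5 for a in ages]
    slope = loglog_slope(ages, rates)
    steps = round(slope) + 1   # multistage model: slope = number of steps - 1
    ```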

  10. Modeling Associative Recognition: A Comparison of Two-High-Threshold, Two-High-Threshold Signal Detection, and Mixture Distribution Models

    ERIC Educational Resources Information Center

    Macho, Siegfried

    2004-01-01

    A 2-high-threshold signal detection (HTSDT) model, a mixture distribution (SON) model, and 2-high-threshold (HT) models with responses distributed over 1 or several response categories were fit to results of 6 experiments from 2 studies on associative recognition: R. Kelley and J. T. Wixted (2001) and A. P. Yonelinas (1997). HTSDT assumes that…

  11. Setting conservation management thresholds using a novel participatory modeling approach.

    PubMed

    Addison, P F E; de Bie, K; Rumpff, L

    2015-10-01

    We devised a participatory modeling approach for setting management thresholds that show when management intervention is required to address undesirable ecosystem changes. This approach was designed to be used when management thresholds: must be set for environmental indicators in the face of multiple competing objectives; need to incorporate scientific understanding and value judgments; and will be set by participants with limited modeling experience. We applied our approach to a case study where management thresholds were set for a mat-forming brown alga, Hormosira banksii, in a protected area management context. Participants, including management staff and scientists, were involved in a workshop to test the approach, and set management thresholds to address the threat of trampling by visitors to an intertidal rocky reef. The approach involved trading off the environmental objective, to maintain the condition of intertidal reef communities, with social and economic objectives to ensure management intervention was cost-effective. Ecological scenarios, developed using scenario planning, were a key feature that provided the foundation for where to set management thresholds. The scenarios developed represented declines in percent cover of H. banksii that may occur under increased threatening processes. Participants defined 4 discrete management alternatives to address the threat of trampling and estimated the effect of these alternatives on the objectives under each ecological scenario. A weighted additive model was used to aggregate participants' consequence estimates. Model outputs (decision scores) clearly expressed uncertainty, which can be considered by decision makers and used to inform where to set management thresholds. This approach encourages a proactive form of conservation, where management thresholds and associated actions are defined a priori for ecological indicators, rather than reacting to unexpected ecosystem changes in the future.
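
    The weighted additive aggregation described above can be sketched as follows; the objectives, weights, and consequence scores below are hypothetical stand-ins, not the values elicited in the workshop:

    ```python
    def decision_score(consequences, weights):
        """Weighted additive value: sum of weight * normalized consequence score
        (0 = worst, 1 = best on each objective)."""
        return sum(weights[obj] * consequences[obj] for obj in weights)

    # Hypothetical weights and consequence estimates for two management
    # alternatives under one ecological scenario.
    weights = {"ecological": 0.5, "social": 0.2, "economic": 0.3}
    alternatives = {
        "do nothing":   {"ecological": 0.2, "social": 0.9, "economic": 1.0},
        "close access": {"ecological": 0.9, "social": 0.3, "economic": 0.4},
    }
    scores = {name: decision_score(c, weights) for name, c in alternatives.items()}
    ```

    Decision makers can then compare scores across alternatives within each scenario, which is where the management thresholds are read off.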

  12. Performance and Cost-Effectiveness of Computed Tomography Lung Cancer Screening Scenarios in a Population-Based Setting: A Microsimulation Modeling Analysis in Ontario, Canada

    PubMed Central

    ten Haaf, Kevin; Tammemägi, Martin C.; Bondy, Susan J.; van der Aalst, Carlijn M.; Gu, Sumei; de Koning, Harry J.

    2017-01-01

    Background The National Lung Screening Trial (NLST) results indicate that computed tomography (CT) lung cancer screening for current and former smokers with three annual screens can be cost-effective in a trial setting. However, the cost-effectiveness in a population-based setting with >3 screening rounds is uncertain. Therefore, the objective of this study was to estimate the cost-effectiveness of lung cancer screening in a population-based setting in Ontario, Canada, and evaluate the effects of screening eligibility criteria. Methods and Findings This study used microsimulation modeling informed by various data sources, including the Ontario Health Insurance Plan (OHIP), Ontario Cancer Registry, smoking behavior surveys, and the NLST. Persons, born between 1940 and 1969, were examined from a third-party health care payer perspective across a lifetime horizon. Starting in 2015, 576 CT screening scenarios were examined, varying by age to start and end screening, smoking eligibility criteria, and screening interval. Among the examined outcome measures were lung cancer deaths averted, life-years gained, percentage ever screened, costs (in 2015 Canadian dollars), and overdiagnosis. The results of the base-case analysis indicated that annual screening was more cost-effective than biennial screening. Scenarios with eligibility criteria that required as few as 20 pack-years were dominated by scenarios that required higher numbers of accumulated pack-years. In general, scenarios that applied stringent smoking eligibility criteria (i.e., requiring higher levels of accumulated smoking exposure) were more cost-effective than scenarios with less stringent smoking eligibility criteria, with modest differences in life-years gained. Annual screening between ages 55–75 for persons who smoked ≥40 pack-years and who currently smoke or quit ≤10 y ago yielded an incremental cost-effectiveness ratio of $41,136 Canadian dollars ($33,825 in May 1, 2015, United States dollars) per
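
    The incremental cost-effectiveness ratio behind figures like the $41,136 per life-year quoted above is the extra cost of a scenario divided by its extra health effect relative to a comparator. A minimal sketch with hypothetical per-person values (chosen only to land near the reported scale):

    ```python
    def icer(cost_new, effect_new, cost_base, effect_base):
        """Incremental cost-effectiveness ratio: extra cost per unit of extra
        effect (here, dollars per life-year gained)."""
        return (cost_new - cost_base) / (effect_new - effect_base)

    # Hypothetical per-person lifetime costs (dollars) and effects (life-years)
    # for one screening scenario versus no screening.
    ratio = icer(cost_new=1850.0, effect_new=0.045, cost_base=200.0, effect_base=0.005)
    ```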

  13. On the probability summation model for laser-damage thresholds

    NASA Astrophysics Data System (ADS)

    Clark, Clifton D.; Buffington, Gavin D.

    2016-01-01

    This paper explores the probability summation model in an attempt to provide insight to the model's utility and ultimately its validity. The model is a statistical description of multiple-pulse (MP) damage trends. It computes the probability of n pulses causing damage from knowledge of the single-pulse dose-response curve. Recently, the model has been used to make a connection between the observed n trends in MP damage thresholds for short pulses (<10 μs) and experimental uncertainties, suggesting that the observed trend is an artifact of experimental methods. We will consider the correct application of the model in this case. We also apply this model to the spot-size dependence of short pulse damage thresholds, which has not been done previously. Our results predict that the damage threshold trends with respect to the irradiated area should be similar to the MP damage threshold trends, and that observed spot-size dependence for short pulses seems to display this trend, which cannot be accounted for by the thermal models.
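
    The model's central relation can be sketched directly: if a single pulse causes damage with probability p(Φ) from the dose-response curve, then n independent pulses cause damage with probability 1 − (1 − p(Φ))^n, which pushes the n-pulse ED50 below the single-pulse threshold. A sketch assuming an illustrative log-logistic dose-response curve (the functional form and parameter values are assumptions, not the paper's):

    ```python
    import math

    def p_single(dose, ed50=1.0, slope=8.0):
        """Illustrative log-logistic single-pulse dose-response curve."""
        return 1.0 / (1.0 + (ed50 / dose) ** slope)

    def p_multi(dose, n, **kw):
        """Probability summation: damage occurs if any of n independent pulses damages."""
        return 1.0 - (1.0 - p_single(dose, **kw)) ** n

    def ed50_multi(n, **kw):
        """Dose at which the n-pulse damage probability crosses 0.5 (bisection)."""
        lo, hi = 1e-3, 1e3
        for _ in range(200):
            mid = math.sqrt(lo * hi)
            if p_multi(mid, n, **kw) < 0.5:
                lo = mid
            else:
                hi = mid
        return math.sqrt(lo * hi)

    # The apparent damage threshold drops as pulses accumulate.
    thresholds = [ed50_multi(n) for n in (1, 10, 100)]
    ```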

  14. Cascades in the Threshold Model for varying system sizes

    NASA Astrophysics Data System (ADS)

    Karampourniotis, Panagiotis; Sreenivasan, Sameet; Szymanski, Boleslaw; Korniss, Gyorgy

    2015-03-01

    A classical model in opinion dynamics is the Threshold Model (TM) aiming to model the spread of a new opinion based on the social drive of peer pressure. Under the TM a node adopts a new opinion only when the fraction of its first neighbors possessing that opinion exceeds a pre-assigned threshold. Cascades in the TM depend on multiple parameters, such as the number and selection strategy of the initially active nodes (initiators), and the threshold distribution of the nodes. For a uniform threshold in the network there is a critical fraction of initiators for which a transition from small to large cascades occurs, which for ER graphs is largely independent of the system size. Here, we study the spread contribution of each newly assigned initiator under the TM for different initiator selection strategies for synthetic graphs of various sizes. We observe that for ER graphs when large cascades occur, the spread contribution of the added initiator on the transition point is independent of the system size, while the contribution of the rest of the initiators converges to zero at infinite system size. This property is used for the identification of large transitions for various threshold distributions. Supported in part by ARL NS-CTA, ARO, ONR, and DARPA.
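
    The adoption rule described above can be sketched as a synchronous simulation on an Erdős-Rényi graph; the graph size, edge probability, uniform threshold, and initiator counts below are illustrative choices, not the paper's parameters:

    ```python
    import random

    def er_graph(n, p, rng):
        """Erdos-Renyi random graph as adjacency lists."""
        nbrs = {i: [] for i in range(n)}
        for i in range(n):
            for j in range(i + 1, n):
                if rng.random() < p:
                    nbrs[i].append(j)
                    nbrs[j].append(i)
        return nbrs

    def threshold_cascade(nbrs, thresholds, initiators):
        """Run the Threshold Model: a node adopts once the adopted fraction of
        its neighbors reaches its threshold; adopted nodes never revert."""
        active = set(initiators)
        changed = True
        while changed:
            changed = False
            for node, neigh in nbrs.items():
                if node in active or not neigh:
                    continue
                if sum(v in active for v in neigh) / len(neigh) >= thresholds[node]:
                    active.add(node)
                    changed = True
        return active

    rng = random.Random(0)
    g = er_graph(200, 0.05, rng)
    theta = {node: 0.18 for node in g}                # uniform threshold
    small = threshold_cascade(g, theta, range(5))     # few initiators
    large = threshold_cascade(g, theta, range(60))    # well past the critical fraction
    ```

    Because the dynamics are monotone, enlarging the initiator set can only enlarge the final cascade; the transition of interest is how sharply the final size grows as initiators are added.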

  15. Radiation-induced aging of PDMS Elastomer TR-55: a summary of constitutive, mesoscale, and population-based models

    SciTech Connect

    Maiti, A; Weisgraber, T. H.; Dinh, L. N.

    2016-11-16

    Filled and cross-linked elastomeric rubbers are versatile network materials with a multitude of applications ranging from artificial organs and biomedical devices to cushions, coatings, adhesives, interconnects, and seismic-isolation-, thermal-, and electrical barriers. External factors like mechanical stress, temperature fluctuations, or radiation are known to create chemical changes in such materials that can directly affect the molecular weight distribution (MWD) of the polymer between cross-links and alter the structural and mechanical properties. From a materials science point of view it is highly desirable to understand, effect, and manipulate such property changes in a controlled manner. In this report we summarize our modeling efforts on a polysiloxane elastomer TR-55, which is an important component in several of our systems, and representative of a wide class of filled rubber materials. The primary aging driver in this work has been γ-radiation, and a variety of modeling approaches have been employed, including constitutive, mesoscale, and population-based models. The work utilizes diverse experimental data, including mechanical stress-strain and compression set measurements, as well as MWD measurements using multi-quantum NMR.

  16. Modeling the Interactions Between Multiple Crack Closure Mechanisms at Threshold

    NASA Technical Reports Server (NTRS)

    Newman, John A.; Riddell, William T.; Piascik, Robert S.

    2003-01-01

    A fatigue crack closure model is developed that includes interactions between the three closure mechanisms most likely to occur at threshold: plasticity, roughness, and oxide. This model, herein referred to as the CROP model (for Closure, Roughness, Oxide, and Plasticity), also includes the effects of out-of-plane cracking and multi-axial loading. These features make the CROP closure model uniquely suited for, but not limited to, threshold applications. Rough cracks are idealized here as two-dimensional sawtooths, whose geometry induces mixed-mode crack-tip stresses. Continuum mechanics and crack-tip dislocation concepts are combined to relate crack face displacements to crack-tip loads. Geometric criteria are used to determine closure loads from crack-face displacements. Finite element results, used to verify model predictions, provide critical information about the locations where crack closure occurs.

  17. Predictors of the nicotine reinforcement threshold, compensation, and elasticity of demand in a rodent model of nicotine reduction policy

    PubMed Central

    Grebenstein, Patricia E.; Burroughs, Danielle; Roiko, Samuel A.; Pentel, Paul R.; LeSage, Mark G.

    2015-01-01

    Background The FDA is considering reducing the nicotine content in tobacco products as a population-based strategy to reduce tobacco addiction. Research is needed to determine the threshold level of nicotine needed to maintain smoking and the extent of compensatory smoking that could occur during nicotine reduction. Sources of variability in these measures across sub-populations also need to be identified so that policies can take into account the risks and benefits of nicotine reduction in vulnerable populations. Methods The present study examined these issues in a rodent nicotine self-administration model of nicotine reduction policy to characterize individual differences in nicotine reinforcement thresholds, degree of compensation, and elasticity of demand during progressive reduction of the unit nicotine dose. The ability of individual differences in baseline nicotine intake and nicotine pharmacokinetics to predict responses to dose reduction was also examined. Results Considerable variability in the reinforcement threshold, compensation, and elasticity of demand was evident. High baseline nicotine intake was not correlated with the reinforcement threshold, but predicted less compensation and less elastic demand. Higher nicotine clearance predicted low reinforcement thresholds, greater compensation, and less elastic demand. Less elastic demand also predicted lower reinforcement thresholds. Conclusions These findings suggest that baseline nicotine intake, nicotine clearance, and the essential value of nicotine (i.e. elasticity of demand) moderate the effects of progressive nicotine reduction in rats and warrant further study in humans. They also suggest that smokers with fast nicotine metabolism may be more vulnerable to the risks of nicotine reduction. PMID:25891231

  18. The interplay between cooperativity and diversity in model threshold ensembles.

    PubMed

    Cervera, Javier; Manzanares, José A; Mafe, Salvador

    2014-10-06

    The interplay between cooperativity and diversity is crucial for biological ensembles because single molecule experiments show a significant degree of heterogeneity and also for artificial nanostructures because of the high individual variability characteristic of nanoscale units. We study the cross-effects between cooperativity and diversity in model threshold ensembles composed of individually different units that show a cooperative behaviour. The units are modelled as statistical distributions of parameters (the individual threshold potentials here) characterized by central and width distribution values. The simulations show that the interplay between cooperativity and diversity results in ensemble-averaged responses of interest for the understanding of electrical transduction in cell membranes, the experimental characterization of heterogeneous groups of biomolecules and the development of biologically inspired engineering designs with individually different building blocks.
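
    The ensemble-averaging idea can be sketched numerically: each unit responds in an all-or-none fashion at its own threshold potential, and drawing thresholds from a distribution with nonzero width turns the sharp single-unit step into a graded ensemble response. The unit model and parameter values below are illustrative assumptions, not the paper's:

    ```python
    import random

    def unit_response(v, v_threshold):
        """All-or-none unit: responds (1) once the potential reaches its threshold."""
        return 1.0 if v >= v_threshold else 0.0

    def ensemble_response(v, center, width, n, rng):
        """Average response of n units whose threshold potentials are drawn
        from a normal distribution with the given center and width."""
        return sum(unit_response(v, rng.gauss(center, width)) for _ in range(n)) / n

    rng = random.Random(0)
    # With width > 0 the sharp single-unit step becomes a graded ensemble curve.
    graded = [ensemble_response(v, center=0.0, width=0.3, n=2000, rng=rng)
              for v in (-0.6, 0.0, 0.6)]
    ```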

  19. The interplay between cooperativity and diversity in model threshold ensembles

    PubMed Central

    Cervera, Javier; Manzanares, José A.; Mafe, Salvador

    2014-01-01

    The interplay between cooperativity and diversity is crucial for biological ensembles because single molecule experiments show a significant degree of heterogeneity and also for artificial nanostructures because of the high individual variability characteristic of nanoscale units. We study the cross-effects between cooperativity and diversity in model threshold ensembles composed of individually different units that show a cooperative behaviour. The units are modelled as statistical distributions of parameters (the individual threshold potentials here) characterized by central and width distribution values. The simulations show that the interplay between cooperativity and diversity results in ensemble-averaged responses of interest for the understanding of electrical transduction in cell membranes, the experimental characterization of heterogeneous groups of biomolecules and the development of biologically inspired engineering designs with individually different building blocks. PMID:25142516

  20. Selection Strategies for Social Influence in the Threshold Model

    NASA Astrophysics Data System (ADS)

    Karampourniotis, Panagiotis; Szymanski, Boleslaw; Korniss, Gyorgy

    The ubiquity of online social networks makes the study of social influence extremely significant for its applications to marketing, politics and security. Maximizing the spread of influence by strategically selecting nodes as initiators of a new opinion or trend is a challenging problem. We study the performance of various strategies for selection of large fractions of initiators on a classical social influence model, the Threshold model (TM). Under the TM, a node adopts a new opinion only when the fraction of its first neighbors possessing that opinion exceeds a pre-assigned threshold. The strategies we study are of two kinds: strategies based solely on the initial network structure (Degree-rank, Dominating Sets, PageRank etc.) and strategies that take into account the change of the states of the nodes during the evolution of the cascade, e.g. the greedy algorithm. We find that the performance of these strategies depends largely on both the network structure properties, e.g. the assortativity, and the distribution of the thresholds assigned to the nodes. We conclude that the optimal strategy needs to combine the network specifics and the model specific parameters to identify the most influential spreaders. Supported in part by ARL NS-CTA, ARO, and ONR.

  1. Threshold dynamics of a malaria transmission model in periodic environment

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Teng, Zhidong; Zhang, Tailei

    2013-05-01

    In this paper, we propose a malaria transmission model with periodic environment. The basic reproduction number R0 is computed for the model and it is shown that the disease-free periodic solution of the model is globally asymptotically stable when R0<1, that is, the disease goes extinct when R0<1, while the disease is uniformly persistent and there is at least one positive periodic solution when R0>1. It indicates that R0 is the threshold value determining the extinction and the uniform persistence of the disease. Finally, some examples are given to illustrate the main theoretical results. The numerical simulations show that, when the disease is uniformly persistent, different dynamic behaviors may be found in this model, such as the global attractivity and the chaotic attractor.
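
    The threshold behaviour can be illustrated with a much simpler seasonally forced SIS toy model (not the paper's vector-host system): when the time-averaged R0 is below 1 the infected fraction decays to zero, and above 1 it settles onto a positive periodic solution. A sketch using forward-Euler integration:

    ```python
    import math

    def simulate_sis(beta0, beta1, gamma, i0=0.01, years=50, dt=0.001):
        """Euler-integrate di/dt = beta(t)*i*(1 - i) - gamma*i with seasonal
        forcing beta(t) = beta0*(1 + beta1*sin(2*pi*t)); time in years."""
        i, t = i0, 0.0
        for _ in range(int(years / dt)):
            beta = beta0 * (1.0 + beta1 * math.sin(2.0 * math.pi * t))
            i += dt * (beta * i * (1.0 - i) - gamma * i)
            t += dt
        return i

    # Time-averaged R0 = beta0 / gamma acts as the threshold in this toy model.
    extinct = simulate_sis(beta0=0.8, beta1=0.5, gamma=1.0)   # R0 < 1: dies out
    persist = simulate_sis(beta0=2.0, beta1=0.5, gamma=1.0)   # R0 > 1: endemic
    ```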

  2. Semiautomatic bladder segmentation on CBCT using a population-based model for multiple-plan ART of bladder cancer

    NASA Astrophysics Data System (ADS)

    Chai, Xiangfei; van Herk, Marcel; Betgen, Anja; Hulshof, Maarten; Bel, Arjan

    2012-12-01

    The aim of this study is to develop a novel semiautomatic bladder segmentation approach for selecting the appropriate plan from the library of plans for a multiple-plan adaptive radiotherapy (ART) procedure. A population-based statistical bladder model was first built from a training data set (95 bladder contours from 8 patients). This model was then used as constraint to segment the bladder in an independent validation data set (233 CBCT scans from the remaining 22 patients). All 3D bladder contours were converted into parametric surface representations using spherical harmonic expansion. Principal component analysis (PCA) was applied in the spherical harmonic-based shape parameter space to calculate the major variation of bladder shapes. The number of dominating PCA modes was chosen such that 95% of the total shape variation of the training data set was described. The automatic segmentation started from the bladder contour of the planning CT of each patient, which was modified by changing the weight of each PCA mode. As a result, the segmentation contour was deformed consistently with the training set to best fit the bladder boundary in the localization CBCT image. A cost function was defined to measure the goodness of fit of the segmentation on the localization CBCT image. The segmentation was obtained by minimizing this cost function using a simplex optimizer. After automatic segmentation, a fast manual correction method was provided to correct those bladders (parts) that were poorly segmented. Volume- and distance-based metrics and the accuracy of plan selection from multiple plans were evaluated to quantify the performance of the automatic and semiautomatic segmentation methods. For the training data set, only seven PCA modes were needed to represent 95% of the bladder shape variation. The mean CI overlap and residual error (SD) of automatic bladder segmentation over all of the validation data were 70.5% and 0.39 cm, respectively. The agreement of plan
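
    The 95% mode-selection rule reduces to counting leading eigenvalues of the shape-parameter covariance until their cumulative share reaches the target. A sketch with a hypothetical eigenvalue spectrum (chosen so that, as in the training set, seven modes suffice):

    ```python
    def modes_for_variance(eigenvalues, target=0.95):
        """Number of leading PCA modes whose cumulative eigenvalue share
        first reaches the target fraction of total variance."""
        total = sum(eigenvalues)
        cumulative = 0.0
        for k, lam in enumerate(sorted(eigenvalues, reverse=True), start=1):
            cumulative += lam
            if cumulative / total >= target:
                return k
        return len(eigenvalues)

    # Hypothetical eigenvalue spectrum of the shape-parameter covariance matrix.
    spectrum = [40.0, 25.0, 12.0, 8.0, 5.0, 3.0, 2.5, 1.5, 1.0, 0.5, 0.5, 0.5, 0.5]
    k = modes_for_variance(spectrum)
    ```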

  3. Diagnosis of Parkinson’s disease on the basis of clinical–genetic classification: a population-based modelling study

    PubMed Central

    Nalls, Mike A.; McLean, Cory Y.; Rick, Jacqueline; Eberly, Shirley; Hutten, Samantha J.; Gwinn, Katrina; Sutherland, Margaret; Martinez, Maria; Heutink, Peter; Williams, Nigel; Hardy, John; Gasser, Thomas; Brice, Alexis; Price, T. Ryan; Nicolas, Aude; Keller, Margaux F.; Molony, Cliona; Gibbs, J. Raphael; Chen-Plotkin, Alice; Suh, Eunran; Letson, Christopher; Fiandaca, Massimo S.; Mapstone, Mark; Federoff, Howard J.; Noyce, Alastair J; Morris, Huw; Van Deerlin, Vivianna M.; Weintraub, Daniel; Zabetian, Cyrus; Hernandez, Dena G.; Lesage, Suzanne; Mullins, Meghan; Conley, Emily Drabant; Northover, Carrie; Frasier, Mark; Marek, Ken; Day-Williams, Aaron G.; Stone, David J.; Ioannidis, John P. A.; Singleton, Andrew B.

    2015-01-01

    Background Accurate diagnosis and early detection of complex disease has the potential to be of enormous benefit to clinical trialists, patients, and researchers alike. We sought to create a non-invasive, low-cost, and accurate classification model for diagnosing Parkinson’s disease risk to serve as a basis for future disease prediction studies in prospective longitudinal cohorts. Methods We developed a simple disease classifying model within 367 patients with Parkinson’s disease and phenotypically typical imaging data and 165 controls without neurological disease of the Parkinson’s Progression Marker Initiative (PPMI) study. Olfactory function, genetic risk, family history of PD, age and gender were algorithmically selected as significant contributors to our classifying model. This model was developed using the PPMI study then tested in 825 patients with Parkinson’s disease and 261 controls from five independent studies with varying recruitment strategies and designs including the Parkinson’s Disease Biomarkers Program (PDBP), Parkinson’s Associated Risk Study (PARS), 23andMe, Longitudinal and Biomarker Study in PD (LABS-PD), and Morris K. Udall Parkinson’s Disease Research Center of Excellence (Penn-Udall). Findings Our initial model correctly distinguished patients with Parkinson’s disease from controls at an area under the curve (AUC) of 0.923 (95% CI = 0.900 – 0.946) with high sensitivity (0.834, 95% CI = 0.711 – 0.883) and specificity (0.903, 95% CI = 0.824 – 0.946) in PPMI at its optimal AUC threshold (0.655). The model is also well-calibrated with all Hosmer-Lemeshow simulations suggesting that when parsed into random subgroups, the actual data mirrors that of the larger expected data, demonstrating that our model is robust and fits well. Likewise external validation shows excellent classification of PD with AUCs of 0.894 in PDBP, 0.998 in PARS, 0.955 in 23andMe, 0.929 in LABS-PD, and 0.939 in Penn-Udall. Additionally, when our model

  4. A threshold model analysis of deafness in Dalmatians.

    PubMed

    Famula, T R; Oberbauer, A M; Sousa, C A

    1996-09-01

    To elucidate the inheritance of deafness in Dalmatian dogs, 825 dogs in 111 litters were evaluated for abnormalities in hearing through the brainstem auditory evoked response (BAER). Recorded along with their quality of hearing (normal, unilaterally deaf, or bilaterally deaf) were the sex, coat color, eye color and the presence or absence of a color patch. The analysis considered deafness an ordered categorical trait in a threshold model. The underlying, unobservable continuous variate of the threshold model was assumed to be a linear function of sex of dog, coat color (black or liver and white), color patch (presence or absence), eye color, the deafness phenotype of the parents and a random family effect. Twenty-six percent of dogs were deaf in at least one ear. Eye color, color patch, sex and the hearing status of the parents were all significant contributors to deafness. The heritability of deafness, on the continuous unobservable scale, was 0.21. This value was computed after correction for eye color, color patch, parental hearing status and sex, implying that significant genetic variation exists beyond the contribution of several single loci.
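    The liability-threshold view of an ordered categorical trait (normal / unilaterally deaf / bilaterally deaf) can be illustrated with a small simulation. This sketch assumes a shared-litter effect carrying half of the reported h² = 0.21; litter sizes and threshold placements are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n_litters, per_litter = 111, 8          # litter structure (illustrative)
h2 = 0.21                               # liability-scale heritability from the abstract
fam_sd = np.sqrt(h2 / 2)                # assumption: family effect carries half of h2
res_sd = np.sqrt(1 - fam_sd**2)         # scale total liability variance to 1

fam = np.repeat(rng.normal(0, fam_sd, n_litters), per_litter)
liability = fam + rng.normal(0, res_sd, fam.size)

# place thresholds so ~26% are deaf in at least one ear, as observed
t1 = np.quantile(liability, 0.74)       # normal vs. unilaterally deaf
t2 = np.quantile(liability, 0.92)       # unilaterally vs. bilaterally deaf
category = np.digitize(liability, [t1, t2])   # 0 normal, 1 unilateral, 2 bilateral
deaf_frac = np.mean(category >= 1)
```

    The fitted model in the paper additionally regresses the liability on fixed effects (sex, eye color, patch, parental phenotype), which this toy omits.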

  5. Model to Estimate Threshold Mechanical Stability of Lower Lateral Cartilage

    PubMed Central

    Kim, James Hakjune; Hamamoto, Ashley; Kiyohara, Nicole; Wong, Brian J. F.

    2015-01-01

    IMPORTANCE In rhinoplasty, techniques used to alter the shape of the nasal tip often compromise the structural stability of the cartilage framework in the nose. Determining the minimum threshold level of cartilage stiffness required to maintain long-term structural stability is a critical aspect in performing these surgical maneuvers. OBJECTIVE To quantify the minimum threshold mechanical stability (elastic modulus) of lower lateral cartilage (LLC) according to expert opinion. METHODS Five anatomically correct LLC phantoms were made from urethane via a 3-dimensional computer modeling and injection molding process. All 5 had identical geometry but varied in stiffness along the intermediate crural region (0.63–30.6 MPa). DESIGN, SETTING, AND PARTICIPANTS A focus group of experienced rhinoplasty surgeons (n = 33) was surveyed at a regional professional meeting on October 25, 2013. Each survey participant was presented the 5 phantoms in a random order and asked to arrange the phantoms in order of increasing stiffness based on their sense of touch. Then, they were asked to select a single phantom out of the set that they believed to have the minimum acceptable mechanical stability for LLC to maintain proper form and function. MAIN OUTCOMES AND MEASURES A binary logistic regression was performed to calculate the probability of mechanical acceptability as a function of the elastic modulus of the LLC based on survey data. A Hosmer-Lemeshow test was performed to measure the goodness of fit between the logistic regression and survey data. The minimum threshold mechanical stability for LLC was taken at a 50% acceptability rating. RESULTS Phantom 4 was selected most frequently by the participants as having the minimum acceptable stiffness for the LLC intermediate crural region. The minimum threshold mechanical stability for LLC was determined to be 3.65 MPa. The Hosmer-Lemeshow test revealed good fit between the logistic regression and survey data (χ² = 0.92, 3 df; P = .82). CONCLUSIONS AND
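    The 50%-acceptability stiffness can be recovered from survey votes with a binary logistic regression, as the abstract describes. A sketch using hand-made vote counts (not the study's data) and log-stiffness as the covariate (an assumption), fitted by Newton-Raphson:

```python
import numpy as np

def fit_logistic(x, y, iters=50):
    """One-covariate logistic regression fitted by Newton-Raphson."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.zeros(2)
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        H = X.T @ (X * (p * (1 - p))[:, None])    # observed information matrix
        beta += np.linalg.solve(H, X.T @ (y - p))
    return beta

# invented vote counts: [acceptable, not acceptable] per phantom stiffness (MPa)
stiffness = np.array([0.63, 1.5, 3.0, 10.0, 30.6])
votes = np.array([[2, 31], [8, 25], [13, 20], [30, 3], [32, 1]])
x = np.repeat(np.log(stiffness), votes.sum(axis=1))
y = np.concatenate([np.r_[np.ones(a), np.zeros(b)] for a, b in votes])

b0, b1 = fit_logistic(x, y)
ed50 = np.exp(-b0 / b1)   # stiffness at 50% acceptability, back on the MPa scale
```

    With log-stiffness as the covariate, the logit crosses zero at log(x) = -b0/b1, so the 50% point is exp(-b0/b1).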

  6. Terrestrial Microgravity Model and Threshold Gravity Simulation using Magnetic Levitation

    NASA Technical Reports Server (NTRS)

    Ramachandran, N.

    2005-01-01

    What is the threshold gravity (minimum gravity level) required for the nominal functioning of the human system? What dosage is required? Do human cell lines behave differently in microgravity in response to an external stimulus? The critical need for such a gravity simulator is emphasized by recent experiments on human epithelial cells and lymphocytes on the Space Shuttle clearly showing that cell growth and function are markedly different from those observed terrestrially. Those differences are also dramatic between cells grown in space and those in Rotating Wall Vessels (RWV), or NASA bioreactor often used to simulate microgravity, indicating that although morphological growth patterns (three dimensional growth) can be successfully simulated using RWVs, cell function performance is not reproduced - a critical difference. If cell function is dramatically affected by gravity off-loading, then cell response to stimuli such as radiation, stress, etc. can be very different from terrestrial cell lines. Yet, we have no good gravity simulator for use in study of these phenomena. This represents a profound shortcoming for countermeasures research. We postulate that we can use magnetic levitation of cells and tissue, through the use of strong magnetic fields and field gradients, as a terrestrial microgravity model to study human cells. Specific objectives of the research are: 1. To develop a tried, tested and benchmarked terrestrial microgravity model for cell culture studies; 2. Gravity threshold determination; 3. Dosage (magnitude and duration) of g-level required for nominal functioning of cells; 4. Comparisons of magnetic levitation model to other models such as RWV, hind limb suspension, etc. and 5. Cellular response to reduced gravity levels of Moon and Mars. The paper will discuss experiments and modeling work to date in support of this project.

  7. Stylized facts from a threshold-based heterogeneous agent model

    NASA Astrophysics Data System (ADS)

    Cross, R.; Grinfeld, M.; Lamba, H.; Seaman, T.

    2007-05-01

    A class of heterogeneous agent models is investigated where investors switch trading position whenever their motivation to do so exceeds some critical threshold. These motivations can be psychological in nature or reflect behaviour suggested by the efficient market hypothesis (EMH). By introducing different propensities into a baseline model that displays EMH behaviour, one can attempt to isolate their effects upon the market dynamics. The simulation results indicate that the introduction of a herding propensity results in excess kurtosis and power-law decay consistent with those observed in actual return distributions, but not in significant long-term volatility correlations. Possible alternatives for introducing such long-term volatility correlations are then identified and discussed.
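    A toy version of such a threshold-based agent market, with heterogeneous switching thresholds and a crude herding pressure, can be simulated in a few lines. Everything here (price-impact constant, pressure accumulation rule, parameter values) is an illustrative assumption, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(2)
n_agents, n_steps = 100, 2000
state = rng.choice([-1, 1], n_agents)           # long (+1) or short (-1)
threshold = rng.uniform(1.0, 3.0, n_agents)     # heterogeneous switching thresholds
pressure = np.zeros(n_agents)
returns = np.empty(n_steps)

for step in range(n_steps):
    # return = price impact of the aggregate position plus exogenous news
    returns[step] = state.mean() * 0.1 + 0.01 * rng.standard_normal()
    # herding pressure: agents on the minority side feel pushed to conform
    majority = np.sign(state.sum() or 1)
    pressure += np.where(state != majority, 1.0, -0.2)
    pressure = np.clip(pressure, 0, None)
    flip = pressure > threshold                  # switch once motivation exceeds threshold
    state[flip] *= -1
    pressure[flip] = 0.0

m = returns - returns.mean()
excess_kurtosis = np.mean(m**4) / np.mean(m**2) ** 2 - 3
```

    Comparing the return distribution's excess kurtosis with and without the herding term is the kind of experiment the abstract describes.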

  8. Terrestrial Microgravity Model and Threshold Gravity Simulation using Magnetic Levitation

    NASA Technical Reports Server (NTRS)

    Ramachandran, N.

    2005-01-01

    What is the threshold gravity (minimum gravity level) required for the nominal functioning of the human system? What dosage is required? Do human cell lines behave differently in microgravity in response to an external stimulus? The critical need for such a gravity simulator is emphasized by recent experiments on human epithelial cells and lymphocytes on the Space Shuttle clearly showing that cell growth and function are markedly different from those observed terrestrially. Those differences are also dramatic between cells grown in space and those in Rotating Wall Vessels (RWV), or NASA bioreactor often used to simulate microgravity, indicating that although morphological growth patterns (three dimensional growth) can be successfully simulated using RWVs, cell function performance is not reproduced - a critical difference. If cell function is dramatically affected by gravity off-loading, then cell response to stimuli such as radiation, stress, etc. can be very different from terrestrial cell lines. Yet, we have no good gravity simulator for use in study of these phenomena. This represents a profound shortcoming for countermeasures research. We postulate that we can use magnetic levitation of cells and tissue, through the use of strong magnetic fields and field gradients, as a terrestrial microgravity model to study human cells. Specific objectives of the research are: 1. To develop a tried, tested and benchmarked terrestrial microgravity model for cell culture studies; 2. Gravity threshold determination; 3. Dosage (magnitude and duration) of g-level required for nominal functioning of cells; 4. Comparisons of magnetic levitation model to other models such as RWV, hind limb suspension, etc. and 5. Cellular response to reduced gravity levels of Moon and Mars.

  9. Phase of care prevalence for prostate cancer in New South Wales, Australia: A population-based modelling study

    PubMed Central

    Luo, Qingwei; Smith, David P.; Clements, Mark S.; Patel, Manish I.; O’Connell, Dianne L.

    2017-01-01

    Objective To develop a method for estimating the future numbers of prostate cancer survivors requiring different levels of care. Design, setting and participants Analysis of population-based cancer registry data for prostate cancer cases (aged 18–84 years) diagnosed in 1996–2007, and a linked dataset with hospital admission data for men with prostate cancer diagnosed during 2005–2007 in New South Wales (NSW), Australia. Methods Cancer registry data (1996–2007) were used to project complete prostate cancer prevalence in NSW, Australia for 2008–2017, and treatment information from hospital records (2005–2007) was used to estimate the inpatient care needs during the first year after diagnosis. The projected complete prevalence was divided into care needs-based groups. We first divided the cohort into two groups based on patient’s age (<75 and 75–84 years). The younger cohort was further divided into initial care and monitoring phases. Cause of death data were used as a proxy for patients requiring last year of life prostate cancer care. Finally, episode data were used to estimate the future number of cases with metastatic progression. Results Of the estimated total of 60,910 men with a previous diagnosis of prostate cancer in 2017, the largest groups will be older patients (52.0%) and younger men who require monitoring (42.5%). If current treatment patterns continue, in the first year post-diagnosis 41% (1380) of patients (<75 years) will have a radical prostatectomy, and 52.6% (1752) will be likely to have either active surveillance, external beam radiotherapy or androgen deprivation therapy. About 3% will require care for subsequent metastases, and 1288 men with prostate cancer are likely to die from the disease in 2017. Conclusions This method extends the application of routinely collected population-based data, and can contribute much to the knowledge of the number of men with prostate cancer and their health care requirements. This could be of

  10. Modeling Laser Damage Thresholds Using the Thompson-Gerstman Model

    DTIC Science & Technology

    2014-10-01

    Thompson-Gerstman model considers only photothermal effects. While many granule models assume melanosomes of zero diameter to reduce the complexity of the...high intensity surrounded by a region of zero intensity without the smooth transition seen in the gaussian profile. An annular beam resembles a ring...or donut-shaped beam similar to the top-hat profile with a small region of zero intensity in the center of the beam profile. Examples of thermal

  11. A Threshold Rule Applied to the Retrieval Decision Model

    ERIC Educational Resources Information Center

    Kraft, Donald H.

    1978-01-01

    A threshold rule is analyzed and compared to the Neyman-Pearson procedure, indicating that the threshold rule provides a necessary but not sufficient measure of the minimal performance of a retrieval system, whereas Neyman-Pearson yields a better a priori decision for retrieval. (Author/MBR)

  12. Wavelet detection of weak far-magnetic signal based on adaptive ARMA model threshold

    NASA Astrophysics Data System (ADS)

    Zhang, Ning; Lin, Chun-sheng; Fang, Shi

    2009-10-01

    Based on the Mallat algorithm, a de-noising algorithm with an adaptive wavelet threshold is applied to detect the weak magnetic signal of a distant moving target in a complex magnetic environment. The choice of threshold is the key problem. Based on spectrum analysis of the target's magnetic field, a threshold algorithm built on an adaptive ARMA model filter is proposed to improve the wavelet filtering performance. The algorithm was evaluated in simulations on measured data. Compared with the Donoho threshold algorithm, the adaptive ARMA model threshold algorithm significantly improves the detection of weak magnetic signals in a complex magnetic environment.
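    The wavelet-thresholding step itself is easy to sketch. The following pure-NumPy Haar transform applies soft thresholding with Donoho's universal threshold (the baseline the abstract compares against); the ARMA-based adaptive threshold is not reproduced here:

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def haar_idwt(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise(x, levels=3):
    """Soft-threshold Haar denoising with Donoho's universal threshold."""
    details, a = [], x
    for _ in range(levels):
        a, d = haar_dwt(a)
        details.append(d)
    sigma = np.median(np.abs(details[0])) / 0.6745   # noise level from finest scale
    thr = sigma * np.sqrt(2 * np.log(x.size))        # universal threshold
    details = [np.sign(d) * np.maximum(np.abs(d) - thr, 0) for d in details]
    for d in reversed(details):
        a = haar_idwt(a, d)
    return a

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 256)
clean = np.sin(2 * np.pi * 3 * t)                    # slow "target" signal
noisy = clean + 0.3 * rng.standard_normal(t.size)
recovered = denoise(noisy)
```

    The adaptive scheme in the paper replaces the fixed universal threshold with one derived from an ARMA model of the background.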

  13. Regional differences in population-based cancer survival between six prefectures in Japan: application of relative survival models with funnel plots.

    PubMed

    Ito, Yuri; Ioka, Akiko; Tsukuma, Hideaki; Ajiki, Wakiko; Sugimoto, Tomoyuki; Rachet, Bernard; Coleman, Michel P

    2009-07-01

    We used new methods to examine differences in population-based cancer survival between six prefectures in Japan, after adjustment for age and stage at diagnosis. We applied regression models for relative survival to data from population-based cancer registries covering each prefecture for patients diagnosed with stomach, lung, or breast cancer during 1993-1996. Funnel plots were used to display the excess hazard ratio (EHR) for each prefecture, defined as the excess hazard of death from each cancer within 5 years of diagnosis relative to the mean excess hazard (in excess of national background mortality by age and sex) in all six prefectures combined. The contribution of age and stage to the EHR in each prefecture was assessed from differences in deviance-based R² between the various models. No significant differences were seen between prefectures in 5-year survival from breast cancer. For cancers of the stomach and lung, EHRs for Osaka prefecture were above the upper 95% control limits. For stomach cancer, the age- and stage-adjusted EHRs in Osaka were 1.29 for men and 1.43 for women, compared with Fukui and Yamagata. Differences in the stage at diagnosis of stomach cancer appeared to explain most of this excess hazard (61.3% for men, 56.8% for women), whereas differences in age at diagnosis explained very little (0.8%, 1.3%). This approach offers the potential to quantify the impact of differences in stage at diagnosis on time trends and regional differences in cancer survival. It underlines the utility of population-based cancer registries for improving cancer control.
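    Funnel-plot control limits of the kind used here can be approximated on the log scale. A sketch with invented prefecture-level counts, assuming observed excess deaths are roughly Poisson around the expected value (the paper's limits come from the fitted relative-survival model, not this approximation):

```python
import numpy as np

def funnel_limits(expected, z=1.96):
    """Approximate 95% control limits for a ratio O/E.

    Assumes O ~ Poisson(E) under the null, so var(log(O/E)) ~ 1/E."""
    expected = np.asarray(expected, float)
    half_width = z / np.sqrt(expected)
    return np.exp(-half_width), np.exp(half_width)

expected = np.array([20, 50, 100, 200, 400])   # expected excess deaths (toy)
observed = np.array([32, 48, 112, 205, 399])   # observed excess deaths (toy)
lo, hi = funnel_limits(expected)
ratio = observed / expected
flagged = (ratio < lo) | (ratio > hi)          # units outside the funnel
```

    The limits narrow as the expected count grows, which is what gives the plot its funnel shape.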

  14. Effects of mixing in threshold models of social behavior

    NASA Astrophysics Data System (ADS)

    Akhmetzhanov, Andrei R.; Worden, Lee; Dushoff, Jonathan

    2013-07-01

    We consider the dynamics of an extension of the influential Granovetter model of social behavior, where individuals are affected by their personal preferences and observation of the neighbors’ behavior. Individuals are arranged in a network (usually the square lattice), and each has a state and a fixed threshold for behavior changes. We simulate the system asynchronously by picking a random individual and we either update its state or exchange it with another randomly chosen individual (mixing). We describe the dynamics analytically in the fast-mixing limit by using the mean-field approximation and investigate it mainly numerically in the case of finite mixing. We show that the dynamics converge to a manifold in state space, which determines the possible equilibria, and show how to estimate the projection of this manifold by using simulated trajectories, emitted from different initial points. We show that the effects of considering the network can be decomposed into finite-neighborhood effects, and finite-mixing-rate effects, which have qualitatively similar effects. Both of these effects increase the tendency of the system to move from a less-desired equilibrium to the “ground state.” Our findings can be used to probe shifts in behavioral norms and have implications for the role of information flow in determining when social norms that have become unpopular in particular communities (such as foot binding or female genital cutting) persist or vanish.

  15. Effects of mixing in threshold models of social behavior.

    PubMed

    Akhmetzhanov, Andrei R; Worden, Lee; Dushoff, Jonathan

    2013-07-01

    We consider the dynamics of an extension of the influential Granovetter model of social behavior, where individuals are affected by their personal preferences and observation of the neighbors' behavior. Individuals are arranged in a network (usually the square lattice), and each has a state and a fixed threshold for behavior changes. We simulate the system asynchronously by picking a random individual and we either update its state or exchange it with another randomly chosen individual (mixing). We describe the dynamics analytically in the fast-mixing limit by using the mean-field approximation and investigate it mainly numerically in the case of finite mixing. We show that the dynamics converge to a manifold in state space, which determines the possible equilibria, and show how to estimate the projection of this manifold by using simulated trajectories, emitted from different initial points. We show that the effects of considering the network can be decomposed into finite-neighborhood effects, and finite-mixing-rate effects, which have qualitatively similar effects. Both of these effects increase the tendency of the system to move from a less-desired equilibrium to the "ground state." Our findings can be used to probe shifts in behavioral norms and have implications for the role of information flow in determining when social norms that have become unpopular in particular communities (such as foot binding or female genital cutting) persist or vanish.
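    In the fast-mixing (mean-field) limit, the Granovetter dynamics reduce to iterating the threshold distribution's empirical CDF, which makes the coexistence of a low-adoption equilibrium and the near-universal "ground state" easy to see. A sketch with an assumed normal threshold distribution (all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
thresholds = rng.normal(0.35, 0.15, 10_000)   # assumed adoption thresholds

def cascade(thresholds, r0, iters=200):
    """Mean-field update: the next active fraction equals the share of agents
    whose threshold is at or below the current active fraction."""
    r = r0
    for _ in range(iters):
        r = np.mean(thresholds <= r)
    return r

low = cascade(thresholds, r0=0.05)    # settles at a low-adoption equilibrium
high = cascade(thresholds, r0=0.60)   # tips into the near-universal state
```

    The two fixed points reached from different starting fractions are the bistability the abstract's mixing and network effects perturb.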

  16. No-Impact Threshold Values for NRAP's Reduced Order Models

    SciTech Connect

    Last, George V.; Murray, Christopher J.; Brown, Christopher F.; Jordan, Preston D.; Sharma, Maneesh

    2013-02-01

    The purpose of this study was to develop methodologies for establishing baseline datasets and statistical protocols for determining statistically significant changes between background concentrations and predicted concentrations that would be used to represent a contamination plume in the Gen II models being developed by NRAP’s Groundwater Protection team. The initial effort examined selected portions of two aquifer systems: the urban shallow-unconfined aquifer system of the Edwards-Trinity Aquifer System (being used to develop the ROM for carbonate-rock aquifers), and a portion of the High Plains Aquifer (an unconsolidated and semi-consolidated sand and gravel aquifer, being used to develop the ROM for sandstone aquifers). Threshold values were determined for Cd, Pb, As, pH, and TDS that could be used to identify contamination due to predicted impacts from carbon sequestration storage reservoirs, based on recommendations found in the EPA’s "Unified Guidance for Statistical Analysis of Groundwater Monitoring Data at RCRA Facilities" (US Environmental Protection Agency 2009). Results from this effort can be used to inform a "no change" scenario with respect to groundwater impacts, rather than the use of an MCL that could be significantly higher than existing concentrations in the aquifer.

  17. The application of cure models in the presence of competing risks: a tool for improved risk communication in population-based cancer patient survival.

    PubMed

    Eloranta, Sandra; Lambert, Paul C; Andersson, Therese M-L; Björkholm, Magnus; Dickman, Paul W

    2014-09-01

    Quantifying cancer patient survival from the perspective of cure is clinically relevant. However, most cure models estimate cure assuming no competing causes of death. We use a relative survival framework to demonstrate how flexible parametric cure models can be used in combination with competing-risks theory to incorporate noncancer deaths. Under a model that incorporates statistical cure, we present the probabilities that cancer patients (1) have died from their cancer, (2) have died from other causes, (3) will eventually die from their cancer, or (4) will eventually die from other causes, all as a function of time since diagnosis. We further demonstrate how conditional probabilities can be used to update the prognosis among survivors (eg, at 1 or 5 years after diagnosis) by summarizing the proportion of patients who will not die from their cancer. The proposed method is applied to Swedish population-based data for persons diagnosed with melanoma, colon cancer, or acute myeloid leukemia between 1973 and 2007.
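    The bookkeeping behind the four probabilities can be sketched numerically for a mixture cure model with a constant competing hazard. All parameter values below are invented, and the paper uses flexible parametric models rather than this Weibull/constant-hazard toy:

```python
import numpy as np

pi_cure, lam, k = 0.4, 0.3, 1.2   # cure fraction and Weibull net-survival parameters
h_other = 0.02                    # constant non-cancer hazard per year (assumption)

t = np.linspace(0, 50, 5001)
S_c = pi_cure + (1 - pi_cure) * np.exp(-(lam * t) ** k)   # net cancer survival
h_c = -np.gradient(np.log(S_c), t)                        # net cancer hazard
S_all = S_c * np.exp(-h_other * t)                        # overall survival (independence assumed)

# crude probabilities of having died from cancer / other causes by time t,
# accumulated as cause-specific hazard times overall survival
dt = np.diff(t)
P_cancer = np.concatenate([[0], np.cumsum(h_c[:-1] * S_all[:-1] * dt)])
P_other = np.concatenate([[0], np.cumsum(h_other * S_all[:-1] * dt)])
```

    At every time point P_cancer + P_other + S_all is (numerically) 1, and the curves' limits give the proportions who will eventually die from each cause, the quantities the abstract reports as a function of time since diagnosis.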

  18. Assessing potential population impact of statin treatment for primary prevention of atherosclerotic cardiovascular diseases in the USA: population-based modelling study

    PubMed Central

    Yang, Quanhe; Zhong, Yuna; Gillespie, Cathleen; Merritt, Robert; Bowman, Barbara; George, Mary G; Flanders, W Dana

    2017-01-01

    Objective New cholesterol treatment guidelines from the American College of Cardiology/American Heart Association recommend statin treatment for more of the US population to prevent atherosclerotic cardiovascular disease (ASCVD). It is important to assess how new guidelines may affect population-level health. This study assessed the impact of statin use for primary prevention of ASCVD under the new guidelines. Methods We used data from 2010 US Multiple Cause Mortality, Third National Health and Nutrition Examination Survey (NHANES III) Linked Mortality File (1988–2006, n=8941) and NHANES 2005–2010 (n=3178) participants 40–75 years of age for the present study. Results Among 33.0 million adults meeting new guidelines for primary prevention of ASCVD, 8.8 million were taking statins; 24.2 million, including 7.7 million with diabetes, are eligible for statin treatment. If all those with diabetes used a statin, 2514 (95% CI 592 to 4142) predicted ASCVD deaths would be prevented annually with 482 (0 to 2239) predicted annual additional cases of myopathy based on randomised clinical trials (RCTs), and 11 801 (9251 to 14 916) using a population-based study. Among 16.5 million without diabetes, 5425 (1276 to 8935) ASCVD deaths would be prevented annually with 16 406 (4922 to 26 250) predicted annual additional cases of diabetes and between 1030 (0 to 4791) and 24 302 (19 363 to 30 292) additional cases of myopathy based on RCTs and a population-based study. Assuming 80% of the eligible population take statins with 80% medication adherence, among those without diabetes, the corresponding numbers were 3472 (817 to 5718) deaths, 10 500 (3150 to 16 800) diabetes, 660 (0 to 3066) myopathy (RCTs), and 15 554 (12 392 to 19 387) myopathy (population-based). The estimated total annual cost of statin use ranged from US$1.65 to US$6.5 billion if 100% of the eligible population take statins. Conclusions This population-based modelling study focused on impact of statin use on

  19. Laser thresholds in pulp exposure: a rat animal model

    NASA Astrophysics Data System (ADS)

    White, Joel M.; Goodis, Harold E.; Kudler, Joel J.

    1995-05-01

    Laser technology is now being clinically investigated for the removal of carious enamel and dentin. This study used an animal model to evaluate histological pulpal effects from laser exposure. The molars of 24 Sprague-Dawley rats (n = 264) were exposed to a pulsed 1.06 micrometer Nd:YAG laser (120 microseconds, 320 micrometer diameter fiber), prepared with an air rotor drill, or left untreated as controls. The following treatment conditions were investigated: control group (n = 54); high speed drill with carbide bur (n = 39); laser exposure at 50 mJ/p at 10 Hz (n = 27), 100 mJ/p at 10 Hz (n = 66) and 100 mJ/p at 20 Hz (n = 39). A sixth treatment condition was investigated: root surface hypersensitivity, which included incremental laser exposure from 30 to 100 mJ/p at 10 Hz (n = 39). The animals were euthanized either immediately after treatment, at one week, or at one month. The jaws were fixed and bioprepared. Remaining dentin thickness was measured, and ranged from 0.17 +/- 0.04 mm to 0.35 +/- 0.09 mm. The pulp tissue was examined for histologic inflammatory response. No evidence of pulpal involvement or adverse pulpal effects was found at any time period in teeth receiving 50 mJ/p. When histologic samples were compared with controls, all observations were similar. Of the 210 exposed teeth, 2 teeth receiving 100 mJ/p demonstrated abscess formation and were exfoliated. Further, in rat molars exposed to 100 mJ/p with a remaining dentin thickness of less than 0.5 mm, threshold pulpal effects occurred. The response of rat pulp to laser exposure indicated no histologically measurable response to pulsed laser energy at 50 mJ/p.

  20. Validation of three BRCA1/2 mutation-carrier probability models (Myriad, BRCAPRO and BOADICEA) in a population-based series of 183 German families.

    PubMed

    Schneegans, S M; Rosenberger, A; Engel, U; Sander, M; Emons, G; Shoukier, M

    2012-06-01

    Many studies have evaluated the performance of risk assessment models for BRCA1/2 mutation carrier probabilities in different populations, but to our knowledge very few studies have been conducted in the German population so far. In the present study, we validated the performance of three risk calculation models, namely BRCAPRO, Myriad and BOADICEA, in 183 German families who had undergone molecular testing for mutations in BRCA1 and BRCA2 with an indication based on clinical criteria regarding their family history of cancer. The sensitivity and specificity at the conventional threshold of 10% as well as at a threshold of 20% were evaluated. The ability to discriminate between carriers and non-carriers was judged by the area under the receiver operating characteristics curve. We further focused on the performance characteristics of these models in patients carrying large genomic rearrangements, a subtype of mutations which is currently gaining increasing importance. BRCAPRO and BOADICEA performed almost equally well in our patient population, but we found a lack of agreement with Myriad. The results obtained from this study were consistent with previously published results from other populations and racial/ethnic groups. We suggest using model-specific decision thresholds instead of the recommended universal value of 10%. We further suggest integrating the CaGene5 software package, which includes BRCAPRO and Myriad, in the genetic counselling of German families with suspected inherited breast and ovarian cancer because of the good performance of BRCAPRO and the substantial ease of use of this software.

  1. Nonlinear Dynamic Modeling of Neuron Action Potential Threshold During Synaptically Driven Broadband Intracellular Activity

    PubMed Central

    Roach, Shane M.; Song, Dong; Berger, Theodore W.

    2012-01-01

    Activity-dependent variation of neuronal thresholds for action potential (AP) generation is one of the key determinants of spike-train temporal-pattern transformations from presynaptic to postsynaptic spike trains. In this study, we model the nonlinear dynamics of the threshold variation during synaptically driven broadband intracellular activity. First, membrane potentials of single CA1 pyramidal cells were recorded under physiologically plausible broadband stimulation conditions. Second, a method was developed to measure AP thresholds from the continuous recordings of membrane potentials. It involves measuring the turning points of APs by analyzing the third-order derivatives of the membrane potentials. Four stimulation paradigms with different temporal patterns were applied to validate this method by comparing the measured AP turning points and the actual AP thresholds estimated with varying stimulation intensities. Results show that the AP turning points provide consistent measurement of the AP thresholds, except for a constant offset. It indicates that 1) the variation of AP turning points represents the nonlinearities of threshold dynamics; and 2) an optimization of the constant offset is required to achieve accurate spike prediction. Third, a nonlinear dynamical third-order Volterra model was built to describe the relations between the threshold dynamics and the AP activities. Results show that the model can predict threshold accurately based on the preceding APs. Finally, the dynamic threshold model was integrated into a previously developed single neuron model and resulted in a 33% improvement in spike prediction. PMID:22156947
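    The turning-point idea, locating the threshold as a peak in the third derivative of membrane potential just before the upstroke, can be demonstrated on a synthetic waveform (a Gaussian "spike" here, purely illustrative; real recordings need the constant-offset correction the abstract mentions):

```python
import numpy as np

dt = 0.05                                        # ms per sample
t = np.arange(0, 50, dt)
v = -65 + 100 * np.exp(-((t - 25) / 1.0) ** 2)   # toy spike: Gaussian bump from rest

# third-order derivative of the membrane potential
d3 = np.gradient(np.gradient(np.gradient(v, dt), dt), dt)

peak = int(np.argmax(v))                 # spike crest
onset = int(np.argmax(d3[:peak]))        # turning point: max of V''' before the crest
threshold_estimate = v[onset]            # voltage at the detected turning point
```

    On this waveform the detected turning point sits on the rising phase, well below the crest, which is the qualitative behavior the measurement method relies on.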

  2. External validation of a COPD prediction model using population-based primary care data: a nested case-control study

    PubMed Central

    Nwaru, Bright I; Simpson, Colin R; Sheikh, Aziz; Kotz, Daniel

    2017-01-01

    Emerging models for predicting risk of chronic obstructive pulmonary disease (COPD) require external validation in order to assess their clinical value. We validated a previous model for predicting new onset COPD in a different database. We randomly drew 38,597 case-control pairs (total N = 77,194) of individuals aged ≥35 years and matched for sex, age, and general practice from the United Kingdom Clinical Practice Research Datalink database. We assessed accuracy of the model to discriminate between COPD cases and non-cases by calculating area under the receiver operator characteristic (ROCAUC) for the prediction scores. Analogous to the development model, ever smoking (OR 6.70; 95%CI 6.41–6.99), prior asthma (OR 6.43; 95%CI 5.85–7.07), and higher socioeconomic deprivation (OR 2.90; 95%CI 2.72–3.09 for highest vs. lowest quintile) increased the risk of COPD. The validated prediction scores ranged from 0–5.71 (ROCAUC 0.66; 95%CI 0.65–0.66) for males and 0–5.95 (ROCAUC 0.71; 95%CI 0.70–0.71) for females. We have confirmed that smoking, prior asthma, and socioeconomic deprivation are key risk factors for new onset COPD. Our model seems externally valid at identifying patients at risk of developing COPD. An impact assessment now needs to be undertaken to assess whether this prediction model can be applied in clinical care settings. PMID:28304375
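    The ROCAUC used for validation is equivalent to the Mann-Whitney statistic: the probability that a randomly chosen case outscores a randomly chosen control. A small self-contained implementation (toy inputs, not the CPRD data):

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC as the Mann-Whitney probability that a case outscores a control,
    counting ties as one half."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels)
    pos = scores[labels == 1][:, None]
    neg = scores[labels == 0][None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.size * neg.size)

auc_perfect = roc_auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0])   # → 1.0
auc_tied = roc_auc([0.5, 0.5], [1, 0])                       # → 0.5
```

    The pairwise comparison is O(n²); rank-based formulas give the same value in O(n log n) for large cohorts.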

  3. Associations of iron metabolism genes with blood manganese levels: a population-based study with validation data from animal models

    PubMed Central

    2011-01-01

    Background Given mounting evidence for adverse effects from excess manganese exposure, it is critical to understand host factors, such as genetics, that affect manganese metabolism. Methods Archived blood samples, collected from 332 Mexican women at delivery, were analyzed for manganese. We evaluated associations of manganese with functional variants in three candidate iron metabolism genes: HFE [hemochromatosis], TF [transferrin], and ALAD [δ-aminolevulinic acid dehydratase]. We used a knockout mouse model to parallel our significant results as a novel method of validating the observed associations between genotype and blood manganese in our epidemiologic data. Results Percentage of participants carrying at least one copy of HFE C282Y, HFE H63D, TF P570S, and ALAD K59N variant alleles was 2.4%, 17.7%, 20.1%, and 6.4%, respectively. Percentage carrying at least one copy of either C282Y or H63D allele in HFE gene was 19.6%. Geometric mean (geometric standard deviation) manganese concentrations were 17.0 (1.5) μg/l. Women with any HFE variant allele had 12% lower blood manganese concentrations than women with no variant alleles (β = -0.12 [95% CI = -0.23 to -0.01]). TF and ALAD variants were not significant predictors of blood manganese. In animal models, Hfe-/- mice displayed a significant reduction in blood manganese compared with Hfe+/+ mice, replicating the altered manganese metabolism found in our human research. Conclusions Our study suggests that genetic variants in iron metabolism genes may contribute to variability in manganese exposure by affecting manganese absorption, distribution, or excretion. Genetic background may be critical to consider in studies that rely on environmental manganese measurements. PMID:22074419

  4. Direct analysis of unphased SNP genotype data in population-based association studies via Bayesian partition modelling of haplotypes.

    PubMed

    Morris, Andrew P

    2005-09-01

    We describe a novel method for assessing the strength of disease association with single nucleotide polymorphisms (SNPs) in a candidate gene or small candidate region, and for estimating the corresponding haplotype relative risks of disease, using unphased genotype data directly. We begin by estimating the relative frequencies of haplotypes consistent with observed SNP genotypes. Under the Bayesian partition model, we specify cluster centres from this set of consistent SNP haplotypes. The remaining haplotypes are then assigned to the cluster with the "nearest" centre, where distance is defined in terms of SNP allele matches. Within a logistic regression modelling framework, each haplotype within a cluster is assigned the same disease risk, reducing the number of parameters required. Uncertainty in phase assignment is addressed by considering all possible haplotype configurations consistent with each unphased genotype, weighted in the logistic regression likelihood by their probabilities, calculated according to the estimated relative haplotype frequencies. We develop a Markov chain Monte Carlo algorithm to sample over the space of haplotype clusters and corresponding disease risks, allowing for covariates that might include environmental risk factors or polygenic effects. Application of the algorithm to SNP genotype data in an 890-kb region flanking the CYP2D6 gene illustrates that we can identify clusters of haplotypes with similar risk of poor drug metaboliser (PDM) phenotype, and can distinguish PDM cases carrying different high-risk variants. Further, the results of a detailed simulation study suggest that we can identify positive evidence of association for moderate relative disease risks with a sample of 1,000 cases and 1,000 controls.
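
    The cluster-assignment step described above, in which each haplotype joins the cluster with the "nearest" centre and distance is defined by SNP allele mismatches, can be sketched with Hamming distance. The haplotypes and centres below are invented, not taken from the CYP2D6 data:

```python
# Assign SNP haplotypes (strings of alleles) to the nearest cluster
# centre, where distance is the number of mismatching SNP alleles.
def hamming(h1, h2):
    return sum(a != b for a, b in zip(h1, h2))

def assign_to_clusters(haplotypes, centres):
    clusters = {c: [] for c in centres}
    for h in haplotypes:
        nearest = min(centres, key=lambda c: hamming(h, c))
        clusters[nearest].append(h)
    return clusters

centres = ["0000", "1111"]                    # hypothetical cluster centres
haplotypes = ["0001", "0111", "1101", "0010"]  # hypothetical SNP haplotypes
clusters = assign_to_clusters(haplotypes, centres)
```

In the full method, the choice of centres and the per-cluster disease risks are themselves sampled by MCMC; this sketch covers only the deterministic assignment given fixed centres.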

  5. Error threshold transition in the random-energy model

    NASA Astrophysics Data System (ADS)

    Campos, Paulo R.

    2002-12-01

    We perform a statistical analysis of the error threshold transition in quasispecies evolution on a random-energy fitness landscape. We obtain a precise description of the genealogical properties of the population through extensive numerical simulations. We find a clear phase transition and can distinguish two regimes of evolution: The first, for low mutation rates, is characterized by strong selection, and the second, for high mutation rates, is characterized by quasineutral evolution.

  6. Approaches in methodology for population-based longitudinal study on neuroprotective model for healthy longevity (TUA) among Malaysian Older Adults.

    PubMed

    Shahar, Suzana; Omar, Azahadi; Vanoh, Divya; Hamid, Tengku Aizan; Mukari, Siti Zamratol Mai-Sarah; Din, Normah Che; Rajab, Nor Fadilah; Mohammed, Zainora; Ibrahim, Rahimah; Loo, Won Hui; Meramat, Asheila; Kamaruddin, Mohd Zul Amin; Bagat, Mohamad Fazdillah; Razali, Rosdinom

    2016-12-01

    A number of longitudinal studies on aging have been designed to determine the predictors of healthy longevity, including the neuroprotective factors; however, relatively few studies have included a wide range of factors and highlighted the challenges faced during data collection. Thus, the longitudinal study on neuroprotective model for healthy longevity (LRGS TUA) has been designed to prospectively investigate the magnitude of cognitive decline and its risk factors through a comprehensive multidimensional assessment comprising biophysical health, auditory and visual function, nutrition and dietary pattern, and psychosocial aspects. At baseline, subjects were interviewed to assess their sociodemographic status, health, neuropsychological performance, psychosocial status and dietary intake. Anthropometry, physical function and fitness were also measured. Biospecimens including blood, buccal swab, hair and toenail were collected, processed and stored. A subsample was assessed for sensory function, i.e., vision and auditory. During follow-up, at 18 and 36 months, most of the measurements, along with morbidity and mortality outcomes, will be collected. The description of mild cognitive impairment, successful aging and the usual aging process is presented here. A total of 2,322 respondents were included in the data analysis at baseline. Most of the respondents were categorized as experiencing usual aging (73 %), followed by successful aging (11 %) and mild cognitive impairment (16 %). The LRGS TUA study is the most comprehensive longitudinal study on aging in Malaysia, and will contribute to the understanding of the aging process and factors associated with healthy aging and mental well-being of a multiethnic population in Malaysia.

  7. Budget Impact Analysis of Switching to Digital Mammography in a Population-Based Breast Cancer Screening Program: A Discrete Event Simulation Model

    PubMed Central

    Comas, Mercè; Arrospide, Arantzazu; Mar, Javier; Sala, Maria; Vilaprinyó, Ester; Hernández, Cristina; Cots, Francesc; Martínez, Juan; Castells, Xavier

    2014-01-01

    Objective To assess the budgetary impact of switching from screen-film mammography to full-field digital mammography in a population-based breast cancer screening program. Methods A discrete-event simulation model was built to reproduce the breast cancer screening process (biennial mammographic screening of women aged 50 to 69 years) combined with the natural history of breast cancer. The simulation started with 100,000 women and, during a 20-year simulation horizon, new women were dynamically entered according to the aging of the Spanish population. Data on screening were obtained from Spanish breast cancer screening programs. Data on the natural history of breast cancer were based on US data adapted to our population. A budget impact analysis comparing digital with screen-film screening mammography was performed in a sample of 2,000 simulation runs. A sensitivity analysis was performed for crucial screening-related parameters. Distinct scenarios for recall and detection rates were compared. Results Statistically significant savings were found for overall costs, treatment costs and the costs of additional tests in the long term. The overall cost saving was 1,115,857€ (95%CI from 932,147 to 1,299,567) in the 10th year and 2,866,124€ (95%CI from 2,492,610 to 3,239,638) in the 20th year, representing 4.5% and 8.1% of the overall cost associated with screen-film mammography. The sensitivity analysis showed net savings in the long term. Conclusions Switching to digital mammography in a population-based breast cancer screening program saves long-term budget expense, in addition to providing technical advantages. Our results were consistent across distinct scenarios representing the different results obtained in European breast cancer screening programs. PMID:24832200

  8. A Threshold Model of Social Support, Adjustment, and Distress after Breast Cancer Treatment

    ERIC Educational Resources Information Center

    Mallinckrodt, Brent; Armer, Jane M.; Heppner, P. Paul

    2012-01-01

    This study examined a threshold model that proposes that social support exhibits a curvilinear association with adjustment and distress, such that support in excess of a critical threshold level has decreasing incremental benefits. Women diagnosed with a first occurrence of breast cancer (N = 154) completed survey measures of perceived support…

  9. Modelling the regulatory system for diabetes mellitus with a threshold window

    NASA Astrophysics Data System (ADS)

    Yang, Jin; Tang, Sanyi; Cheke, Robert A.

    2015-05-01

    Piecewise (or non-smooth) glucose-insulin models with threshold windows for type 1 and type 2 diabetes mellitus are proposed and analyzed with a view to improving understanding of the glucose-insulin regulatory system. For glucose-insulin models with a single threshold, the existence and stability of regular, virtual, pseudo-equilibria and tangent points are addressed. Then the relations between regular equilibria and a pseudo-equilibrium are studied. Furthermore, the sufficient and necessary conditions for the global stability of regular equilibria and the pseudo-equilibrium are provided by using qualitative analysis techniques of non-smooth Filippov dynamic systems. Sliding bifurcations related to boundary node bifurcations were investigated with theoretical and numerical techniques, and insulin clinical therapies are discussed. For glucose-insulin models with a threshold window, the effects of glucose thresholds or the widths of threshold windows on the durations of insulin therapy and glucose infusion were addressed. The duration of the effects of an insulin injection is sensitive to the variation of thresholds. Our results indicate that blood glucose level can be maintained within a normal range using piecewise glucose-insulin models with a single threshold or a threshold window. Moreover, our findings suggest that it is critical to individualise insulin therapy for each patient separately, based on initial blood glucose levels.
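
    The single-threshold switching described above can be illustrated with a toy piecewise system in which insulin infusion turns on only while glucose exceeds a threshold. This is a simple Euler sketch of the qualitative Filippov-type mechanism, not the authors' model; the linear dynamics and all parameter values are invented:

```python
# Toy piecewise glucose-insulin sketch: insulin is delivered at rate u
# only while glucose G exceeds a threshold G_T.  Trajectories settle
# near the threshold (a sliding/pseudo-equilibrium regime), mimicking
# glucose being held in a target range.
def simulate(G0, I0, G_T, dt=0.01, steps=5000):
    G, I = G0, I0
    a, b, c, u = 0.05, 0.5, 0.2, 1.0  # glucose decay, insulin efficacy,
                                      # insulin clearance, infusion rate
    for _ in range(steps):
        dose = u if G > G_T else 0.0  # the threshold switch
        dG = -a * G - b * I + 2.0     # 2.0 = constant glucose input
        dI = dose - c * I
        G += dt * dG
        I += dt * dI
    return G, I

G_end, I_end = simulate(G0=10.0, I0=0.0, G_T=6.0)
```

With these invented parameters the trajectory is attracted toward the threshold G_T, which is the qualitative behaviour the sliding-mode analysis above makes rigorous.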

  10. Reversed thresholds in partial credit models: a reason for collapsing categories?

    PubMed

    Wetzel, Eunike; Carstensen, Claus H

    2014-12-01

    When questionnaire data with an ordered polytomous response format are analyzed in the framework of item response theory using the partial credit model or the generalized partial credit model, reversed thresholds may occur. This led to the discussion of whether reversed thresholds violate model assumptions and indicate disordering of the response categories. Adams, Wu, and Wilson showed that reversed thresholds are merely a consequence of low frequencies in the categories concerned and that they do not affect the order of the rating scale. This article applies an empirical approach to elucidate the topic of reversed thresholds using data from the Revised NEO Personality Inventory as well as a simulation study. It is shown that categories differentiate between participants with different trait levels despite reversed thresholds and that category disordering can be analyzed independently of the ordering of the thresholds. Furthermore, we show that reversed thresholds often only occur in subgroups of participants. Thus, researchers should think more carefully about collapsing categories due to reversed thresholds.
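
    The central claim, that reversed thresholds need not disorder the response categories, can be checked numerically from the partial credit model's category probabilities. The sketch below uses an invented three-category item with reversed thresholds and confirms that the middle category is never the most probable one, while the probabilities remain a well-defined ordered model:

```python
# Partial credit model category probabilities for one item:
# P(X = k | theta) is proportional to exp(sum_{j<=k} (theta - delta_j)).
import math

def pcm_probs(theta, deltas):
    logits = [0.0]
    for d in deltas:
        logits.append(logits[-1] + (theta - d))
    z = sum(math.exp(l) for l in logits)
    return [math.exp(l) / z for l in logits]

deltas = [1.0, -1.0]  # reversed thresholds: the first lies above the second
# Find the modal category across a grid of trait levels theta in [-4, 4].
modal = [max(range(3), key=lambda k: pcm_probs(t / 10.0, deltas)[k])
         for t in range(-40, 41)]
middle_ever_modal = 1 in modal
```

The middle category is never modal here, yet for any theta it still has positive probability and still discriminates between trait levels, which is the point Adams, Wu, and Wilson make about low category frequencies.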

  11. A phenomenological model on the kink mode threshold varying with the inclination of sheath boundary

    SciTech Connect

    Sun, X.; Intrator, T. P.; Sears, J.; Weber, T.; Liu, M.

    2013-11-15

    In nature and many laboratory plasmas, a magnetic flux tube threaded by current or a flux rope has a footpoint at a boundary. The current-driven kink mode is one of the fundamental ideal magnetohydrodynamic instabilities in plasmas. It has an instability threshold that has been found to strongly depend on boundary conditions (BCs). We provide a theoretical model to explain the transition of this threshold dependence between non-line-tied (NLT) and line-tied (LT) boundary conditions. We evaluate model parameters using experimentally measured plasma data, explicitly verify several kink eigenfunctions, and validate the model predictions for BCs that span the range between NLT and LT. Based on this model, one could estimate the kink threshold given knowledge of the displacement of a flux rope end, or conversely estimate flux rope end motion based on knowledge of its kink stability threshold.

  12. Two-threshold model for scaling laws of noninteracting snow avalanches

    USGS Publications Warehouse

    Faillettaz, J.; Louchet, F.; Grasso, J.-R.

    2004-01-01

    A two-threshold model was proposed for scaling laws of noninteracting snow avalanches. It was found that the sizes of the largest avalanches just preceding a system-spanning avalanche were power-law distributed. The proposed model reproduced the range of power-law exponents observed for land, rock or snow avalanches by tuning the maximum value of the ratio of the two failure thresholds. A two-threshold 2D cellular automaton was introduced to study the scaling of gravity-driven systems.
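
    A generic sandpile-style sketch of a threshold cellular automaton conveys the flavour of such models; it is not the authors' exact rule set. Each site has a local failure threshold drawn between two bounds whose ratio plays the role of the tunable threshold ratio mentioned above, a toppling site passes its load to its lattice neighbours, and open boundaries dissipate:

```python
# Sandpile-style automaton with heterogeneous failure thresholds.
# An avalanche is the number of topplings triggered by one added grain.
import random

def avalanche_sizes(n=16, t_min=4.0, t_max=6.0, grains=2000, seed=1):
    rng = random.Random(seed)
    load = [[0.0] * n for _ in range(n)]
    thresh = [[rng.uniform(t_min, t_max) for _ in range(n)] for _ in range(n)]
    sizes = []
    for _ in range(grains):
        i, j = rng.randrange(n), rng.randrange(n)
        load[i][j] += 1.0                       # drive: add one grain
        size = 0
        unstable = [(i, j)]
        while unstable:
            x, y = unstable.pop()
            if load[x][y] <= thresh[x][y]:
                continue
            size += 1
            share = load[x][y] / 4.0
            load[x][y] = 0.0
            thresh[x][y] = rng.uniform(t_min, t_max)  # renew local strength
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                u, v = x + dx, y + dy
                if 0 <= u < n and 0 <= v < n:   # open boundaries dissipate
                    load[u][v] += share
                    unstable.append((u, v))
        sizes.append(size)
    return sizes

sizes = avalanche_sizes()
```

Tallying `sizes` into a histogram is the standard way to inspect the avalanche-size distribution for power-law behaviour.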

  13. The threshold of a stochastic delayed SIR epidemic model with vaccination

    NASA Astrophysics Data System (ADS)

    Liu, Qun; Jiang, Daqing

    2016-11-01

    In this paper, we study the threshold dynamics of a stochastic delayed SIR epidemic model with vaccination. We obtain sufficient conditions for extinction and persistence in the mean of the epidemic. The threshold between persistence in the mean and extinction of the stochastic system is also obtained. Compared with the corresponding deterministic model, the threshold affected by the white noise is smaller than the basic reproduction number Rbar0 of the deterministic system. Results show that time delay has important effects on the persistence and extinction of the epidemic.
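
    An Euler-Maruyama sketch of a stochastic SIR model (without the delay or vaccination terms, so simpler than the paper's system) illustrates how white noise enters the transmission term and perturbs the deterministic threshold behaviour; all parameter values are invented:

```python
# Euler-Maruyama for dS = -(beta*S*I) dt - sigma*S*I dB,
#                    dI =  (beta*S*I) dt + sigma*S*I dB - gamma*I dt,
# with population fractions S + I + R = 1 (R tracked implicitly).
import random

def sir_em(beta=0.5, gamma=0.2, sigma=0.3, S0=0.99, I0=0.01,
           dt=0.01, steps=10000, seed=42):
    rng = random.Random(seed)
    S, I = S0, I0
    for _ in range(steps):
        dB = rng.gauss(0.0, dt ** 0.5)          # Brownian increment
        flow = beta * S * I * dt + sigma * S * I * dB
        S, I = S - flow, I + flow - gamma * I * dt
        S, I = max(S, 0.0), max(I, 0.0)         # clip numerical undershoot
    return S, I

S_end, I_end = sir_em()
R0 = 0.5 / 0.2  # basic reproduction number of the deterministic skeleton
```

The result quoted above, that the stochastic threshold lies below the deterministic basic reproduction number, means noise can drive the epidemic extinct even when the deterministic model predicts persistence.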

  14. Modeling spatially-varying landscape change points in species occurrence thresholds

    USGS Publications Warehouse

    Wagner, Tyler; Midway, Stephen R.

    2014-01-01

    Predicting species distributions at scales of regions to continents is often necessary, as large-scale phenomena influence the distributions of spatially structured populations. Land use and land cover are important large-scale drivers of species distributions, and landscapes are known to create species occurrence thresholds, where small changes in a landscape characteristic results in abrupt changes in occurrence. The value of the landscape characteristic at which this change occurs is referred to as a change point. We present a hierarchical Bayesian threshold model (HBTM) that allows for estimating spatially varying parameters, including change points. Our model also allows for modeling estimated parameters in an effort to understand large-scale drivers of variability in land use and land cover on species occurrence thresholds. We use range-wide detection/nondetection data for the eastern brook trout (Salvelinus fontinalis), a stream-dwelling salmonid, to illustrate our HBTM for estimating and modeling spatially varying threshold parameters in species occurrence. We parameterized the model for investigating thresholds in landscape predictor variables that are measured as proportions, and which are therefore restricted to values between 0 and 1. Our HBTM estimated spatially varying thresholds in brook trout occurrence for both the proportion agricultural and urban land uses. There was relatively little spatial variation in change point estimates, although there was spatial variability in the overall shape of the threshold response and associated uncertainty. In addition, regional mean stream water temperature was correlated to the change point parameters for the proportion of urban land use, with the change point value increasing with increasing mean stream water temperature. We present a framework for quantify macrosystem variability in spatially varying threshold model parameters in relation to important large-scale drivers such as land use and land cover

  15. The threshold of a stochastic delayed SIR epidemic model with temporary immunity

    NASA Astrophysics Data System (ADS)

    Liu, Qun; Chen, Qingmei; Jiang, Daqing

    2016-05-01

    This paper is concerned with the asymptotic properties of a stochastic delayed SIR epidemic model with temporary immunity. Sufficient conditions for extinction and persistence in the mean of the epidemic are established. The threshold between persistence in the mean and extinction of the epidemic is obtained. Compared with the corresponding deterministic model, the threshold affected by the white noise is smaller than the basic reproduction number R0 of the deterministic system.

  16. The Rasch Rating Model and the Disordered Threshold Controversy

    ERIC Educational Resources Information Center

    Adams, Raymond J.; Wu, Margaret L.; Wilson, Mark

    2012-01-01

    The Rasch rating (or partial credit) model is a widely applied item response model that is used to model ordinal observed variables that are assumed to collectively reflect a common latent variable. In the application of the model there is considerable controversy surrounding the assessment of fit. This controversy is most notable when the set of…

  17. The relationship between the Five-Factor Model personality traits and peptic ulcer disease in a large population-based adult sample.

    PubMed

    Realo, Anu; Teras, Andero; Kööts-Ausmees, Liisi; Esko, Tõnu; Metspalu, Andres; Allik, Jüri

    2015-12-01

    The current study examined the relationship between the Five-Factor Model personality traits and physician-confirmed peptic ulcer disease (PUD) diagnosis in a large population-based adult sample, controlling for the relevant behavioral and sociodemographic factors. Personality traits were assessed by participants themselves and by knowledgeable informants using the NEO Personality Inventory-3 (NEO PI-3). When controlling for age, sex, education, and cigarette smoking, only one of the five NEO PI-3 domain scales - higher Neuroticism - and two facet scales - lower A1: Trust and higher C1: Competence - made a small, yet significant contribution (p < 0.01) to predicting PUD in logistic regression analyses. In the light of these relatively modest associations, our findings imply that it is certain behavior (such as smoking) and sociodemographic variables (such as age, gender, and education) rather than personality traits that are associated with the diagnosis of PUD at a particular point in time. Further prospective studies with a longitudinal design and multiple assessments would be needed to fully understand if the FFM personality traits serve as risk factors for the development of PUD.

  18. Natural History of Dependency in the Elderly: A 24-Year Population-Based Study Using a Longitudinal Item Response Theory Model.

    PubMed

    Edjolo, Arlette; Proust-Lima, Cécile; Delva, Fleur; Dartigues, Jean-François; Pérès, Karine

    2016-02-15

    We aimed to describe the hierarchical structure of Instrumental Activities of Daily Living (IADL) and basic Activities of Daily Living (ADL) and trajectories of dependency before death in an elderly population using item response theory methodology. Data were obtained from a population-based French cohort study, the Personnes Agées QUID (PAQUID) Study, of persons aged ≥65 years at baseline in 1988 who were recruited from 75 randomly selected areas in Gironde and Dordogne. We evaluated IADL and ADL data collected at home every 2-3 years over a 24-year period (1988-2012) for 3,238 deceased participants (43.9% men). We used a longitudinal item response theory model to investigate the item sequence of 11 IADL and ADL combined into a single scale and functional trajectories adjusted for education, sex, and age at death. The findings confirmed the earliest losses in IADL (shopping, transporting, finances) at the partial limitation level, and then an overlapping of concomitant IADL and ADL, with bathing and dressing being the earliest ADL losses, and finally total losses for toileting, continence, eating, and transferring. Functional trajectories were sex-specific, with a benefit of high education that persisted until death in men but was only transient in women. An in-depth understanding of this sequence provides an early warning of functional decline for better adaptation of medical and social care in the elderly.

  19. Threshold voltage roll-off modelling of bilayer graphene field-effect transistors

    NASA Astrophysics Data System (ADS)

    Saeidmanesh, M.; Ismail, Razali; Khaledian, M.; Karimi, H.; Akbari, E.

    2013-12-01

    An analytical model is presented for threshold voltage roll-off of double gate bilayer graphene field-effect transistors. To this end, threshold voltage models of short- and long-channel states have been developed. In the short-channel case, front and back gate potential distributions have been modelled and used. In addition, the tunnelling probability is modelled and its effect is taken into consideration in the potential distribution model. To evaluate the accuracy of the potential model, FlexPDE software is employed with proper boundary conditions and a good agreement is observed. Using the proposed models, the effect of several structural parameters on the threshold voltage and its roll-off are studied at room temperature.

  20. Product versus additive threshold models for analysis of reproduction outcomes in animal genetics.

    PubMed

    David, I; Bodin, L; Gianola, D; Legarra, A; Manfredi, E; Robert-Granié, C

    2009-08-01

    The phenotypic observation of some reproduction traits (e.g., insemination success, interval from lambing to insemination) is the result of environmental and genetic factors acting on 2 individuals: the male and female involved in a mating couple. In animal genetics, the main approach (called the additive model) proposed for studying such traits assumes that the phenotype is linked to a purely additive combination, either on the observed scale for continuous traits or on some underlying scale for discrete traits, of environmental and genetic effects affecting the 2 individuals. Statistical models proposed for studying human fecundability generally consider reproduction outcomes as the product of hypothetical unobservable variables. Taking inspiration from these works, we propose a model (the product threshold model) for studying a binary reproduction trait that supposes that the observed phenotype is the product of 2 unobserved phenotypes, 1 for each individual. We developed a Gibbs sampling algorithm for fitting a Bayesian product threshold model including additive genetic effects and showed by simulation that it is feasible and that it provides good estimates of the parameters. We showed that fitting an additive threshold model to data that are simulated under a product threshold model provides biased estimates, especially for individuals with high breeding values. A main advantage of the product threshold model is that, in contrast to the additive model, it provides distinct estimates of fixed effects affecting each of the 2 unobserved phenotypes.
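
    The distinction between the two models can be sketched by simulation: under the product threshold model the observed binary outcome is 1 only when both individuals' latent liabilities clear their own thresholds, whereas the additive model thresholds a single combined liability. The liability distributions and threshold below are invented for illustration:

```python
# Compare success rates under a product threshold model (both mates'
# latent liabilities must exceed 0) and an additive threshold model
# (the summed liability must exceed 0).
import random

def simulate(n=100000, seed=7):
    rng = random.Random(seed)
    prod_success = add_success = 0
    for _ in range(n):
        lm = rng.gauss(0.5, 1.0)   # male latent liability (invented scale)
        lf = rng.gauss(0.5, 1.0)   # female latent liability
        prod_success += (lm > 0.0) and (lf > 0.0)  # both must "pass"
        add_success += (lm + lf > 0.0)             # single combined threshold
    return prod_success / n, add_success / n

p_prod, p_add = simulate()
```

With the same liabilities, the product model yields fewer successes than the additive model, since one "failing" individual suffices to block the outcome; this is the structural difference that biases additive-model estimates when the product model generates the data.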

  1. The Threshold Bias Model: A Mathematical Model for the Nomothetic Approach of Suicide

    PubMed Central

    Folly, Walter Sydney Dutra

    2011-01-01

    Background Comparative and predictive analyses of suicide data from different countries are difficult to perform due to varying approaches and the lack of comparative parameters. Methodology/Principal Findings A simple model (the Threshold Bias Model) was tested for comparative and predictive analyses of suicide rates by age. The model comprises a six-parameter distribution that was applied to the USA suicide rates by age for the years 2001 and 2002. Linear extrapolations of the parameter values obtained for these years were then performed to estimate the values corresponding to the year 2003. The calculated distributions agreed reasonably well with the aggregate data. The model was also used to determine the age above which suicide rates become statistically observable in the USA, Brazil and Sri Lanka. Conclusions/Significance The Threshold Bias Model has considerable potential applications in demographic studies of suicide. Moreover, since the model can be used to predict the evolution of suicide rates based on information extracted from past data, it will be of great interest to suicidologists and other researchers in the field of mental health. PMID:21909431

  2. Investigating the genetic architecture of conditional strategies using the environmental threshold model

    PubMed Central

    Hazel, Wade N.; Tomkins, Joseph L.

    2015-01-01

    The threshold expression of dichotomous phenotypes that are environmentally cued or induced comprise the vast majority of phenotypic dimorphisms in colour, morphology, behaviour and life history. Modelled as conditional strategies under the framework of evolutionary game theory, the quantitative genetic basis of these traits is a challenge to estimate. The challenge exists firstly because the phenotypic expression of the trait is dichotomous and secondly because the apparent environmental cue is separate from the biological signal pathway that induces the switch between phenotypes. It is the cryptic variation underlying the translation of cue to phenotype that we address here. With a ‘half-sib common environment’ and a ‘family-level split environment’ experiment, we examine the environmental and genetic influences that underlie male dimorphism in the earwig Forficula auricularia. From the conceptual framework of the latent environmental threshold (LET) model, we use pedigree information to dissect the genetic architecture of the threshold expression of forceps length. We investigate for the first time the strength of the correlation between observable and cryptic ‘proximate’ cues. Furthermore, in support of the environmental threshold model, we found no evidence for a genetic correlation between cue and the threshold between phenotypes. Our results show strong correlations between observable and proximate cues and less genetic variation for thresholds than previous studies have suggested. We discuss the importance of generating better estimates of the genetic variation for thresholds when investigating the genetic architecture and heritability of threshold traits. By investigating genetic architecture by means of the LET model, our study supports several key evolutionary ideas related to conditional strategies and improves our understanding of environmentally cued decisions. PMID:26674955

  3. Investigating the genetic architecture of conditional strategies using the environmental threshold model.

    PubMed

    Buzatto, Bruno A; Buoro, Mathieu; Hazel, Wade N; Tomkins, Joseph L

    2015-12-22

    The threshold expression of dichotomous phenotypes that are environmentally cued or induced comprise the vast majority of phenotypic dimorphisms in colour, morphology, behaviour and life history. Modelled as conditional strategies under the framework of evolutionary game theory, the quantitative genetic basis of these traits is a challenge to estimate. The challenge exists firstly because the phenotypic expression of the trait is dichotomous and secondly because the apparent environmental cue is separate from the biological signal pathway that induces the switch between phenotypes. It is the cryptic variation underlying the translation of cue to phenotype that we address here. With a 'half-sib common environment' and a 'family-level split environment' experiment, we examine the environmental and genetic influences that underlie male dimorphism in the earwig Forficula auricularia. From the conceptual framework of the latent environmental threshold (LET) model, we use pedigree information to dissect the genetic architecture of the threshold expression of forceps length. We investigate for the first time the strength of the correlation between observable and cryptic 'proximate' cues. Furthermore, in support of the environmental threshold model, we found no evidence for a genetic correlation between cue and the threshold between phenotypes. Our results show strong correlations between observable and proximate cues and less genetic variation for thresholds than previous studies have suggested. We discuss the importance of generating better estimates of the genetic variation for thresholds when investigating the genetic architecture and heritability of threshold traits. By investigating genetic architecture by means of the LET model, our study supports several key evolutionary ideas related to conditional strategies and improves our understanding of environmentally cued decisions.

  4. A physics-based model of threshold voltage for amorphous oxide semiconductor thin-film transistors

    NASA Astrophysics Data System (ADS)

    Chen, Chi-Le; Chen, Wei-Feng; Zhou, Lei; Wu, Wei-Jing; Xu, Miao; Wang, Lei; Peng, Jun-Biao

    2016-03-01

    Using the Lambert W function, the surface potential of amorphous oxide semiconductor thin-film transistors (AOS TFTs) in the subthreshold region is approximated by an asymptotic equation that considers only the tail states, while the surface potential in the above-threshold region is approximated by another asymptotic equation that considers only the free carriers. The intersection point between these two asymptotic equations represents the transition from weak accumulation to strong accumulation. Therefore, the gate voltage corresponding to the intersection point is defined as the threshold voltage of AOS TFTs. As a result, an analytical expression for the threshold voltage is derived from this novel definition. It is shown that the threshold voltage obtained from the proposed physics-based model agrees well with that extracted by the conventional linear extrapolation method. Furthermore, we find that the free charge per unit area in the channel starts increasing sharply from the threshold voltage point, where the concentration of the free carriers is a little larger than that of the localized carriers. The proposed model for the threshold voltage of AOS TFTs is not only physically meaningful but also mathematically convenient, so it is expected to be useful for characterizing and modeling AOS TFTs.
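
    The definition above, threshold voltage as the gate voltage where two asymptotic surface-potential branches intersect, reduces numerically to root-finding on the difference of the two asymptotes. The branch expressions below are illustrative linear stand-ins, not the paper's equations:

```python
# Find the gate voltage where two asymptotic branches intersect by
# bisection on their difference (a sign change brackets the crossing).
def bisect_root(g, lo, hi):
    glo = g(lo)
    for _ in range(100):               # interval halves each iteration
        mid = 0.5 * (lo + hi)
        gm = g(mid)
        if glo * gm <= 0:
            hi = mid
        else:
            lo, glo = mid, gm
    return 0.5 * (lo + hi)

f_sub = lambda v: 0.8 * v          # weak-accumulation asymptote (illustrative)
f_ab = lambda v: 2.0 * v - 1.2     # strong-accumulation asymptote (illustrative)
v_th = bisect_root(lambda v: f_sub(v) - f_ab(v), 0.0, 2.0)
```

With these stand-in branches the crossing, and hence the defined threshold voltage, lies at 1.0 V; the paper's closed-form expression plays the role of this numerical root.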

  5. Evaluation of the FRAX model for hip fracture predictions in the population-based Kuopio Osteoporosis Risk Factor and Prevention Study (OSTPRE).

    PubMed

    Sund, Reijo; Honkanen, Risto; Johansson, Helena; Odén, Anders; McCloskey, Eugene; Kanis, John; Kröger, Heikki

    2014-07-01

    Calibration of the Finnish FRAX model was evaluated using a locally derived population-based cohort of postmenopausal women (n = 13,917). Hip fractures were observed from national register-based data and verified from radiological records. For a subpopulation of 11,182 women, there were enough data to calculate the fracture probabilities using the Finnish FRAX tool (without bone mineral density). The 10-year period prevalence of hip fractures in this subpopulation was 0.66 %. The expected numbers of hip fractures were significantly higher than the self-reported ones (O/E ratio 0.46; 95 % CI 0.33-0.63), had a tendency to be greater than the observed ones (O/E ratio 0.83; 95 % CI 0.65-1.04), and calibration in terms of goodness-of-fit of absolute probabilities was questionable (P = 0.015). Strikingly, the 10-year period prevalence of hip fractures in the whole cohort was higher (0.84 %) than in the women with FRAX measurements (0.66 %). This was mainly the result of the difference between people who had and had not responded to postal enquiries (0.71 vs. 1.77 %, P < 0.0001). Self-reports failed to capture 38 % of all hip fractures in those who responded and about 45 % of hip fractures in women who had a FRAX estimate. The Finnish FRAX tool seems to provide appropriate discrimination for hip fracture risk, but caution is required in the interpretation of absolute risk, especially if used for a population that may not represent the general population. Our study also showed that non-respondents had significantly higher hip fracture risk and that the use of purely self-reported hip fractures in calculations results in biased incidence and period prevalence estimates. Such important biases may remain unnoticed if there are no data from other sources available.

  6. Medication Adherence Patterns after Hospitalization for Coronary Heart Disease. A Population-Based Study Using Electronic Records and Group-Based Trajectory Models

    PubMed Central

    Librero, Julián; Sanfélix-Gimeno, Gabriel; Peiró, Salvador

    2016-01-01

    Objective To identify adherence patterns over time and their predictors for evidence-based medications used after hospitalization for coronary heart disease (CHD). Patients and Methods We built a population-based retrospective cohort of all patients discharged after hospitalization for CHD from public hospitals in the Valencia region (Spain) during 2008 (n = 7462). From this initial cohort, we created 4 subcohorts with at least one prescription (filled or not) from each therapeutic group (antiplatelet, beta-blockers, ACEI/ARB, statins) within the first 3 months after discharge. Monthly adherence was defined as having ≥24 days covered out of 30, leading to a repeated binary outcome measure. We assessed the membership to trajectory groups of adherence using group-based trajectory models. We also analyzed predictors of the different adherence patterns using multinomial logistic regression. Results We identified a maximum of 5 different adherence patterns: 1) Nearly-always adherent patients; 2) An early gap in adherence with a later recovery; 3) Brief gaps in medication use or occasional users; 4) A slow decline in adherence; and 5) A fast decline. These patterns represented variable proportions of patients, the descending trajectories being more frequent for the beta-blocker and ACEI/ARB cohorts (16% and 17%, respectively) than the antiplatelet and statin cohorts (10% and 8%, respectively). Predictors of poor or intermediate adherence patterns were having a main diagnosis of unstable angina or other forms of CHD vs. AMI in the index hospitalization, being born outside Spain, requiring copayment or being older. Conclusion Distinct adherence patterns over time and their predictors were identified. This may be a useful approach for targeting improvement interventions in patients with poor adherence patterns. PMID:27551748
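
    The outcome construction described above, where a month counts as adherent when at least 24 of its 30 days are covered by dispensed medication, can be sketched directly. The monthly coverage numbers are invented and happen to trace the "early gap with later recovery" pattern:

```python
# Turn monthly days-covered counts into the repeated binary adherence
# outcome used as input to group-based trajectory models.
def adherence_trajectory(days_covered_per_month, threshold=24):
    return [1 if d >= threshold else 0 for d in days_covered_per_month]

# Hypothetical 12 months of coverage from linked dispensing records.
months = [30, 30, 25, 20, 10, 0, 0, 28, 30, 30, 30, 30]
traj = adherence_trajectory(months)
```

The trajectory model then clusters patients by the shape of such binary sequences over time, rather than by a single summary adherence percentage.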

  7. Postscript: Parallel Distributed Processing in Localist Models without Thresholds

    ERIC Educational Resources Information Center

    Plaut, David C.; McClelland, James L.

    2010-01-01

The current authors reply to a response by Bowers on a comment by the current authors on the original article. Bowers (2010) mischaracterizes the goals of parallel distributed processing (PDP) research--explaining performance on cognitive tasks is the primary motivation. More important, his claim that localist models, such as the interactive…

  8. Does Imaging Technology Cause Cancer? Debunking the Linear No-Threshold Model of Radiation Carcinogenesis.

    PubMed

    Siegel, Jeffry A; Welsh, James S

    2016-04-01

    In the past several years, there has been a great deal of attention from the popular media focusing on the alleged carcinogenicity of low-dose radiation exposures received by patients undergoing medical imaging studies such as X-rays, computed tomography scans, and nuclear medicine scintigraphy. The media has based its reporting on the plethora of articles published in the scientific literature that claim that there is "no safe dose" of ionizing radiation, while essentially ignoring all the literature demonstrating the opposite point of view. But this reported "scientific" literature in turn bases its estimates of cancer induction on the linear no-threshold hypothesis of radiation carcinogenesis. The use of the linear no-threshold model has yielded hundreds of articles, all of which predict a definite carcinogenic effect of any dose of radiation, regardless of how small. Therefore, hospitals and professional societies have begun campaigns and policies aiming to reduce the use of certain medical imaging studies based on perceived risk:benefit ratio assumptions. However, as they are essentially all based on the linear no-threshold model of radiation carcinogenesis, the risk:benefit ratio models used to calculate the hazards of radiological imaging studies may be grossly inaccurate if the linear no-threshold hypothesis is wrong. Here, we review the myriad inadequacies of the linear no-threshold model and cast doubt on the various studies based on this overly simplistic model.

  9. A profile-aware resist model with variable threshold

    NASA Astrophysics Data System (ADS)

    Moulis, Sylvain; Farys, Vincent; Belledent, Jérôme; Thérèse, Romain; Lan, Song; Zhao, Qian; Feng, Mu; Depre, Laurent; Dover, Russell

    2012-11-01

The pursuit of ever-smaller transistors has pushed technological innovations in the field of lithography. In order to continue following the path of Moore's law, several solutions have been proposed: EUV, e-beam and double-patterning lithography. As EUV and e-beam lithography are still not ready for mass production at the 20 nm and 14 nm nodes, double-patterning lithography plays an important role for these nodes. In this work, we focus on a Self-Aligned Double-Patterning (SADP) process, which consists of depositing a spacer material on each side of a mandrel exposed during a first lithography step, dividing the pitch in two once transferred into the substrate, and then cutting the unwanted patterns through a second lithography exposure. In the specific case where spacers are deposited directly on the flanks of the resist, it is crucial to control the resist profile, as deviations could induce final CD errors or even spacer collapse. One way to prevent these defects is to predict the resist profile at the OPC verification stage. For that, we need an empirical resist model that is able to predict such behaviour. This work is a study of a profile-aware resist model that is calibrated using both atomic force microscopy (AFM) and scanning electron microscopy (SEM) data, both taken using a focus and exposure matrix (FEM).

  10. A threshold-based weather model for predicting stripe rust infection in winter wheat

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Wheat stripe rust (WSR) (caused by Puccinia striiformis sp. tritici) is a major threat in most wheat growing regions worldwide, with potential to inflict regular yield losses when environmental conditions are favorable. We propose a threshold-based disease-forecasting model using a stepwise modeling...

  11. Effects of pump recycling technique on stimulated Brillouin scattering threshold: a theoretical model.

    PubMed

    Al-Asadi, H A; Al-Mansoori, M H; Ajiya, M; Hitam, S; Saripan, M I; Mahdi, M A

    2010-10-11

We develop a theoretical model that can be used to predict the stimulated Brillouin scattering (SBS) threshold in optical fibers under the effect of the Brillouin pump recycling technique. Simulation results obtained from our model are in close agreement with our experimental results. The developed model utilizes single-mode optical fiber of different lengths as the Brillouin gain medium. For a 5-km-long single-mode fiber, the calculated threshold power for SBS is about 16 mW with the conventional technique. This value is reduced to about 8 mW when the residual Brillouin pump is recycled at the end of the fiber. The reduction of the SBS threshold is due to the longer interaction length between the Brillouin pump and the Stokes wave.
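
Threshold powers of this order can be sanity-checked against the classical Smith estimate P_th ≈ 21·K·A_eff/(g_B·L_eff), which is a standard textbook approximation and not the authors' model; all fiber parameters below are assumed typical single-mode values, not values from the paper:

```python
import math

def sbs_threshold_w(a_eff_m2, g_b, length_m, alpha_db_per_km, k=1.0):
    """Classical Smith estimate of the SBS threshold: 21*K*A_eff/(g_B*L_eff)."""
    alpha = alpha_db_per_km * math.log(10) / 10.0 / 1000.0  # dB/km -> 1/m
    l_eff = (1.0 - math.exp(-alpha * length_m)) / alpha     # effective length
    return 21.0 * k * a_eff_m2 / (g_b * l_eff)

# Assumed typical SMF values: A_eff = 80 um^2, g_B = 5e-11 m/W, 0.2 dB/km loss.
p_th = sbs_threshold_w(a_eff_m2=80e-12, g_b=5e-11, length_m=5000,
                       alpha_db_per_km=0.2)
print(f"{p_th * 1e3:.1f} mW")
```

With these assumed parameters the estimate lands in the same few-mW to tens-of-mW range as the values quoted in the abstract; longer fibers (larger L_eff) lower the threshold.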

  12. Study on the threshold of a stochastic SIR epidemic model and its extensions

    NASA Astrophysics Data System (ADS)

    Zhao, Dianli

    2016-09-01

This paper provides a simple but effective method for estimating the threshold of a class of stochastic epidemic models by use of the nonnegative semimartingale convergence theorem. Firstly, the threshold R0SIR is obtained for the stochastic SIR model with a saturated incidence rate; whether its value is below or above 1 completely determines whether the disease goes extinct or prevails, for any size of the white noise. Besides, when R0SIR > 1, the system is proved to be convergent in time mean. Then, the thresholds of the stochastic SIVS models with or without a saturated incidence rate are also established by the same method. Compared with the previously known literature, the related results are improved, and the method is simpler than before.
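
The extinction-versus-persistence dichotomy can be illustrated with a crude Euler-Maruyama simulation. The parameterization below (noise entering through the saturated incidence term, and every numeric value) is our assumption for illustration, not the paper's exact system or its threshold formula:

```python
import numpy as np

def simulate_sir(beta, sigma, lam=1.0, mu=0.1, gamma=0.2, a=0.5,
                 T=500.0, dt=0.01, seed=0):
    """Euler-Maruyama for a stochastic SIR with saturated incidence
    beta*S*I/(1 + a*I); the noise perturbs the transmission term
    (our modelling assumption). Returns the final infected level."""
    rng = np.random.default_rng(seed)
    S, I = lam / mu, 0.1  # start at disease-free S with a small seed infection
    for _ in range(int(T / dt)):
        inc = S * I / (1.0 + a * I)
        dW = rng.normal(0.0, np.sqrt(dt))
        S += (lam - beta * inc - mu * S) * dt - sigma * inc * dW
        I += (beta * inc - (mu + gamma) * I) * dt + sigma * inc * dW
        S, I = max(S, 0.0), max(I, 0.0)
    return I

# Illustrative only: weak transmission dies out, strong transmission persists.
print(simulate_sir(beta=0.01, sigma=0.05) < 1e-3)
print(simulate_sir(beta=0.10, sigma=0.05) > 1e-3)
```

Here beta=0.01 gives a sub-threshold deterministic reproduction number and the epidemic goes extinct, while beta=0.10 puts it above threshold and the infection settles near an endemic level.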

  13. Modeling Soil Quality Thresholds to Ecosystem Recovery at Fort Benning, Georgia, USA

    SciTech Connect

    Garten Jr., C.T.

    2004-03-08

The objective of this research was to use a simple model of soil C and N dynamics to predict nutrient thresholds to ecosystem recovery on degraded soils at Fort Benning, Georgia, in the southeastern USA. The model calculates aboveground and belowground biomass, soil C inputs and dynamics, soil N stocks and availability, and plant N requirements. A threshold is crossed when predicted soil N supplies fall short of predicted N required to sustain biomass accrual at a specified recovery rate. Four factors were important to development of thresholds to recovery: (1) initial amounts of aboveground biomass, (2) initial soil C stocks (i.e., soil quality), (3) relative recovery rates of biomass, and (4) soil sand content. Thresholds to ecosystem recovery predicted by the model should not be interpreted independently of a specified recovery rate. Initial soil C stocks influenced the predicted patterns of recovery by both old field and forest ecosystems. Forests and old fields on soils with varying sand content had different predicted thresholds to recovery. Soil C stocks at barren sites on Fort Benning generally lie below predicted thresholds to 100% recovery of desired future ecosystem conditions defined on the basis of aboveground biomass (18000 versus 360 g m⁻² for forests and old fields, respectively). Calculations with the model indicated that reestablishment of vegetation on barren sites to a level below the desired future condition is possible at recovery rates used in the model, but the time to 100% recovery of desired future conditions, without crossing a nutrient threshold, is prolonged by a reduced rate of forest growth. Predicted thresholds to ecosystem recovery were less on soils with more than 70% sand content. The lower thresholds for old field and forest recovery on more sandy soils are apparently due to higher relative rates of net soil N mineralization in more sandy soils. Calculations with the model indicate that a combination of desired future

  14. Modeling of ablation threshold dependence on pulse duration for dielectrics with ultrashort pulsed laser

    NASA Astrophysics Data System (ADS)

    Sun, Mingying; Zhu, Jianqiang; Lin, Zunqi

    2017-01-01

We present a numerical model of plasma formation in ultrafast laser ablation of dielectric surfaces. The dependence of the ablation threshold on pulse duration is predicted with the model, and the numerical results for water agree well with the experimental data for pulse durations from 140 fs to 10 ps. The influences of the parameters and approximations of photo- and avalanche ionization on the ablation threshold prediction are analyzed in detail for various pulse lengths. The calculated ablation threshold is strongly dependent on the electron collision time for all pulse durations. The complete photoionization model is preferred over the multiphoton ionization approximations for pulses shorter than 1 ps. The transition time of inverse bremsstrahlung absorption needs to be considered when pulses are shorter than 5 ps, and it also keeps the avalanche ionization (AI) coefficient consistent with that in multiple rate equations (MREs) for pulses shorter than 300 fs. The threshold electron density for AI is only crucial for longer pulses. It is reasonable to ignore the recombination loss for pulses shorter than 100 fs. In addition to thermal transport and hydrodynamics, neglecting the threshold density for AI and recombination could also contribute to the disagreements between the numerical and experimental results for longer pulses.

  15. Cost-Effectiveness of Orthogeriatric and Fracture Liaison Service Models of Care for Hip Fracture Patients: A Population-Based Study.

    PubMed

    Leal, Jose; Gray, Alastair M; Hawley, Samuel; Prieto-Alhambra, Daniel; Delmestri, Antonella; Arden, Nigel K; Cooper, Cyrus; Javaid, M Kassim; Judge, Andrew

    2017-02-01

Fracture liaison services are recommended as a model of best practice for organizing patient care and secondary fracture prevention for hip fracture patients, although variation exists in how such services are structured. There is considerable uncertainty as to which model is most cost-effective and should therefore be mandated. This study evaluated the cost-effectiveness of orthogeriatric (OG)- and nurse-led fracture liaison service (FLS) models of post-hip fracture care compared with usual care. Analyses were conducted from a health care and personal social services payer perspective, using a Markov model to estimate the lifetime impact of the models of care. The base-case population consisted of men and women aged 83 years with a hip fracture. The risk and costs of hip and non-hip fractures were derived from large primary and hospital care data sets in the UK. Utilities were informed by a meta-regression of 32 studies. In the base-case analysis, the orthogeriatric-led service was the most effective and cost-effective model of care at a threshold of £30,000 per quality-adjusted life year (QALY) gained. For women aged 83 years, the OG-led service was the most cost-effective at £22,709/QALY. If only health care costs are considered, the OG-led service was cost-effective at £12,860/QALY and £14,525/QALY for women and men aged 83 years, respectively. Irrespective of how patients were stratified in terms of their age, sex, and Charlson comorbidity score at index hip fracture, our results suggest that introducing an orthogeriatrician-led or a nurse-led FLS is cost-effective when compared with usual care. Although considerable uncertainty remains concerning which of the models of care should be preferred, introducing an orthogeriatrician-led service seems to be the most cost-effective service to pursue. © 2016 American Society for Bone and Mineral Research.

  16. Effect of otologic drill noise on ABR thresholds in a guinea pig model.

    PubMed

    Suits, G W; Brummett, R E; Nunley, J

    1993-10-01

    The noise generated by the otologic drill has been implicated as a cause of sensorineural hearing loss after ear surgery. However, clinical studies on this subject are contradictory and difficult to interpret. Therefore a guinea pig model was used to study whether the level of noise generated by the otologic drill can cause threshold shifts in the auditory brainstem response (ABR). The source noise was a recording obtained during a human cadaver mastoidectomy using a microphone and an accelerometer. Ten female Topeka-strain guinea pigs were exposed to the recorded drill noise for a period of 55 minutes. Exposure included both air-conducted energy from a speaker and bone-conducted energy from a bone vibrator applied directly to the skull. ABR threshold measurements were taken pre-exposure (baseline), immediately after exposure, and at weekly intervals thereafter for 3 weeks. Three control animals were subjected to the same procedure without the sound exposure. A significant threshold shift (p < 0.0001) was seen for each frequency tested (2, 4, 8, 16, 20, and 32 kHz) immediately after exposure to noise in all experimental animals. Thresholds returned to baseline within 3 weeks. We conclude that the level of noise generated by the otologic drill in mastoid surgery can cause a temporary threshold shift in this guinea pig model.

  17. Thresholds in vegetation responses to drought: Implications for rainfall-runoff modeling

    NASA Astrophysics Data System (ADS)

    Tague, C.; Dugger, A. L.

    2011-12-01

While threshold behavior is often associated with soil and subsurface runoff generation, dynamic vegetation responses to water stress may be an important contributor to threshold type behavior in rainfall runoff models. Vegetation water loss varies with vegetation type and biomass, and transpiration dynamics in many settings are regulated by stomatal function. In water limited environments the timing and frequency of stomatal closure vary from year to year as a function of water stress. Stomatal closure and associated fine time scale (hourly to weekly) plant transpiration may appear as threshold (on/off) behavior. Total seasonal to annual plant water use, however, typically shows a continuous relationship with atmospheric conditions and soil moisture. Thus while short-time scale behavior may demonstrate non-linear, threshold type behavior, continuous relationships at slightly longer time scales can be used to capture the role of vegetation mediated water loss and its associated impact on storage and runoff. Many rainfall runoff models rely on these types of relationships. However these relationships may change if water stress influences vegetation structure as it does in drought conditions. Forest dieback under drought is a dramatic example of a threshold event, and one that is expected to occur with increasing frequency under a warmer climate. Less dramatic but still important are changes in leaf and root biomass in response to drought. We demonstrate these effects using a coupled ecosystem carbon cycling and hydrology model and show that by accounting for drought driven changes in vegetation dynamics we improve our ability to capture inter-annual variation in streamflow for a semi-arid watershed in New Mexico. We also use the model to predict spatial patterns of more catastrophic vegetation dieback with moisture stress and show that we can accurately capture the spatial pattern of ponderosa pine dieback during an early 2000s drought in New Mexico. We use these

  18. Analysis and modeling of zero-threshold voltage native devices with industry standard BSIM6 model

    NASA Astrophysics Data System (ADS)

    Gupta, Chetan; Agarwal, Harshit; Lin, Y. K.; Ito, Akira; Hu, Chenming; Singh Chauhan, Yogesh

    2017-04-01

In this paper, we present the modeling of zero-threshold voltage (V TH) bulk MOSFETs, also called native devices, using the enhanced BSIM6 model. Devices under study show abnormally high leakage current in weak inversion, leading to a degraded subthreshold slope. The reasons for this abnormal behavior are identified using technology computer-aided design (TCAD) simulations. Since the zero-V TH transistors have quite low doping, the depletion layer from the drain may extend up to the source (at some non-zero value of V DS), which leads to the punch-through phenomenon. This source–drain leakage current adds to the main channel current, causing the unexpected current characteristics in these devices. TCAD simulations show that, as we increase the channel length (L eff) and channel doping (N SUB), the source–drain leakage due to punch-through decreases. We propose a model to capture the source–drain leakage in these devices. The model incorporates gate, drain and body bias dependence, as well as channel-length and channel-doping dependence. The proposed model is validated with the measured data of a production-level device over various bias conditions and channel lengths.

  19. The threshold of a stochastic SIVS epidemic model with nonlinear saturated incidence

    NASA Astrophysics Data System (ADS)

    Zhao, Dianli; Zhang, Tiansi; Yuan, Sanling

    2016-02-01

    A stochastic version of the SIS epidemic model with vaccination (SIVS) is studied. When the noise is small, the threshold parameter is identified, which determines the extinction and persistence of the epidemic. Besides, the results show that large noise will suppress the epidemic from prevailing regardless of the saturated incidence. The results are illustrated by computer simulations.

  20. The relation between a microscopic threshold-force model and macroscopic models of adhesion

    NASA Astrophysics Data System (ADS)

    Hulikal, Srivatsan; Bhattacharya, Kaushik; Lapusta, Nadia

    2017-01-01

    This paper continues our recent work on the relationship between discrete contact interactions at the microscopic scale and continuum contact interactions at the macroscopic scale (Hulikal et al., J. Mech. Phys. Solids 76, 144-161, 2015). The focus of this work is on adhesion. We show that a collection of a large number of discrete elements governed by a threshold-force based model at the microscopic scale collectively gives rise to continuum fracture mechanics at the macroscopic scale. A key step is the introduction of an efficient numerical method that enables the computation of a large number of discrete contacts. Finally, while this work focuses on scaling laws, the methodology introduced in this paper can also be used to study rough-surface adhesion.

  1. Determination of validation threshold for coordinate measuring methods using a metrological compatibility model

    NASA Astrophysics Data System (ADS)

    Gromczak, Kamila; Gąska, Adam; Kowalski, Marek; Ostrowska, Ksenia; Sładek, Jerzy; Gruza, Maciej; Gąska, Piotr

    2017-01-01

    The following paper presents a practical approach to the validation process of coordinate measuring methods at an accredited laboratory, using a statistical model of metrological compatibility. The statistical analysis of measurement results obtained using a highly accurate system was intended to determine the permissible validation threshold values. The threshold value constitutes the primary criterion for the acceptance or rejection of the validated method, and depends on both the differences between measurement results with corresponding uncertainties and the individual correlation coefficient. The article specifies and explains the types of measuring methods that were subject to validation and defines the criterion value governing their acceptance or rejection in the validation process.

  2. Threshold Values for Identification of Contamination Predicted by Reduced-Order Models

    DOE PAGES

    Last, George V.; Murray, Christopher J.; Bott, Yi-Ju; ...

    2014-12-31

    The U.S. Department of Energy’s (DOE’s) National Risk Assessment Partnership (NRAP) Project is developing reduced-order models to evaluate potential impacts on underground sources of drinking water (USDWs) if CO2 or brine leaks from deep CO2 storage reservoirs. Threshold values, below which there would be no predicted impacts, were determined for portions of two aquifer systems. These threshold values were calculated using an interwell approach for determining background groundwater concentrations that is an adaptation of methods described in the U.S. Environmental Protection Agency’s Unified Guidance for Statistical Analysis of Groundwater Monitoring Data at RCRA Facilities.

  3. Local Bifurcations and Optimal Theory in a Delayed Predator-Prey Model with Threshold Prey Harvesting

    NASA Astrophysics Data System (ADS)

    Tankam, Israel; Tchinda Mouofo, Plaire; Mendy, Abdoulaye; Lam, Mountaga; Tewa, Jean Jules; Bowong, Samuel

    2015-06-01

We investigate the effects of time delay and piecewise-linear threshold policy harvesting in a delayed predator-prey model. This is the first time that a Holling type III response function and the present threshold policy harvesting have been combined with time delay. The trajectories of our delayed system are bounded; the stability of each equilibrium is analyzed with and without delay; local bifurcations such as saddle-node and Hopf bifurcations occur; and optimal harvesting is also investigated. Numerical simulations are provided in order to illustrate each result.

  4. Using a combined population-based and kinetic modelling approach to assess timescales and durations of magma migration activities prior to the 1669 flank eruption of Mt. Etna

    NASA Astrophysics Data System (ADS)

    Kahl, M.; Morgan, D. J.; Viccaro, M.; Dingwell, D. B.

    2015-12-01

The March-July eruption of Mt. Etna in 1669 is ranked as one of the most destructive and voluminous eruptions of Etna volcano in historical times. To assess threats from future eruptions, a better understanding of how and over what timescales magma moved underground prior to and during the 1669 eruption is required. We present a combined population-based and kinetic modelling approach [1-2] applied to 185 olivine crystals that erupted during the 1669 eruption. By means of this approach we provide, for the first time, a dynamic picture of magma mixing and magma migration activity prior to and during the 1669 flank eruption of Etna volcano. Following the work of [3] we have studied 10 basaltic lava samples (five SET1 and five SET2 samples) that were erupted from different fissures that opened between 950 and 700 m a.s.l. Following previous work [1-2] we were able to classify different populations of olivine based on their overall core and rim compositional record and the prevalent zoning type (i.e. normal vs. reverse). The core plateau compositions of the SET1 and SET2 olivines range from Fo70 up to Fo83 with a single peak at Fo75-76. The rims differ significantly and can be distinguished into two different groups. Olivine rims from the SET1 samples are generally more evolved and range from Fo50 to Fo64 with a maximum at Fo55-57. SET2 olivine rims vary between Fo65-75 with a peak at Fo69. SET1 and SET2 olivines display normal zonation with cores at Fo75-76 and diverging rim records (Fo55-57 and Fo65-75). The diverging core and rim compositions recorded in the SET1 and SET2 olivines can be attributed to magma evolution possibly in three different magmatic environments (MEs): M1 (=Fo75-76), M2 (=Fo69) and M3 (=Fo55-57) with magma transfer and mixing amongst them. The MEs established in this study differ slightly from those identified in previous works [1-2]. We note the relative lack of olivines with Fo-rich core and rim compositions indicating a major mafic magma

  5. Predicting the epidemic threshold of the susceptible-infected-recovered model

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Liu, Quan-Hui; Zhong, Lin-Feng; Tang, Ming; Gao, Hui; Stanley, H. Eugene

    2016-04-01

    Researchers have developed several theoretical methods for predicting epidemic thresholds, including the mean-field like (MFL) method, the quenched mean-field (QMF) method, and the dynamical message passing (DMP) method. When these methods are applied to predict epidemic threshold they often produce differing results and their relative levels of accuracy are still unknown. We systematically analyze these two issues—relationships among differing results and levels of accuracy—by studying the susceptible-infected-recovered (SIR) model on uncorrelated configuration networks and a group of 56 real-world networks. In uncorrelated configuration networks the MFL and DMP methods yield identical predictions that are larger and more accurate than the prediction generated by the QMF method. As for the 56 real-world networks, the epidemic threshold obtained by the DMP method is more likely to reach the accurate epidemic threshold because it incorporates full network topology information and some dynamical correlations. We find that in most of the networks with positive degree-degree correlations, an eigenvector localized on the high k-core nodes, or a high level of clustering, the epidemic threshold predicted by the MFL method, which uses the degree distribution as the only input information, performs better than the other two methods.
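
The QMF and MFL predictions compared in this record have standard closed forms in the literature (QMF: the inverse of the adjacency matrix's largest eigenvalue; heterogeneous mean-field for SIR: ⟨k⟩/(⟨k²⟩−⟨k⟩)). The sketch below computes both on a toy network using those textbook formulas, not the authors' code:

```python
import numpy as np

def qmf_threshold(adj):
    """QMF prediction: lambda_c = 1 / largest eigenvalue of the adjacency matrix."""
    return 1.0 / np.max(np.linalg.eigvalsh(adj))

def mfl_threshold(degrees):
    """Heterogeneous mean-field (MFL) prediction for SIR:
    lambda_c = <k> / (<k^2> - <k>)."""
    k = np.asarray(degrees, float)
    return k.mean() / ((k ** 2).mean() - k.mean())

# Toy example: a 5-node ring, where every node has degree 2.
n = 5
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0
print(qmf_threshold(adj))                 # largest eigenvalue is 2 -> 0.5
print(mfl_threshold(adj.sum(axis=0)))     # <k>=2, <k^2>=4 -> 1.0
```

On this homogeneous toy graph the two predictions already differ; on heterogeneous real-world networks the gap, and which method is more accurate, depends on the structure as the abstract discusses.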

  6. Video object segmentation via adaptive threshold based on background model diversity

    NASA Astrophysics Data System (ADS)

    Boubekeur, Mohamed Bachir; Luo, SenLin; Labidi, Hocine; Benlefki, Tarek

    2015-03-01

Background subtraction can be framed as a classification process applied to the upcoming frames of a video stream, taking into consideration temporal information in some cases, spatial consistency in others, and, in recent years, both. The classification has in most cases relied on a fixed threshold value. In this paper, a framework for background subtraction and moving-object detection based on an adaptive threshold measure and a short/long frame-differencing procedure is proposed. The presented framework explores the case of an adaptive threshold using mean squared differences over a sampled background model. In addition, an intuitive update policy that is neither conservative nor blind is presented. The algorithm succeeds in extracting the moving foreground and isolating an accurate background.
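
A minimal sketch of a per-pixel threshold that adapts to the background model's own variability; the specific rule below (k times the per-pixel mean squared difference, with a fixed floor) and all names are our illustration of the general idea, not the paper's algorithm:

```python
import numpy as np

def foreground_mask(frame, bg_samples, k=2.5, t_min=10.0):
    """Per-pixel adaptive threshold: a pixel is foreground when its squared
    difference from the background mean exceeds k times the model's own
    per-pixel mean squared difference (a proxy for background diversity)."""
    bg = np.asarray(bg_samples, float)      # shape: (n_samples, H, W)
    mean = bg.mean(axis=0)
    msd = ((bg - mean) ** 2).mean(axis=0)   # per-pixel mean squared difference
    thresh = np.maximum(k * msd, t_min)     # never drop below a floor
    return (frame - mean) ** 2 > thresh

# Toy 2x2 example: a noisy static background, then one pixel jumps by 50 levels.
rng = np.random.default_rng(0)
samples = 100.0 + rng.normal(0.0, 1.0, size=(20, 2, 2))
frame = samples.mean(axis=0)
frame[0, 0] += 50.0
mask = foreground_mask(frame, samples)
print(mask)
```

Pixels in low-variance background regions get a tight threshold, while flickering regions (water, foliage) automatically get a looser one.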

  7. Modeling jointly low, moderate, and heavy rainfall intensities without a threshold selection

    NASA Astrophysics Data System (ADS)

    Naveau, Philippe; Huser, Raphael; Ribereau, Pierre; Hannart, Alexis

    2016-04-01

In statistics, extreme events are often defined as excesses above a given large threshold. This definition allows hydrologists and flood planners to apply Extreme-Value Theory (EVT) to their time series of interest. Even in the stationary univariate context, this approach has at least two main drawbacks. First, working with excesses implies that many observations (those below the chosen threshold) are completely disregarded. The range of precipitation is artificially chopped into two pieces, namely large intensities and the rest, which necessarily imposes different statistical models for each piece. Second, this strategy raises a nontrivial and very practical difficulty: how to choose the optimal threshold that correctly discriminates between low and heavy rainfall intensities. To address these issues, we propose a statistical model in which EVT results apply not only to heavy, but also to low precipitation amounts (zeros excluded). Our model is in compliance with EVT on both ends of the spectrum and allows a smooth transition between the two tails, while keeping a low number of parameters. In terms of inference, we have implemented and tested two classical methods of estimation: likelihood maximization and probability weighted moments. Last but not least, there is no need to choose a threshold to define low and high excesses. The performance and flexibility of this approach are illustrated on simulated data and on hourly precipitation recorded in Lyon, France.
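
One way to build such a threshold-free model is to compose a generalized Pareto upper tail with a power-law lower tail. The sketch below assumes the simple form F(x) = H(x/σ)^κ, with H a unit-scale GPD CDF; it should be read as an illustration of the idea under that assumed form, not as the authors' exact model:

```python
import numpy as np

def egpd_cdf(x, sigma, xi, kappa):
    """Extended-GPD-style CDF: F(x) = H(x/sigma)**kappa, H a unit-scale GPD CDF
    (xi != 0 assumed). The lower tail behaves like x**kappa, the upper tail
    keeps the GPD shape xi, and no threshold selection is needed."""
    z = np.asarray(x, float) / sigma
    h = 1.0 - (1.0 + xi * z) ** (-1.0 / xi)  # GPD CDF on unit scale
    return h ** kappa

# kappa = 1 recovers the plain GPD; here we evaluate it at its own median.
sigma, xi = 10.0, 0.2
median = sigma * (2.0 ** xi - 1.0) / xi  # GPD quantile at p = 0.5
print(float(egpd_cdf(median, sigma, xi, kappa=1.0)))  # -> 0.5
```

Varying κ reshapes only the behaviour of small intensities, so a single parametric family covers low, moderate and heavy rainfall at once.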

  8. Detecting Departure From Additivity Along a Fixed-Ratio Mixture Ray With a Piecewise Model for Dose and Interaction Thresholds

    PubMed Central

    Gennings, Chris; Wagner, Elizabeth D.; Simmons, Jane Ellen; Plewa, Michael J.

    2010-01-01

    For mixtures of many chemicals, a ray design based on a relevant, fixed mixing ratio is useful for detecting departure from additivity. Methods for detecting departure involve modeling the response as a function of total dose along the ray. For mixtures with many components, the interaction may be dose dependent. Therefore, we have developed the use of a three-segment model containing both a dose threshold and an interaction threshold. Prior to the dose threshold, the response is that of background; between the dose threshold and the interaction threshold, an additive relationship exists; the model allows for departure from additivity beyond the interaction threshold. With such a model, we can conduct a hypothesis test of additivity, as well as a test for a region of additivity. The methods are illustrated with cytotoxicity data that arise when Chinese hamster ovary cells are exposed to a mixture of nine haloacetic acids. PMID:21359103
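
The three-segment structure described above (background below the dose threshold, an additive segment up to the interaction threshold, then a possible departure) can be sketched as a continuous piecewise-linear mean function; the parameter names and values below are hypothetical:

```python
def piecewise_response(dose, d0, di, b0, b1, b2):
    """Three-segment mean response along the mixture ray (our sketch):
    background b0 below the dose threshold d0; an additive linear segment
    with slope b1 between d0 and the interaction threshold di; slope b2
    (departure from additivity when b2 != b1) beyond di. Continuity holds
    at both thresholds by construction."""
    if dose <= d0:
        return b0
    if dose <= di:
        return b0 + b1 * (dose - d0)
    return b0 + b1 * (di - d0) + b2 * (dose - di)

# Hypothetical parameters: testing b2 == b1 corresponds to a test of
# additivity beyond the interaction threshold.
print(piecewise_response(5.0, d0=1.0, di=3.0, b0=10.0, b1=2.0, b2=-1.0))  # -> 12.0
```

In the fitted model the hypothesis test of additivity compares the slope beyond the interaction threshold with the additive slope, mirroring this parameterization.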

  9. Mutation-selection dynamics and error threshold in an evolutionary model for Turing machines.

    PubMed

    Musso, Fabio; Feverati, Giovanni

    2012-01-01

We investigate the mutation-selection dynamics for an evolutionary computation model based on Turing machines. The use of Turing machines allows for very simple mechanisms of code growth and code activation/inactivation through point mutations. To any value of the point mutation probability corresponds a maximum amount of active code that can be maintained by selection, and the Turing machines that reach it are said to be at the error threshold. Simulations with our model show that the Turing machine population evolves toward the error threshold. Mathematical descriptions of the model point out that this behaviour is due more to the mutation-selection dynamics than to the intrinsic nature of the Turing machines. This indicates that the result is much more general than the model considered here and could also play a role in biological evolution.

  10. Threshold Graph Limits and Random Threshold Graphs

    PubMed Central

    Diaconis, Persi; Holmes, Susan; Janson, Svante

    2010-01-01

    We study the limit theory of large threshold graphs and apply this to a variety of models for random threshold graphs. The results give a nice set of examples for the emerging theory of graph limits. PMID:20811581

  11. On the thresholds in modeling of high flows via artificial neural networks - A bootstrapping analysis

    NASA Astrophysics Data System (ADS)

    Panagoulia, D.; Trichakis, I.

    2012-04-01

Considering the growing interest in simulating hydrological phenomena with artificial neural networks (ANNs), it is useful to establish the potential and limits of these models. In this study, the main objective is to examine how to improve the ability of an ANN model to simulate extreme flow values by utilizing a priori knowledge of threshold values. A three-layer feedforward ANN was trained using the backpropagation algorithm with the logistic function as activation function. Using the thresholds, the flow was partitioned into low (x < μ), medium (μ ≤ x ≤ μ + 2σ) and high (x > μ + 2σ) values. The ANN model was trained both on the high-flow partition and on all the flow data. The developed methodology was implemented over a mountainous river catchment (the Mesochora catchment in northwestern Greece). The ANN model received as inputs pseudo-precipitation (rain plus melt) and previously observed flow data. After training was completed, the bootstrapping methodology was applied to calculate the ANN confidence intervals (CIs) for a 95% nominal coverage. The calculated CIs included only the uncertainty that comes from the calibration procedure. The results showed that an ANN model trained specifically for high flows, with a priori knowledge of the thresholds, can simulate these extreme values much better (RMSE is 31.4% less) than an ANN model trained with all data of the available time series and using a posteriori threshold values. On the other hand, the width of the CIs increases by 54.9%, with a simultaneous increase of 64.4% in the actual coverage for the high flows (a priori partition). The narrower CIs of the high flows trained with all data may be attributed to the smoothing effect produced by the use of the full data sets. Overall, the results suggest that an ANN model trained with a priori knowledge of the threshold values has an increased ability in simulating extreme values compared with an ANN model trained with all the data and a posteriori
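
The μ / μ + 2σ partition used in this record is straightforward to reproduce; a small sketch with synthetic, flow-like data (the lognormal series is ours, not the Mesochora record):

```python
import numpy as np

def partition_flows(q):
    """Split a flow series into the abstract's three classes:
    low (x < mu), medium (mu <= x <= mu + 2*sigma), high (x > mu + 2*sigma)."""
    q = np.asarray(q, float)
    mu, sigma = q.mean(), q.std()
    hi = q > mu + 2.0 * sigma
    lo = q < mu
    med = ~hi & ~lo
    return lo, med, hi

# Synthetic skewed series standing in for daily flows.
rng = np.random.default_rng(1)
q = rng.lognormal(mean=2.0, sigma=0.8, size=1000)
lo, med, hi = partition_flows(q)
print(lo.sum() + med.sum() + hi.sum() == q.size)  # classes cover every value
```

A model trained only on the `hi` subset sees far fewer examples, which is consistent with the wider confidence intervals the study reports for the a priori high-flow network.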

  12. Threshold Dynamics in Stochastic SIRS Epidemic Models with Nonlinear Incidence and Vaccination.

    PubMed

    Wang, Lei; Teng, Zhidong; Tang, Tingting; Li, Zhiming

    2017-01-01

    In this paper, the dynamical behaviors for a stochastic SIRS epidemic model with nonlinear incidence and vaccination are investigated. In the models, the disease transmission coefficient and the removal rates are all affected by noise. Some new basic properties of the models are found. Applying these properties, we establish a series of new threshold conditions on the stochastically exponential extinction, stochastic persistence, and permanence in the mean of the disease with probability one for the models. Furthermore, we obtain a sufficient condition on the existence of unique stationary distribution for the model. Finally, a series of numerical examples are introduced to illustrate our main theoretical results and some conjectures are further proposed.
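
A minimal deterministic analogue of the SIRS-with-vaccination dynamics discussed in this abstract can be sketched with Euler steps; the noise terms of the stochastic model are omitted, and all parameter names and values below are illustrative assumptions.

```python
# Simplified, noise-free SIRS model with vaccination of susceptibles.
import math

def simulate_sirs(beta, gamma, delta, p, days, dt=0.01):
    """beta: transmission rate, gamma: recovery rate, delta: immunity waning
    rate, p: vaccination rate. Returns the infected fraction at t = days."""
    S, I, R = 0.99, 0.01, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * S * I
        dS = -new_inf - p * S + delta * R
        dI = new_inf - gamma * I
        dR = gamma * I + p * S - delta * R
        S += dS * dt
        I += dI * dt
        R += dR * dt
    return I

# With beta well below gamma (sub-threshold regime) the infection dies out.
print(simulate_sirs(beta=0.2, gamma=0.5, delta=0.05, p=0.1, days=200))
```

The threshold behavior the abstract establishes rigorously for the stochastic system shows up here as extinction versus persistence of `I` depending on the parameter regime.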

  13. A preliminary threshold model of parasitism in the Cockle Cerastoderma edule using delayed exchange of stability

    NASA Astrophysics Data System (ADS)

    O'Grady, E. A.; Culloty, S. C.; Kelly, T. C.; O'Callaghan, M. J. A.; Rachinskii, D.

    2015-02-01

    Thresholds occur, and play an important role, in the dynamics of many biological communities. In this paper, we model a persistence type threshold which has been shown experimentally to exist in hyperparasitised flukes in the cockle, a shellfish. Our model consists of a periodically driven slow-fast host-parasite system of equations for a slow flukes population (host) and a fast Unikaryon hyperparasite population (parasite). The model exhibits two branches of the critical curve crossing in a transcritical bifurcation scenario. We discuss two thresholds due to immediate and delayed exchange of stability effects; and we derive algebraic relationships for parameters of the periodic solution in the limit of the infinite ratio of the time scales. Flukes, which are the host species in our model, parasitise cockles and in turn are hyperparasitised by the microsporidian Unikaryon legeri; the life cycle of flukes includes several life stages and a number of different hosts. That is, the flukes-hyperparasite system in a cockle is, naturally, part of a larger estuarine ecosystem of interacting species involving parasites, shellfish and birds which prey on shellfish. A population dynamics model which accounts for one system of such multi-species interactions and includes the fluke-hyperparasite model in a cockle as a subsystem is presented. We provide evidence that the threshold effect we observed in the flukes-hyperparasite subsystem remains apparent in the multi-species system. Assuming that flukes damage cockles, and taking into account that the hyperparasite is detrimental to flukes, it is natural to suggest that the hyperparasitism may support the abundance of cockles and, thereby, the persistence of the estuarine ecosystem, including shellfish and birds. We confirm the possibility of the existence of this scenario in our model, at least partially, by removing the hyperparasite and demonstrating that this may result in a substantial drop in cockle numbers. The result

  14. Quasi-3D modeling of surface potential and threshold voltage of Triple Metal Quadruple Gate MOSFETs

    NASA Astrophysics Data System (ADS)

    Gupta, Santosh Kumar; Shah, Mihir Kumar P.

    2017-01-01

    In this paper we present an electrostatic model of the 3D Triple Metal Quadruple Gate (TMQG) MOSFET of rectangular cross-section based on a quasi-3D method. Analytical equations for the channel potential and the characteristic length are derived by decomposing the TMQG into two perpendicular 2D cross-sections (triple metal double gate, TMDG), and the effective characteristic length of the TMQG is found using the equivalent number of gates (ENG) method. For each TMDG cross-section, the 2D Poisson equation is solved with a parabolic potential approximation and appropriate boundary conditions to calculate the channel potential. The threshold voltage expression is developed using the inversion carrier charge sheet density method. The developed models for channel potential and threshold voltage are validated using numerical simulations of the TMQG. The developed model provides design guidelines for TMQG devices with improved HCEs and SCEs.

  15. Frequency analysis of tick quotes on foreign currency markets and the double-threshold agent model

    NASA Astrophysics Data System (ADS)

    Sato, Aki-Hiro

    2006-09-01

    Power spectral densities of the number of tick quotes per minute (market activity) on three currency markets (USD/JPY, EUR/USD, and JPY/EUR) are analyzed for the period from January 2000 to December 2000. We find peaks in the power spectral densities at periods of a few minutes. We develop a double-threshold agent model and confirm that the corresponding periodicity can be observed in the activity of this model even when the common periodic information perceived by market participants is weaker than their decision-making thresholds. The model is simulated numerically and investigated theoretically using a mean-field approximation. We propose the hypothesis that the periodicities found in the power spectral densities arise from the nonlinearity and diversity of market participants.
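
The kind of spectral peak described in this abstract can be illustrated with a plain FFT periodogram of a per-minute activity series. The synthetic series below (a noisy four-minute cycle) is an illustrative assumption, not market data.

```python
# Sketch: periodogram of a per-minute tick-count series with an injected
# few-minute periodicity, mimicking the market-activity analysis above.
import numpy as np

rng = np.random.default_rng(0)
n = 1024                                   # minutes of activity
t = np.arange(n)
activity = 10 + 3 * np.sin(2 * np.pi * t / 4) + rng.normal(0, 0.5, n)

x = activity - activity.mean()             # remove the DC component
power = np.abs(np.fft.rfft(x)) ** 2 / n    # one-sided periodogram
freqs = np.fft.rfftfreq(n, d=1.0)          # cycles per minute

peak_freq = freqs[np.argmax(power)]
print(f"dominant period: {1.0 / peak_freq:.1f} minutes")
```

The periodogram peak recovers the injected few-minute period, analogous to the peaks the authors report in real quote activity.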

  16. Threshold parameters for a model of epidemic spread among households and workplaces

    PubMed Central

    Pellis, L.; Ferguson, N. M.; Fraser, C.

    2009-01-01

    The basic reproduction number R0 is one of the most important concepts in modern infectious disease epidemiology. However, for more realistic and more complex models than those assuming homogeneous mixing in the population, other threshold quantities can be defined that are sometimes more useful and easily derived in terms of model parameters. In this paper, we present a model for the spread of a permanently immunizing infection in a population socially structured into households and workplaces/schools, and we propose and discuss a new household-to-household reproduction number RH for it. We show how RH overcomes some of the limitations of a previously proposed threshold parameter, and we highlight its relationship with the effort required to control an epidemic when interventions are targeted at randomly selected households. PMID:19324683

  17. Threshold voltage model of junctionless cylindrical surrounding gate MOSFETs including fringing field effects

    NASA Astrophysics Data System (ADS)

    Gupta, Santosh Kumar

    2015-12-01

    A 2D analytical model of the body center potential (BCP) in short-channel junctionless Cylindrical Surrounding Gate (JLCSG) MOSFETs is developed using evanescent mode analysis (EMA). The model also incorporates the gate-bias-dependent inner and outer fringing capacitances due to the gate-source/drain fringing fields. The developed model agrees well with simulation results across variations of the physical parameters of the JLCSG MOSFET, viz. gate length, channel radius, doping concentration, and oxide thickness. Using the BCP, an analytical model for the threshold voltage has been derived and validated against results obtained from a 3D device simulator.

  18. The Translation Invariant Massive Nelson Model: III. Asymptotic Completeness Below the Two-Boson Threshold

    NASA Astrophysics Data System (ADS)

    Dybalski, Wojciech; Møller, Jacob Schach

    2015-11-01

    We show asymptotic completeness of two-body scattering for a class of translation invariant models describing a single quantum particle (the electron) linearly coupled to a massive scalar field (bosons). Our proof is based on a recently established Mourre estimate for these models. In contrast to previous approaches, it requires no number cutoff, no restriction on the particle-field coupling strength, and no restriction on the magnitude of total momentum. Energy, however, is restricted by the two-boson threshold, admitting only scattering of a dressed electron and a single asymptotic boson. The class of models we consider includes the UV-cutoff Nelson and polaron models.

  19. Computational model of collective nest selection by ants with heterogeneous acceptance thresholds.

    PubMed

    Masuda, Naoki; O'shea-Wheller, Thomas A; Doran, Carolina; Franks, Nigel R

    2015-06-01

    Collective decision-making is a characteristic of societies ranging from ants to humans. The ant Temnothorax albipennis is known to use quorum sensing to collectively decide on a new home; emigration to a new nest site occurs when the number of ants favouring the new site becomes quorate. There are several possible mechanisms by which ant colonies can select the best nest site among alternatives based on a quorum mechanism. In this study, we use computational models to examine the implications of heterogeneous acceptance thresholds across individual ants in collective nest choice behaviour. We take a minimalist approach to develop a differential equation model and a corresponding non-spatial agent-based model. We show, consistent with existing empirical evidence, that heterogeneity in acceptance thresholds is a viable mechanism for efficient nest choice behaviour. In particular, we show that the proposed models exhibit speed-accuracy and speed-cohesion trade-offs as the number of scouts or the quorum threshold is varied.
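
The quorum mechanism with heterogeneous acceptance thresholds described in this abstract can be sketched in a few lines. The colony size, the evenly spread threshold values, and the quorum level below are illustrative assumptions, not the paper's parameters.

```python
# Non-spatial sketch: each scout accepts a site if its quality exceeds the
# scout's personal acceptance threshold; emigration begins once the number
# of acceptances reaches a quorum.
def quorum_reached(site_quality, thresholds, quorum):
    """True if enough scouts individually accept the site."""
    accepts = sum(1 for th in thresholds if site_quality > th)
    return accepts >= quorum

# 50 scouts with heterogeneous thresholds spread evenly over [0.2, 0.8]
thresholds = [0.2 + 0.6 * i / 49 for i in range(50)]

print(quorum_reached(0.9, thresholds, quorum=25))  # every scout accepts
print(quorum_reached(0.3, thresholds, quorum=25))  # too few scouts accept
```

Raising the quorum in this sketch slows acceptance of mediocre sites, which is the flavor of speed-accuracy trade-off the models in the abstract exhibit.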

  20. Global threshold dynamics of an SIVS model with waning vaccine-induced immunity and nonlinear incidence.

    PubMed

    Yang, Junyuan; Martcheva, Maia; Wang, Lin

    2015-10-01

    Vaccination is the most effective method of preventing the spread of infectious diseases. For many diseases, vaccine-induced immunity is not lifelong and the duration of immunity is not always fixed. In this paper, we propose an SIVS model taking the waning of vaccine-induced immunity and general nonlinear incidence into consideration. Our analysis shows that the model exhibits global threshold dynamics: if the basic reproduction number is less than 1, the disease-free equilibrium is globally asymptotically stable, implying that the disease dies out; if the basic reproduction number is larger than 1, the endemic equilibrium is globally asymptotically stable, indicating that the disease persists. This global threshold result indicates that if the vaccination coverage rate is below a critical value, the disease always persists, and the disease can be eradicated only if the vaccination coverage rate is above that critical value.

  1. Threshold for chaos and thermalization in the one-dimensional mean-field Bose-Hubbard model.

    PubMed

    Cassidy, Amy C; Mason, Douglas; Dunjko, Vanja; Olshanii, Maxim

    2009-01-16

    We study the threshold for chaos and its relation to thermalization in the 1D mean-field Bose-Hubbard model, which, in particular, describes atoms in optical lattices. We identify the threshold for chaos, which is finite in the thermodynamic limit, and show that it is indeed a precursor of thermalization. Far above the threshold, the state of the system after relaxation is governed by the usual laws of statistical mechanics.

  2. Genetic evaluation of calf and heifer survival in Iranian Holstein cattle using linear and threshold models.

    PubMed

    Forutan, M; Ansari Mahyari, S; Sargolzaei, M

    2015-02-01

    Calf and heifer survival are important traits in dairy cattle affecting profitability. This study was carried out to estimate genetic parameters of survival traits in female calves at different age periods up to approximately first calving. Records of 49,583 female calves born between 1998 and 2009 were considered in five age periods: days 1-30, 31-180, 181-365, 366-760 and the full period (days 1-760). Genetic components were estimated based on linear and threshold sire models and linear animal models. The models included both fixed effects (month of birth, dam's parity number, calving ease and twin/single status) and random effects (herd-year, genetic effect of sire or animal, and residual). Death rates were 2.21, 3.37, 1.97, 4.14 and 12.4% for the above periods, respectively. Heritability estimates were very low, ranging from 0.48 to 3.04% for the linear sire model, 0.62 to 3.51% for the animal model and 0.50 to 4.24% for the threshold sire model. Rank correlations between random sire effects obtained with the linear and threshold sire models, and with the linear animal and sire models, were 0.82-0.95 and 0.61-0.83, respectively. The estimated genetic correlations between the five periods were moderate and significant only for 31-180 and 181-365 (r(g) = 0.59), 31-180 and 366-760 (r(g) = 0.52), and 181-365 and 366-760 (r(g) = 0.42). The low genetic correlations in the current study suggest that survival at different periods may be affected by the same genes with different expression, or by different genes. Even though the additive genetic variation of survival traits was small, it might be possible to improve these traits by traditional or genomic selection.

  3. Threshold Models for Genome-Enabled Prediction of Ordinal Categorical Traits in Plant Breeding

    PubMed Central

    Montesinos-López, Osval A.; Montesinos-López, Abelardo; Pérez-Rodríguez, Paulino; de los Campos, Gustavo; Eskridge, Kent; Crossa, José

    2014-01-01

    Categorical scores for disease susceptibility or resistance often are recorded in plant breeding. The aim of this study was to introduce genomic models for analyzing ordinal characters and to assess the predictive ability of genomic predictions for ordered categorical phenotypes using a threshold model counterpart of the Genomic Best Linear Unbiased Predictor (i.e., TGBLUP). The threshold model was used to relate a hypothetical underlying scale to the outward categorical response. We present an empirical application where a total of nine models, five without interaction and four with genomic × environment interaction (G×E) and genomic additive × additive × environment interaction (G×G×E), were used. We assessed the proposed models using data consisting of 278 maize lines genotyped with 46,347 single-nucleotide polymorphisms and evaluated for disease resistance [with ordinal scores from 1 (no disease) to 5 (complete infection)] in three environments (Colombia, Zimbabwe, and Mexico). Models with G×E captured a sizeable proportion of the total variability, which indicates the importance of introducing interaction to improve prediction accuracy. Relative to models based on main effects only, the models that included G×E achieved 9–14% gains in prediction accuracy; adding additive × additive interactions did not increase prediction accuracy consistently across locations. PMID:25538102

  4. Modeling aeolian sediment transport thresholds on physically rough Martian surfaces: A shear stress partitioning approach

    NASA Astrophysics Data System (ADS)

    Gillies, John A.; Nickling, William G.; King, James; Lancaster, Nicholas

    2010-09-01

    This paper explores the effect that large roughness elements (0.30 m × 0.26 m × 0.36 m) may have on entrainment of sediment by Martian winds using a shear stress partitioning approach based on a model developed by Raupach et al. (Raupach, M.R., Gillette, D.A., Leys, J.F., 1993. The effect of roughness elements on wind erosion threshold. Journal of Geophysical Research 98(D2), 3023-3029). This model predicts the shear stress partitioning ratio defined as the percent reduction in shear stress on the intervening surface between the roughness elements as compared to the surface in the absence of those elements. This ratio is based on knowledge of the geometric properties of the roughness elements, the characteristic drag coefficients of the elements and the surface, and the assumed effect these elements have on the spatial distribution of the mean and maximum shear stresses. On Mars, unlike on Earth, the shear stress partitioning caused by roughness can be non-linear in that the drag coefficients for the surface as well as for the roughness itself show Reynolds number dependencies for the reported range of Martian wind speeds. The shear stress partitioning model of Raupach et al. is used to evaluate how conditions of the Martian atmosphere will affect the threshold shear stress ratio for Martian surfaces over a range of values of roughness density. Using, as an example, a 125 µm diameter particle with an estimated threshold shear stress on Mars of ≈ 0.06 N m-2 (shear velocity, u* ≈ 2 m s-1 on a smooth surface), we evaluate the effect of roughness density on the threshold shear stress ratio for this diameter particle. In general, on Mars higher regional shear stresses are required to initiate particle entrainment for surfaces that have the same physical roughness as defined by the roughness density term (λ) compared with terrestrial surfaces, mainly because of the low Martian atmospheric density.
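
The partitioning ratio at the heart of this approach can be sketched numerically. The expression below follows the form commonly attributed to the Raupach et al. (1993) model, and the parameter values (beta, sigma, m) are illustrative assumptions, not values from the paper.

```python
# Sketch of a Raupach-style shear-stress partition: the fraction of total
# shear stress felt by the intervening surface,
#   tau_s / tau = 1 / ((1 - m*sigma*lam) * (1 + m*beta*lam)),
# where lam is roughness density, beta = C_R/C_S is the element-to-surface
# drag ratio, sigma the basal-to-frontal area ratio, and m a factor relating
# mean to maximum surface stress. Values below are assumptions.
def stress_ratio(lam, beta=90.0, sigma=1.0, m=0.5):
    """Fraction of total shear stress reaching the intervening surface."""
    return 1.0 / ((1.0 - m * sigma * lam) * (1.0 + m * beta * lam))

# Denser roughness shelters the surface more (smaller stress fraction):
for lam in (0.001, 0.01, 0.1):
    print(f"lambda={lam}: tau_s/tau = {stress_ratio(lam):.3f}")
```

The monotone drop of the ratio with roughness density mirrors the abstract's conclusion that rougher surfaces require higher regional shear stresses to reach the entrainment threshold.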

  5. Detection and Modeling of High-Dimensional Thresholds for Fault Detection and Diagnosis

    NASA Technical Reports Server (NTRS)

    He, Yuning

    2015-01-01

    Many Fault Detection and Diagnosis (FDD) systems use discrete models for detection and reasoning. To obtain categorical values like "oil pressure too high," analog sensor values need to be discretized using a suitable threshold. Time series of analog and discrete sensor readings are processed and discretized as they come in; this task is usually performed by the "wrapper code" of the FDD system, together with signal preprocessing and filtering. In practice, selecting the right threshold is very difficult, because it heavily influences the quality of diagnosis. If a threshold causes the alarm to trigger even in nominal situations, false alarms will be the consequence. On the other hand, if the threshold setting does not trigger in an off-nominal condition, important alarms might be missed, potentially causing hazardous situations. In this paper, we describe in detail the underlying statistical modeling techniques and algorithm, as well as the Bayesian method for selecting the most likely threshold shape and its parameters. Our approach is illustrated by several examples from the aerospace domain.
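
The wrapper-code task this abstract describes, turning analog readings into categorical alarm flags, can be sketched along with the false-alarm/missed-alarm trade-off it creates. The readings, ground truth, and threshold below are illustrative assumptions.

```python
# Sketch: discretize an analog sensor series against a threshold and score
# the resulting alarm flags against known off-nominal points.
def discretize(readings, threshold):
    """Map each analog reading to a boolean alarm flag."""
    return [r > threshold for r in readings]

readings     = [1.0, 1.2, 3.5, 1.1, 3.8, 1.0]
ground_truth = [False, False, True, False, True, False]  # true off-nominal points

flags = discretize(readings, threshold=2.0)
false_alarms = sum(f and not g for f, g in zip(flags, ground_truth))
missed       = sum(g and not f for f, g in zip(flags, ground_truth))
print(false_alarms, missed)
```

Sweeping the threshold down drives `false_alarms` up while sweeping it up drives `missed` up, which is exactly the tension the Bayesian threshold-selection method in the abstract is meant to resolve.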

  6. An Active Contour Model Based on Adaptive Threshold for Extraction of Cerebral Vascular Structures

    PubMed Central

    Wang, Jiaxin; Zhao, Shifeng; Liu, Zifeng; Duan, Fuqing; Pan, Yutong

    2016-01-01

    Cerebral vessel segmentation is essential for clinical diagnosis and related research. However, automatic segmentation of brain vessels remains challenging because of the variable vessel shape and the high complexity of vessel geometry. This study proposes a new active contour model (ACM), implemented by the level-set method, for segmenting vessels from TOF-MRA data. The energy function of the new model, combining both region intensity and boundary information, is composed of two region terms, one boundary term and one penalty term. The global threshold, representing the lower gray bound of the target object obtained by maximum intensity projection (MIP), is defined in the first region term and is used to guide the segmentation of the thick vessels. In the second term, a dynamic intensity threshold is employed to extract the tiny vessels. The boundary term drives the contours to evolve towards boundaries with high gradients. The penalty term avoids reinitialization of the level-set function. Experimental results on 10 clinical brain data sets demonstrate that our method not only achieves a better Dice Similarity Coefficient than the global-threshold-based method and the localized hybrid level-set method, but is also able to extract whole cerebral vessel trees, including the thin vessels. PMID:27597878
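
The maximum intensity projection and the global threshold taken from it, as used in the first region term of this model, can be sketched on a toy volume. The tiny synthetic volume and the percentile-based choice of "lower gray bound" below are illustrative assumptions.

```python
# Sketch: MIP of a 3D volume along the slice axis, followed by a global
# threshold derived from the projection's bright voxels.
import numpy as np

volume = np.zeros((4, 5, 5))
volume[2, 1:3, 1:3] = 200.0   # a bright "vessel" patch
volume[1, 3, 3] = 120.0       # a fainter isolated bright voxel

mip = volume.max(axis=0)                      # maximum intensity projection
threshold = np.percentile(mip[mip > 0], 10)   # assumed lower gray bound
mask = mip >= threshold

print(mip.shape, float(threshold), int(mask.sum()))
```

In the full method this global threshold steers the level-set contour toward thick vessels, while a separate dynamic threshold recovers the thin ones.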

  7. An Active Contour Model Based on Adaptive Threshold for Extraction of Cerebral Vascular Structures.

    PubMed

    Wang, Jiaxin; Zhao, Shifeng; Liu, Zifeng; Tian, Yun; Duan, Fuqing; Pan, Yutong

    2016-01-01

    Cerebral vessel segmentation is essential for clinical diagnosis and related research. However, automatic segmentation of brain vessels remains challenging because of the variable vessel shape and the high complexity of vessel geometry. This study proposes a new active contour model (ACM), implemented by the level-set method, for segmenting vessels from TOF-MRA data. The energy function of the new model, combining both region intensity and boundary information, is composed of two region terms, one boundary term and one penalty term. The global threshold, representing the lower gray bound of the target object obtained by maximum intensity projection (MIP), is defined in the first region term and is used to guide the segmentation of the thick vessels. In the second term, a dynamic intensity threshold is employed to extract the tiny vessels. The boundary term drives the contours to evolve towards boundaries with high gradients. The penalty term avoids reinitialization of the level-set function. Experimental results on 10 clinical brain data sets demonstrate that our method not only achieves a better Dice Similarity Coefficient than the global-threshold-based method and the localized hybrid level-set method, but is also able to extract whole cerebral vessel trees, including the thin vessels.

  8. Empirical scalings and modeling of error field penetration thresholds in tokamaks

    NASA Astrophysics Data System (ADS)

    Schaefer, C.; Lanctot, M. J.; Meneghini, O.; Smith, S. P.; Logan, N. C.; Haskey, S.

    2016-10-01

    Recent experiments in several tokamaks show that applied n=2 fields can lead to disruptive n=1 locked modes at field thresholds similar to those found for n=1 fields. This has important implications for the allowable size of error fields in next-step devices. In order to extrapolate field thresholds to ITER, an error field database (EFDB) is being developed under the OMFIT integrated modeling framework. The initial phase of development involves analysis of the applied 3D field, detection of island onset, characterization of island structure, reconstruction of the plasma equilibrium, determination of measurable plasma parameters at the relevant rational surfaces, and archiving in a dedicated MDSplus tree. The EFDB is both an extension of previous data assembly efforts and a means of documenting the parametric dependencies of error field penetration thresholds for a variety of tokamaks, across different plasma regimes, and for arbitrary applied field configurations. Through analysis of available data, empirical scalings for n=1 and n=2 fields are resolved. The trends are compared to functional dependencies predicted by drift-MHD models. Work supported by the US Department of Energy under the Science Undergraduate Laboratory Internship (SULI) program, DE-FC02-04ER54698 and DE-AC52-07NA27344.

  9. Above-threshold numerical modeling of high-index-contrast photonic-crystal quantum cascade lasers

    NASA Astrophysics Data System (ADS)

    Napartovich, A. P.; Elkin, N. N.; Vysotsky, D. V.; Kirch, J.; Sigler, C.; Botez, D.; Mawst, L. J.; Belyanin, A.

    2015-03-01

    Three-dimensional above-threshold analyses of high-index-contrast (HC) photonic-crystal (PC) quantum-cascade-laser array (QCLA) structures, for operation at watt-range CW powers in a single spatial mode, have been performed. Three-element HC-PC structures are formed by alternating active-antiguided and passive-guided regions along with corresponding spatial profiling of the metal electrodes. The 3D numerical code takes into account absorption and edge-radiation losses, and Rigrod's approximation is used for the gain. A specific feature of QCLAs is that only the transverse component of the magnetic field sees the gain. Results of above-threshold laser modeling in various approximate versions of the laser-cavity description are compared with the results of linear, full-vectorial modeling using the COMSOL package. Additionally, modal gains for several higher-order optical modes, on a 'frozen gain background' produced by the fundamental mode, are computed with the Arnoldi algorithm. The gain spatial-hole-burning effect causes the competing modes' gain to grow with drive current; approaching the lasing threshold of a competing higher-order mode sets a limit on the single-mode operation range. The modal structure and stability are studied over a wide range of inter-element widths. The numerical analyses predict that a proper choice of construction parameters ensures stable single-mode operation at drive levels well above threshold. The output power from a single-mode QCLA operating at a wavelength of 4.7 μm is predicted to be available at multi-watt levels, although this power may be restricted by thermal effects.

  10. Electric Field Model of Transcranial Electric Stimulation in Nonhuman Primates: Correspondence to Individual Motor Threshold

    PubMed Central

    Lee, Won Hee; Lisanby, Sarah H.; Laine, Andrew F.

    2015-01-01

    Objective To develop a pipeline for realistic head models of nonhuman primates (NHPs) for simulations of noninvasive brain stimulation, and to use these models together with empirical threshold measurements to demonstrate that the models capture individual anatomical variability. Methods Based on structural MRI data, we created models of the electric field (E-field) induced by right unilateral (RUL) electroconvulsive therapy (ECT) in four rhesus macaques. Individual motor threshold (MT) was measured with transcranial electric stimulation (TES) administered through the RUL electrodes in the same subjects. Results The interindividual anatomical differences resulted in 57% variation in median E-field strength in the brain at fixed stimulus current amplitude. Individualization of the stimulus current by MT reduced the E-field variation in the target motor area by 27%. There was a significant correlation between the measured MT and the ratio of simulated electrode current to E-field strength (r2 = 0.95, p = 0.026). Exploratory analysis revealed significant correlations of this ratio with anatomical parameters including the superior electrode-to-cortex distance, vertex-to-cortex distance, and brain volume (r2 > 0.96, p < 0.02). The neural activation threshold was estimated to be 0.45 ± 0.07 V/cm for a 0.2 ms stimulus pulse width. Conclusion These results suggest that our individual-specific NHP E-field models appropriately capture individual anatomical variability relevant to the dosing of TES/ECT. These findings are exploratory due to the small number of subjects. Significance This work can contribute insight into NHP studies of ECT and other brain stimulation interventions, help link the results to clinical studies, and ultimately lead to more rational brain stimulation dosing paradigms. PMID:25910001

  11. A model to predict threshold concentrations for toxic effects of chlorinated benzenes in sediment

    SciTech Connect

    Fuchsman, P.C.; Duda, D.J.; Barber, T.R.

    1999-09-01

    A probabilistic model was developed to predict effects threshold concentrations for chlorinated benzenes in sediment. Based on published quantitative structure-activity relationships relating the toxicity of chlorinated benzenes to the degree of chlorination, congeners with the same number of chlorine substitutions were considered toxicologically equivalent. Hexachlorobenzene was excluded from the assessment based on a lack of aquatic toxicity at the water solubility limit. The equilibrium partitioning approach was applied in a probabilistic analysis to derive predicted effects thresholds (PETs) for each chlorinated benzene group, with model input distributions defined by published log Kow values and aquatic toxicity data extracted from the published literature. The probabilistic distributions of PETs generally increased with chlorination, with 20th percentile values ranging from 3.2 mg/kg (1% OC) for chlorobenzene to 67 mg/kg (1% OC) for tetrachlorobenzene congeners. The toxicity of total chlorinated benzenes in sediment can be assessed by applying the PETs in a toxic index model, based on the assumption that multiple chlorinated benzene congeners will show approximately additive toxicity, as characteristic of nonpolar narcotic toxicants. The 20th percentile PET values are one to two orders of magnitude higher than published screening-level guidelines, suggesting that the screening-level guidelines will provide overly conservative assessments in most cases. Relevant spiked sediment toxicity data are very limited but seem consistent with the probabilistic model; additional testing could be conducted to confirm the model's predictions.
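
The toxic index model mentioned in this abstract, additive toxicity scored as a sum of concentration/PET quotients, can be sketched directly. The mono- and tetrachlorobenzene PETs (3.2 and 67 mg/kg at 1% organic carbon) come from the abstract; the di- and trichloro PETs and the sample concentrations are illustrative assumptions.

```python
# Sketch: toxic index for chlorinated benzenes under assumed additive
# narcotic toxicity. An index above 1 suggests a likely threshold exceedance.
def toxic_index(concentrations, pets):
    """Sum of concentration/PET quotients across congener groups (mg/kg)."""
    return sum(c / pets[group] for group, c in concentrations.items())

pets = {"mono": 3.2, "di": 10.0, "tri": 25.0, "tetra": 67.0}  # di/tri assumed
sample = {"mono": 1.0, "di": 2.0, "tri": 5.0, "tetra": 10.0}  # assumed sample

ti = toxic_index(sample, pets)
print(round(ti, 3))
```

Summing quotients rather than comparing each congener to its own threshold is what encodes the additivity assumption for nonpolar narcotics.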

  12. Threshold Dynamics in Stochastic SIRS Epidemic Models with Nonlinear Incidence and Vaccination

    PubMed Central

    Wang, Lei; Tang, Tingting

    2017-01-01

    In this paper, the dynamical behaviors for a stochastic SIRS epidemic model with nonlinear incidence and vaccination are investigated. In the models, the disease transmission coefficient and the removal rates are all affected by noise. Some new basic properties of the models are found. Applying these properties, we establish a series of new threshold conditions on the stochastically exponential extinction, stochastic persistence, and permanence in the mean of the disease with probability one for the models. Furthermore, we obtain a sufficient condition on the existence of unique stationary distribution for the model. Finally, a series of numerical examples are introduced to illustrate our main theoretical results and some conjectures are further proposed. PMID:28194223

  13. A Modified Mechanical Threshold Stress Constitutive Model for Austenitic Stainless Steels

    NASA Astrophysics Data System (ADS)

    Prasad, K. Sajun; Gupta, Amit Kumar; Singh, Yashjeet; Singh, Swadesh Kumar

    2016-12-01

    This paper presents a modified mechanical threshold stress (m-MTS) constitutive model. The m-MTS model incorporates variable athermal and dynamic strain aging (DSA) components to accurately predict the flow stress behavior of austenitic stainless steels (ASS) 316 and 304. Uniaxial tensile tests were conducted at strain rates between 0.0001 and 0.01 s-1 and temperatures ranging from 50 to 650 °C to evaluate the material constants of the constitutive models. The test results revealed the strong dependence of flow stress on strain, strain rate and temperature. In addition, it was observed that DSA occurred at elevated temperatures and very low strain rates, causing an increase in flow stress. While the original MTS model is capable of predicting the flow stress behavior of ASS, statistical measures show that it is less accurate than other models such as the Johnson-Cook model, the modified Zerilli-Armstrong (m-ZA) model, and modified Arrhenius-type equations (m-Arr). Therefore, in order to accurately model both the DSA and non-DSA regimes, the original MTS model was modified by incorporating variable athermal and DSA components. The suitability of the m-MTS model was assessed by comparing statistical parameters. The m-MTS model was found to be highly accurate in the DSA regime compared to the existing models, whereas models like m-ZA and m-Arr gave better results in the non-DSA regime.

  14. A Probabilistic Model for Estimating the Depth and Threshold Temperature of C-fiber Nociceptors

    PubMed Central

    Dezhdar, Tara; Moshourab, Rabih A.; Fründ, Ingo; Lewin, Gary R.; Schmuker, Michael

    2015-01-01

    The subjective experience of thermal pain follows the detection and encoding of noxious stimuli by primary afferent neurons called nociceptors. However, nociceptor morphology has been hard to access and the mechanisms of signal transduction remain unresolved. In order to understand how heat transducers in nociceptors are activated in vivo, it is important to estimate the temperatures that directly activate the skin-embedded nociceptor membrane. Hence, the nociceptor’s temperature threshold must be estimated, which in turn will depend on the depth at which transduction happens in the skin. Since the temperature at the receptor cannot be accessed experimentally, such an estimation can currently only be achieved through modeling. However, the current state-of-the-art model for estimating temperature at the receptor cannot account for the natural stochastic variability of neuronal responses. We improve this model using a probabilistic approach which accounts for uncertainties and potential noise in the system. Using a data set of 24 C-fibers recorded in vitro, we show that, even without detailed knowledge of the bio-thermal properties of the system, the probabilistic model that we propose here is capable of providing estimates of threshold and depth in cases where the classical method fails. PMID:26638830

  15. Computationally Efficient Implementation of a Novel Algorithm for the General Unified Threshold Model of Survival (GUTS)

    PubMed Central

    Albert, Carlo; Vogel, Sören

    2016-01-01

    The General Unified Threshold model of Survival (GUTS) provides a consistent mathematical framework for survival analysis. However, the calibration of GUTS models is computationally challenging. We present a novel algorithm and its fast implementation in our R package, GUTS, that help to overcome these challenges. We show a step-by-step application example consisting of model calibration and uncertainty estimation as well as making probabilistic predictions and validating the model with new data. Using self-defined wrapper functions, we show how to produce informative text printouts and plots without effort, for the inexperienced as well as the advanced user. The complete ready-to-run script is available as supplemental material. We expect that our software facilitates novel re-analysis of existing survival data as well as asking new research questions in a wide range of sciences. In particular the ability to quickly quantify stressor thresholds in conjunction with dynamic compensating processes, and their uncertainty, is an improvement that complements current survival analysis methods. PMID:27340823

  16. A Probabilistic Model for Estimating the Depth and Threshold Temperature of C-fiber Nociceptors

    NASA Astrophysics Data System (ADS)

    Dezhdar, Tara; Moshourab, Rabih A.; Fründ, Ingo; Lewin, Gary R.; Schmuker, Michael

    2015-12-01

    The subjective experience of thermal pain follows the detection and encoding of noxious stimuli by primary afferent neurons called nociceptors. However, nociceptor morphology has been hard to access and the mechanisms of signal transduction remain unresolved. In order to understand how heat transducers in nociceptors are activated in vivo, it is important to estimate the temperatures that directly activate the skin-embedded nociceptor membrane. Hence, the nociceptor’s temperature threshold must be estimated, which in turn will depend on the depth at which transduction happens in the skin. Since the temperature at the receptor cannot be accessed experimentally, such an estimation can currently only be achieved through modeling. However, the current state-of-the-art model to estimate temperature at the receptor suffers from the fact that it cannot account for the natural stochastic variability of neuronal responses. We improve this model using a probabilistic approach which accounts for uncertainties and potential noise in the system. Using a data set of 24 C-fibers recorded in vitro, we show that, even without detailed knowledge of the bio-thermal properties of the system, the probabilistic model that we propose here is capable of providing estimates of threshold and depth in cases where the classical method fails.
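
    Because the receptor temperature must be inferred rather than measured, the deterministic core of such depth/threshold estimation is heat conduction into the skin. As a rough illustration only (not the paper's probabilistic model), the classical semi-infinite-solid solution gives the temperature at depth z after a step change in surface temperature; the thermal diffusivity used below is an assumed, generic value for soft tissue:

```python
import math

def temp_at_depth(z_m, t_s, t_skin=34.0, t_surface=50.0, alpha=1.1e-7):
    """Temperature (deg C) at depth z_m (m) a time t_s (s) after a step
    change in surface temperature, for a semi-infinite homogeneous solid.
    alpha (thermal diffusivity, m^2/s) is an assumed value for soft tissue."""
    if t_s <= 0:
        return t_skin
    return t_skin + (t_surface - t_skin) * math.erfc(z_m / (2.0 * math.sqrt(alpha * t_s)))

# A receptor 200 um below the surface heats more slowly than one at 50 um,
# so the same firing threshold implies different local temperatures:
shallow = temp_at_depth(50e-6, 0.5)
deep = temp_at_depth(200e-6, 0.5)
```

This illustrates why the estimated threshold temperature and the estimated depth are coupled and must be inferred jointly.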

  17. Spinodals, scaling, and ergodicity in a threshold model with long-range stress transfer.

    PubMed

    Ferguson, C D; Klein, W; Rundle, J B

    1999-08-01

    We present both theoretical and numerical analyses of a cellular automaton version of a slider-block model or threshold model that includes long-range interactions. Theoretically we develop a coarse-grained description in the mean-field (infinite range) limit and discuss the relevance of the metastable state, limit of stability (spinodal), and nucleation to the phenomenology of the model. We also simulate the model and confirm the relevance of the theory for systems with long- but finite-range interactions. Results of particular interest include the existence of Gutenberg-Richter-like scaling consistent with that found on real earthquake fault systems, the association of large events with nucleation near the spinodal, and the result that such systems can be described, in the mean-field limit, with techniques appropriate to systems in equilibrium.

  18. Evaluating intercepts from demographic models to understand resource limitation and resource thresholds

    USGS Publications Warehouse

    Reynolds-Hogland, M. J.; Hogland, J.S.; Mitchell, M.S.

    2008-01-01

    Understanding resource limitation is critical to effective management and conservation of wild populations; however, resource limitation is difficult to quantify, partly because it is a dynamic process. Specifically, a resource that is limiting at one time may become non-limiting at another, depending upon changes in its availability and in the availability of other resources. Methods for understanding resource limitation, therefore, must consider the dynamic effects of resources on demography. We present approaches for interpreting results of demographic modeling beyond analyzing model rankings, model weights, slope estimates, and model averaging. We demonstrate how interpretation of y-intercepts, odds ratios, and rates of change can yield insights into resource limitation as a dynamic process, assuming logistic regression is used to link estimates of resources with estimates of demography. In addition, we show how x-intercepts can be evaluated with respect to odds ratios to understand resource thresholds. © 2007 Elsevier B.V. All rights reserved.

  19. A continuous damage random thresholds model for simulating the fracture behavior of nacre.

    PubMed

    Nukala, Phani K V V; Simunovic, Srdan

    2005-10-01

    This study investigates the fracture properties of nacre using a discrete lattice model based on continuous damage random threshold fuse network. The discrete lattice topology of the model is based on nacre's unique brick and mortar microarchitecture. The mechanical behavior of each of the bonds in the discrete lattice model is governed by the characteristic modular damage evolution of the organic matrix and the mineral bridges between the aragonite platelets. The numerical results obtained using this simple discrete lattice model are in very good agreement with the previously obtained experimental results, such as nacre's stiffness, tensile strength, and work of fracture. The analysis indicates that nacre's superior toughness is a direct consequence of ductility (maximum shear strain) of the organic matrix in terms of repeated unfolding of protein molecules, and its fracture strength is a result of its ordered brick and mortar architecture with significant overlap of the platelets, and shear strength of the organic matrix.

  20. On the underlying assumptions of threshold Boolean networks as a model for genetic regulatory network behavior

    PubMed Central

    Tran, Van; McCall, Matthew N.; McMurray, Helene R.; Almudevar, Anthony

    2013-01-01

    Boolean networks (BoN) are relatively simple and interpretable models of gene regulatory networks. Specifying these models with fewer parameters while retaining their ability to describe complex regulatory relationships is an ongoing methodological challenge. Additionally, extending these models to incorporate variable gene decay rates, asynchronous gene response, and synergistic regulation while maintaining their Markovian nature increases the applicability of these models to genetic regulatory networks (GRN). We explore a previously-proposed class of BoNs characterized by linear threshold functions, which we refer to as threshold Boolean networks (TBN). Compared to traditional BoNs with unconstrained transition functions, these models require far fewer parameters and offer a more direct interpretation. However, the functional form of a TBN does result in a reduction in the regulatory relationships which can be modeled. We show that TBNs can be readily extended to permit self-degradation, with explicitly modeled degradation rates. We note that the introduction of variable degradation compromises the Markovian property fundamental to BoN models but show that a simple state augmentation procedure restores their Markovian nature. Next, we study the effect of assumptions regarding self-degradation on the set of possible steady states. Our findings are captured in two theorems relating self-degradation and regulatory feedback to the steady state behavior of a TBN. Finally, we explore assumptions of synchronous gene response and asynergistic regulation and show that TBNs can be easily extended to relax these assumptions. Applying our methods to the budding yeast cell-cycle network revealed that although the network is complex, its steady state is simplified by the presence of self-degradation and lack of purely positive regulatory cycles. PMID:24376454
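
    A linear threshold update of the kind described can be sketched in a few lines; the 3-gene toy network, weights, and thresholds below are illustrative inventions, not drawn from the paper:

```python
import numpy as np

def tbn_step(state, W, theta):
    """One synchronous update of a threshold Boolean network: node i turns
    on iff its weighted input sum reaches its threshold. W[i, j] is the
    regulatory weight of gene j on gene i (sign encodes activation or
    repression)."""
    return (W @ state >= theta).astype(int)

# Toy network: gene 0 activates gene 1; gene 2 represses gene 1.
W = np.array([[0, 0, 0],
              [1, 0, -1],
              [0, 0, 0]])
theta = np.array([1, 1, 1])
state = np.array([1, 0, 0])
state = tbn_step(state, W, theta)   # gene 1 switches on
```

Compared with a lookup table over all input combinations, the (W, theta) parameterization needs only one weight per edge plus one threshold per node, which is the parameter saving the abstract refers to.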

  1. Effect of microgravity on visual contrast threshold during STS Shuttle missions: Visual Function Tester-Model 2 (VFT-2)

    NASA Technical Reports Server (NTRS)

    Oneal, Melvin R.; Task, H. Lee; Genco, Louis V.

    1992-01-01

    Viewgraphs on effect of microgravity on visual contrast threshold during STS shuttle missions are presented. The purpose, methods, and results are discussed. The visual function tester model 2 is used.

  2. Modeling the residual effects and threshold saturation of training: a case study of Olympic swimmers.

    PubMed

    Hellard, Philippe; Avalos, Marta; Millet, Gregoire; Lacoste, Lucien; Barale, Frederic; Chatard, Jean-Claude

    2005-02-01

    The aim of this study was to model the residual effects of training on the swimming performance and to compare a model that includes threshold saturation (MM) with the Banister model (BM). Seven Olympic swimmers were studied over a period of 4 +/- 2 years. For 3 training loads (low-intensity w(LIT), high-intensity w(HIT), and strength training w(ST)), 3 residual training effects were determined: short-term (STE) during the taper phase (i.e., 3 weeks before the performance [weeks 0, 1, and 2]), intermediate-term (ITE) during the intensity phase (weeks 3, 4, and 5), and long-term (LTE) during the volume phase (weeks 6, 7, and 8). ITE and LTE were positive for w(HIT) and w(LIT), respectively (p < 0.05). Low-intensity training load during taper was related to performances by a parabolic relationship (p < 0.05). Different quality measures indicated that MM compares favorably with BM. Identifying individual training thresholds may help individualize the distribution of training loads.

  3. Modeling the residual effects and threshold saturation of training: a case study of Olympic swimmers

    PubMed Central

    Hellard, Philippe; Avalos, Marta; Millet, Grégoire; Lacoste, Lucien; Barale, Frédéric; Chatard, Jean-Claude

    2005-01-01

    The aim of this study was to model the residual effects of training on the swimming performance and to compare a model including threshold saturation (MM) to the Banister model (BM). Seven Olympic swimmers were studied over a period of 4 ± 2 years. For three training loads (low-intensity wLIT, high-intensity wHIT and strength training wST), three residual training effects were determined: short-term (STE) during the taper phase, i.e. three weeks before the performance (weeks 0, −1, −2), intermediate-term (ITE) during the intensity phase (weeks −3, −4 and −5) and long-term (LTE) during the volume phase (weeks −6, −7, −8). ITE and LTE were positive for wHIT and wLIT, respectively (P < 0.05). wLIT during taper was related to performances by a parabolic relationship (P < 0.05). Different quality measures indicated that MM compares favorably with BM. Identifying individual training thresholds may help individualize the distribution of training loads. PMID:15705048
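
    For reference, the Banister model (BM) that MM is compared against treats performance as a baseline plus a fitness term minus a fatigue term, each an exponentially weighted sum of past training loads. A minimal sketch with illustrative constants (not the values fitted to these swimmers):

```python
import math

def banister(p0, loads, k1=1.0, k2=2.0, tau1=45.0, tau2=15.0):
    """Banister impulse-response model: predicted performance after the
    last day of `loads` is baseline p0 plus accumulated fitness minus
    accumulated fatigue. Fatigue has a larger gain (k2 > k1) but decays
    faster (tau2 < tau1), which is why tapering improves performance.
    All parameter values here are illustrative, not fitted."""
    t = len(loads)
    fitness = sum(w * math.exp(-(t - s) / tau1) for s, w in enumerate(loads, 1))
    fatigue = sum(w * math.exp(-(t - s) / tau2) for s, w in enumerate(loads, 1))
    return p0 + k1 * fitness - k2 * fatigue
```

Because fatigue decays faster than fitness, reducing the most recent loads (a taper) removes more fatigue than fitness; the MM variant additionally saturates the response above a training-load threshold.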

  4. Modeling of surface thermodynamics and damage thresholds in the IR and THz regime

    NASA Astrophysics Data System (ADS)

    Clark, C. D., III; Thomas, Robert J.; Maseberg, Paul D. S.; Buffington, Gavin D.; Irvin, Lance J.; Stolarski, Jacob; Rockwell, Benjamin A.

    2007-02-01

    The Air Force Research Lab has developed a configurable, two-dimensional, thermal model to predict laser-tissue interactions, and to aid in predictive studies for safe exposure limits. The model employs a finite-difference, time-dependent method to solve the two-dimensional cylindrical heat equation (radial and axial) in a biological system construct. Tissues are represented as multi-layer structures, with optical and thermal properties defined for each layer, are homogeneous throughout the layer. Multiple methods for computing the source term for the heat equation have been implemented, including simple linear absorption definitions and full beam propagation through finite-difference methods. The model predicts the occurrence of thermal damage sustained by the tissue, and can also determine damage thresholds for total optical power delivered to the tissue. Currently, the surface boundary conditions incorporate energy loss through free convection, surface radiation, and evaporative cooling. Implementing these boundary conditions is critical for correctly calculating the surface temperature of the tissue, and, therefore, damage thresholds. We present an analysis of the interplay between surface boundary conditions, ambient conditions, and blood perfusion within tissues.

  5. Load redistribution rules for progressive failure in shallow landslides: Threshold mechanical models

    NASA Astrophysics Data System (ADS)

    Fan, Linfeng; Lehmann, Peter; Or, Dani

    2017-01-01

    Rainfall-induced landslides are often preceded by progressive failures that culminate in abrupt mass release. Local failure progression is captured by a landslide hydro-mechanical triggering model that represents the soil mantle as interacting columns linked by tensile and compressive mechanical "bonds." Mechanical bonds may fail at a prescribed threshold leaving a modeling challenge of how to redistribute their load to neighboring intact soil columns. We employed an elastic spring-block model to analytically derive redistribution rules defined by the stiffness ratio of compressive to tensile bonds. These linear-elastic rules were generalized to real soil using measurable Young's modulus and Poisson's ratio. Results indicate that "local" failure characteristics of ductile-like soils (e.g., clay) are reproduced by low stiffness ratios, whereas "global" failure of brittle sandy soils corresponds to large stiffness ratios. Systematic analyses yield guidelines for selecting load redistribution rules for failure of geological materials and mass-movement phenomena represented by discrete threshold-mechanics.
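
    The redistribution question can be illustrated with the simplest limiting case, a global equal-sharing cascade; the paper's rules instead weight redistribution toward neighboring columns according to the compressive-to-tensile stiffness ratio, so this sketch only shows the cascade mechanics, not the derived rules:

```python
def cascade(loads, thresholds):
    """Minimal load-sharing cascade sketch: when an element's load exceeds
    its threshold it fails, and its load is shared equally among surviving
    elements, possibly triggering further failures. Returns the list of
    still-intact elements."""
    alive = [True] * len(loads)
    loads = list(loads)
    changed = True
    while changed:
        changed = False
        for i in range(len(loads)):
            if alive[i] and loads[i] > thresholds[i]:
                alive[i] = False
                survivors = [j for j in range(len(loads)) if alive[j]]
                if survivors:
                    share = loads[i] / len(survivors)
                    for j in survivors:
                        loads[j] += share
                loads[i] = 0.0
                changed = True
    return alive
```

In the paper's terms, a low stiffness ratio concentrates redistribution locally (ductile, clay-like failure), while a high ratio spreads it widely (brittle, sand-like failure); the equal-sharing rule above corresponds to the fully delocalized limit.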

  6. Threshold conditions for integrated pest management models with pesticides that have residual effects.

    PubMed

    Tang, Sanyi; Liang, Juhua; Tan, Yuanshun; Cheke, Robert A

    2013-01-01

    Impulsive differential equations (hybrid dynamical systems) can provide a natural description of pulse-like actions such as when a pesticide kills a pest instantly. However, pesticides may have long-term residual effects, with some remaining active against pests for several weeks, months or years. Therefore, a more realistic method for modelling chemical control in such cases is to use continuous or piecewise-continuous periodic functions which affect growth rates. How to evaluate the effects of the duration of the pesticide residual effectiveness on successful pest control is key to the implementation of integrated pest management (IPM) in practice. To address these questions in detail, we have modelled IPM including residual effects of pesticides in terms of fixed pulse-type actions. The stability threshold conditions for pest eradication are given. Moreover, effects of the killing efficiency rate and the decay rate of the pesticide on the pest and on its natural enemies, the duration of residual effectiveness, the number of pesticide applications and the number of natural enemy releases on the threshold conditions are investigated with regard to the extent of depression or resurgence resulting from pulses of pesticide applications and predator releases. Latin Hypercube Sampling/Partial Rank Correlation uncertainty and sensitivity analysis techniques are employed to investigate the key control parameters which are most significantly related to threshold values. The findings combined with Volterra's principle confirm that when the pesticide has a strong effect on the natural enemies, repeated use of the same pesticide can result in target pest resurgence. The results also indicate that there exists an optimal number of pesticide applications which can suppress the pest most effectively, and this may help in the design of an optimal control strategy.

  7. Threshold fluctuations in an N sodium channel model of the node of Ranvier.

    PubMed Central

    Rubinstein, J T

    1995-01-01

    Computer simulations of stochastic single-channel open-close kinetics are applied to an N sodium channel model of a node of Ranvier. Up to 32,000 voltage-gated sodium channels have been simulated with modified amphibian sodium channel kinetics. Poststimulus time histograms are obtained with 1000 monophasic pulse stimuli, and measurements are made of changes in the relative spread of threshold (RS) with changes in the model parameters. RS is found to be invariant with pulse durations from 100 microseconds to 3 ms. RS is approximately of inverse proportion to square-root of N. It decreases with increasing temperature and is dependent on passive electrical properties of the membrane as well as the single-channel conductance. The simulated RS and its independence of pulse duration is consistent with experimental results from the literature. Thus, the microscopic fluctuations of single, voltage-sensitive sodium channels in the amphibian peripheral node of Ranvier are sufficient to account for the macroscopic fluctuation of threshold to electrical stimulation. PMID:7756544

  8. Experimental confirmation of the polygyny threshold model for red-winged blackbirds.

    PubMed

    Pribil, S; Searcy, W A

    2001-08-07

    The polygyny threshold model assumes that polygynous mating is costly to females and proposes that females pay the cost of polygyny only when compensated by obtaining a superior territory or male. We present, to the authors' knowledge, the first experimental field test to demonstrate that females trade mating status against territory quality as proposed by this hypothesis. Previous work has shown that female red-winged blackbirds (Agelaius phoeniceus) in Ontario prefer settling with unmated males and that this preference is adaptive because polygynous mating status lowers female reproductive success. Other evidence suggests that nesting over water increases the reproductive success of female red-winged blackbirds. Here we describe an experiment in which females were given choices between two adjacent territories, one owned by an unmated male without any over-water nesting sites and the other by an already-mated male with over-water sites. Females overwhelmingly preferred the already-mated males, demonstrating that superior territory quality can reverse preferences based on mating status and supporting the polygyny threshold model as the explanation for polygyny in this population.

  9. National evaluation for calving ease, gestation length and birth weight by linear and threshold model methodologies.

    PubMed

    Lee, Deukhwan; Misztal, Ignacy; Bertrand, J Keith; Rekaya, Romdhane

    2002-01-01

    Data included 393,097 calving ease, 129,520 gestation length, and 412,484 birth weight records on 412,484 Gelbvieh cattle. Additionally, pedigrees were available on 72,123 animals. Included in the models were effects of sex and age of dam, treated as fixed, as well as direct, maternal genetic and permanent environmental effects and effects of contemporary group (herd-year-season), treated as random. In all analyses, birth weight and gestation length were treated as continuous traits. Calving ease (CE) was treated either as a continuous trait in a mixed linear model (LM), or as a categorical trait in linear-threshold models (LTM). Solutions in TM obtained by empirical Bayes (TMEB) and Monte Carlo (TMMC) methodologies were compared with those by LM. Due to the computational cost, only 10,000 samples were obtained for TMMC. For calving ease, correlations between LM and TMEB were 0.86 and 0.78 for direct and maternal genetic effects, respectively. The same correlations but between TMEB and TMMC were 1.00 and 0.98, respectively. The correlations between LM and TMMC were 0.85 and 0.75, respectively. The correlations for the linear traits were above 0.97 between LM and TMEB but as low as 0.91 between LM and TMMC, suggesting insufficient convergence of TMMC. Computing time required was about 2 hrs, 5 hrs, and 6 days for LM, TMEB and TMMC, respectively, and memory requirements were 169, 171, and 445 megabytes, respectively. Bayesian implementation of the threshold model is simple, can be extended to multiple categorical traits, and allows easy calculation of accuracies; however, computing time is prohibitively long for large models.

  10. Temperature thresholds and degree-day model for Marmara gulosa (Lepidoptera: Gracillariidae).

    PubMed

    O'Neal, M J; Headrick, D H; Montez, Gregory H; Grafton-Cardwell, E E

    2011-08-01

    The developmental thresholds for Marmara gulosa Guillén & Davis (Lepidoptera: Gracillariidae) were investigated in the laboratory at constant temperatures of 17, 21, 25, 29, and 33 degrees C. The lowest mortality occurred in cohorts exposed to 25 and 29 degrees C. Other temperatures caused >10% mortality primarily in egg and first and second instar sap-feeding larvae. Linear regression analysis approximated the lower developmental threshold at 12.2 degrees C. High mortality and slow developmental rate at 33 degrees C indicate the upper developmental threshold is near this temperature. The degree-day (DD) model indicated that a generation requires an accumulation of 322 DD for development from egg to adult emergence. Average daily temperatures in the San Joaquin Valley could produce up to seven generations of M. gulosa per year. Field studies documented two, five, and three overlapping generations of M. gulosa in walnuts (Juglans regia L.; Juglandaceae), pummelos (Citrus maxima (Burm.) Merr.; Rutaceae), and oranges (Citrus sinensis (L.) Osbeck; Rutaceae), for a total of seven observed peelminer generations. Degree-day units between generations averaged 375 DD for larvae infesting walnut twigs; however, availability of green wood probably affected timing of infestations. Degree-day units between larval generations averaged 322 for pummelos and 309 for oranges, confirming the laboratory estimation. First infestation of citrus occurred in June in pummelo fruit and August in orange fruit when fruit neared 60 mm in diameter. Fruit size and degree-day units could be used as management tools to more precisely time insecticide treatments to target the egg stage and prevent rind damage to citrus. Degree-day units also could be used to more precisely time natural enemy releases to target larval instars that are preferred for oviposition.
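
    The reported thresholds translate directly into a daily-average degree-day accumulator. The simple rectangle method below is a common approximation; the paper's exact DD calculation method (e.g., single sine) may differ:

```python
def degree_days(daily_mean_temps, lower=12.2, upper=33.0):
    """Accumulate degree-days above the lower developmental threshold
    (12.2 C estimated for M. gulosa), capping temperatures at the
    approximate upper threshold (33 C). One generation is predicted
    at ~322 DD from egg to adult emergence."""
    dd = 0.0
    for t in daily_mean_temps:
        dd += max(0.0, min(t, upper) - lower)
    return dd

# 30 days averaging 25 C accumulate (25 - 12.2) * 30 = 384 DD,
# enough to complete one 322-DD generation.
```

Tracking accumulated DD from a biofix (e.g., first trap catch) is how the suggested timing of insecticide treatments or natural enemy releases would be scheduled in practice.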

  11. Caregiving appraisal and interventions based on the progressively lowered stress threshold model.

    PubMed

    Stolley, Jacqueline M; Reed, David; Buckwalter, K C

    2002-01-01

    The purpose of this article is to describe the impact of a theoretically driven, psychoeducational intervention based on the Progressively Lowered Stress Threshold (PLST) model on caregiving appraisal among community-based caregivers of persons with Alzheimer's disease and related disorders. A total of 241 subjects completed the year-long study in four sites in Iowa, Minnesota, Indiana, and Arizona. Caregiving appraisal was measured using the four factors of the Philadelphia Geriatric Center Caregiving Appraisal Scale: mastery, burden, satisfaction, and impact. Analysis of trends over time showed that the intervention positively affected impact, burden, and satisfaction but had no effect on mastery when measured against the comparison group. The PLST model was influential in increasing positive appraisal and decreasing negative appraisal of the caregiving situation.

  12. Catastrophic shifts and lethal thresholds in a propagating front model of unstable tumor progression.

    PubMed

    Amor, Daniel R; Solé, Ricard V

    2014-08-01

    Unstable dynamics characterizes the evolution of most solid tumors. Because of an increased failure of maintaining genome integrity, a cumulative increase in the levels of gene mutation and loss is observed. Previous work suggests that instability thresholds to cancer progression exist, defining phase transition phenomena separating tumor-winning scenarios from tumor extinction or coexistence phases. Here we present an integral equation approach to the quasispecies dynamics of unstable cancer. The model exhibits two main phases, characterized by either the success or failure of cancer tissue. Moreover, the model predicts that tumor failure can be due to either a reduced selective advantage over healthy cells or excessive instability. We also derive an approximate, analytical solution that predicts the front speed of aggressive tumor populations on the instability space.

  13. Catastrophic shifts and lethal thresholds in a propagating front model of unstable tumor progression

    NASA Astrophysics Data System (ADS)

    Amor, Daniel R.; Solé, Ricard V.

    2014-08-01

    Unstable dynamics characterizes the evolution of most solid tumors. Because of an increased failure of maintaining genome integrity, a cumulative increase in the levels of gene mutation and loss is observed. Previous work suggests that instability thresholds to cancer progression exist, defining phase transition phenomena separating tumor-winning scenarios from tumor extinction or coexistence phases. Here we present an integral equation approach to the quasispecies dynamics of unstable cancer. The model exhibits two main phases, characterized by either the success or failure of cancer tissue. Moreover, the model predicts that tumor failure can be due to either a reduced selective advantage over healthy cells or excessive instability. We also derive an approximate, analytical solution that predicts the front speed of aggressive tumor populations on the instability space.

  14. Binary threshold networks as a natural null model for biological networks.

    PubMed

    Rybarsch, Matthias; Bornholdt, Stefan

    2012-08-01

    Spin models of neural networks and genetic networks are considered elegant as they are accessible to statistical mechanics tools for spin glasses and magnetic systems. However, the conventional choice of variables in spin systems may cause problems in some models when parameter choices are unrealistic from a biological perspective. Obviously, this may limit the role of a model as a template model for biological systems. Perhaps less obviously, also ensembles of random networks are affected and may exhibit different critical properties. We consider here a prototypical network model that is biologically plausible in its local mechanisms. We study a discrete dynamical network with two characteristic properties: nodes with binary states 0 and 1, and a modified threshold function with Θ(0) = 0. We explore the critical properties of random networks of such nodes and find a critical connectivity K(c) = 2.0 with activity vanishing at the critical point. Finally, we observe that the present model allows a more natural implementation of recent models of budding yeast and fission yeast cell-cycle control networks.
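
    The modified rule with Θ(0) = 0 can be written down directly; with it, the all-zero state is absorbing, which is what allows activity to vanish at the critical point. The two-node network below is an illustrative sketch, not the paper's random ensemble:

```python
def step(states, inputs_of, weights_of):
    """Synchronous update with the modified threshold function: a node
    becomes 1 only for strictly positive summed input, so Theta(0) = 0
    and a silent (all-zero) network stays silent."""
    new = []
    for i in range(len(states)):
        s = sum(w * states[j] for j, w in zip(inputs_of[i], weights_of[i]))
        new.append(1 if s > 0 else 0)
    return new

# Two nodes activating each other:
inputs_of = [[1], [0]]
weights_of = [[1], [1]]
state = step([1, 0], inputs_of, weights_of)   # node 1 turns on: [0, 1]
```

In a conventional spin (+1/−1) formulation, a node with zero input would still contribute activity; the 0/1 convention with Θ(0) = 0 is what makes quiescence biologically natural here.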

  15. The minimal SUSY B - L model: simultaneous Wilson lines and string thresholds

    NASA Astrophysics Data System (ADS)

    Deen, Rehan; Ovrut, Burt A.; Purves, Austin

    2016-07-01

    In previous work, we presented a statistical scan over the soft supersymmetry breaking parameters of the minimal SUSY B - L model. For specificity of calculation, unification of the gauge parameters was enforced by allowing the two Z_3 × Z_3 Wilson lines to have mass scales separated by approximately an order of magnitude. This introduced an additional "left-right" sector below the unification scale. In this paper, for three important reasons, we modify our previous analysis by demanding that the mass scales of the two Wilson lines be simultaneous and equal to an "average unification" mass <M_U>. The present analysis is 1) more "natural" than the previous calculations, which were only valid in a very specific region of the Calabi-Yau moduli space, 2) the theory is conceptually simpler in that the left-right sector has been removed and 3) in the present analysis the lack of gauge unification is due to threshold effects, particularly heavy string thresholds, which we calculate statistically in detail. As in our previous work, the theory is renormalization group evolved from <M_U> to the electroweak scale, being subjected, sequentially, to the requirement of radiative B - L and electroweak symmetry breaking, the present experimental lower bounds on the B - L vector boson and sparticle masses, as well as the lightest neutral Higgs mass of ~125 GeV. The subspace of soft supersymmetry breaking masses that satisfies all such constraints is presented and shown to be substantial.

  16. Thresholds in Atmosphere-Soil Moisture Interactions: Results from Climate Model Studies

    NASA Technical Reports Server (NTRS)

    Oglesby, Robert J.; Marshall, Susan; Erickson, David J., III; Roads, John O.; Robertson, Franklin R.; Arnold, James E. (Technical Monitor)

    2001-01-01

    The potential predictability of the effects of warm season soil moisture anomalies over the central U.S. has been investigated using a series of GCM (Global Climate Model) experiments with the NCAR (National Center for Atmospheric Research) CCM3 (Community Climate Model version 3)/LSM (Land Surface Model). Three different types of experiments have been made, all starting in either March (representing precursor conditions) or June (conditions at the onset of the warm season): (1) 'anomaly' runs with large, exaggerated initial soil moisture reductions, aimed at evaluating the physical mechanisms by which soil moisture can affect the atmosphere; (2) 'predictability' runs aimed at evaluating whether typical soil moisture initial anomalies (indicative of year-to-year variability) can have a significant effect, and if so, for how long; (3) 'threshold' runs aimed at evaluating if a soil moisture anomaly must be of a specific size (i.e., a threshold crossed) before a significant impact on the atmosphere is seen. The 'anomaly' runs show a large, long-lasting response in soil moisture and also quantities such as surface temperature, sea level pressure, and precipitation; effects persist for at least a year. The 'predictability' runs, on the other hand, show very little impact of the initial soil moisture anomalies on the subsequent evolution of soil moisture and other atmospheric parameters; internal variability is most important, with the initial state of the atmosphere (representing remote effects such as SST anomalies) playing a more minor role. The 'threshold' runs, devised to help resolve the dichotomy in 'anomaly' and 'predictability' results, suggest that, at least in CCM3/LSM, the vertical profile of soil moisture is the most important factor, and that deep soil zone anomalies exert a more powerful, long-lasting effect than do anomalies in the near surface soil zone. We therefore suggest that soil moisture feedbacks may be more important in explaining prolonged

  17. Numerical modeling of rainfall thresholds for shallow landsliding in the Seattle, Washington, area

    USGS Publications Warehouse

    Godt, Jonathan W.; McKenna, Jonathan P.

    2008-01-01

    The temporal forecasting of landslide hazard has typically relied on empirical relations between rainfall characteristics and landslide occurrence to identify conditions that may cause shallow landslides. Here, we describe an alternate, deterministic approach to define rainfall thresholds for landslide occurrence in the Seattle, Washington, area. This approach combines an infinite slope-stability model with a variably saturated flow model to determine the rainfall intensity and duration that leads to shallow failure of hillside colluvium. We examine the influence of variation in particle-size distribution on the unsaturated hydraulic properties of the colluvium by performing capillary-rise tests on glacial outwash sand and three experimental soils with increasing amounts of fine-grained material. Observations of pore-water response to rainfall collected as part of a program to monitor the near-surface hydrology of steep coastal bluffs along Puget Sound were used to test the numerical model results and in an inverse modeling procedure to determine the in situ hydraulic properties. Modeling results are given in terms of a destabilizing rainfall intensity and duration, and comparisons with empirical observations of landslide occurrence and triggering rainfall indicate that the modeling approach may be useful for forecasting landslide occurrence.
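
    The slope-stability half of such a coupled model is typically the infinite-slope equation, in which rising pore-water pressure during rainfall erodes the factor of safety until failure (FS < 1). The parameter values below are illustrative placeholders, not the Seattle colluvium properties measured in the study:

```python
import math

def factor_of_safety(slope_deg, depth_m, cohesion_kpa=4.0, phi_deg=33.0,
                     unit_weight=18.0, pore_pressure_kpa=0.0):
    """Classical infinite-slope factor of safety for a planar failure
    surface at depth_m; unit_weight in kN/m^3, stresses in kPa.
    FS < 1 indicates predicted failure. Parameter defaults are assumed
    generic values for sandy colluvium, not site-measured ones."""
    b = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    normal = unit_weight * depth_m * math.cos(b) ** 2 - pore_pressure_kpa
    shear = unit_weight * depth_m * math.sin(b) * math.cos(b)
    return (cohesion_kpa + normal * math.tan(phi)) / shear

# Rainfall infiltration raises pore pressure and lowers FS:
fs_dry = factor_of_safety(40.0, 1.5)
fs_wet = factor_of_safety(40.0, 1.5, pore_pressure_kpa=8.0)
```

A threshold is then found by driving the pore-pressure term with a variably saturated flow model for candidate rainfall intensities and durations, and recording which combinations push FS below 1.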

  18. Decision tree model for predicting long-term outcomes in children with out-of-hospital cardiac arrest: a nationwide, population-based observational study

    PubMed Central

    2014-01-01

    Introduction At hospital arrival, early prognostication for children after out-of-hospital cardiac arrest (OHCA) might help clinicians formulate strategies, particularly in the emergency department. In this study, we aimed to develop a simple and generally applicable bedside tool for predicting outcomes in children after cardiac arrest. Methods We analyzed data of 5,379 children who had undergone OHCA. The data were extracted from a prospectively recorded, nationwide, Utstein-style Japanese database. The primary endpoint was survival with favorable neurological outcome (Cerebral Performance Category (CPC) scale categories 1 and 2) at 1 month after OHCA. We developed a decision tree prediction model by using data from a 2-year period (2008 to 2009, n = 3,693), and the model was validated using external data from 2010 (n = 1,686). Results Recursive partitioning analysis of 11 predictors in the development cohort indicated that the best single predictor for CPC 1 and 2 at 1 month was prehospital return of spontaneous circulation (ROSC). The next predictor for children with prehospital ROSC was an initial shockable rhythm. For children without prehospital ROSC, the next best predictor was a witnessed arrest. Use of this simple decision tree prediction model permitted stratification into four outcome prediction groups: good (prehospital ROSC and initial shockable rhythm), moderately good (prehospital ROSC and initial nonshockable rhythm), poor (prehospital non-ROSC and witnessed arrest) and very poor (prehospital non-ROSC and unwitnessed arrest). Using this model, we identified patient groups with 1-month CPC 1 and 2 probabilities ranging from 0.2% to 66.2%. The validated decision tree prediction model demonstrated a sensitivity of 69.7% (95% confidence interval (CI) = 58.7% to 78.9%), a specificity of 95.2% (95% CI = 94.1% to 96.2%) and an area under the receiver operating characteristic curve of 0.88 (95% CI = 0.87 to 0.90) for predicting 1-month
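The four-way stratification described in this abstract is simple enough to express directly as code. A minimal sketch of the published decision rules (function and argument names are our own):

```python
def outcome_group(prehospital_rosc, shockable_rhythm, witnessed):
    """Four outcome-prediction groups for pediatric OHCA, following the
    decision tree in the abstract: split first on prehospital ROSC, then on
    initial rhythm (if ROSC) or on witnessed status (if no ROSC).
    The unused third predictor in each branch is simply ignored."""
    if prehospital_rosc:
        return "good" if shockable_rhythm else "moderately good"
    return "poor" if witnessed else "very poor"

# A child with prehospital ROSC and an initial shockable rhythm falls into
# the best-prognosis group (1-month CPC 1-2 probability of 66.2% in the data).
group = outcome_group(True, True, False)
```

The value of a tree this shallow is precisely that it can be applied at the bedside without any computation; the code merely makes the branching explicit.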

  19. Effect of resiniferatoxin on the noxious heat threshold temperature in the rat: a novel heat allodynia model sensitive to analgesics

    PubMed Central

    Almási, Róbert; Pethö, Gábor; Bölcskei, Kata; Szolcsányi, János

    2003-01-01

    An increasing-temperature hot plate (ITHP) was introduced to measure the noxious heat threshold (45.3±0.3°C) of unrestrained rats, which was reproducible upon repeated determinations at intervals of 5 or 30 min or 1 day. Morphine, diclofenac and paracetamol caused an elevation of the noxious heat threshold following i.p. pretreatment, the minimum effective doses being 3, 10 and 200 mg kg−1, respectively. Unilateral intraplantar injection of the VR1 receptor agonist resiniferatoxin (RTX, 0.048 nmol) induced a profound drop of heat threshold to the innocuous range with a maximal effect (8–10°C drop) 5 min after RTX administration. This heat allodynia was inhibited by pretreatment with morphine, diclofenac and paracetamol, the minimum effective doses being 1, 1 and 100 mg kg−1 i.p., respectively. The long-term sensory desensitizing effect of RTX was examined by bilateral intraplantar injection (0.048 nmol per paw) which produced, after an initial threshold drop, an elevation (up to 2.9±0.5°C) of heat threshold lasting for 5 days. The VR1 receptor antagonist iodo-resiniferatoxin (I-RTX, 0.05 nmol intraplantarly) inhibited by 51% the heat threshold-lowering effect of intraplantar RTX but not α,β-methylene-ATP (0.3 μmol per paw). I-RTX (0.1 or 1 nmol per paw) failed to alter the heat threshold either acutely (5–60 min) or on the long-term (5 days). The heat threshold of VR1 receptor knockout mice was not different from that of wild-type animals (45.6±0.5 vs 45.2±0.4°C). In conclusion, the RTX-induced drop of heat threshold measured by the ITHP is a novel heat allodynia model exhibiting a high sensitivity to analgesics. PMID:12746222

  20. Construction of a prediction model for type 2 diabetes mellitus in the Japanese population based on 11 genes with strong evidence of the association.

    PubMed

    Miyake, Kazuaki; Yang, Woosung; Hara, Kazuo; Yasuda, Kazuki; Horikawa, Yukio; Osawa, Haruhiko; Furuta, Hiroto; Ng, Maggie C Y; Hirota, Yushi; Mori, Hiroyuki; Ido, Keisuke; Yamagata, Kazuya; Hinokio, Yoshinori; Oka, Yoshitomo; Iwasaki, Naoko; Iwamoto, Yasuhiko; Yamada, Yuichiro; Seino, Yutaka; Maegawa, Hiroshi; Kashiwagi, Atsunori; Wang, He-Yao; Tanahashi, Toshihito; Nakamura, Naoto; Takeda, Jun; Maeda, Eiichi; Yamamoto, Ken; Tokunaga, Katsushi; Ma, Ronald C W; So, Wing-Yee; Chan, Juliana C N; Kamatani, Naoyuki; Makino, Hideichi; Nanjo, Kishio; Kadowaki, Takashi; Kasuga, Masato

    2009-04-01

    Prediction of disease status is one of the most important objectives of genetic studies. To select the genes with strong evidence of association with type 2 diabetes mellitus, we validated the associations of the seven candidate loci extracted in our earlier study by genotyping the samples in two independent sample panels. However, except for KCNQ1, the association of none of the remaining loci was replicated. We then selected 11 genes, KCNQ1, TCF7L2, CDKAL1, CDKN2A/B, IGF2BP2, SLC30A8, HHEX, GCKR, HNF1B, KCNJ11 and PPARG, whose associations with diabetes have already been reported and replicated, either in the literature or in this study, in the Japanese population. As no evidence of gene-gene interaction was found for any pair of the 11 loci, we constructed a prediction model for the disease using logistic regression analysis, incorporating the number of risk alleles for the 11 genes, as well as age, sex and body mass index, as independent variables. Cumulative risk assessment showed that the addition of one risk allele resulted in an average increase in the odds of disease of 1.29 (95% CI = 1.25-1.33, P = 5.4 x 10(-53)). The area under the receiver operating characteristic curve, an estimate of the power of the prediction model, was 0.72, indicating that our prediction model for type 2 diabetes is of limited, but measurable, value. Incorporation of data from additional risk loci is likely to increase the predictive power.
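Because the model found no gene-gene interaction among the 11 loci, each additional risk allele multiplies the odds by roughly the same factor. A back-of-the-envelope sketch using the reported per-allele odds ratio of 1.29 (the function is ours, for illustration only):

```python
def relative_odds(extra_risk_alleles, per_allele_or=1.29):
    """Odds of type 2 diabetes relative to a reference genotype, assuming
    each additional risk allele multiplies the odds by the same factor
    (no gene-gene interaction, as reported in the abstract)."""
    return per_allele_or ** extra_risk_alleles

# Five extra risk alleles: about a 3.6-fold increase in the odds
print(round(relative_odds(5), 2))  # → 3.57
```

In the actual model the allele count enters a logistic regression together with age, sex and body mass index; the multiplicative rule above only captures the genetic component.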

  1. Electrodynamic model of the field effect transistor application for THz/subTHz radiation detection: Subthreshold and above threshold operation

    SciTech Connect

    Dobrovolsky, V.

    2014-10-21

    This work develops an electrodynamic model of field-effect transistor (FET) operation for THz/sub-THz radiation detection. It is based on a solution of the Maxwell equations in the gate dielectric, an expression for the channel current that accounts for both the drift and diffusion components, and the current continuity equation. For the subthreshold regime and for above-threshold operation at strong inversion, the response voltage, responsivity, wave impedance, and power of ohmic loss in the gate and channel have been found, and the electrical noise equivalent power (ENEP) has been estimated. The responsivity is orders of magnitude higher, and the ENEP orders of magnitude lower, below threshold than above it. Below threshold, the electromagnetic field in the gate oxide is identical to the field of plane waves in free space; at strong inversion, by contrast, the electric field in the oxide is determined by the charging of the gate capacitance through the channel resistance.

  2. How patch size and refuge availability change interaction strength and population dynamics: a combined individual- and population-based modeling experiment

    PubMed Central

    Brose, Ulrich; Meyer, Katrin

    2017-01-01

    Knowledge on how functional responses (a measurement of feeding interaction strength) are affected by patch size and habitat complexity (represented by refuge availability) is crucial for understanding food-web stability and subsequently biodiversity. Due to their laborious character, it is almost impossible to carry out systematic empirical experiments on functional responses across wide gradients of patch sizes and refuge availabilities. Here we overcame this issue by using an individual-based model (IBM) to simulate feeding experiments. The model is based on empirically measured traits such as body-mass dependent speed and capture success. We simulated these experiments in patches ranging from sizes of petri dishes to natural patches in the field. Moreover, we varied the refuge availability within the patch independently of patch size, allowing for independent analyses of both variables. The maximum feeding rate (the maximum number of prey a predator can consume in a given time frame) is independent of patch size and refuge availability, as it is the physiological upper limit of feeding rates. Moreover, the results of these simulations revealed that a type III functional response, which is known to have a stabilizing effect on population dynamics, fitted the data best. The half saturation density (the prey density where a predator consumes half of its maximum feeding rate) increased with refuge availability but was only marginally influenced by patch size. Subsequently, we investigated how patch size and refuge availability influenced stability and coexistence of predator-prey systems. Following common practice, we used an allometric scaled Rosenzweig–MacArthur predator-prey model based on results from our in silico IBM experiments. The results suggested that densities of both populations are nearly constant across the range of patch sizes simulated, resulting from the constant interaction strength across the patch sizes. However, constant densities with

  3. How patch size and refuge availability change interaction strength and population dynamics: a combined individual- and population-based modeling experiment.

    PubMed

    Li, Yuanheng; Brose, Ulrich; Meyer, Katrin; Rall, Björn C

    2017-01-01

    Knowledge on how functional responses (a measurement of feeding interaction strength) are affected by patch size and habitat complexity (represented by refuge availability) is crucial for understanding food-web stability and subsequently biodiversity. Due to their laborious character, it is almost impossible to carry out systematic empirical experiments on functional responses across wide gradients of patch sizes and refuge availabilities. Here we overcame this issue by using an individual-based model (IBM) to simulate feeding experiments. The model is based on empirically measured traits such as body-mass dependent speed and capture success. We simulated these experiments in patches ranging from sizes of petri dishes to natural patches in the field. Moreover, we varied the refuge availability within the patch independently of patch size, allowing for independent analyses of both variables. The maximum feeding rate (the maximum number of prey a predator can consume in a given time frame) is independent of patch size and refuge availability, as it is the physiological upper limit of feeding rates. Moreover, the results of these simulations revealed that a type III functional response, which is known to have a stabilizing effect on population dynamics, fitted the data best. The half saturation density (the prey density where a predator consumes half of its maximum feeding rate) increased with refuge availability but was only marginally influenced by patch size. Subsequently, we investigated how patch size and refuge availability influenced stability and coexistence of predator-prey systems. Following common practice, we used an allometric scaled Rosenzweig-MacArthur predator-prey model based on results from our in silico IBM experiments. The results suggested that densities of both populations are nearly constant across the range of patch sizes simulated, resulting from the constant interaction strength across the patch sizes. However, constant densities with
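The type III functional response and the half-saturation density that these simulated experiments estimate have a compact standard form. A minimal sketch of the Holling type III response (parameter names are generic, not from the paper):

```python
def type_iii_feeding_rate(prey_density, f_max, half_sat):
    """Holling type III functional response:

        F(N) = F_max * N^2 / (N0^2 + N^2)

    f_max is the physiological maximum feeding rate; half_sat is the
    half-saturation density N0, defined by F(N0) = F_max / 2.
    The sigmoid (N^2) shape is what gives this response its stabilizing
    effect on predator-prey dynamics at low prey densities."""
    n_sq = prey_density ** 2
    return f_max * n_sq / (half_sat ** 2 + n_sq)

# At the half-saturation density the predator feeds at half its maximum rate:
print(type_iii_feeding_rate(10.0, 8.0, 10.0))  # → 4.0
```

The abstract's finding maps directly onto these two parameters: refuge availability shifts `half_sat` upward (prey are harder to find), while `f_max`, being a physiological limit, is unaffected by patch size or refuges.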

  4. Modeling Habitat Split: Landscape and Life History Traits Determine Amphibian Extinction Thresholds

    PubMed Central

    Fonseca, Carlos Roberto; Coutinho, Renato M.; Azevedo, Franciane; Berbert, Juliana M.; Corso, Gilberto; Kraenkel, Roberto A.

    2013-01-01

    Habitat split is a major force behind the worldwide decline of amphibian populations, causing community changes in richness and species composition. In fragmented landscapes, natural remnants, the terrestrial habitat of the adults, are frequently separated from streams, the aquatic habitat of the larvae. An important question is how this landscape configuration affects population levels and whether it can drive species to local extinction. Here, we put forward the first theoretical model of habitat split, focused on how split distance – the distance between the two required habitats – affects population size and persistence in isolated fragments. Our diffusive model shows that habitat split alone is able to generate extinction thresholds. Fragments occurring between the aquatic habitat and a given critical split distance are expected to hold viable populations, while fragments located farther away are expected to be unoccupied. Species with higher reproductive success and higher diffusion rates of post-metamorphic young are expected to have larger critical split distances. Furthermore, the model indicates that the negative effects of habitat split are poorly compensated by the positive effects of fragment size. The habitat split model improves our understanding of spatially structured populations and has relevant implications for landscape design for conservation. It puts the relation between habitat split and the decline of amphibian populations on a firm theoretical basis. PMID:23818967

  5. Sedimentary selenium as a causal factor for adverse biological effects: Toxicity thresholds and stream modeling

    SciTech Connect

    Va Derveer, W.; Canton, S.

    1995-12-31

    Selenium (Se) in the aquatic environment exhibits a strong association with particulate organic matter; as a result, measurements of waterborne concentration can be an unreliable predictor of bioaccumulation and adverse effects. Particulate-bound Se, typically measured as sedimentary Se, has been repeatedly implicated as a causal factor for Se bioaccumulation and subsequent potential for reproductive failure in fish and/or birds at sites receiving coal-fired power plant and refinery effluents as well as irrigation drainage. In fact, the premise that adverse biological effects are largely induced by sedimentary Se satisfies all of Hill's criteria for a causal association. Despite these findings, most efforts to control Se continue to focus on waterborne concentrations because sedimentary toxicity thresholds are largely unknown. Sedimentary Se and associated biological effects data from studies of Se-bearing industrial effluent and irrigation drainage were compiled to initiate development of biological effects thresholds. The probability of adverse effects on fish or birds appears to be low up to a sedimentary Se concentration of about 2.8 µg/g dry weight and high at 6.4 µg/g dry weight (10th and 50th percentiles of the effects data, respectively). In addition, a preliminary regression model was derived for predicting dissolved-to-sedimentary Se transfer in streams as an interactive function of site-specific sedimentary organic carbon content (R² = 0.870, p < 0.001), based on irrigation drainage studies in Colorado. This interaction of dissolved Se with sedimentary organic carbon provides a possible explanation for the variable biological response to waterborne Se: organic-rich sites are predisposed to greater Se bioaccumulation and subsequent biological effects than organic-poor sites.

  6. Adaptive Thresholds

    SciTech Connect

    Bremer, P. -T.

    2014-08-26

    ADAPT is a topological analysis code that computes local thresholds, in particular relevance-based thresholds, for features defined in scalar fields. The initial target application is vortex detection, but the software applies more generally to any threshold-based feature definition.

  7. Predicting the threshold of pulse-train electrical stimuli using a stochastic auditory nerve model: the effects of stimulus noise.

    PubMed

    Xu, Yifang; Collins, Leslie M

    2004-04-01

    The incorporation of low levels of noise into an electrical stimulus has been shown to improve auditory thresholds in some human subjects (Zeng et al., 2000). In this paper, thresholds for noise-modulated pulse-train stimuli are predicted utilizing a stochastic neural-behavioral model of ensemble fiber responses to biphasic stimuli. The neural refractory effect is described using a Markov model for a noise-free pulse-train stimulus, and a closed-form solution for the steady-state neural response is provided. For noise-modulated pulse-train stimuli, a recursive method using conditional probability is utilized to track the neural responses to each successive pulse. A neural spike count rule has been presented for both threshold and intensity discrimination under the assumption that auditory perception occurs via integration over a relatively long time period (Bruce et al., 1999). An alternative approach originates from the hypothesis of the multilook model (Viemeister and Wakefield, 1991), which argues that auditory perception is based on several shorter time integrations; this may suggest an N-of-M model for prediction of pulse-train threshold. It motivates analyzing the neural response to each individual pulse within a pulse train, each such response being considered a brief look. A logarithmic rule is hypothesized for the pulse-train threshold. Predictions from the multilook model are shown to match trends in psychophysical data for noise-free stimuli that are not always matched by the long-time integration rule. Theoretical predictions indicate that threshold decreases as noise variance increases. Theoretical models of the neural response to pulse-train stimuli not only reduce computational overhead but also facilitate utilization of signal detection theory and are easily extended to multichannel psychophysical tasks.

  8. A Comment on a Threshold Rule Applied to the Retrieval Decision Model. Technical Note.

    ERIC Educational Resources Information Center

    Kraft, Donald H.

    The retrieval decision problem is considered from the viewpoint of a decision theory approach. A threshold rule based on earlier rules for indexing decisions is considered and analyzed for retrieval decisions as a measure of retrieval performance. The threshold rule is seen as a good descriptive design measure of what a reasonable retrieval system…

  9. Solving Cordelia's Dilemma: Threshold Concepts within a Punctuated Model of Learning

    ERIC Educational Resources Information Center

    Kinchin, Ian M.

    2010-01-01

    The consideration of threshold concepts is offered in the context of biological education as a theoretical framework that may have utility in the teaching and learning of biology at all levels. Threshold concepts may provide a mechanism to explain the observed punctuated nature of conceptual change. This perspective raises the profile of periods…

  10. Exploration of lagged relationships between mastitis and milk yield in dairy cows using a Bayesian structural equation Gaussian-threshold model

    PubMed Central

    Wu, Xiao-Lin; Heringstad, Bjørg; Gianola, Daniel

    2008-01-01

    A Gaussian-threshold model is described under the general framework of structural equation models for inferring simultaneous and recursive relationships between binary and Gaussian characters, and estimating genetic parameters. Relationships between clinical mastitis (CM) and test-day milk yield (MY) in first-lactation Norwegian Red cows were examined using a recursive Gaussian-threshold model. For comparison, the data were also analyzed using a standard Gaussian-threshold, a multivariate linear model, and a recursive multivariate linear model. The first 180 days of lactation were arbitrarily divided into three periods of equal length, in order to investigate how these relationships evolve in the course of lactation. The recursive model showed negative within-period effects from (liability to) CM to test-day MY in all three lactation periods, and positive between-period effects from test-day MY to (liability to) CM in the following period. Estimates of recursive effects and of genetic parameters were time-dependent. The results suggested unfavorable effects of production on liability to mastitis, and dynamic relationships between mastitis and test-day MY in the course of lactation. Fitting recursive effects had little influence on the estimation of genetic parameters. However, some differences were found in the estimates of heritability, genetic, and residual correlations, using different types of models (Gaussian-threshold vs. multivariate linear). PMID:18558070

  11. Multivariate threshold model analysis of clinical mastitis in multiparous norwegian dairy cattle.

    PubMed

    Heringstad, B; Chang, Y M; Gianola, D; Klemetsdal, G

    2004-09-01

    A Bayesian multivariate threshold model was fitted to clinical mastitis (CM) records from 372,227 daughters of 2411 Norwegian Dairy Cattle (NRF) sires. All cases of veterinary-treated CM occurring from 30 d before first calving to culling or 300 d after third calving were included. Lactations were divided into 4 intervals: -30 to 0 d, 1 to 30 d, 31 to 120 d, and 121 to 300 d after calving. Within each interval, absence or presence of CM was scored as "0" or "1" based on the CM episodes. A 12-variate (3 lactations x 4 intervals) threshold model was used, assuming that CM was a different trait in each interval. Residuals were assumed correlated within lactation but independent between lactations. The model for liability to CM had interval-specific effects of month-year of calving, age at calving (first lactation) or calving interval (second and third lactations), herd-5-yr-period, sire of the cow, plus a residual. The posterior mean of heritability of liability to CM was 0.09 and 0.05 in the first and last intervals, respectively, and between 0.06 and 0.07 for the other intervals. Posterior means of genetic correlations of liability to CM between intervals ranged from 0.24 (between intervals 1 and 12) to 0.73 (between intervals 1 and 2), suggesting interval-specific genetic control of resistance to mastitis. Residual correlations ranged from 0.08 to 0.17 for adjacent intervals, and between -0.01 and 0.03 for nonadjacent intervals. Trends of mean sire posterior means by birth year of daughters were used to assess genetic change. The 12 traits showed similar trends, with little or no genetic change from 1976 to 1986, and genetic improvement in resistance to mastitis thereafter. Annual genetic change was larger for intervals in first lactation when compared with second or third lactation. Within lactation, genetic change was larger for intervals early in lactation, and more so in the first lactation. This reflects that selection against mastitis in NRF has emphasized mainly CM

  12. Overcoming pain thresholds with multilevel models-an example using quantitative sensory testing (QST) data.

    PubMed

    Hirschfeld, Gerrit; Blankenburg, Markus R; Süß, Moritz; Zernikow, Boris

    2015-01-01

    The assessment of somatosensory function is a cornerstone of research and clinical practice in neurology. Recent initiatives have developed novel protocols for quantitative sensory testing (QST). Application of these methods has led to intriguing findings, such as the presence of lower pain thresholds in healthy children compared to healthy adolescents. In this article, we (re-)introduce the basic concepts of signal detection theory (SDT) as a method to investigate such differences in somatosensory function in detail. SDT describes participants' responses according to two parameters: sensitivity and response bias. Sensitivity refers to individuals' ability to discriminate between painful and non-painful stimulations. Response bias refers to individuals' criterion for giving a "painful" response. We describe how multilevel models can be used to estimate these parameters and to overcome central critiques of these methods. As an example, we apply these methods to data from the mechanical pain sensitivity test of the QST protocol. The results show that adolescents are more sensitive to mechanical pain and contradict the idea that younger children simply use more lenient criteria to report pain. Overall, we hope that wider use of multilevel modeling to describe somatosensory functioning may advance neurology research and practice.
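The two SDT parameters can be recovered from hit and false-alarm rates with the standard equal-variance Gaussian formulas. A minimal sketch of that aggregate calculation (the article's multilevel approach instead estimates these parameters per participant within one model; the function below is ours):

```python
from statistics import NormalDist

def sdt_parameters(hit_rate, false_alarm_rate):
    """Equal-variance Gaussian SDT estimates from one condition:

        sensitivity    d' = z(H) - z(FA)
        response bias  c  = -(z(H) + z(FA)) / 2

    where z is the inverse standard-normal CDF. Larger d' means better
    discrimination of painful from non-painful stimuli; c > 0 means a
    conservative criterion (reluctance to respond "painful")."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -(z(hit_rate) + z(false_alarm_rate)) / 2
    return d_prime, criterion

# Symmetric hit/false-alarm rates: moderate sensitivity, zero bias
d_prime, criterion = sdt_parameters(0.84, 0.16)
```

The article's point is visible in this parameterization: a group difference in raw "pain" responses can reflect either a shift in `d_prime` (true sensitivity) or in `criterion` (reporting style), and only a model that estimates both can tell them apart.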

  13. A study of jet fuel sooting tendency using the threshold sooting index (TSI) model

    SciTech Connect

    Yang, Yi; Boehman, Andre L.; Santoro, Robert J.

    2007-04-15

    Fuel composition can have a significant effect on soot formation during gas turbine combustion. Consequently, this paper contains a comprehensive review of the relationship between fuel hydrocarbon composition and soot formation in gas turbine combustors. Two levels of correlation are identified. First, lumped fuel composition parameters such as hydrogen content and smoke point, which are conventionally used to represent fuel sooting tendency, are correlated with soot formation in practical combustors. Second, detailed fuel hydrocarbon composition is correlated with these lumped parameters. The two-level correlation makes it possible to predict soot formation in practical combustors from basic fuel composition data. Threshold sooting index (TSI), which correlates linearly with the ratio of fuel molecular weight and smoke point in a diffusion flame, is proposed as a new lumped parameter for sooting tendency correlation. It is found that the TSI model correlates excellently with hydrocarbon compositions over a wide range of fuel samples. Also, in predicting soot formation in actual combustors, the TSI model produces the best results overall in comparison with other previously reported correlating parameters, including hydrogen content, smoke point, and composite predictors containing more than one parameter. (author)

  14. T Lymphocyte Activation Threshold and Membrane Reorganization Perturbations in Unique Culture Model

    NASA Technical Reports Server (NTRS)

    Adams, C. L.; Sams, C. F.

    2000-01-01

    Quantitative activation thresholds and cellular membrane reorganization are mechanisms by which resting T cells modulate their response to activating stimuli. Here we demonstrate perturbations of these cellular processes in a unique culture system that non-invasively inhibits T lymphocyte activation. During clinorotation, the T cell activation threshold is increased 5-fold. This increased threshold involves a mechanism independent of TCR triggering. Recruitment of lipid rafts to the activation site is impaired during clinorotation but does occur with increased stimulation. This study describes a situation in which an individual cell senses a change in its physical environment and alters its cell biological behavior.

  15. Coherence thresholds in models of language change and evolution: The effects of noise, dynamics, and network of interactions

    NASA Astrophysics Data System (ADS)

    Tavares, J. M.; Telo da Gama, M. M.; Nunes, A.

    2008-04-01

    A simple model of language evolution proposed by Komarova, Niyogi, and Nowak is characterized by a payoff in communicative function and by an error in learning that measure the accuracy in language acquisition. The time scale for language change is generational, and the model’s equations in the mean-field approximation are a particular case of the replicator-mutator equations of evolutionary dynamics. In well-mixed populations, this model exhibits a critical coherence threshold; i.e., a minimal accuracy in the learning process is required to maintain linguistic coherence. In this work, we analyze in detail the effects of different fitness-based dynamics driving linguistic coherence and of the network of interactions on the nature of the coherence threshold by performing numerical simulations and theoretical analyses of three different models of language change in finite populations with two types of structure: fully connected networks and regular random graphs. We find that although the threshold of the original replicator-mutator evolutionary model is robust with respect to the structure of the network of contacts, the coherence threshold of related fitness-driven models may be strongly affected by this feature.
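The mean-field dynamics behind the coherence threshold can be simulated in a few lines. A minimal sketch of the replicator-mutator equations for n mutually unintelligible languages, with an identity payoff matrix and a uniform learning-error kernel (parameter values are illustrative; the threshold location depends on n and the payoff structure):

```python
import random

def coherence(q, n=5, steps=20000, dt=0.01, seed=1):
    """Mean-field replicator-mutator dynamics for n languages.
    Payoff F is the identity (speakers understand only their own language),
    so fitness f_i = x_i. Learning kernel: Q[i][i] = q, off-diagonal
    entries (1 - q)/(n - 1). Returns the final frequency of the most
    common language after Euler integration."""
    rng = random.Random(seed)
    x = [rng.random() for _ in range(n)]
    total = sum(x)
    x = [v / total for v in x]                       # random initial frequencies
    Q = [[q if i == j else (1 - q) / (n - 1) for j in range(n)]
         for i in range(n)]
    for _ in range(steps):
        f = x[:]                                     # f_i = sum_j F_ij x_j, F = identity
        phi = sum(fi * xi for fi, xi in zip(f, x))   # mean fitness
        dx = [sum(f[j] * x[j] * Q[j][i] for j in range(n)) - phi * x[i]
              for i in range(n)]
        x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
    return max(x)

# Accurate learning (q near 1): one language comes to dominate (coherence).
# Inaccurate learning: frequencies settle near the uniform state 1/n.
```

Running `coherence(0.98)` versus `coherence(0.2)` reproduces the qualitative transition the abstract describes: above the coherence threshold the dominant language approaches fixation, below it the population stays linguistically incoherent.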

  16. Evaluation of landslide reactivation: A modified rainfall threshold model based on historical records of rainfall and landslides

    NASA Astrophysics Data System (ADS)

    Floris, Mario; Bozzano, Francesca

    2008-02-01

    This study proposes a modification of the conventional threshold model for assessing the probability of rainfall-induced landslide reactivation. The modification is based on the consideration that exceedance of a pre-determined rainfall threshold is a necessary but not sufficient condition to reactivate a landslide. The proposed method calculates the probability of reactivation as a function of the probability of exceedance of a pre-determined rainfall threshold, as well as the probability of occurrence of a landslide after such exceedance. The data for the calculation were obtained from historical records of landslides and rainfall. The method was applied to two complex landslides ("San Donato" and "La Salsa") involving fine-grained debris in the southern section of the Apennine foredeep. The minimum rainfall threshold triggering landslide reactivation on the two slopes was determined by examining rainfall patterns during the 180 days preceding the slide events. For the San Donato and La Salsa landslides, the minimum triggering threshold consists of rainfall events lasting 15 days, with cumulated rainfall exceeding 150 and 180 mm, respectively. Based on hydrological and statistical analyses, the annual probabilities of exceeding the thresholds were estimated to be 0.38 and 0.25, respectively. During the period from 1950 to 1987, the minimum threshold was exceeded 14 times, and four reactivations occurred at San Donato; whereas, the threshold was exceeded 10 times and three reactivations occurred at La Salsa. Hence, the probabilities of landsliding after exceedance of the minimum rainfall threshold are 4/14 and 3/10, respectively. Finally, annual reactivation probabilities were calculated to be 0.11 and 0.08, respectively. 
The reliability of the minimum rainfall threshold was tested by: i) simulating variations in the stress-strain behavior of the slopes as a result of fluctuations in the water table from normal to extreme values; and ii) analyzing the results of
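The probability chain in this record is a simple multiplication, and reproducing it is a useful sanity check on the reported figures. A minimal sketch (the function name is ours):

```python
def annual_reactivation_probability(p_threshold_exceeded,
                                    n_exceedances,
                                    n_reactivations):
    """Annual P(reactivation) = P(threshold exceeded in a year)
                                * P(reactivation | exceedance),
    with the conditional probability estimated from the historical record
    of exceedances and observed reactivations."""
    return p_threshold_exceeded * n_reactivations / n_exceedances

# San Donato: 0.38 * 4/14 ≈ 0.11;  La Salsa: 0.25 * 3/10 ≈ 0.08
san_donato = annual_reactivation_probability(0.38, 14, 4)
la_salsa = annual_reactivation_probability(0.25, 10, 3)
```

This is exactly the modification the authors propose: the conventional approach would stop at the exceedance probabilities (0.38 and 0.25), whereas conditioning on the historical reactivation record scales them down to 0.11 and 0.08.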

  17. Improving Landslide Susceptibility Modeling Using an Empirical Threshold Scheme for Excluding Landslide Deposition

    NASA Astrophysics Data System (ADS)

    Tsai, F.; Lai, J. S.; Chiang, S. H.

    2015-12-01

    Landslides are frequently triggered by typhoons and earthquakes in Taiwan, causing serious economic losses and human casualties. Remotely sensed images and geo-spatial data comprising land-cover and environmental information have been widely used to produce landslide inventories and causative factors for slope stability analysis. Landslide susceptibility, on the other hand, represents the spatial likelihood of landslide occurrence and is an important basis for landslide risk assessment. As multi-temporal satellite images become popular and affordable, they are commonly used to generate landslide inventories for subsequent analysis. However, it is usually difficult to distinguish different landslide sub-regions (scarp, debris flow, deposition, etc.) directly from remote sensing imagery. Consequently, landslide extents extracted by image-based visual interpretation and automatic detection may contain many deposition areas, which can reduce the fidelity of the landslide susceptibility model. This study developed an empirical thresholding scheme based on terrain characteristics for eliminating deposition areas from detected landslides to improve landslide susceptibility modeling. A Bayesian network classifier is utilized to build a landslide susceptibility model and to predict subsequent rainfall-induced shallow landslides in the Shimen reservoir watershed in northern Taiwan. Eleven causative factors are considered, including terrain slope, aspect, curvature, elevation, geology, land use, NDVI, soil, and distance to fault, river and road. Landslide areas detected using satellite images acquired before and after eight typhoons between 2004 and 2008 are collected as the main inventory for training and verification. In the analysis, previous landslide events are used as training data to predict the samples of the next event. The results are then compared with recorded landslide areas in the inventory to evaluate the accuracy. Experimental results

  18. [Automatic detection of exudates in retinal images based on threshold moving average models].

    PubMed

    Wisaeng, K; Hiransakolwong, N; Pothiruk, E

    2015-01-01

    Since exudate diagnostic procedures require the attention of an expert ophthalmologist as well as regular monitoring of the disease, the workload of expert ophthalmologists will eventually exceed current screening capabilities. Retinal imaging technology, already part of screening practice, offers a potentially powerful solution. In this paper, a fast and robust automatic exudate detection method based on moving-average histogram models of the fuzzy image was applied to derive an improved histogram. After segmentation of the exudate candidates, the true exudates were pruned using a Sobel edge detector and automatic Otsu thresholding, resulting in accurate localization of the exudates in digital retinal images. To compare the performance of exudate detection methods, we constructed a large database of digital retinal images. The method was trained on a set of 200 retinal images and tested on a completely independent set of 1220 retinal images. Results show that the exudate detection method achieves an overall sensitivity, specificity, and accuracy of 90.42%, 94.60%, and 93.69%, respectively.
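    The Otsu step used for pruning exudate candidates can be sketched as a stand-alone NumPy implementation (a generic between-class-variance search, not the authors' full pipeline):

```python
import numpy as np

def otsu_threshold(image):
    """Exhaustive-search Otsu threshold for an 8-bit grayscale image:
    pick the cutoff that maximizes the between-class variance."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum_w = np.cumsum(hist)                      # class-0 weight up to t
    cum_mean = np.cumsum(hist * np.arange(256))  # class-0 intensity mass
    best_t, best_var = 0, -1.0
    for t in range(255):
        w0, w1 = cum_w[t], total - cum_w[t]
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_mean[t] / w0
        m1 = (cum_mean[-1] - cum_mean[t]) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic bimodal image: dark fundus background plus a bright exudate-like blob.
rng = np.random.default_rng(0)
img = rng.normal(40, 10, (64, 64)).clip(0, 255).astype(np.uint8)
img[20:30, 20:30] = rng.normal(200, 10, (10, 10)).clip(0, 255).astype(np.uint8)
t = otsu_threshold(img)
print(t)  # lands between the background and exudate intensity modes
```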

  19. Electro-Thermal Model of Threshold Switching in TaOx-Based Devices.

    PubMed

    Goodwill, Jonathan M; Sharma, Abhishek A; Li, Dasheng; Bain, James A; Skowronski, Marek

    2017-04-05

    Pulsed and quasi-static current-voltage (I-V) characteristics of threshold switching in TiN/TaOx/TiN crossbar devices were measured as a function of stage temperature (200-495 K) and oxygen flow during the deposition of TaOx. A comparison of the pulsed and quasi-static characteristics in the high resistance part of the I-V revealed that Joule self-heating significantly affected the current and was a likely source of negative differential resistance (NDR) and thermal runaway. The experimental quasi-static I-V's were simulated using a finite element electro-thermal model that coupled current and heat flow and incorporated an external circuit with an appropriate load resistor. The simulation reproduced the experimental I-V including the OFF-state at low currents and the volatile NDR region. In the NDR region, the simulation predicted spontaneous current constriction forming a small-diameter hot conducting filament with a radius of 250 nm in a 6 μm diameter device.

  20. High-precision percolation thresholds and Potts-model critical manifolds from graph polynomials

    NASA Astrophysics Data System (ADS)

    Jacobsen, Jesper Lykke

    2014-04-01

    The critical curves of the q-state Potts model can be determined exactly for regular two-dimensional lattices G that are of the three-terminal type. This comprises the square, triangular, hexagonal and bow-tie lattices. Jacobsen and Scullard have defined a graph polynomial PB(q, v) that gives access to the critical manifold for general lattices. It depends on a finite repeating part of the lattice, called the basis B, and its real roots in the temperature variable v = e^K - 1 provide increasingly accurate approximations to the critical manifolds upon increasing the size of B. Using transfer matrix techniques, these authors computed PB(q, v) for large bases (up to 243 edges), obtaining determinations of the ferromagnetic critical point vc > 0 for the (4, 8^2), kagome, and (3, 12^2) lattices to a precision (of the order 10^-8) slightly superior to that of the best available Monte Carlo simulations. In this paper we describe a more efficient transfer matrix approach to the computation of PB(q, v) that relies on a formulation within the periodic Temperley-Lieb algebra. This makes possible computations for substantially larger bases (up to 882 edges), and the precision on vc is hence taken to the range 10^-13. We further show that a large variety of regular lattices can be cast in a form suitable for this approach. This includes all Archimedean lattices, their duals and their medials. For all these lattices we tabulate high-precision estimates of the bond percolation thresholds pc and Potts critical points vc. We also trace and discuss the full Potts critical manifold in the (q, v) plane, paying special attention to the antiferromagnetic region v < 0. Finally, we adapt the technique to site percolation as well, and compute the polynomials PB(p) for certain Archimedean and dual lattices (those having only cubic and quartic vertices), using very large bases (up to 243 vertices). This produces the site percolation thresholds pc to a precision of the order of 10^-9.
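    The root-finding step behind such determinations can be illustrated on the simplest case: for the square lattice the critical manifold is known exactly, v^2 = q, and the smallest basis already reproduces it, so a bisection on PB(v) = v^2 - q recovers the bond-percolation threshold pc = 1/2 in the q → 1 limit. This is a toy sketch; the paper's transfer-matrix computations on bases with hundreds of edges are far beyond it.

```python
def bisect_root(f, lo, hi, tol=1e-12):
    """Bisection for a sign-changing root of f on [lo, hi]."""
    f_lo = f(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) * f_lo <= 0.0:
            hi = mid
        else:
            lo, f_lo = mid, f(mid)
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

q = 1.0                          # the q -> 1 Potts limit is bond percolation
crit_poly = lambda v: v * v - q  # exact critical polynomial, square lattice
vc = bisect_root(crit_poly, 0.0, 10.0)
pc = vc / (1.0 + vc)             # map temperature variable v to bond probability
print(round(pc, 6))  # 0.5, the exact square-lattice bond threshold
```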

  1. A population-based Habitable Zone perspective

    NASA Astrophysics Data System (ADS)

    Zsom, Andras

    2015-08-01

    What can we tell about exoplanet habitability if currently only the stellar properties, planet radius, and the incoming stellar flux are known? The Habitable Zone (HZ) is the region around stars where planets can harbor liquid water on their surfaces. The HZ is traditionally conceived as a sharp region around the star because it is calculated for one planet with specific properties, e.g., Earth-like or desert planets, or rocky planets with H2 atmospheres. Such a planet-specific approach is limiting because the planets’ atmospheric and geophysical properties, which influence the surface climate and the presence of liquid water, are currently unknown but expected to be diverse. A statistical HZ description is outlined which does not select one specific planet type. Instead, the atmospheric and surface properties of exoplanets are treated as random variables and a continuous range of planet scenarios is considered. Various probability density functions are assigned to each observationally unconstrained random variable, and a combination of Monte Carlo sampling and climate modeling is used to generate synthetic exoplanet populations with known surface climates. Then, the properties of the liquid-water-bearing subpopulation are analyzed. Given our current observational knowledge of small exoplanets, the HZ takes the form of a weakly constrained but smooth probability function. The model shows that the HZ has an inner edge: it is unlikely that planets receiving two to three times more stellar radiation than Earth can harbor liquid water. But a clear outer edge is not seen: a planet that receives a fraction of Earth's stellar radiation (1-10%) can be habitable if the greenhouse effect of the atmosphere is strong enough. The main benefit of the population-based approach is that it will be refined over time as new data on exoplanets and their atmospheres become available.
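    The Monte Carlo idea can be sketched with a zero-dimensional energy-balance climate: sample albedo and greenhouse warming from assumed distributions and count the fraction of planets with surface temperatures in the liquid-water range. The distributions and the simple additive greenhouse term are illustrative assumptions, not the paper's climate model:

```python
import numpy as np

SIGMA = 5.670e-8   # Stefan-Boltzmann constant [W m^-2 K^-4]
S_EARTH = 1361.0   # stellar flux received by Earth [W m^-2]

rng = np.random.default_rng(0)

def habitable_fraction(rel_flux, n=20000):
    """Fraction of a synthetic planet population whose surface temperature
    falls in the liquid-water range (273-373 K) at a given stellar flux
    relative to Earth. Albedo and greenhouse warming are drawn from
    illustrative distributions, not observationally derived ones."""
    albedo = rng.uniform(0.0, 0.7, n)
    greenhouse = rng.uniform(0.0, 200.0, n)   # hypothetical warming [K]
    t_eq = (rel_flux * S_EARTH * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25
    t_surf = t_eq + greenhouse
    return float(np.mean((t_surf > 273.0) & (t_surf < 373.0)))

for f in (0.1, 1.0, 3.0):
    print(f, habitable_fraction(f))
```

    Even this toy version reproduces the qualitative asymmetry: the habitable fraction drops sharply at several times Earth's flux (an inner edge), while low-flux planets remain habitable whenever the sampled greenhouse warming is large.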

  2. Transfer model of lead in soil-carrot (Daucus carota L.) system and food safety thresholds in soil.

    PubMed

    Ding, Changfeng; Li, Xiaogang; Zhang, Taolin; Wang, Xingxiang

    2015-09-01

    Reliable empirical models describing lead (Pb) transfer in soil-plant systems are needed to improve soil environmental quality standards. A greenhouse experiment was conducted to develop soil-plant transfer models to predict Pb concentrations in carrot (Daucus carota L.). Soil thresholds for food safety were then derived inversely using the prediction model in view of the maximum allowable limit for Pb in food. The 2 most important soil properties that influenced carrot Pb uptake factor (ratio of Pb concentration in carrot to that in soil) were soil pH and cation exchange capacity (CEC), as revealed by path analysis. Stepwise multiple linear regression models were based on soil properties and the pseudo total (aqua regia) or extractable (0.01 M CaCl2 and 0.005 M diethylenetriamine pentaacetic acid) soil Pb concentrations. Carrot Pb contents were best explained by the pseudo total soil Pb concentrations in combination with soil pH and CEC, with the percentage of variation explained being up to 93%. The derived soil thresholds based on added Pb (total soil Pb with the geogenic background part subtracted) have the advantage of better applicability to soils with high natural background Pb levels. Validation of the thresholds against data from field trials and literature studies indicated that the proposed thresholds are reasonable and reliable.
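    The inverse derivation of a soil threshold from a log-linear transfer model can be sketched as follows; the regression coefficients and the food-safety limit are hypothetical placeholders, not the paper's fitted values:

```python
import math

# Hypothetical regression coefficients (placeholders, not the fitted values):
B0, B1, B2, B3 = 0.5, 0.9, -0.25, -0.4

def predict_carrot_pb(soil_pb, ph, cec):
    """Log-linear soil-to-carrot Pb transfer model [mg/kg]."""
    log_c = B0 + B1 * math.log10(soil_pb) + B2 * ph + B3 * math.log10(cec)
    return 10.0 ** log_c

def soil_threshold(food_limit, ph, cec):
    """Invert the transfer model for the soil Pb concentration at which
    carrot Pb reaches the food-safety limit."""
    log_s = (math.log10(food_limit) - B0 - B2 * ph - B3 * math.log10(cec)) / B1
    return 10.0 ** log_s

limit = 0.1  # illustrative food-safety limit [mg/kg]
t = soil_threshold(limit, ph=6.5, cec=15.0)
print(t, predict_carrot_pb(t, ph=6.5, cec=15.0))  # round trip recovers the limit
```

    Because the model is monotone in soil Pb, the inversion is exact: predicting carrot Pb at the derived threshold returns the food limit, which is the consistency check used when thresholds are tabulated per soil type.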

  3. Use of a threshold animal model to estimate calving ease and stillbirth (co)variance components for US Holsteins

    Technology Transfer Automated Retrieval System (TEKTRAN)

    (Co)variance components for calving ease and stillbirth in US Holsteins were estimated using a single-trait threshold animal model and two different sets of data edits. Six sets of approximately 250,000 records each were created by randomly selecting herd codes without replacement from the data used...

  4. SEMICONDUCTOR DEVICES: Modeling and discussion of threshold voltage for a multi-floating gate FET pH sensor

    NASA Astrophysics Data System (ADS)

    Zhaoxia, Shi; Dazhong, Zhu

    2009-11-01

    Research into new pH sensors fabricated by the standard CMOS process is currently a hot topic. The new pH-sensing multi-floating gate field effect transistor is found to have a very large threshold voltage, unlike the normal ion-sensitive field effect transistor. After analyzing all the interface layers of the structure, a new sensing model based on Gauss's theorem and the charge neutrality principle is developed in this paper. According to the model, the charge trapped on the multi-floating gate during processing and the thickness of the sensitive layer are the main causes of the large threshold voltage. The model also shows that removing the charge on the multi-floating gate is an effective way to decrease the threshold voltage. Test results for three different standard pH buffer solutions confirm the model and point the way to solving the large-threshold-voltage problem.

  5. Noisy threshold in neuronal models: connections with the noisy leaky integrate-and-fire model.

    PubMed

    Dumont, G; Henry, J; Tarniceriu, C O

    2016-12-01

    Providing an analytical treatment of the stochastic features of neuronal dynamics is one of the biggest current challenges in mathematical biology. The noisy leaky integrate-and-fire model and its associated Fokker-Planck equation are probably the most popular way to deal with neural variability. Another well-known formalism is the escape-rate model: a model giving the probability that a neuron fires at a certain time knowing the time elapsed since its last action potential. This model leads to a so-called age-structured system, a partial differential equation with non-local boundary condition well known in the field of population dynamics, where the age of a neuron is the amount of time passed since its previous spike. In this theoretical paper, we investigate the mathematical connection between the two formalisms. We derive an integral transform of the solution to the age-structured model into the solution of the Fokker-Planck equation. This integral transform highlights the link between the two stochastic processes. As far as we know, an explicit mathematical correspondence between the two solutions has not been introduced until now.
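    The noisy leaky integrate-and-fire dynamics underlying the Fokker-Planck description can be sketched with a simple Euler-Maruyama simulation; all parameter values are illustrative:

```python
import numpy as np

def lif_spike_times(mu=1.2, sigma=0.3, tau=0.02, v_th=1.0, v_reset=0.0,
                    dt=1e-4, t_max=2.0, seed=1):
    """Euler-Maruyama integration of the noisy leaky integrate-and-fire
    model dV = ((mu - V)/tau) dt + (sigma/sqrt(tau)) dW, with reset to
    v_reset whenever V crosses the threshold v_th."""
    rng = np.random.default_rng(seed)
    v, spikes = v_reset, []
    n_steps = int(t_max / dt)
    noise = rng.standard_normal(n_steps)
    for i in range(n_steps):
        v += (mu - v) * dt / tau + sigma * np.sqrt(dt / tau) * noise[i]
        if v >= v_th:
            spikes.append((i + 1) * dt)
            v = v_reset
    return spikes

spikes = lif_spike_times()
print(len(spikes))  # suprathreshold drive (mu > v_th) produces regular firing
```

    The inter-spike intervals of such a simulation are the quantity that the age-structured (escape-rate) formalism describes through the time elapsed since the last spike.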

  6. Predicting Bed Grain Size in Threshold Channels Using Lidar Digital Elevation Models

    NASA Astrophysics Data System (ADS)

    Snyder, N. P.; Nesheim, A. O.; Wilkins, B. C.; Edmonds, D. A.

    2011-12-01

    Over the past 20 years, researchers have developed GIS-based algorithms to extract channel networks and measure longitudinal profiles from digital elevation models (DEMs), and have used these to study stream morphology in relation to tectonics, climate and ecology. The accuracy of stream elevations from traditional DEMs (10-50 m pixels) is typically limited by the contour interval (3-20 m) of the rasterized topographic map source. This is a particularly severe limitation in low-relief watersheds, where 3 m of channel elevation change may occur over several km. Lidar DEMs (~1 m pixels) allow researchers to resolve channel elevation changes of ~0.5 m, enabling reach-scale calculations of gradient, which is the most important parameter for understanding channel processes at that scale. Lidar DEMs have the additional advantage of allowing users to make estimates of channel width. We present a process-based model that predicts median bed grain size in threshold gravel-bed channels from lidar slope and width measurements using the Shields and Manning equations. We compare these predictions to field grain size measurements in segments of three Maine rivers. Like many paraglacial rivers, these have longitudinal profiles characterized by relatively steep (gradient >0.002) and flat (gradient <0.0005) segments, with length scales of several km. This heterogeneity corresponds to strong variations in channel form, sediment supply, bed grain size, and aquatic habitat characteristics. The model correctly predicts bed sediment size within a factor of two in ~70% of the study sites. The model works best in single-thread channels with relatively low sediment supply, and poorly in depositional, multi-thread and/or fine (median grain size <20 mm) reaches. We evaluate the river morphology (using field and lidar measurements) in the context of the Parker et al. (2007) hydraulic geometry relations for single-thread gravel-bed rivers, and find correspondence in the locations where both
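    The Shields-Manning chain of the model can be sketched as follows; the discharge, Manning roughness, and critical Shields number are illustrative assumptions, not the paper's calibrated values:

```python
def predict_d50(slope, width, discharge, n_manning=0.035, theta_c=0.045,
                rho=1000.0, rho_s=2650.0, g=9.81):
    """Median grain size [m] of a threshold channel from reach slope and
    width. Flow depth comes from the Manning equation for a wide
    rectangular channel (hydraulic radius ~ depth); bed shear stress
    tau = rho*g*h*S then sets D50 through the critical Shields number."""
    # Manning: Q = (1/n) * width * h^(5/3) * S^(1/2)  ->  solve for depth h
    h = (discharge * n_manning / (width * slope ** 0.5)) ** 0.6
    tau = rho * g * h * slope                    # bed shear stress [Pa]
    return tau / (theta_c * (rho_s - rho) * g)   # Shields threshold criterion

# Steep vs flat segments of a paraglacial river, same hypothetical bankfull flow:
d_steep = predict_d50(slope=0.004, width=30.0, discharge=100.0)
d_flat = predict_d50(slope=0.0004, width=30.0, discharge=100.0)
print(d_steep, d_flat)  # steeper segment implies coarser bed material
```

    With lidar-derived slope and width per reach and an assumed bankfull discharge, this kind of calculation yields the factor-of-two D50 predictions the abstract describes.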

  7. Modelling single shot damage thresholds of multilayer optics for high-intensity short-wavelength radiation sources.

    PubMed

    Loch, R A; Sobierajski, R; Louis, E; Bosgra, J; Bijkerk, F

    2012-12-17

    The single shot damage thresholds of multilayer optics for high-intensity short-wavelength radiation sources are theoretically investigated using a model developed on the basis of experimental data obtained at the FLASH and LCLS free electron lasers. We compare the radiation hardness of commonly used multilayer optics and propose new material combinations selected for a high damage threshold. Our study demonstrates that the damage thresholds of multilayer optics can vary over a large range of incidence fluences and can be as high as several hundred mJ/cm^2. This strongly suggests that multilayer mirrors are serious candidates for damage-resistant optics. In particular, multilayer optics based on Li2O spacers are very promising for use in current and future short-wavelength radiation sources.

  8. Irreversible mean-field model of the critical behavior of charge-density waves below the threshold for sliding

    NASA Astrophysics Data System (ADS)

    Sornette, Didier

    1993-05-01

    A mean-field (MF) model of the critical behavior of charge-density waves below the threshold for sliding is proposed, which replaces the combined effect of the pinning force and of the forces exerted by the neighbors on a given particle n by an effective force threshold Xn. It allows one to rationalize the numerical results of Middleton and Fisher [Phys. Rev. Lett. 66 (1991) 92] on the divergence of the polarization and of the largest correlation length and of Pla and Nori [Phys. Rev. Lett. 67 (1991) 919] on the distribution D(d) of sliding bursts of size d, measured in narrow intervals of driving fields E at a finite distance below the threshold Ec.

  9. Genetic parameters for direct and maternal calving ease in Walloon dairy cattle based on linear and threshold models.

    PubMed

    Vanderick, S; Troch, T; Gillon, A; Glorieux, G; Gengler, N

    2014-12-01

    Calving ease scores from Holstein dairy cattle in the Walloon Region of Belgium were analysed using univariate linear and threshold animal models. Variance components and derived genetic parameters were estimated from a data set including 33,155 calving records. Included in the models were season, herd and sex of calf × age of dam classes × group of calvings interaction as fixed effects, and herd × year of calving, maternal permanent environment and animal direct and maternal additive genetic as random effects. Models were fitted with the genetic correlation between direct and maternal additive genetic effects either estimated or constrained to zero. Direct heritability for calving ease was approximately 8% with linear models and approximately 12% with threshold models. Maternal heritabilities were approximately 2 and 4%, respectively. The genetic correlation between direct and maternal additive effects was not significantly different from zero. Models were compared in terms of goodness of fit and predictive ability. Criteria of comparison such as mean squared error, correlation between observed and predicted calving ease scores, and correlation between estimated breeding values were estimated from 85,118 calving records. The results revealed few differences between linear and threshold models, even though correlations between estimated breeding values from data subsets for sires with progeny were 17 and 23% greater with the linear model than with the threshold model for direct and maternal genetic effects, respectively. For the purpose of genetic evaluation for calving ease in Walloon Holstein dairy cattle, the linear animal model without covariance between direct and maternal additive effects was found to be the best choice.

  10. Evaluation of the pentylenetetrazole seizure threshold test in epileptic mice as surrogate model for drug testing against pharmacoresistant seizures.

    PubMed

    Töllner, Kathrin; Twele, Friederike; Löscher, Wolfgang

    2016-04-01

    Resistance to antiepileptic drugs (AEDs) is a major problem in epilepsy therapy, so that development of more effective AEDs is an unmet clinical need. Several rat and mouse models of epilepsy with spontaneous difficult-to-treat seizures exist, but because testing of antiseizure drug efficacy is extremely laborious in such models, they are only rarely used in the development of novel AEDs. Recently, the use of acute seizure tests in epileptic rats or mice has been proposed as a novel strategy for evaluating novel AEDs for increased antiseizure efficacy. In the present study, we compared the effects of five AEDs (valproate, phenobarbital, diazepam, lamotrigine, levetiracetam) on the pentylenetetrazole (PTZ) seizure threshold in mice that were made epileptic by pilocarpine. Experiments were started 6 weeks after a pilocarpine-induced status epilepticus. At this time, control seizure threshold was significantly lower in epileptic than in nonepileptic animals. Unexpectedly, only one AED (valproate) was less effective at increasing seizure threshold in epileptic vs. nonepileptic mice, and this difference was restricted to doses of 200 and 300 mg/kg, disappearing at 400 mg/kg. All other AEDs exerted similar seizure threshold increases in epileptic and nonepileptic mice. Thus, induction of acute seizures with PTZ in mice pretreated with pilocarpine does not provide an effective and valuable surrogate method to screen drugs for antiseizure efficacy in a model of difficult-to-treat chronic epilepsy, as previously suggested from experiments with this approach in rats.

  11. Implementation and assessment of the mechanical-threshold-stress model using the EPIC2 and PINON computer codes

    SciTech Connect

    Maudlin, P.J.; Davidson, R.F.; Henninger, R.J.

    1990-09-01

    A flow-stress constitutive model based on dislocation mechanics has been implemented in the EPIC2 and PINON continuum mechanics codes. This model provides a better understanding of the plastic deformation process for ductile materials by using an internal state variable called the mechanical threshold stress. This kinematic quantity tracks the evolution of the material's microstructure along some arbitrary strain, strain-rate, and temperature-dependent path using a differential form that balances dislocation generation and recovery processes. Given a value for the mechanical threshold stress, the flow stress is determined using either a thermal-activation-controlled or a drag-controlled kinetics relationship. We evaluated the performance of the Mechanical Threshold Stress (MTS) model in terms of accuracy and computational resources through a series of assessment problems chosen to exercise the model over a large range of strain rates and strains. Our calculations indicate that the more complicated MTS model is reasonable in terms of computational resources when compared with other models in common hydrocode use. In terms of accuracy, these simulations show that the MTS model is superior for problems containing mostly normal strain with shear strains less than 0.2, but perhaps not as accurate for problems that contain large amounts of shear strain. 29 refs., 33 figs., 9 tabs.
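    The thermal-activation-controlled kinetics relationship that maps a mechanical threshold stress to a flow stress can be sketched as follows; all material constants are illustrative placeholders, not the calibrated EPIC2/PINON parameters:

```python
import math

K_B = 1.381e-23  # Boltzmann constant [J/K]

def mts_flow_stress(sigma_hat, strain_rate, temp, sigma_a=50e6, g0=1.6,
                    burgers=2.48e-10, mu=48e9, rate0=1e7, p=0.5, q=1.5):
    """Flow stress [Pa]: an athermal component sigma_a plus a thermally
    activated scaling factor s(strain_rate, temp) applied to the
    mechanical threshold stress sigma_hat. Constants are illustrative."""
    x = K_B * temp / (g0 * mu * burgers ** 3) * math.log(rate0 / strain_rate)
    s = (1.0 - x ** (1.0 / q)) ** (1.0 / p)
    return sigma_a + s * sigma_hat

quasi_static = mts_flow_stress(500e6, 1.0, 300.0)   # slow loading
dynamic = mts_flow_stress(500e6, 1e4, 300.0)        # high strain rate
hot = mts_flow_stress(500e6, 1.0, 600.0)            # elevated temperature
print(quasi_static, dynamic, hot)
```

    The sketch reproduces the expected trends: flow stress rises with strain rate and falls with temperature, while the threshold stress itself would evolve along the deformation path via the differential hardening law the abstract describes.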

  12. Adaptive optics for reduced threshold energy in femtosecond laser induced optical breakdown in water based eye model

    NASA Astrophysics Data System (ADS)

    Hansen, Anja; Krueger, Alexander; Ripken, Tammo

    2013-03-01

    In ophthalmic microsurgery, tissue dissection is achieved using femtosecond laser pulses to create an optical breakdown. For vitreo-retinal applications, the irradiance distribution in the focal volume is distorted by the anterior components of the eye, raising the threshold energy for breakdown. In this work, an adaptive optics system enables spatial beam shaping for compensation of aberrations and investigation of the wavefront influence on optical breakdown. An eye model was designed to allow for aberration correction as well as detection of optical breakdown. The eye model consists of an achromatic lens modeling the eye's refractive power, a water chamber modeling the tissue properties, and a PTFE sample modeling the retina's scattering properties. Aberration correction was performed using a deformable mirror in combination with a Hartmann-Shack sensor. The influence of adaptive optics aberration correction on the pulse energy required for photodisruption was investigated using transmission measurements to determine the breakdown threshold and video imaging of the focal region to study the gas bubble dynamics. The threshold energy is considerably reduced when correcting for the aberrations of the system and the model eye. A rise in irradiance at constant pulse energy was also shown for the aberration-corrected case. The reduced pulse energy lowers the potential risk of collateral damage, which is especially important for retinal safety. This offers new possibilities for vitreo-retinal surgery using femtosecond laser pulses.

  13. Discrimination thresholds of normal and anomalous trichromats: Model of senescent changes in ocular media density on the Cambridge Colour Test

    PubMed Central

    Shinomori, Keizo; Panorgias, Athanasios; Werner, John S.

    2017-01-01

    Age-related changes in chromatic discrimination along dichromatic confusion lines were measured with the Cambridge Colour Test (CCT). One hundred and sixty-two individuals (16 to 88 years old) with normal Rayleigh matches were the major focus of this paper. An additional 32 anomalous trichromats classified by their Rayleigh matches were also tested. All subjects were screened to rule out abnormalities of the anterior and posterior segments. Thresholds on all three chromatic vectors measured with the CCT showed age-related increases. Protan and deutan vector thresholds increased linearly with age while the tritan vector threshold was described with a bilinear model. Analysis and modeling demonstrated that the nominal vectors of the CCT are shifted by senescent changes in ocular media density, and a method for correcting the CCT vectors is demonstrated. A correction for these shifts indicates that classification among individuals of different ages is unaffected. New vector thresholds for elderly observers and for all age groups are suggested based on calculated tolerance limits. PMID:26974943
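    The bilinear (two-segment) description of the tritan threshold can be sketched with a breakpoint grid search on synthetic data; the breakpoint age, slopes, and noise level here are illustrative, not the study's fitted values:

```python
import numpy as np

def fit_bilinear(age, thresh, breakpoints):
    """Grid-search a two-segment model: fit separate least-squares lines
    left and right of each candidate breakpoint and keep the breakpoint
    with the lowest total sum of squared errors."""
    best = None
    for bp in breakpoints:
        left, right = age <= bp, age > bp
        if left.sum() < 2 or right.sum() < 2:
            continue
        sse, coefs = 0.0, []
        for m in (left, right):
            A = np.vstack([age[m], np.ones(m.sum())]).T
            c, res, _, _ = np.linalg.lstsq(A, thresh[m], rcond=None)
            coefs.append(c)
            sse += float(res[0]) if res.size else 0.0
        if best is None or sse < best[0]:
            best = (sse, bp, coefs)
    return best[1], best[2]

# Synthetic tritan-like data: flat until ~60 years, then rising (illustrative).
rng = np.random.default_rng(3)
age = rng.uniform(16, 88, 200)
thresh = np.where(age < 60, 5.0, 5.0 + 0.3 * (age - 60)) + rng.normal(0, 0.2, 200)
bp, _ = fit_bilinear(age, thresh, np.arange(40, 80, 2))
print(bp)  # recovered breakpoint, close to the true kink at 60
```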

  14. Discrimination thresholds of normal and anomalous trichromats: Model of senescent changes in ocular media density on the Cambridge Colour Test.

    PubMed

    Shinomori, Keizo; Panorgias, Athanasios; Werner, John S

    2016-03-01

    Age-related changes in chromatic discrimination along dichromatic confusion lines were measured with the Cambridge Colour Test (CCT). One hundred and sixty-two individuals (16 to 88 years old) with normal Rayleigh matches were the major focus of this paper. An additional 32 anomalous trichromats classified by their Rayleigh matches were also tested. All subjects were screened to rule out abnormalities of the anterior and posterior segments. Thresholds on all three chromatic vectors measured with the CCT showed age-related increases. Protan and deutan vector thresholds increased linearly with age while the tritan vector threshold was described with a bilinear model. Analysis and modeling demonstrated that the nominal vectors of the CCT are shifted by senescent changes in ocular media density, and a method for correcting the CCT vectors is demonstrated. A correction for these shifts indicates that classification among individuals of different ages is unaffected. New vector thresholds for elderly observers and for all age groups are suggested based on calculated tolerance limits.

  15. Bayesian Threshold Estimation

    ERIC Educational Resources Information Center

    Gustafson, S. C.; Costello, C. S.; Like, E. C.; Pierce, S. J.; Shenoy, K. N.

    2009-01-01

    Bayesian estimation of a threshold time (hereafter simply threshold) for the receipt of impulse signals is accomplished given the following: 1) data, consisting of the number of impulses received in a time interval from zero to one and the time of the largest time impulse; 2) a model, consisting of a uniform probability density of impulse time…

  16. PT-breaking threshold in spatially asymmetric Aubry-André and Harper models: Hidden symmetry and topological states

    NASA Astrophysics Data System (ADS)

    Harter, Andrew K.; Lee, Tony E.; Joglekar, Yogesh N.

    2016-06-01

    Aubry-André-Harper lattice models, characterized by a reflection-asymmetric sinusoidally varying nearest-neighbor tunneling profile, are well known for their topological properties. We consider the fate of such models in the presence of balanced gain and loss potentials ±iγ located at reflection-symmetric sites. We predict that these models have a finite PT-breaking threshold only for specific locations of the gain-loss potential and uncover a hidden symmetry that is instrumental to the finite threshold strength. We also show that the topological edge states remain robust in the PT-symmetry-broken phase. Our predictions substantially broaden the possible experimental realizations of a PT-symmetric system.
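    The notion of a finite PT-breaking threshold can be checked numerically on a small non-Hermitian tight-binding chain: bisect on the gain-loss strength γ until eigenvalues develop imaginary parts. This is a generic uniform-chain sketch (not the Aubry-André-Harper tunneling profile of the paper); the chain length and potential placement are illustrative.

```python
import numpy as np

def pt_threshold(n=6, loss_site=0, tol=1e-6):
    """Bisect on gamma for an n-site uniform chain with loss -i*gamma and
    gain +i*gamma at a reflection-symmetric pair of sites; below the
    PT-breaking threshold all eigenvalues are real."""
    gain_site = n - 1 - loss_site

    def is_broken(gamma):
        h = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
        h = h.astype(complex)
        h[loss_site, loss_site] = -1j * gamma
        h[gain_site, gain_site] = 1j * gamma
        return np.max(np.abs(np.linalg.eigvals(h).imag)) > 1e-9

    lo, hi = 0.0, 5.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_broken(mid):
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(round(pt_threshold(n=2), 3))  # dimer: exact threshold gamma = coupling = 1
print(round(pt_threshold(n=6), 3))  # longer chain, end-site gain/loss
```

    The two-site case is exactly solvable (eigenvalues ±sqrt(1 - γ²)), so it serves as a sanity check for the bisection.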

  17. Discontinuous non-equilibrium phase transition in a threshold Schloegl model for autocatalysis: Generic two-phase coexistence and metastability

    SciTech Connect

    Wang, Chi-Jen; Liu, Da-Jiang; Evans, James W.

    2015-04-28

    Threshold versions of Schloegl’s model on a lattice, which involve autocatalytic creation and spontaneous annihilation of particles, can provide a simple prototype for discontinuous non-equilibrium phase transitions. These models are equivalent to so-called threshold contact processes. A discontinuous transition between populated and vacuum states can occur when selecting a threshold of N ≥ 2 for the minimum number, N, of neighboring particles enabling autocatalytic creation at an empty site. Fundamental open questions remain given the lack of a thermodynamic framework for analysis. For a square lattice with N = 2, we show that phase coexistence occurs not at a unique value but for a finite range of particle annihilation rate (the natural control parameter). This generic two-phase coexistence also persists when perturbing the model to allow spontaneous particle creation. Such behavior contrasts both the Gibbs phase rule for thermodynamic systems and also previous analysis for this model. We find metastability near the transition corresponding to a non-zero effective line tension, also contrasting previously suggested critical behavior. Mean-field type analysis, extended to treat spatially heterogeneous states, further elucidates model behavior.
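    The threshold contact process itself is straightforward to simulate; the sketch below uses random sequential updates on a small periodic lattice with illustrative rates and sizes, merely to show the populated and vacuum regimes on either side of the transition:

```python
import numpy as np

def simulate_tcp(p_annihilate, L=32, steps=200000, seed=2):
    """Random-sequential-update Monte Carlo for the N >= 2 threshold
    contact process on an L x L periodic square lattice: a visited filled
    site empties with probability p_annihilate; a visited empty site with
    at least two filled neighbors fills. Returns the final density.
    Lattice size, step count, and rates are illustrative."""
    rng = np.random.default_rng(seed)
    occ = np.ones((L, L), dtype=bool)  # start from the fully populated state
    for _ in range(steps):
        i, j = rng.integers(L, size=2)
        if occ[i, j]:
            if rng.random() < p_annihilate:
                occ[i, j] = False
        else:
            nn = (int(occ[(i + 1) % L, j]) + int(occ[(i - 1) % L, j])
                  + int(occ[i, (j + 1) % L]) + int(occ[i, (j - 1) % L]))
            if nn >= 2:
                occ[i, j] = True
    return float(occ.mean())

d_populated = simulate_tcp(0.02)  # weak annihilation: populated state persists
d_vacuum = simulate_tcp(1.0)      # strong annihilation: density collapses
print(d_populated, d_vacuum)
```

    Mapping out the coexistence window itself requires much larger lattices, interface-orientation studies, and long runs, which is what the paper undertakes.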

  18. Discontinuous non-equilibrium phase transition in a threshold Schloegl model for autocatalysis: Generic two-phase coexistence and metastability

    SciTech Connect

    Wang, Chi -Jen; Liu, Da -Jiang; Evans, James W.

    2015-04-28

    Threshold versions of Schloegl’s model on a lattice, which involve autocatalytic creation and spontaneous annihilation of particles, can provide a simple prototype for discontinuous non-equilibrium phase transitions. These models are equivalent to so-called threshold contact processes. A discontinuous transition between populated and vacuum states can occur when selecting a threshold of N ≥ 2 for the minimum number, N, of neighboring particles enabling autocatalytic creation at an empty site. Fundamental open questions remain given the lack of a thermodynamic framework for analysis. For a square lattice with N = 2, we show that phase coexistence occurs not at a unique value but for a finite range of particle annihilation rate (the natural control parameter). This generic two-phase coexistence also persists when perturbing the model to allow spontaneous particle creation. Such behavior contrasts both the Gibbs phase rule for thermodynamic systems and also previous analysis for this model. We find metastability near the transition corresponding to a non-zero effective line tension, also contrasting previously suggested critical behavior. Mean-field type analysis, extended to treat spatially heterogeneous states, further elucidates model behavior.

  19. Discontinuous non-equilibrium phase transition in a threshold Schloegl model for autocatalysis: Generic two-phase coexistence and metastability

    DOE PAGES

    Wang, Chi -Jen; Liu, Da -Jiang; Evans, James W.

    2015-04-28

    Threshold versions of Schloegl’s model on a lattice, which involve autocatalytic creation and spontaneous annihilation of particles, can provide a simple prototype for discontinuous non-equilibrium phase transitions. These models are equivalent to so-called threshold contact processes. A discontinuous transition between populated and vacuum states can occur when selecting a threshold of N ≥ 2 for the minimum number, N, of neighboring particles enabling autocatalytic creation at an empty site. Fundamental open questions remain given the lack of a thermodynamic framework for analysis. For a square lattice with N = 2, we show that phase coexistence occurs not at a unique value but for a finite range of particle annihilation rate (the natural control parameter). This generic two-phase coexistence also persists when perturbing the model to allow spontaneous particle creation. Such behavior contrasts both the Gibbs phase rule for thermodynamic systems and also previous analysis for this model. We find metastability near the transition corresponding to a non-zero effective line tension, also contrasting previously suggested critical behavior. Mean-field type analysis, extended to treat spatially heterogeneous states, further elucidates model behavior.

  20. Heritability of Autism Spectrum Disorder in a UK Population-Based Twin Sample

    PubMed Central

    Colvert, Emma; Tick, Beata; McEwen, Fiona; Stewart, Catherine; Curran, Sarah R.; Woodhouse, Emma; Gillan, Nicola; Hallett, Victoria; Lietz, Stephanie; Garnett, Tracy; Ronald, Angelica; Plomin, Robert; Rijsdijk, Frühling; Happé, Francesca; Bolton, Patrick

    2016-01-01

    IMPORTANCE Most evidence to date highlights the importance of genetic influences on the liability to autism and related traits. However, most of these findings are derived from clinically ascertained samples, possibly missing individuals with subtler manifestations, and obtained estimates may not be representative of the population. OBJECTIVES To establish the relative contributions of genetic and environmental factors in liability to autism spectrum disorder (ASD) and a broader autism phenotype in a large population-based twin sample and to ascertain the genetic/environmental relationship between dimensional trait measures and categorical diagnostic constructs of ASD. DESIGN, SETTING, AND PARTICIPANTS We used data from the population-based cohort Twins Early Development Study, which included all twin pairs born in England and Wales from January 1, 1994, through December 31, 1996. We performed joint continuous-ordinal liability threshold model fitting using the full information maximum likelihood method to estimate genetic and environmental parameters of covariance. Twin pairs underwent the following assessments: the Childhood Autism Spectrum Test (CAST) (6423 pairs; mean age, 7.9 years), the Development and Well-being Assessment (DAWBA) (359 pairs; mean age, 10.3 years), the Autism Diagnostic Observation Schedule (ADOS) (203 pairs; mean age, 13.2 years), the Autism Diagnostic Interview–Revised (ADI-R) (205 pairs; mean age, 13.2 years), and a best-estimate diagnosis (207 pairs). MAIN OUTCOMES AND MEASURES Participants underwent screening using a population-based measure of autistic traits (CAST assessment), structured diagnostic assessments (DAWBA, ADI-R, and ADOS), and a best-estimate diagnosis. RESULTS On all ASD measures, correlations among monozygotic twins (range, 0.77-0.99) were significantly higher than those for dizygotic twins (range, 0.22-0.65), giving heritability estimates of 56% to 95%. The covariance of CAST and ASD diagnostic status (DAWBA, ADOS
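    The headline pattern, that monozygotic correlations far exceed dizygotic ones, maps onto heritability through the classic Falconer decomposition (a back-of-envelope sketch with illustrative mid-range correlations, not the study's full liability threshold model):

```python
def falconer_estimates(r_mz, r_dz):
    """Falconer decomposition of trait variance from twin correlations:
    additive genetic A = 2(rMZ - rDZ), shared environment C = 2rDZ - rMZ,
    unique environment E = 1 - rMZ."""
    a2 = 2.0 * (r_mz - r_dz)
    c2 = 2.0 * r_dz - r_mz
    e2 = 1.0 - r_mz
    return a2, c2, e2

# Illustrative mid-range correlations (the study reports rMZ 0.77-0.99
# and rDZ 0.22-0.65 across measures):
a2, c2, e2 = falconer_estimates(0.90, 0.45)
print(round(a2, 2), round(c2, 2), round(e2, 2))  # 0.9 0.0 0.1
```

    The resulting additive-genetic share sits inside the 56% to 95% heritability range the study reports; the paper's actual estimates come from maximum-likelihood fitting of a joint continuous-ordinal liability threshold model rather than this simple algebra.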

  1. Single-Event Upset (SEU) model verification and threshold determination using heavy ions in a bipolar static RAM

    NASA Technical Reports Server (NTRS)

    Zoutendyk, J. A.; Smith, L. S.; Soli, G. A.; Thieberger, P.; Wegner, H. E.

    1985-01-01

    Single-Event Upset (SEU) response of a bipolar low-power Schottky-diode-clamped TTL static RAM has been observed using Br ions in the 100-240 MeV energy range and O ions in the 20-100 MeV range. These data complete the experimental verification of circuit-simulation SEU modeling for this device. The threshold for onset of SEU has been observed by the variation of energy, ion species and angle of incidence. The results obtained from the computer circuit-simulation modeling and experimental model verification demonstrate a viable methodology for modeling SEU in bipolar integrated circuits.

  2. A modelling study of locomotion-induced hyperpolarization of voltage threshold in cat lumbar motoneurones

    PubMed Central

    Dai, Yue; Jones, Kelvin E; Fedirchuk, Brent; McCrea, David A; Jordan, Larry M

    2002-01-01

    During fictive locomotion the excitability of adult cat lumbar motoneurones is increased by a reduction (a mean hyperpolarization of ≈6.0 mV) of voltage threshold (Vth) for action potential (AP) initiation that is accompanied by only small changes in AP height and width. Further examination of the experimental data in the present study confirms that Vth lowering is present to a similar degree in both the hyperpolarized and depolarized portions of the locomotor step cycle. This indicates that Vth reduction is a modulation of motoneurone membrane currents throughout the locomotor state rather than being related to the phasic synaptic input within the locomotor cycle. Potential ionic mechanisms of this locomotor-state-dependent increase in excitability were examined using three five-compartment models of the motoneurone innervating slow, fast fatigue resistant and fast fatigable muscle fibres. Passive and active membrane conductances were set to produce input resistance, rheobase, afterhyperpolarization (AHP) and membrane time constant values similar to those measured in adult cat motoneurones in non-locomoting conditions. The parameters of 10 membrane conductances were then individually altered in an attempt to replicate the hyperpolarization of Vth that occurs in decerebrate cats during fictive locomotion. The goal was to find conductance changes that could produce a greater than 3 mV hyperpolarization of Vth with only small changes in AP height (< 3 mV) and width (< 1.2 ms). Vth reduction without large changes in AP shape could be produced either by increasing fast sodium current or by reducing delayed rectifier potassium current. The most effective Vth reductions were achieved by either increasing the conductance of fast sodium channels or by hyperpolarizing the voltage dependency of their activation. These changes were particularly effective when localized to the initial segment. Reducing the conductance of delayed rectifier channels or depolarizing their
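A caricature of this conductance search can be run in a few lines. The sketch below is not the authors' five-compartment model and all parameters are illustrative; it defines threshold as the lowest voltage at which the instantaneous Na+ inward current overcomes the leak, and shows that either increasing the fast-sodium conductance or hyperpolarizing its activation curve lowers Vth:

```python
import math

def m_inf(v, v_half=-40.0, k=6.0):
    """Steady-state Na+ activation (Boltzmann curve); v_half, k illustrative."""
    return 1.0 / (1.0 + math.exp(-(v - v_half) / k))

def net_inward(v, g_na, v_half, g_leak=0.1, e_leak=-70.0, e_na=55.0):
    """Instantaneous Na+ current minus leak at voltage v (arbitrary units)."""
    return g_na * m_inf(v, v_half) ** 3 * (e_na - v) - g_leak * (v - e_leak)

def voltage_threshold(g_na=1.0, v_half=-40.0):
    """Lowest voltage (scanned in 0.01 mV steps) where net current turns inward."""
    v = -69.0
    while v < 0.0 and net_inward(v, g_na, v_half) < 0.0:
        v += 0.01
    return v

vth_control = voltage_threshold()
vth_more_na = voltage_threshold(g_na=1.5)      # larger fast-Na conductance
vth_shifted = voltage_threshold(v_half=-45.0)  # hyperpolarized activation
```

Both manipulations hyperpolarize the computed threshold, the activation shift most strongly, qualitatively matching the modelling result described above.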

  3. Analytical threshold voltage modeling of ion-implanted strained-Si double-material double-gate (DMDG) MOSFETs

    NASA Astrophysics Data System (ADS)

    Goel, Ekta; Singh, Balraj; Kumar, Sanjay; Singh, Kunal; Jit, Satyabrata

    2017-04-01

    A two-dimensional threshold voltage model of ion-implanted strained-Si double-material double-gate MOSFETs is developed based on the solution of the two-dimensional Poisson's equation in the channel region using the parabolic approximation method. The novelty of the proposed device structure lies in the amalgamation of the advantages of both the strained-Si channel and the double-material double-gate structure with a vertical Gaussian-like doping profile. The effects of different device parameters (such as device channel length, gate length ratios and germanium mole fraction) and doping parameters (such as projected range and straggle parameter) on the threshold voltage of the proposed structure have been investigated. It is observed that the subthreshold performance of the device can be improved by simply controlling the doping parameters while maintaining the other device parameters constant. The modeling results show good agreement with numerical simulation data obtained using ATLAS™, a 2D device simulator from SILVACO.

  4. Analytical threshold voltage modeling of ion-implanted strained-Si double-material double-gate (DMDG) MOSFETs

    NASA Astrophysics Data System (ADS)

    Goel, Ekta; Singh, Balraj; Kumar, Sanjay; Singh, Kunal; Jit, Satyabrata

    2016-09-01

    A two-dimensional threshold voltage model of ion-implanted strained-Si double-material double-gate MOSFETs is developed based on the solution of the two-dimensional Poisson's equation in the channel region using the parabolic approximation method. The novelty of the proposed device structure lies in the amalgamation of the advantages of both the strained-Si channel and the double-material double-gate structure with a vertical Gaussian-like doping profile. The effects of different device parameters (such as device channel length, gate length ratios and germanium mole fraction) and doping parameters (such as projected range and straggle parameter) on the threshold voltage of the proposed structure have been investigated. It is observed that the subthreshold performance of the device can be improved by simply controlling the doping parameters while maintaining the other device parameters constant. The modeling results show good agreement with numerical simulation data obtained using ATLAS™, a 2D device simulator from SILVACO.

  5. Cross-matching: a modified cross-correlation underlying threshold energy model and match-based depth perception

    PubMed Central

    Doi, Takahiro; Fujita, Ichiro

    2014-01-01

    Three-dimensional visual perception requires correct matching of images projected to the left and right eyes. The matching process is faced with an ambiguity: part of one eye's image can be matched to multiple parts of the other eye's image. This stereo correspondence problem is complicated for random-dot stereograms (RDSs), because dots with an identical appearance produce numerous potential matches. Despite such complexity, human subjects can perceive a coherent depth structure. A coherent solution to the correspondence problem does not exist for anticorrelated RDSs (aRDSs), in which luminance contrast is reversed in one eye. Neurons in the visual cortex reduce disparity selectivity for aRDSs progressively along the visual processing hierarchy. A disparity-energy model followed by threshold nonlinearity (threshold energy model) can account for this reduction, providing a possible mechanism for the neural matching process. However, the essential computation underlying the threshold energy model is not clear. Here, we propose that a nonlinear modification of cross-correlation, which we term “cross-matching,” represents the essence of the threshold energy model. We placed half-wave rectification within the cross-correlation of the left-eye and right-eye images. The disparity tuning derived from cross-matching was attenuated for aRDSs. We simulated a psychometric curve as a function of graded anticorrelation (graded mixture of aRDS and normal RDS); this simulated curve reproduced the match-based psychometric function observed in human near/far discrimination. The dot density was 25% for both simulation and observation. We predicted that as the dot density increased, the performance for aRDSs should decrease below chance (i.e., reversed depth), and the level of anticorrelation that nullifies depth perception should also decrease. We suggest that cross-matching serves as a simple computation underlying the match-based disparity signals in stereoscopic depth

  6. Cross-matching: a modified cross-correlation underlying threshold energy model and match-based depth perception.

    PubMed

    Doi, Takahiro; Fujita, Ichiro

    2014-01-01

    Three-dimensional visual perception requires correct matching of images projected to the left and right eyes. The matching process is faced with an ambiguity: part of one eye's image can be matched to multiple parts of the other eye's image. This stereo correspondence problem is complicated for random-dot stereograms (RDSs), because dots with an identical appearance produce numerous potential matches. Despite such complexity, human subjects can perceive a coherent depth structure. A coherent solution to the correspondence problem does not exist for anticorrelated RDSs (aRDSs), in which luminance contrast is reversed in one eye. Neurons in the visual cortex reduce disparity selectivity for aRDSs progressively along the visual processing hierarchy. A disparity-energy model followed by threshold nonlinearity (threshold energy model) can account for this reduction, providing a possible mechanism for the neural matching process. However, the essential computation underlying the threshold energy model is not clear. Here, we propose that a nonlinear modification of cross-correlation, which we term "cross-matching," represents the essence of the threshold energy model. We placed half-wave rectification within the cross-correlation of the left-eye and right-eye images. The disparity tuning derived from cross-matching was attenuated for aRDSs. We simulated a psychometric curve as a function of graded anticorrelation (graded mixture of aRDS and normal RDS); this simulated curve reproduced the match-based psychometric function observed in human near/far discrimination. The dot density was 25% for both simulation and observation. We predicted that as the dot density increased, the performance for aRDSs should decrease below chance (i.e., reversed depth), and the level of anticorrelation that nullifies depth perception should also decrease. 
We suggest that cross-matching serves as a simple computation underlying the match-based disparity signals in stereoscopic depth perception.
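The computation can be illustrated with a 1-D toy simulation. The sketch below is one plausible reading of "half-wave rectification within the cross-correlation" (a sketch, not the full threshold energy model): rectifying the pointwise left-right product inside the correlation sum attenuates the disparity tuning for an anticorrelated pattern at the 25% dot density used above, whereas plain correlation would merely invert it.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_rds(n=4096, density=0.25, disparity=5):
    """Sparse 1-D random-dot pattern: dots are +/-1 contrast, background 0."""
    dots = rng.random(n) < density
    left = np.where(dots, rng.choice([-1.0, 1.0], size=n), 0.0)
    right = np.roll(left, disparity)        # the shift encodes disparity
    return left, right

def disparity_tuning(left, right, max_disp=10, rectify=False):
    """Correlation-style disparity tuning; with rectify=True the pointwise
    product is half-wave rectified before summing ('cross-matching')."""
    curve = []
    for d in range(-max_disp, max_disp + 1):
        prod = left * np.roll(right, -d)
        if rectify:
            prod = np.maximum(prod, 0.0)    # half-wave rectification
        curve.append(prod.sum())
    return np.array(curve)

left, right = make_rds()
anti = -right                               # anticorrelated RDS

cm_rds = disparity_tuning(left, right, rectify=True)
cm_ards = disparity_tuning(left, anti, rectify=True)

def modulation(curve):
    return curve.max() - curve.min()
```

For the correlated pattern the cross-matching curve peaks sharply at the true disparity; for the anticorrelated pattern the sign-matched contributions vanish there, so the tuning modulation collapses.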

  7. Combining physiological threshold knowledge to species distribution models is key to improving forecasts of the future niche for macroalgae.

    PubMed

    Martínez, Brezo; Arenas, Francisco; Trilla, Alba; Viejo, Rosa M; Carreño, Francisco

    2015-04-01

    Species distribution models (SDM) are a useful tool for predicting species range shifts in response to global warming. However, they do not explore the mechanisms underlying biological processes, making it difficult to predict shifts outside the environmental gradient where the model was trained. In this study, we combine correlative SDMs and knowledge on physiological limits to provide more robust predictions. The thermal thresholds obtained in growth and survival experiments were used as proxies of the fundamental niches of two foundational marine macrophytes. The geographic projections of these species' distributions obtained using these thresholds and existing SDMs were similar in areas where the species are either absent-rare or frequent and where their potential and realized niches match, reaching consensus predictions. The cold-temperate foundational seaweed Himanthalia elongata was predicted to become extinct at its southern limit in northern Spain in response to global warming, whereas the occupancy of southern-lusitanic Bifurcaria bifurcata was expected to increase. Combined approaches such as this one may also highlight geographic areas where models disagree, potentially due to biotic factors. Physiological thresholds alone tended to over-predict species prevalence, as they cannot identify absences in climatic conditions within the species' range of physiological tolerance or at the optima. Although SDMs tended to have higher sensitivity than threshold models, they may include regressions that do not reflect causal mechanisms, constraining their predictive power. We present a simple example of how combining correlative and mechanistic knowledge provides a rapid way to gain insight into a species' niche resulting in consistent predictions and highlighting potential sources of uncertainty in forecasted responses to climate change.
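The consensus rule can be stated in one line. A hedged sketch with invented cut-offs (the paper uses experimentally derived growth and survival thresholds for specific seaweeds):

```python
def predict_presence(sdm_suitability, summer_sst, lethal_sst, cutoff=0.5):
    """Consensus of the two approaches: predict presence only where the
    correlative SDM deems the site suitable AND summer sea temperature
    stays below the experimentally derived lethal threshold.
    The cutoff and threshold values here are illustrative."""
    return sdm_suitability >= cutoff and summer_sst <= lethal_sst
```

Sites where the two criteria disagree are exactly the areas the abstract flags as candidates for biotic (non-climatic) control.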

  8. Revisiting the Economic Injury Level and Economic Threshold Model for Potato Leafhopper (Hemiptera: Cicadellidae) in Alfalfa.

    PubMed

    Chasen, Elissa M; Undersander, Dan J; Cullen, Eileen M

    2015-08-01

    The economic injury level for potato leafhopper, Empoasca fabae (Harris), in alfalfa (Medicago sativa L.) was developed over 30 yr ago. In response to increasing market value of alfalfa, farmers and consultants are interested in reducing the economic threshold for potato leafhopper in alfalfa. To address this question, caged field trials were established on two consecutive potato leafhopper susceptible crops in 2013. Field cages were infested with a range of potato leafhopper densities to create a linear regression of alfalfa yield response. The slopes, or yield loss per insect, for the linear regressions of both trials were used to calculate an economic injury level for a range of current alfalfa market values and control costs. This yield-loss relationship is the first quantification that could be used to help assess whether the economic threshold should be lowered, given the increased market value of alfalfa.
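The calculation described follows the classic economic injury level relation, EIL = C / (V x D). A sketch with invented numbers (not the study's regression slopes or prices):

```python
def economic_injury_level(control_cost, crop_value, loss_per_insect):
    """Classic EIL = C / (V * D): control cost C ($/ha), crop value V
    ($/Mg), and D = yield loss (Mg/ha) per insect, i.e. the slope of the
    caged-trial regression.  All numbers below are illustrative."""
    return control_cost / (crop_value * loss_per_insect)

# as alfalfa market value rises, the injury level (and threshold) falls
eil_low_value = economic_injury_level(50.0, 100.0, 0.01)
eil_high_value = economic_injury_level(50.0, 250.0, 0.01)
```

This is why an increased market value of alfalfa argues for lowering the economic threshold: the denominator grows, so fewer leafhoppers justify treatment.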

  9. Holes in the Bathtub: Water Table Dependent Services and Threshold Behavior in an Economic Model of Groundwater Extraction

    NASA Astrophysics Data System (ADS)

    Kirk-lawlor, N. E.; Edwards, E. C.

    2012-12-01

    In many groundwater systems, the height of the water table must be above certain thresholds for some types of surface flow to exist. Examples of flows that depend on water table elevation include groundwater baseflow to river systems, groundwater flow to wetland systems, and flow to springs. Meeting many of the goals of sustainable water resource management requires maintaining these flows at certain rates. Water resource management decisions invariably involve weighing tradeoffs between different possible usage regimes and the economic consequences of potential management choices are an important factor in these tradeoffs. Policies based on sustainability may have a social cost from forgoing present income. This loss of income may be worth bearing, but should be well understood and carefully considered. Traditionally, the economic theory of groundwater exploitation has relied on the assumption of a single-cell or "bathtub" aquifer model, which offers a simple means to examine complex interactions between water user and hydrologic system behavior. However, such a model assumes a closed system and does not allow for the simulation of groundwater outflows that depend on water table elevation (e.g. baseflow, springs, wetlands), even though those outflows have value. We modify the traditional single-cell aquifer model by allowing for outflows when the water table is above certain threshold elevations. These thresholds behave similarly to holes in a bathtub, where the outflow is a positive function of the height of the water table above the threshold and the outflow is lost when the water table drops below the threshold. We find important economic consequences to this representation of the groundwater system. The economic value of services provided by threshold-dependent outflows (including non-market value), such as ecosystem services, can be incorporated. 
The value of services provided by these flows may warrant maintaining the water table at higher levels than would
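The "holes in the bathtub" dynamics can be sketched as a single explicit update; coefficients and units are illustrative, not calibrated:

```python
def aquifer_step(head, recharge, pumping,
                 holes=((30.0, 0.005), (50.0, 0.01)),
                 storativity=0.2, area=1.0, dt=1.0):
    """One explicit time step of a single-cell ('bathtub') aquifer with
    threshold-dependent outflows: each hole (z, k) discharges k*(head - z)
    only while the water table is above its elevation z, and shuts off
    below it.  Returns the updated head and the current outflow."""
    outflow = sum(k * max(head - z, 0.0) for z, k in holes)
    head += (recharge - pumping - outflow) * dt / (storativity * area)
    return head, outflow
```

With the water table above both thresholds, both "holes" leak (supporting, say, a spring and baseflow); once pumping draws the head below a threshold elevation, that outflow and its associated services are lost, which is the discontinuity the economic model must price.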

  10. Model-Independent Determination of the Compositeness of Near-Threshold Quasibound States

    NASA Astrophysics Data System (ADS)

    Kamiya, Yuki; Hyodo, Tetsuo

    We study the compositeness of near-threshold states to clarify the internal structure of exotic hadron candidates. Within the framework of effective field theory, we extend Weinberg's weak-binding relation to include the contribution of a nearby CDD (Castillejo-Dalitz-Dyson) pole with the help of the Padé approximant. Using the extended relation, we conclude that the CDD-pole contribution to the Λ(1405) baryon in the K̄N amplitude is negligible.

  11. Changing the Risk Paradigms Can be Good for Our Health: J-Shaped, Linear and Threshold Dose-Response Models.

    PubMed

    Ricci, P F; Straja, S R; Cox, A L

    2012-01-01

    Both the linear-at-low-doses no-threshold (LNT) and the threshold (S-shaped) dose-response models lead to no benefit from low exposure. We propose three new models that allow and include, but unlike the LNT and S-shaped models do not require, this strong assumption, and we provide the means to calculate the benefits associated with biphasic biological behaviors when they occur. Specifically, we propose three hormetic (phasic) models (the J-shaped, the inverse J-shaped and the min-max) and a method for calculating the direct benefits associated with the J-shaped and inverse J-shaped models. The J-shaped and min-max models for mutagens and carcinogenic agents include an experimentally justified repair stage for toxic and carcinogenic damage. We link these to stochastic transition models for cancer and show how abrupt transitions in cancer hazard rates, as functions of exposure concentrations and durations, can emerge naturally in large cell populations even when the rates of cell-level events increase smoothly (e.g., proportionally) with concentration. In this very general family of models, J-shaped dose-response curves emerge. These results are universal, i.e., independent of the specific biological details represented by the stochastic transition networks. Using them thus suggests a more complete and realistic way to assess risks at low doses or dose-rates.
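For concreteness, one minimal member of the hormetic family is a quadratic J-shaped curve; this parametrization is purely an illustration, not the authors' model:

```python
def j_shaped_risk(dose, background=0.10, benefit=0.01, harm=0.001):
    """Minimal quadratic J-shaped dose-response sketch: risk dips below
    the zero-dose background at low doses (net benefit), then rises at
    higher doses.  All coefficients are invented for illustration."""
    return background - benefit * dose + harm * dose ** 2
```

The minimum sits at dose = benefit / (2 * harm) = 5 here; an LNT model, by contrast, would be background + slope * dose, monotone from zero, with no low-dose benefit to calculate.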

  12. Application of physiologically-based toxicokinetic modelling in oral-to-dermal extrapolation of threshold doses of cosmetic ingredients.

    PubMed

    Gajewska, M; Worth, A; Urani, C; Briesen, H; Schramm, K-W

    2014-06-16

    The application of physiologically based toxicokinetic (PBTK) modelling in route-to-route (RtR) extrapolation of three cosmetic ingredients: coumarin, hydroquinone and caffeine is shown in this study. In particular, the oral no-observed-adverse-effect-level (NOAEL) doses of these chemicals are extrapolated to their corresponding dermal values by comparing the internal concentrations resulting from oral and dermal exposure scenarios. The PBTK model structure has been constructed to give a good simulation performance of biochemical processes within the human body. The model parameters are calibrated based on oral and dermal experimental data for the Caucasian population available in the literature. Particular attention is given to modelling the absorption stage (skin and gastrointestinal tract) in the form of several sub-compartments. This gives better model prediction results when compared to those of a PBTK model with a simpler structure of the absorption barrier. In addition, the role of quantitative structure-property relationships (QSPRs) in predicting skin penetration is evaluated for the three substances with a view to incorporating QSPR-predicted penetration parameters in the PBTK model when experimental values are lacking. Finally, PBTK modelling is used, first to extrapolate oral NOAEL doses derived from rat studies to humans, and then to simulate internal systemic/liver concentrations - Area Under Curve (AUC) and peak concentration - resulting from specified dermal and oral exposure conditions. Based on these simulations, AUC-based dermal thresholds for the three case study compounds are derived and compared with the experimentally obtained oral threshold (NOAEL) values.
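In the linear one-compartment limit, AUC-based route-to-route extrapolation reduces to a ratio of bioavailabilities; the study's multi-compartment PBTK model is far richer, so treat this as a sketch with invented values:

```python
def dermal_equivalent_dose(oral_noael, f_oral, f_dermal):
    """Route-to-route extrapolation by matching internal dose: for a
    linear one-compartment model AUC = F * dose / CL, so equating oral
    and dermal AUCs gives dermal dose = oral NOAEL * F_oral / F_dermal.
    A textbook simplification of the PBTK approach in the abstract;
    the bioavailability fractions used below are invented."""
    return oral_noael * f_oral / f_dermal
```

For example, a compound that is well absorbed orally but penetrates skin poorly supports a much higher dermal threshold dose for the same internal (AUC) exposure.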

  13. Determination of navigation FDI thresholds using a Markov model. [Failure Detection and Identification in triplex inertial platform systems for Shuttle entry

    NASA Technical Reports Server (NTRS)

    Walker, B. K.; Gai, E.

    1978-01-01

    A method for determining time-varying Failure Detection and Identification (FDI) thresholds for single sample decision functions is described in the context of a triplex system of inertial platforms. A cost function consisting of the probability of vehicle loss due to FDI decision errors is minimized. A discrete Markov model is constructed from which this cost can be determined as a function of the decision thresholds employed to detect and identify the first and second failures. Optimal thresholds are determined through the use of parameter optimization techniques. The application of this approach to threshold determination is illustrated for the Space Shuttle's inertial measurement instruments.
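A toy version of the threshold optimization: a weighted false-alarm/missed-detection cost stands in for the paper's Markov-chain probability of vehicle loss, and the failure shift and weights are invented:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def loss_probability(threshold, fail_shift=3.0, w_false_alarm=1.0, w_missed=5.0):
    """Single-sample decision function ~ N(0,1) with no failure and
    N(fail_shift, 1) after a failure; the cost is a weighted sum of
    false-alarm and missed-detection probabilities (a stand-in for the
    Markov-model cost of vehicle loss)."""
    p_fa = 1.0 - phi(threshold)          # threshold exceeded, no failure
    p_md = phi(threshold - fail_shift)   # failure stays below threshold
    return w_false_alarm * p_fa + w_missed * p_md

# crude parameter scan for the cost-minimizing threshold
best_t = min(range(0, 401), key=lambda i: loss_probability(i / 100.0)) / 100.0
```

The optimum sits between the nominal and failed means, shifted by the relative cost of the two error types; in the paper this trade-off is evaluated through the Markov chain and re-optimized over time.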

  14. Porcine skin visible lesion thresholds for near-infrared lasers including modeling at two pulse durations and spot sizes

    NASA Astrophysics Data System (ADS)

    Cain, Clarence P.; Polhamus, Garrett D.; Roach, William P.; Stolarski, David J.; Schuster, Kurt J.; Stockton, Kevin; Rockwell, Benjamin A.; Chen, Bo; Welch, Ashley J.

    2006-07-01

    With the advent of such systems as the airborne laser and advanced tactical laser, high-energy lasers that use 1315-nm wavelengths in the near-infrared band will soon present a new laser safety challenge to armed forces and civilian populations. Experiments in nonhuman primates using this wavelength have demonstrated a range of ocular injuries, including corneal, lenticular, and retinal lesions as a function of pulse duration. American National Standards Institute (ANSI) laser safety standards have traditionally been based on experimental data, and there is scant data for this wavelength. We are reporting minimum visible lesion (MVL) threshold measurements using a porcine skin model for two different pulse durations and spot sizes for this wavelength. We also compare our measurements to results from our model based on the heat transfer equation and rate process equation, together with actual temperature measurements on the skin surface using a high-speed infrared camera. Our MVL-ED50 thresholds for long pulses (350 µs) at 24-h postexposure are measured to be 99 and 83 J cm⁻² for spot sizes of 0.7 and 1.3 mm diam, respectively. Q-switched laser pulses of 50 ns have a lower threshold of 11 J cm⁻² for a 5-mm-diam top-hat laser pulse.
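The "rate process equation" referred to above is typically an Arrhenius damage integral. A sketch using the widely quoted Henriques coefficients for skin (assumed here; they are not this study's fitted values):

```python
import math

def thermal_damage(temps_kelvin, dt, a=3.1e98, ea=6.28e5):
    """Arrhenius damage integral Omega = sum A*exp(-Ea/(R*T))*dt over the
    skin-surface temperature history; Omega >= 1 is the usual criterion
    for a threshold lesion.  A (1/s) and Ea (J/mol) are the classic
    Henriques skin coefficients, used here purely for illustration."""
    r_gas = 8.314
    return sum(a * math.exp(-ea / (r_gas * t)) * dt for t in temps_kelvin)

# one second at 60 C accumulates damage of order 1; at 44 C almost none
omega_hot = thermal_damage([333.15] * 10, 0.1)
omega_warm = thermal_damage([317.15] * 10, 0.1)
```

The strong exponential temperature dependence is why the measured surface-temperature histories (from the infrared camera) matter more than the delivered fluence alone.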

  15. Spatiotemporal and Spatial Threshold Models for Relating UV Exposures and Skin Cancer in the Central United States

    PubMed Central

    Hatfield, Laura A.; Hoffbeck, Richard W.; Alexander, Bruce H.; Carlin, Bradley P.

    2009-01-01

    The exact mechanisms relating exposure to ultraviolet (UV) radiation and elevated risk of skin cancer remain the subject of debate. For example, there is disagreement on whether the main risk factor is duration of the exposure, its intensity, or some combination of both. There is also uncertainty regarding the form of the dose-response curve, with many authors believing only exposures exceeding a given (but unknown) threshold are important. In this paper we explore methods to estimate such thresholds using hierarchical spatial logistic models based on a sample of a cohort of x-ray technologists for whom we have self-reports of time spent in the sun and numbers of blistering sunburns in childhood. A preliminary goal is to explore the temporal pattern of UV exposure and its gradient. Changes here would imply that identical exposure self-reports from different calendar years may correspond to differing cancer risks. PMID:20161236
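The kind of threshold dose-response such a hierarchical model estimates can be written as a hinge term inside a logistic link; the parameter values below are invented:

```python
import math

def sunburn_prob(uv_exposure, tau=2.0, b0=-3.0, b1=0.8):
    """Threshold dose-response inside a logistic link: exposure
    contributes only above the (unknown, here assumed) threshold tau,
    i.e. logit p = b0 + b1 * max(exposure - tau, 0).  The paper
    estimates tau hierarchically from cohort data; values here are
    illustrative."""
    eta = b0 + b1 * max(uv_exposure - tau, 0.0)
    return 1.0 / (1.0 + math.exp(-eta))
```

Below tau the curve is flat at the baseline risk; estimating where the flat segment ends is exactly the threshold-inference problem the paper addresses.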

  16. Modeling of Beams’ Multiple-Contact Mode with an Application in the Design of a High-g Threshold Microaccelerometer

    PubMed Central

    Li, Kai; Chen, Wenyuan; Zhang, Weiping

    2011-01-01

    A beam's multiple-contact mode is characterized by multiple, discrete contact regions, non-uniform stopper heights, an irregular contact sequence, a seesaw-like effect, indirect interaction between different stoppers, and a complex coupling relationship between loads and deformation. A novel analysis method and a high-speed calculation model are developed for the multiple-contact mode under mechanical and electrostatic loads, without limitations on stopper height or distribution, provided the beam has a stepped or curved shape. Accurate values of deflection, contact load, contact region and so on are obtained directly, with subsequent validation in CoventorWare. A new concept design of a high-g threshold microaccelerometer based on the multiple-contact mode is presented, featuring multiple acceleration thresholds in one sensitive component and consequently a small sensor size. PMID:22163897

  17. A threshold-voltage model for small-scaled GaAs nMOSFET with stacked high-k gate dielectric

    NASA Astrophysics Data System (ADS)

    Chaowen, Liu; Jingping, Xu; Lu, Liu; Hanhan, Lu; Yuan, Huang

    2016-02-01

    A threshold-voltage model for a stacked high-k gate dielectric GaAs MOSFET is established by solving the two-dimensional Poisson's equation in the channel while accounting for short-channel, DIBL and quantum effects. The simulated results are in good agreement with Silvaco TCAD data, confirming the correctness and validity of the model. Using the model, the impacts of structural and physical parameters of the stacked high-k gate dielectric on the threshold-voltage shift and on the temperature characteristics of the threshold voltage are investigated. The results show that the stacked gate dielectric structure can effectively suppress the fringing-field and DIBL effects and improve the threshold and temperature characteristics; on the other hand, the influence of temperature on the threshold voltage is overestimated if the quantum effect is ignored. Project supported by the National Natural Science Foundation of China (No. 61176100).

  18. Modeling on oxide dependent 2DEG sheet charge density and threshold voltage in AlGaN/GaN MOSHEMT

    NASA Astrophysics Data System (ADS)

    Panda, J.; Jena, K.; Swain, R.; Lenka, T. R.

    2016-04-01

    We have developed a physics-based analytical model for the calculation of the threshold voltage, two-dimensional electron gas (2DEG) density and surface potential of AlGaN/GaN metal oxide semiconductor high electron mobility transistors (MOSHEMTs). The model includes important parameters such as the polarization charge densities at the oxide/AlGaN and AlGaN/GaN interfaces, interfacial oxide defect charges and donor charges at the surface of the AlGaN barrier. The effects of two different gate oxides (Al2O3 and HfO2) are compared for the performance evaluation of the proposed MOSHEMT. MOSHEMTs with an Al2O3 dielectric have the advantage of a significant increase in 2DEG density, up to 1.2 × 10¹³ cm⁻² as the oxide thickness increases to 10 nm, compared with HfO2-dielectric MOSHEMTs. The surface potential for the HfO2-based device decreases from 2 to -1.6 eV within 10 nm of oxide thickness, whereas for the Al2O3-based device a sharp transition of the surface potential occurs from 2.8 to -8.3 eV. Varying the oxide thickness and gate metal work function of the proposed MOSHEMT shifts the threshold voltage from negative to positive, realizing enhancement-mode operation. To validate the model, the device is simulated in Silvaco Technology Computer Aided Design (TCAD), showing good agreement with the model results. The developed model can serve as the basis for a complete physics-based 2DEG sheet charge density and threshold voltage model for GaN MOSHEMT devices for performance analysis.
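Stripped of the polarization and interface-charge terms the paper models, the oxide-thickness dependence of the 2DEG follows from a series-capacitance charge-control relation; a sketch with illustrative values:

```python
def sheet_density(v_gate, v_th, t_ox, t_barrier, k_ox=9.0, k_barrier=9.5):
    """2DEG sheet density in the textbook charge-control limit,
    n_s = C_eff * (Vg - Vth) / q, with the gate oxide and AlGaN barrier
    capacitances in series.  This omits the polarization and interface
    charge terms included in the paper's model; dielectric constants
    and thicknesses below are illustrative."""
    eps0, q = 8.854e-12, 1.602e-19
    c_ox = k_ox * eps0 / t_ox
    c_barrier = k_barrier * eps0 / t_barrier
    c_eff = c_ox * c_barrier / (c_ox + c_barrier)
    return c_eff * (v_gate - v_th) / q          # carriers per m^2

n_s = sheet_density(1.0, -1.0, t_ox=10e-9, t_barrier=20e-9)
```

Even this crude limit lands in the 10¹² to 10¹³ cm⁻² range quoted above and shows how oxide choice and thickness, through C_eff, move both n_s and the effective threshold.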

  19. Low-frequency Raman scattering in model disordered solids: percolators above threshold

    NASA Astrophysics Data System (ADS)

    Pilla, O.; Viliani, G.; Dell'Anna, R.; Ruocco, G.

    1997-02-01

    The Raman coupling coefficients of site- and bond-percolators at concentration higher than percolation threshold are computed for two scattering mechanisms: bond polarizability (BPOL) and dipole-induced-dipole (DID). The results show that DID does not follow a scaling law at low frequency, while in the case of BPOL the situation is less clear. The numerically computed frequency dependence in the case of BPOL, which can be considered a good scattering mechanism for a wide class of real glasses, is in semiquantitative agreement with experimental results.

  20. Establishing a rainfall threshold for flash flood warnings in China's mountainous areas based on a distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Miao, Qinghua; Yang, Dawen; Yang, Hanbo; Li, Zhe

    2016-10-01

    Flash flooding is one of the most common natural hazards in China, particularly in mountainous areas, and usually causes heavy damage and casualties. However, the forecasting of flash flooding in mountainous regions remains challenging because of the short response time and limited monitoring capacity. This paper aims to establish a strategy for flash flood warnings in mountainous ungauged catchments across humid, semi-humid and semi-arid regions of China. First, we implement a geomorphology-based hydrological model (GBHM) in four mountainous catchments with drainage areas that range from 493 to 1601 km². The results show that the GBHM can simulate flash floods appropriately in these four study catchments. We propose a method to determine the rainfall threshold for flood warning by using frequency analysis and binary classification based on long-term GBHM simulations that are forced by historical rainfall data to create a practically easy and straightforward approach for flash flood forecasting in ungauged mountainous catchments with drainage areas from tens to hundreds of square kilometers. The results show that the rainfall threshold value decreases significantly with increasing antecedent soil moisture in humid regions, while this value decreases slightly with increasing soil moisture in semi-humid and semi-arid regions. We also find that accumulative rainfall over a certain time span (or rainfall over a long time span) is an appropriate threshold for flash flood warnings in humid regions because the runoff is dominated by saturation excess. However, the rainfall intensity (or rainfall over a short time span) is more suitable in semi-humid and semi-arid regions because infiltration excess dominates the runoff in these regions. We conduct a comprehensive evaluation of the rainfall threshold and find that the proposed method produces reasonably accurate flash flood warnings in the study catchments. 
An evaluation of the performance at uncalibrated interior points
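The binary-classification step for picking a warning threshold can be sketched as follows; the critical success index is assumed here as the skill score, and the paper's exact objective and predictors (e.g. the accumulation window) may differ:

```python
def best_rainfall_threshold(events):
    """Pick a warning threshold from long-term simulated
    (rainfall, flood_occurred) pairs by maximizing the critical success
    index, CSI = hits / (hits + misses + false alarms)."""
    def csi(t):
        hits = sum(1 for r, f in events if r >= t and f)
        misses = sum(1 for r, f in events if r < t and f)
        false_alarms = sum(1 for r, f in events if r >= t and not f)
        denom = hits + misses + false_alarms
        return hits / denom if denom else 0.0
    return max(sorted({r for r, _ in events}), key=csi)

# toy record: simulated floods only occurred once 24-h rainfall hit 50 mm
events = [(12.0, False), (31.0, False), (42.0, False),
          (50.0, True), (73.0, True)]
threshold = best_rainfall_threshold(events)
```

In practice such thresholds would be tabulated against antecedent soil moisture, reproducing the humid-region versus semi-arid-region contrast described above.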

  1. Population bases and the 2011 Census.

    PubMed

    Smallwood, Steve

    2011-01-01

    In an increasingly complex society there are a number of different population definitions that can be relevant for users, beyond the standard definition used in counting the population. This article describes the enumeration base for the 2011 Census and how alternative population outputs may be produced. It provides a background as to how the questions on the questionnaire were decided upon and how population bases can be constructed from the Census. Similarities and differences between the information collected across the three UK Censuses (England and Wales, Scotland and Northern Ireland) are discussed. Finally, issues around estimating the population on alternative bases are presented.

  2. Double Photoionization Near Threshold

    NASA Technical Reports Server (NTRS)

    Wehlitz, Ralf

    2007-01-01

    The threshold region of the double-photoionization cross section is of particular interest because both ejected electrons move slowly in the Coulomb field of the residual ion. Near threshold both electrons have time to interact with each other and with the residual ion. Also, different theoretical models compete to describe the double-photoionization cross section in the threshold region. We have investigated that cross section for lithium and beryllium and have analyzed our data with respect to the latest results in the Coulomb-dipole theory. We find that our data support the idea of a Coulomb-dipole interaction.

  3. Cavitation thresholds of contrast agents in an in vitro human clot model exposed to 120-kHz ultrasound

    PubMed Central

    Gruber, Matthew J.; Bader, Kenneth B.; Holland, Christy K.

    2014-01-01

    Ultrasound contrast agents (UCAs) can be employed to nucleate cavitation to achieve desired bioeffects, such as thrombolysis, in therapeutic ultrasound applications. Effective methods of enhancing thrombolysis with ultrasound have been examined at low frequencies (<1 MHz) and low amplitudes (<0.5 MPa). The objective of this study was to determine cavitation thresholds for two UCAs exposed to 120-kHz ultrasound. A commercial ultrasound contrast agent (Definity®) and echogenic liposomes were investigated to determine the acoustic pressure threshold for ultraharmonic (UH) and broadband (BB) generation using an in vitro flow model perfused with human plasma. Cavitation emissions were detected using two passive receivers over a narrow frequency bandwidth (540–900 kHz) and a broad frequency bandwidth (0.54–1.74 MHz). UH and BB cavitation thresholds occurred at the same acoustic pressure (0.3 ± 0.1 MPa, peak to peak) and were found to depend on the sensitivity of the cavitation detector but not on the nucleating contrast agent or ultrasound duty cycle. PMID:25234874

  4. Cavitation thresholds of contrast agents in an in vitro human clot model exposed to 120-kHz ultrasound.

    PubMed

    Gruber, Matthew J; Bader, Kenneth B; Holland, Christy K

    2014-02-01

    Ultrasound contrast agents (UCAs) can be employed to nucleate cavitation to achieve desired bioeffects, such as thrombolysis, in therapeutic ultrasound applications. Effective methods of enhancing thrombolysis with ultrasound have been examined at low frequencies (<1 MHz) and low amplitudes (<0.5 MPa). The objective of this study was to determine cavitation thresholds for two UCAs exposed to 120-kHz ultrasound. A commercial ultrasound contrast agent (Definity(®)) and echogenic liposomes were investigated to determine the acoustic pressure threshold for ultraharmonic (UH) and broadband (BB) generation using an in vitro flow model perfused with human plasma. Cavitation emissions were detected using two passive receivers over a narrow frequency bandwidth (540-900 kHz) and a broad frequency bandwidth (0.54-1.74 MHz). UH and BB cavitation thresholds occurred at the same acoustic pressure (0.3 ± 0.1 MPa, peak to peak) and were found to depend on the sensitivity of the cavitation detector but not on the nucleating contrast agent or ultrasound duty cycle.

  5. Ground-water vulnerability to nitrate contamination at multiple thresholds in the mid-Atlantic region using spatial probability models

    USGS Publications Warehouse

    Greene, Earl A.; LaMotte, Andrew E.; Cullinan, Kerri-Ann

    2005-01-01

    The U.S. Geological Survey, in cooperation with the U.S. Environmental Protection Agency's Regional Vulnerability Assessment Program, has developed a set of statistical tools to support regional-scale, ground-water quality and vulnerability assessments. The Regional Vulnerability Assessment Program's goals are to develop and demonstrate approaches to comprehensive, regional-scale assessments that effectively inform managers and decision-makers as to the magnitude, extent, distribution, and uncertainty of current and anticipated environmental risks. The U.S. Geological Survey is developing and exploring the use of statistical probability models to characterize the relation between ground-water quality and geographic factors in the Mid-Atlantic Region. Available water-quality data obtained from U.S. Geological Survey National Water-Quality Assessment Program studies conducted in the Mid-Atlantic Region were used in association with geographic data (land cover, geology, soils, and others) to develop logistic-regression equations that use explanatory variables to predict the presence of a selected water-quality parameter exceeding a specified management concentration threshold. The resulting logistic-regression equations were transformed to determine the probability, P(X), of a water-quality parameter exceeding a specified management threshold. Additional statistical procedures modified by the U.S. Geological Survey were used to compare the observed values to model-predicted values at each sample point. In addition, procedures to evaluate the confidence of the model predictions and estimate the uncertainty of the probability value were developed and applied. The resulting logistic-regression models were applied to the Mid-Atlantic Region to predict the spatial probability of nitrate concentrations exceeding specified management thresholds. These thresholds are usually set or established by regulators or managers at national or local levels.
At management thresholds of
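
    The transformation from a fitted logistic-regression equation to an exceedance probability P(X) has a standard closed form. A minimal sketch follows, with made-up coefficients and predictors (the study's actual explanatory variables and fitted values are not reproduced here):

```python
import math

def exceedance_probability(intercept, coefs, predictors):
    """P(X) = 1 / (1 + exp(-z)), with z = b0 + sum(b_i * x_i)."""
    z = intercept + sum(b * x for b, x in zip(coefs, predictors))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical predictors: fraction of agricultural land cover,
# soil permeability class. Coefficients are invented for illustration.
p = exceedance_probability(intercept=-3.0, coefs=[2.1, 0.8],
                           predictors=[0.6, 1.5])
print(round(p, 3))
```

    Mapping this probability over a grid of predictor values is what produces the regional vulnerability surfaces described in the record.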

  6. EDGE2D-EIRENE modelling of near SOL E r: possible impact on the H-mode power threshold

    NASA Astrophysics Data System (ADS)

    Chankin, A. V.; Delabie, E.; Corrigan, G.; Harting, D.; Maggi, C. F.; Meyer, H.; Contributors, JET

    2017-04-01

    Recent EDGE2D-EIRENE simulations of JET plasmas showed a significant difference between radial electric field (E r) profiles across the separatrix in two divertor configurations, with the outer strike point on the horizontal target (HT) and vertical target (VT) (Chankin et al 2016 Nucl. Mater. Energy, doi: 10.1016/j.nme.2016.10.004). Under conditions (input power, plasma density) where the HT plasma went into the H-mode, a large positive E r spike in the near scrape-off layer (SOL) was seen in the code output, leading to a very large E × B shear across the separatrix over a narrow region of a fraction of a cm width. No such E r feature was obtained in the code solution for the VT configuration, where the H-mode power threshold was found to be twice as high as in the HT configuration. It was hypothesised that the large E × B shear across the separatrix in the HT configuration could be responsible for the turbulence suppression leading to an earlier (at lower input power) L–H transition compared to the VT configuration. In the present work these ideas are extended to cover some other experimental observations on the H-mode power threshold variation with parameters which typically are not included in the multi-machine H-mode power threshold scalings, namely: ion mass dependence (isotope H–D–T exchange), dependence on the ion ∇B drift direction, and dependence on the wall material composition (ITER-like wall versus carbon wall in JET). In all these cases EDGE2D-EIRENE modelling shows larger positive E r spikes in the near SOL under conditions where the H-mode power threshold is lower, at least in the HT configuration.

  7. [Tremendous Human, Social, and Economic Losses Caused by Obstinate Application of the Failed Linear No-threshold Model].

    PubMed

    Sutou, Shizuyo

    2015-01-01

    The linear no-threshold model (LNT) was recommended in 1956, with abandonment of the traditional threshold dose-response for genetic risk assessment. Adoption of LNT by the International Commission on Radiological Protection (ICRP) became the standard for radiation regulation worldwide. The ICRP recommends a dose limit of 1 mSv/year for the public, which is too low and which terrorizes innocent people. Indeed, LNT arose mainly from the Life Span Study (LSS) of atomic bomb survivors. The LSS, which asserts linear dose-response and no threshold, is challenged mainly on three points. 1) Radiation doses were underestimated by half because of disregard for major residual radiation, resulting in cancer risk overestimation. 2) A dose and dose-rate effectiveness factor (DDREF) of 2 is used, but the actual DDREF is estimated as 16, resulting in cancer risk overestimation by several times. 3) Adaptive response (hormesis) is observed in leukemia and solid cancer cases, consistently contradicting the linearity of LNT. Drastic reduction of cancer risk moves the dose-response curve close to the control line, allowing the setting of a threshold. Living organisms have been evolving for 3.8 billion years under radiation exposure, naturally acquiring various defense mechanisms such as DNA repair, apoptosis, and immune responses. The failure of LNT lies in its neglect of carcinogenesis and these biological mechanisms. Obstinate application of LNT continues to cause tremendous human, social, and economic losses. The 60-year-old LNT must be rejected to establish a new scientific knowledge-based system.

  8. A continuum model with a percolation threshold and tunneling-assisted interfacial conductivity for carbon nanotube-based nanocomposites

    SciTech Connect

    Wang, Yang; Weng, George J.; Meguid, Shaker A.; Hamouda, Abdel Magid

    2014-05-21

    A continuum model that possesses several desirable features of the electrical conduction process in carbon-nanotube (CNT) based nanocomposites is developed. Three basic elements are included: (i) a percolation threshold, (ii) interface effects, and (iii) tunneling-assisted interfacial conductivity. We approach the first through the selection of an effective medium theory. We approach the second by introducing a diminishing layer of interface with an interfacial conductivity to build a 'thinly coated' CNT. The third is introduced through the observation that interface conductivity can be enhanced by electron tunneling, which in turn can be facilitated by the formation of CNT networks. We treat this last issue in a continuum fashion by taking the network formation as a statistical process that can be represented by Cauchy's probability density function. The outcome is a simple and yet widely useful model that can simultaneously capture all these fundamental characteristics. It is demonstrated that, without considering the interface effect, the predicted conductivity would be too high, and that, without accounting for the additional contribution from the tunneling-assisted interfacial conductivity, the predicted conductivity beyond the percolation threshold would be too low. It is with the consideration of all three elements that the theory can fully account for the experimentally measured data. We further use the developed model to demonstrate that, despite the anisotropy of the intrinsic CNT conductivity, it is the axial component along the CNT direction that dominates the overall conductivity. The theory also shows that, even with a totally insulating matrix, the composite can still deliver non-zero conductivity beyond the percolation threshold.
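
    The record's effective-medium model is considerably richer than this, but the percolation-threshold behavior it builds on can be sketched with the classical power-law scaling of composite conductivity above the threshold (all parameter values below are illustrative, not fitted to any measured composite):

```python
def composite_conductivity(phi, phi_c=0.01, sigma0=100.0, t=2.0):
    """Composite conductivity (S/m) vs. CNT volume fraction phi:
    sigma = sigma0 * (phi - phi_c)**t above the percolation threshold."""
    if phi <= phi_c:
        return 0.0            # effectively insulating below the threshold
    return sigma0 * (phi - phi_c) ** t

for phi in (0.005, 0.02, 0.05):
    print(phi, composite_conductivity(phi))
```

    Interface resistance and tunneling, the paper's other two elements, would respectively lower and raise the curve above phi_c without changing this basic threshold form.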

  9. Fire in a Changing Climate: Stochastic versus Threshold-constrained Ignitions in a Dynamic Global Vegetation Model

    NASA Astrophysics Data System (ADS)

    Sheehan, T.; Bachelet, D. M.; Ferschweiler, K.

    2015-12-01

    The MC2 dynamic global vegetation model fire module simulates fire occurrence, area burned, and fire impacts including mortality, biomass burned, and nitrogen volatilization. Fire occurrence is based on fuel load levels and vegetation-specific thresholds for three calculated fire weather indices: the fine fuel moisture code (FFMC) for the moisture content of fine fuels; the build-up index (BUI) for the total amount of fuel available for combustion; and the energy release component (ERC) for the total energy available to fire. Ignitions are assumed (i.e. the probability of an ignition source is 1). The model is run with gridded inputs, and the fraction of each grid cell burned is limited by a vegetation-specific fire return period (FRP) and the number of years since the last fire occurred in the grid cell. One consequence of assumed ignitions and the FRP constraint is that similar fire behavior can take place over large areas with identical vegetation type. In regions where thresholds are often exceeded, fires occur frequently (annually in some instances) with a very low fraction of a cell burned. In areas where fire is infrequent, a single hot, dry climate event can result in intense fire over a large region. Both cases can potentially result in large areas with uniform vegetation type and age. To better reflect realistic fire occurrence, we have developed a stochastic fire occurrence model that: a) uses a map of relative ignition probability and a multiplier to alter overall ignition occurrence; b) replaces the original fixed fire thresholds with ignition success probabilities based on the fire weather indices; and c) calculates spread by using a probability based on slope and wind direction. A Monte Carlo method is used with all three algorithms to determine occurrence. The new stochastic ignition approach yields more variety in fire intensity, a smaller annual total of cells burned, and patchier vegetation.
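
    The stochastic ignition step (a relative ignition probability map combined with an index-based ignition-success probability, sampled by Monte Carlo) might look like the following sketch; the probability values and the sigmoidal success curve are invented for illustration and are not MC2's:

```python
import math
import random

def ignition_occurs(cell_ignition_prob, ffmc, rng,
                    ffmc_midpoint=85.0, steepness=0.25):
    """One Monte Carlo ignition draw for one grid cell.

    Step 1: is an ignition source present? (relative ignition probability map)
    Step 2: does the ignition succeed? Success rises sigmoidally with the
    fine fuel moisture code (drier fine fuels -> higher FFMC -> more likely).
    """
    if rng.random() >= cell_ignition_prob:
        return False                       # no ignition source this step
    success = 1.0 / (1.0 + math.exp(-steepness * (ffmc - ffmc_midpoint)))
    return rng.random() < success

rng = random.Random(42)                    # fixed seed for reproducibility
fires = sum(ignition_occurs(0.3, 90.0, rng) for _ in range(10000))
print(fires)
```

    Because each cell draws independently, neighboring cells with the same vegetation no longer ignite in lockstep, which is what produces the patchier burn pattern the record describes.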

  10. Modeling of damage generation mechanisms in silicon at energies below the displacement threshold

    SciTech Connect

    Santos, Ivan; Marques, Luis A.; Pelaz, Lourdes

    2006-11-01

    We have used molecular dynamics simulation techniques to study the generation of damage in Si within the low-energy deposition regime. We have demonstrated that energy transfers below the displacement threshold can produce a significant amount of damage, usually neglected in traditional radiation damage calculations. The formation of amorphous pockets agrees with the thermal spike concept of local melting. However, we have found that the order-disorder transition is not instantaneous, but it requires some time to reach the appropriate kinetic-potential energy redistribution for melting. The competition between the rate of this energy redistribution and the energy diffusion to the surrounding atoms determines the amount of damage generated by a given deposited energy. Our findings explain the diverse damage morphology produced by ions of different masses.

  11. Applications of threshold models and the weighted bootstrap for Hungarian precipitation data

    NASA Astrophysics Data System (ADS)

    Varga, László; Rakonczai, Pál; Zempléni, András

    2016-05-01

    This paper presents applications of the peaks-over-threshold methodology for both the univariate and the recently introduced bivariate case, combined with a novel bootstrap approach. We compare the proposed bootstrap methods to the more traditional profile likelihood. We have investigated 63 years of the European Climate Assessment daily precipitation data for five Hungarian grid points, first separately for the summer and winter months, then aiming at the detection of possible changes by investigating 20 years moving windows. We show that significant changes can be observed both in the univariate and the bivariate cases, the most recent period being the most dangerous in several cases, as some return values have increased substantially. We illustrate these effects by bivariate coverage regions.
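
    A peaks-over-threshold analysis of this kind starts by extracting exceedances over a high threshold and fitting a generalized Pareto distribution. The paper uses profile likelihood and a weighted bootstrap; the sketch below uses only the simplest method-of-moments estimator, with invented daily precipitation values:

```python
from statistics import mean, pvariance

def fit_gpd_moments(data, threshold):
    """Method-of-moments fit of a generalized Pareto distribution to
    exceedances above threshold: returns (shape xi, scale sigma, count).
    Uses xi = (1 - m^2/v)/2 and sigma = m*(1 + m^2/v)/2, where m and v
    are the mean and variance of the exceedances (valid for xi < 1/2)."""
    exc = [x - threshold for x in data if x > threshold]
    m, v = mean(exc), pvariance(exc)
    xi = 0.5 * (1.0 - m * m / v)
    sigma = 0.5 * m * (1.0 + m * m / v)
    return xi, sigma, len(exc)

# Toy daily precipitation series (mm); a real analysis would use decades of data.
precip = [0.0, 3.1, 12.5, 0.4, 27.0, 8.2, 31.5, 2.2, 45.8, 19.9, 38.0, 1.1]
xi, sigma, n = fit_gpd_moments(precip, threshold=15.0)
print(n, round(xi, 3), round(sigma, 3))
```

    Return levels and their changes over moving windows, as studied in the paper, then follow from the fitted shape and scale.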

  12. Population-Based Smoking Cessation Strategies

    PubMed Central

    2010-01-01

    Executive Summary Objective The objective of this report was to provide the Ministry of Health Promotion (MHP) with a summary of existing evidence-based reviews of the clinical and economic outcomes of population-based smoking cessation strategies. Background Tobacco use is the leading cause of preventable disease and death in Ontario, linked to approximately 13,000 avoidable premature deaths annually – the vast majority of these are attributable to cancer, cardiovascular disease, and chronic obstructive lung disease. (1) In Ontario, tobacco related health care costs amount to $6.1 billion annually, or about $502 per person (including non-smokers) and account for 1.4% of the provincial domestic product. (2) In 2007, there were approximately 1.7 to 1.9 million smokers in Ontario with two-thirds of these intending to quit in the next six months and one-third wanting to quit within 30 days. (3) In 2007/2008, Ontario invested $15 million in cessation programs, services and training. (4) In June 2009, the Ministry of Health Promotion (MHP) requested that MAS provide a summary of the evidence base surrounding population-based smoking cessation strategies. Project Scope The MAS and the MHP agreed that the project would consist of a clinical and economic summary of the evidence surrounding nine population-based strategies for smoking cessation including: Mass media interventions Telephone counselling Post-secondary smoking cessation programs (colleges/universities) Community-wide stop-smoking contests (i.e. Quit and Win) Community interventions Physician advice to quit Nursing interventions for smoking cessation Hospital-based interventions for smoking cessation Pharmacotherapies for smoking cessation, specifically: Nicotine replacement therapies Antidepressants Anxiolytic drugs Opioid antagonists Clonidine Nicotine receptor partial agonists Reviews examining interventions for Cut Down to Quit (CDTQ) or harm reduction were not included in this review. In addition

  13. Threshold driven response of permafrost in Northern Eurasia to climate and environmental change: from conceptual model to quantitative assessment

    NASA Astrophysics Data System (ADS)

    Anisimov, Oleg; Kokorev, Vasiliy; Reneva, Svetlana; Shiklomanov, Nikolai

    2010-05-01

    Numerous efforts have been made to assess the environmental impacts of changing climate in permafrost regions using mathematical models. Despite significant improvements in the representation of individual sub-systems, such as permafrost, vegetation, snow and hydrology, even the most comprehensive models do not replicate the coupled non-linear interactions between them that lead to threshold-driven changes. Observations indicate that ecosystems may change dramatically, rapidly, and often irreversibly, reaching a fundamentally different state once they pass a critical threshold. The key to understanding permafrost threshold phenomena is interaction with other environmental factors that are very likely to change in response to climate warming. One such factor is vegetation. Vegetation control over the thermal state of the underlying ground is two-fold. Firstly, canopies differ in albedo, which affects the radiation balance at the soil surface. Secondly, depending on biome composition, the vegetation canopy may have different thermal conductivity, which governs the heat fluxes between soil and atmosphere. There are clear indications, based on ground observations and remote sensing, that vegetation has already changed in response to climatic warming, consistent with the results of manipulation experiments at plots involving artificial warming and CO2 fertilization. Under sustained warming, lower vegetation (mosses, lichens) is gradually replaced by shrubs. Mosses have a strong thermal insulating effect in summer, which is why their retreat enhances permafrost warming. Taller shrubs accumulate snow that further warms permafrost in winter. Permafrost remains unchanged as long as the responding vegetation intercepts and mitigates the climate change signal. Beyond a certain threshold, enhanced abundance and growth of taller vegetation leads to abrupt permafrost changes. Changes in hydrology, i.e. soil wetting or drying, may have a similar effect on permafrost.
Wetting increases soil

  14. Numerical modeling of gun experiments with impact velocities less than SDT threshold: Thermal explosion initiated by friction heat

    NASA Astrophysics Data System (ADS)

    Barfield, W. D.

    1982-01-01

    One- and two-dimensional calculations were made to model thermal explosion ignited by friction heat, hypothesized as an initiation mechanism for the unknown XDT phenomenon responsible for detonations observed in gun experiments with impact velocities below the threshold for shock-to-detonation transition (SDT). Preliminary results suggest that a friction-induced thermal explosion would be quenched by cooling associated with side rarefactions after penetrating only a thin layer of the propellant. Other effects would be expected to increase the calculated heating rates or speed up the friction-induced thermal explosion. For this reason, friction cannot be ruled out as an initiation mechanism on the basis of the results described.

  15. An experimental operative system for shallow landslide and flash flood warning based on rainfall thresholds and soil moisture modelling

    NASA Astrophysics Data System (ADS)

    Brigandì, G.; Aronica, G. T.; Basile, G.; Pasotti, L.; Panebianco, M.

    2012-04-01

    In November 2011 an almost exceptional thunderstorm struck the north-eastern part of the Sicily Region (Italy), producing local heavy rainfall, mud-debris flows and flash flooding. The storm was concentrated on the Tyrrhenian coast near the city of Barcellona, within the Longano catchment. The main focus of the paper is to present an experimental operative system for alerting on extreme hydrometeorological events by using a methodology based on the combined use of rainfall thresholds, soil moisture indexes and quantitative precipitation forecasting. Shallow landslide and flash flood warning is a key element in improving Civil Protection efforts to mitigate damage and safeguard the security of people. It is a rather complicated task, particularly in catchments with flashy response, where even short lead times are important and welcome. It is well known that the triggering of shallow landslides is strongly influenced by the initial soil moisture conditions of catchments. Therefore, the early warning system applied here is based on the combined use of rainfall thresholds, derived both for flash floods and for landslides, and soil moisture conditions; the system is composed of several basic components related to antecedent soil moisture conditions, real-time rainfall monitoring and antecedent rainfall. Soil moisture conditions were estimated using an Antecedent Precipitation Index (API), similar to that widely used for defining soil moisture conditions via the Antecedent Moisture Condition (AMC) index. Rainfall thresholds for landslides were derived using historical and statistical analyses. Finally, rainfall thresholds for flash flooding were derived using an Instantaneous Unit Hydrograph-based lumped rainfall-runoff model with the SCS-CN routine for net rainfall. After the implementation and calibration of the model, a testing phase was carried out using real data collected for the November 2011 event in the Longano catchment. Moreover, in
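
    The Antecedent Precipitation Index used for the soil moisture component is typically computed as a simple daily recursion. A sketch, with an illustrative decay constant rather than the one calibrated in the paper:

```python
def antecedent_precipitation_index(daily_rain_mm, k=0.9, api0=0.0):
    """API_t = k * API_{t-1} + P_t: yesterday's wetness decays by the
    factor k, today's rainfall adds to it."""
    api, series = api0, []
    for p in daily_rain_mm:
        api = k * api + p
        series.append(api)
    return series

print(antecedent_precipitation_index([10.0, 0.0, 5.0, 20.0]))
```

    A high API at the onset of a storm would then lower the rainfall threshold that triggers a warning, reflecting wetter initial conditions.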

  16. Multi-host model and threshold of intermediate host Oncomelania snail density for eliminating schistosomiasis transmission in China

    PubMed Central

    Zhou, Yi-Biao; Chen, Yue; Liang, Song; Song, Xiu-Xia; Chen, Geng-Xin; He, Zhong; Cai, Bin; Yihuo, Wu-Li; He, Zong-Gui; Jiang, Qing-Wu

    2016-01-01

    Schistosomiasis remains a serious public health issue in many tropical countries, with more than 700 million people at risk of infection. In China, a national integrated control strategy, aiming at blocking its transmission, has been carried out throughout endemic areas since 2005. A longitudinal study was conducted to determine the effects of different intervention measures on the transmission dynamics of S. japonicum in three study areas and the data were analyzed using a multi-host model. The multi-host model was also used to estimate the threshold of Oncomelania snail density for interrupting schistosomiasis transmission based on the longitudinal data as well as data from the national surveillance system for schistosomiasis. The data showed a continuous decline in the risk of human infection and the multi-host model fit the data well. The 25th, 50th and 75th percentiles, and the mean of estimated thresholds of Oncomelania snail density below which the schistosomiasis transmission cannot be sustained were 0.006, 0.009, 0.028 and 0.020 snails/0.11 m2, respectively. The study results could help develop specific strategies of schistosomiasis control and elimination tailored to the local situation for each endemic area. PMID:27535177
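
    The snail-density threshold has the flavor of a basic-reproduction-number condition: transmission cannot be sustained where R0 < 1. The sketch below is a deliberately simplified single-loop stand-in for the paper's multi-host model, with entirely hypothetical parameter values:

```python
def basic_reproduction_number(snail_density, beta_hs=0.4, beta_sh=0.3,
                              human_density=50.0, recovery=2.0,
                              snail_death=1.5):
    """R0 of a single human-snail transmission loop, linear in snail density.
    All parameters are hypothetical placeholders."""
    return (beta_hs * beta_sh * human_density * snail_density
            / (recovery * snail_death))

def threshold_snail_density():
    """Snail density at which R0 crosses 1: solve R0(n) = 1 for n."""
    return 1.0 / basic_reproduction_number(1.0)

n_star = threshold_snail_density()
print(round(n_star, 4))      # densities below n_star cannot sustain spread
```

    The paper's estimates (0.006 to 0.028 snails/0.11 m2 across sites) come from fitting a multi-host version of this idea to longitudinal surveillance data, not from fixed parameters like these.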

  17. [Threshold value for reimbursement of costs of new drugs: cost-effectiveness research and modelling are essential links].

    PubMed

    Frederix, Geert W J; Hövels, Anke M; Severens, Johan L; Raaijmakers, Jan A M; Schellens, Jan H M

    2015-01-01

    There is increasing discussion in the Netherlands about the introduction of a threshold value for the costs per extra year of life when reimbursing the costs of new drugs. The Medicines Committee ('Commissie Geneesmiddelen'), a division of the Netherlands National Healthcare Institute ('Zorginstituut Nederland'), advises on the reimbursement of costs of new drugs. This advice is based upon the determination of the therapeutic value of the drug and the results of economic evaluations. Mathematical models that predict future costs and effectiveness are often used in economic evaluations; owing to author assumptions, these models can vary greatly in transparency and quality. Standardisation of cost-effectiveness models is one way to overcome this unwanted variation in quality. Discussions about the introduction of a threshold value can only be meaningful if all involved are adequately informed and if cost-effectiveness research, and particularly economic evaluations, are of high quality. Collaboration and discussion between medical specialists, patients or patient organisations, health economists and policy makers, both in the development of methods and in standardisation, are essential to improve the quality of decision making.

  18. Computational modeling of glucose transport in pancreatic β-cells identifies metabolic thresholds and therapeutic targets in diabetes.

    PubMed

    Luni, Camilla; Marth, Jamey D; Doyle, Francis J

    2012-01-01

    Pancreatic β-cell dysfunction is a diagnostic criterion of Type 2 diabetes and includes defects in glucose transport and insulin secretion. In healthy individuals, β-cells maintain plasma glucose concentrations within a narrow range in concert with insulin action among multiple tissues. Postprandial elevations in blood glucose facilitate glucose uptake into β-cells by diffusion through glucose transporters residing at the plasma membrane. Glucose transport is essential for glycolysis and glucose-stimulated insulin secretion. In human Type 2 diabetes and in the mouse model of obesity-associated diabetes, a marked deficiency of β-cell glucose transporters and glucose uptake occurs with the loss of glucose-stimulated insulin secretion. Recent studies have shown that the preservation of glucose transport in β-cells maintains normal insulin secretion and blocks the development of obesity-associated diabetes. To further elucidate the underlying mechanisms, we have constructed a computational model of human β-cell glucose transport in health and in Type 2 diabetes, and present a systems analysis based on experimental results from human and animal studies. Our findings identify a metabolic threshold or "tipping point" whereby diminished glucose transport across the plasma membrane of β-cells limits intracellular glucose-6-phosphate production by glucokinase. This metabolic threshold is crossed in Type 2 diabetes and results in β-cell dysfunction, including the loss of glucose-stimulated insulin secretion. Our model further discriminates among molecular control points in this pathway, identifying where therapeutic intervention would be most effective.
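
    The chain described above (facilitated transport followed by glucokinase phosphorylation) can be caricatured as two Michaelis-Menten steps in series, with the smaller flux limiting glucose-6-phosphate production. Parameter values here are illustrative, not those of the published model:

```python
def michaelis_menten(s, vmax, km):
    """Saturating flux of a single transport or enzyme step."""
    return vmax * s / (km + s)

def g6p_production(plasma_glucose_mM, transporter_fraction=1.0,
                   vmax_glut=20.0, km_glut=17.0, vmax_gk=8.0, km_gk=8.0):
    """Glucose-6-phosphate flux: the slower of transport and phosphorylation.
    Losing transporters (transporter_fraction < 1) can shift the limiting
    step from glucokinase to transport -- the "tipping point"."""
    uptake = transporter_fraction * michaelis_menten(
        plasma_glucose_mM, vmax_glut, km_glut)
    phosphorylation = michaelis_menten(plasma_glucose_mM, vmax_gk, km_gk)
    return min(uptake, phosphorylation)

print(round(g6p_production(10.0), 3))                           # glucokinase-limited
print(round(g6p_production(10.0, transporter_fraction=0.2), 3)) # transport-limited
```

    In the healthy case transport has spare capacity, so glucokinase sets the flux; with enough transporter loss, uptake itself becomes limiting and glucose-stimulated secretion collapses.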

  19. Linking neocortical, cognitive, and genetic variability in autism with alterations of brain plasticity: the Trigger-Threshold-Target model.

    PubMed

    Mottron, Laurent; Belleville, Sylvie; Rouleau, Guy A; Collignon, Olivier

    2014-11-01

    The phenotype of autism involves heterogeneous adaptive traits (strengths vs. disabilities), different domains of alterations (social vs. non-social), and various associated genetic conditions (syndromic vs. nonsyndromic autism). Three observations suggest that alterations in experience-dependent plasticity are an etiological factor in autism: (1) the main cognitive domains enhanced in autism are controlled by the most plastic cortical brain regions, the multimodal association cortices; (2) autism and sensory deprivation share several features of cortical and functional reorganization; and (3) genetic mutations and/or environmental insults involved in autism all appear to affect developmental synaptic plasticity, and mostly lead to its upregulation. We present the Trigger-Threshold-Target (TTT) model of autism to organize these findings. In this model, genetic mutations trigger brain reorganization in individuals with a low plasticity threshold, mostly within regions sensitive to cortical reallocations. These changes account for the cognitive enhancements and reduced social expertise associated with autism. Enhanced but normal plasticity may underlie non-syndromic autism, whereas syndromic autism may occur when a triggering mutation or event produces an altered plastic reaction, also resulting in intellectual disability and dysmorphism in addition to autism. Differences in the target of brain reorganization (perceptual vs. language regions) account for the main autistic subgroups. In light of this model, future research should investigate how individual and sex-related differences in synaptic/regional brain plasticity influence the occurrence of autism.

  20. A threshold of mechanical strain intensity for the direct activation of osteoblast function exists in a murine maxilla loading model.

    PubMed

    Suzuki, Natsuki; Aoki, Kazuhiro; Marcián, Petr; Borák, Libor; Wakabayashi, Noriyuki

    2016-10-01

    The response to the mechanical loading of bone tissue has been extensively investigated; however, precisely how much strain intensity is necessary to promote bone formation remains unclear. Combination studies utilizing histomorphometric and numerical analyses were performed using the established murine maxilla loading model to clarify the threshold of mechanical strain needed to accelerate bone formation activity. For 7 days, 191 kPa loading stimulation for 30 min/day was applied to C57BL/6J mice. Two regions of interest, the AWAY region (away from the loading site) and the NEAR region (near the loading site), were determined. The inflammatory score increased in the NEAR region, but not in the AWAY region. A strain intensity map obtained from [Formula: see text] images was superimposed onto the images of the bone formation inhibitor, sclerostin-positive cell localization. The number of sclerostin-positive cells significantly decreased after mechanical loading of more than [Formula: see text] in the AWAY region, but not in the NEAR region. The mineral apposition rate, which shows the bone formation ability of osteoblasts, was accelerated at the site of surface strain intensity, namely around [Formula: see text], but not at the site of lower surface strain intensity, which was around [Formula: see text] in the AWAY region, thus suggesting the existence of a strain intensity threshold for promoting bone formation. Taken together, our data suggest that a threshold of mechanical strain intensity for the direct activation of osteoblast function and the reduction of sclerostin exists in a murine maxilla loading model in the non-inflammatory region.

  1. Development of a threshold model to predict germination of Populus tomentosa seeds after harvest and storage under ambient condition.

    PubMed

    Wang, Wei-Qing; Cheng, Hong-Yan; Song, Song-Quan

    2013-01-01

    The effects of temperature, storage time and their combination on the germination of aspen (Populus tomentosa) seeds were investigated. Aspen seeds were germinated at 5 to 30°C at 5°C intervals after storage for a period of time at 28°C and 75% relative humidity. The effect of temperature on aspen seed germination could not be effectively described by the thermal time (TT) model, which underestimated the germination rate at 5°C and poorly predicted the time courses of germination at 10, 20, 25 and 30°C. A modified TT model (MTT), which assumed a two-phased linear relationship between germination rate and temperature, was more accurate in predicting the germination rate and percentage and had a higher likelihood of being correct than the TT model. The maximum lifetime threshold (MLT) model accurately described the effect of storage time on seed germination across all germination temperatures. An aging thermal time (ATT) model combining the TT and MLT models was developed to describe the effect of both temperature and storage time on seed germination. When the ATT model was applied to germination data across all temperatures and storage times, it produced a relatively poor fit. Adjusting the ATT model to separately fit germination data at low and high temperatures within the suboptimal range increased the model's accuracy in predicting seed germination. Both the MLT and ATT models indicate that the germination of aspen seeds has distinct physiological responses to temperature within the suboptimal range.
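
    For context, the thermal-time (TT) model referred to above predicts the time for a given germination fraction g as t(g) = θT(g)/(T − Tb) at suboptimal temperatures, with the thermal-time requirement θT varying across the seed population. A sketch with invented parameters (not those fitted for P. tomentosa):

```python
from statistics import NormalDist

def time_to_germination(T, g, T_base=2.0, theta50=40.0, sigma_theta=15.0):
    """Days until fraction g of the seed population has germinated at
    suboptimal temperature T (degrees C). Thermal-time requirements are
    assumed normally distributed across the population."""
    if T <= T_base:
        return float("inf")    # no germination at or below the base temperature
    theta_g = theta50 + sigma_theta * NormalDist().inv_cdf(g)
    return theta_g / (T - T_base)

print(time_to_germination(T=22.0, g=0.5))   # median seed: 40 / 20 = 2.0 days
```

    The MTT and ATT variants in the record modify this baseline by allowing a two-phased rate-temperature relationship and by letting the thermal-time parameters degrade with storage (aging), respectively.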

  2. Dynamically Sliding Threshold Model Reproduces the Initial-Strength Dependence of Spike-Timing Dependent Synaptic Plasticity

    NASA Astrophysics Data System (ADS)

    Kurashige, Hiroki; Sakai, Yutaka

    2007-11-01

    It has been proposed that the amount of calcium elevation in a synaptic spine determines whether the synapse is potentiated or depressed. However, it has been pointed out that a simple application of this principle cannot reproduce the properties of spike-timing-dependent plasticity (STDP). To solve the problem, we present a possible mechanism using a dynamically sliding threshold, determined as the linear summation of the calcium elevations induced by single pre- and post-synaptic spikes. We demonstrate that the model can reproduce the timing dependence of biological STDP. In addition, we find that the model can reproduce the dependence of biological STDP on the initial synaptic strength, which is found to be asymmetric for synaptic potentiation and depression, even though neither an explicit initial-strength dependence nor an asymmetric mechanism is incorporated into the model.

  3. Modeling the calcium spike as a threshold triggered fixed waveform for synchronous inputs in the fluctuation regime

    PubMed Central

    Chua, Yansong; Morrison, Abigail; Helias, Moritz

    2015-01-01

    Modeling the layer 5 pyramidal neuron as a system of three connected isopotential compartments, the soma, proximal, and distal compartment, with calcium spike dynamics in the distal compartment following first-order kinetics, we are able to reproduce in-vitro experimental results which demonstrate the involvement of calcium spikes in action potential generation. To explore how calcium spikes affect the neuronal output in-vivo, we emulate in-vivo-like conditions by embedding the neuron model in a regime of low background fluctuations with occasional large synchronous inputs. In such a regime, a full calcium spike is only triggered by the synchronous events, in a threshold-like manner, and has a stereotypical waveform. Hence, we are able to replace the calcium dynamics with a simpler threshold-triggered current of fixed waveform, which is amenable to analytical treatment. We obtain analytically the mean somatic membrane potential excursion due to a calcium spike being triggered while in the fluctuating regime. Our analytical form, which accounts for the covariance between conductances and the membrane potential, shows better agreement with simulation results than a naive first-order approximation. PMID:26283954

  4. Modelling of capacitance and threshold voltage for ultrathin normally-off AlGaN /GaN MOSHEMT

    NASA Astrophysics Data System (ADS)

    Swain, R.; Jena, K.; Lenka, T. R.

    2017-01-01

    A compact quantitative model based on oxide-semiconductor interface density of states (DOS) is proposed for the Al0.25Ga0.75N/GaN metal oxide semiconductor high electron mobility transistor (MOSHEMT). Mathematical expressions for surface potential, sheet charge concentration, gate capacitance and threshold voltage have been derived. The gate capacitance behaviour is studied in terms of capacitance-voltage (CV) characteristics. Similarly, the predicted threshold voltage (VT) is analysed by varying barrier thickness and oxide thickness. The positive VT obtained for a very thin 3 nm AlGaN barrier layer enables enhancement-mode operation of the MOSHEMT. These devices, along with depletion-mode devices, are basic constituents of the cascode configuration in power electronic circuits. The derived expressions are used in the conventional long-channel HEMT drain current equation and evaluated to obtain different DC characteristics. The results are compared with experimental data taken from the literature and show good agreement, endorsing the proposed model.

  5. Threshold of coexistence and critical behavior of a predator-prey stochastic model in a fractal landscape

    NASA Astrophysics Data System (ADS)

    Argolo, C.; Barros, P.; Tomé, T.; Arashiro, E.; Gleria, Iram; Lyra, M. L.

    2016-08-01

    We investigate a stochastic lattice model describing a predator-prey system in a fractal scale-free landscape, mimicked by the fractal Sierpinski carpet. We determine the threshold of species coexistence, that is, the critical phase boundary related to the transition between an active state, where both species coexist, and an absorbing state, where one of the species is extinct. We show that the predators must live longer in order to persist in a fractal habitat. We further performed a finite-size scaling analysis in the vicinity of the absorbing-state phase transition to compute a set of stationary and dynamical critical exponents. Our results indicate that the transition belongs to the directed percolation universality class exhibited by the usual contact process model on the same fractal landscape.

  6. Effect of Canagliflozin on Renal Threshold for Glucose, Glycemia, and Body Weight in Normal and Diabetic Animal Models

    PubMed Central

    Liang, Yin; Arakawa, Kenji; Ueta, Kiichiro; Matsushita, Yasuaki; Kuriyama, Chiaki; Martin, Tonya; Du, Fuyong; Liu, Yi; Xu, June; Conway, Bruce; Conway, Jamie; Polidori, David; Ways, Kirk; Demarest, Keith

    2012-01-01

    Background Canagliflozin is a sodium glucose co-transporter (SGLT) 2 inhibitor in clinical development for the treatment of type 2 diabetes mellitus (T2DM). This study aimed to characterize the pharmacodynamic effects of canagliflozin in vitro and in preclinical models of T2DM and obesity. Methods 14C-alpha-methylglucoside uptake in Chinese hamster ovary-K cells expressing human, rat, or mouse SGLT2 or SGLT1; 3H-2-deoxy-d-glucose uptake in L6 myoblasts; and 2-electrode voltage clamp recording of oocytes expressing human SGLT3 were analyzed. Graded glucose infusions were performed to determine the rate of urinary glucose excretion (UGE) at different blood glucose (BG) concentrations and the renal threshold for glucose excretion (RTG) in vehicle- or canagliflozin-treated Zucker diabetic fatty (ZDF) rats. Results Treatment with canagliflozin 1 mg/kg lowered RTG from 415±12 mg/dl to 94±10 mg/dl in ZDF rats while maintaining a threshold relationship between BG and UGE, with virtually no UGE observed when BG was below RTG. Canagliflozin dose-dependently decreased BG concentrations in acutely treated db/db mice. In ZDF rats treated for 4 weeks, canagliflozin decreased glycated hemoglobin (HbA1c) and improved measures of insulin secretion. In obese animal models, canagliflozin increased UGE and decreased BG, body weight gain, epididymal fat, liver weight, and the respiratory exchange ratio. Conclusions Canagliflozin lowered RTG and increased UGE, improved glycemic control and beta-cell function in rodent models of T2DM, and reduced body weight gain in rodent models of obesity. PMID:22355316
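    The threshold relationship between blood glucose and urinary glucose excretion reported here can be sketched with a simple piecewise-linear function, roughly UGE ∝ (BG − RTG) above the renal threshold and zero below it. The GFR-like slope below is illustrative, not from the study:

```python
# Sketch of the threshold relationship between blood glucose (BG) and
# urinary glucose excretion (UGE) described in the abstract: essentially no
# UGE below the renal threshold RTG, and excretion rising with BG above it.
# UGE ~ slope * (BG - RTG) is a simplification; the slope is illustrative.

def uge(bg_mg_dl, rtg_mg_dl, slope_dl_min=0.02):
    """Urinary glucose excretion (mg/min) as a threshold function of BG."""
    return max(0.0, slope_dl_min * (bg_mg_dl - rtg_mg_dl))

# Lowering RTG from ~415 to ~94 mg/dl (the reported canagliflozin effect)
# turns on glucose excretion at much lower glycemia:
for rtg in (415, 94):
    print(rtg, [round(uge(bg, rtg), 2) for bg in (100, 200, 400)])
```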

  7. Threshold-like complexation of conjugated polymers with small molecule acceptors in solution within the neighbor-effect model.

    PubMed

    Sosorev, Andrey Yu; Parashchuk, Olga D; Zapunidi, Sergey A; Kashtanov, Grigoriy S; Golovnin, Ilya V; Kommanaboyina, Srikanth; Perepichka, Igor F; Paraschuk, Dmitry Yu

    2016-02-14

    In some donor-acceptor blends based on conjugated polymers, a pronounced charge-transfer complex (CTC) forms in the electronic ground state. In contrast to small-molecule donor-acceptor blends, the CTC concentration in polymer:acceptor solution can increase with the acceptor content in a threshold-like way. This threshold-like behavior was earlier attributed to the neighbor effect (NE) in polymer complexation, i.e., further CTCs preferentially form near existing ones; however, the origin of the NE is unknown. To address the factors affecting the NE, we recorded optical absorption data for blends of the most studied conjugated polymers, poly(2-methoxy-5-(2-ethylhexyloxy)-1,4-phenylenevinylene) (MEH-PPV) and poly(3-hexylthiophene) (P3HT), with electron acceptors of the fluorene series, 1,8-dinitro-9,10-anthraquinone, and 7,7,8,8-tetracyanoquinodimethane in different solvents, and then analyzed the data within the NE model. We found that the NE depends on the polymer and acceptor molecular skeletons and on the solvent, while it does not depend on the acceptor electron affinity or the polymer concentration. We conclude that the NE operates within a single macromolecule and stems from planarization of the polymer chain involved in the CTC with an acceptor molecule; as a result, the probability of further complexation with the next acceptor molecules at the adjacent repeat units increases. The steric and electronic microscopic mechanisms of the NE are discussed.

  8. Population-based study on infant mortality.

    PubMed

    Lima, Jaqueline Costa; Mingarelli, Alexandre Marchezoni; Segri, Neuber José; Zavala, Arturo Alejandro Zavala; Takano, Olga Akiko

    2017-03-01

    Although Brazil has reduced disparities in social, economic and health indicators in the last decade, intra- and inter-regional differences in child mortality rates (CMR) persist in regions such as the state capital of Mato Grosso. This population-based study aimed to investigate factors associated with child mortality in five cohorts of live births (LB) of mothers living in Cuiabá (MT), Brazil, 2006-2010, through probabilistic linkage of 47,018 LB records. We used hierarchical logistic regression analysis. Of the 617 child deaths, 48% occurred in the early neonatal period. CMR declined from 14.6 to 12.0 deaths per thousand LB. The following remained independently associated with death: mothers without a companion (OR = 1.32); low number of prenatal consultations (OR = 1.65); low birthweight (OR = 4.83); prematurity (OR = 3.05); Apgar ≤ 7 at the first minute (OR = 3.19); Apgar ≤ 7 at the fifth minute (OR = 4.95); congenital malformations (OR = 14.91) and male gender (OR = 1.26). CMR has declined in Cuiabá; however, public healthcare policies targeting the prenatal and perinatal period are needed to reduce early neonatal mortality, as are further studies to identify the causes of preventable deaths.
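    The adjusted odds ratios quoted above come from logistic regression, where an OR multiplies the odds of death, not the probability. A small sketch of converting a baseline risk plus an OR into an adjusted risk; the baseline value is illustrative, chosen near the reported CMR of roughly 12 per 1000 LB:

```python
# Odds ratios from logistic regression act multiplicatively on the odds
# p/(1-p), not on the probability itself. The baseline risk below is
# illustrative (roughly the reported CMR of ~12 deaths per 1000 LB).

def odds(p):
    return p / (1.0 - p)

def risk_given_or(p_base, odds_ratio):
    """Risk after multiplying the baseline odds by an odds ratio."""
    o = odds(p_base) * odds_ratio
    return o / (1.0 + o)

# e.g. OR = 4.83 (low birthweight) applied to a ~1.2% baseline risk:
print(round(risk_given_or(0.012, 4.83), 4))
```

    For rare outcomes the OR approximates the risk ratio, but it always overstates it slightly, which is worth keeping in mind when reading the larger ORs above.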

  9. Crossing the Threshold Mindfully: Exploring Rites of Passage Models in Adventure Therapy

    ERIC Educational Resources Information Center

    Norris, Julian

    2011-01-01

    Rites of passage models, drawing from ethnographic descriptions of ritualized transition, are widespread in adventure therapy programmes. However, critical literature suggests that: (a) contemporary rites of passage models derive from a selective and sometimes misleading use of ethnographic materials, and (b) the appropriation of initiatory…

  10. Hydrodynamics of sediment threshold

    NASA Astrophysics Data System (ADS)

    Ali, Sk Zeeshan; Dey, Subhasish

    2016-07-01

    A novel hydrodynamic model for the threshold of cohesionless sediment particle motion under a steady unidirectional streamflow is presented. The hydrodynamic forces (drag and lift) acting on a solitary sediment particle resting over a closely packed bed formed by identical sediment particles are the primary motivating forces. The drag force comprises the form drag and the form-induced drag. The lift force includes the Saffman lift, Magnus lift, centrifugal lift, and turbulent lift. The points of action of the force system are appropriately obtained, for the first time, from the basics of micromechanics. The sediment threshold is envisioned as the rolling mode, which is the most plausible mode of initiating particle motion on the bed. The moment balance of the force system on the solitary particle about the pivoting point of rolling yields the governing equation. The conditions of sediment threshold under the hydraulically smooth, transitional, and rough flow regimes are examined. The effects of velocity fluctuations are addressed by applying the statistical theory of turbulence. This study shows that for a hindrance coefficient of 0.3, the threshold curve (threshold Shields parameter versus shear Reynolds number) is in excellent agreement with the experimental data for uniform sediments. Most of the experimental data are bounded by the upper and lower limiting threshold curves, corresponding to hindrance coefficients of 0.2 and 0.4, respectively. The threshold curve of this study is compared with those of previous researchers. The present model also agrees satisfactorily with the experimental data for nonuniform sediments.
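    The threshold curve mentioned above plots the threshold Shields parameter against the shear Reynolds number. These are the standard dimensionless groups (the paper's full force and moment balance is not reproduced here, and the fluid/sediment constants below are generic values for water and quartz sand):

```python
# Standard dimensionless groups of a sediment threshold curve:
#   theta_c = tau_c / ((rho_s - rho) * g * d)   threshold Shields parameter
#   R_star  = u_star * d / nu                   shear Reynolds number
# with tau_c = rho * u_star**2 the critical bed shear stress.
# Constants are generic water/quartz values, not from the paper.

RHO, RHO_S = 1000.0, 2650.0   # water / quartz sand densities (kg/m^3)
G, NU = 9.81, 1.0e-6          # gravity (m/s^2), kinematic viscosity (m^2/s)

def shields_parameter(u_star, d):
    tau_c = RHO * u_star ** 2
    return tau_c / ((RHO_S - RHO) * G * d)

def shear_reynolds(u_star, d):
    return u_star * d / NU

# Example: 1 mm sand at a critical shear velocity of 0.025 m/s
u, d = 0.025, 1.0e-3
print(round(shields_parameter(u, d), 4), round(shear_reynolds(u, d), 1))
```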

  11. Combining regional estimation and historical floods: A multivariate semiparametric peaks-over-threshold model with censored data

    NASA Astrophysics Data System (ADS)

    Sabourin, Anne; Renard, Benjamin

    2015-12-01

    The estimation of extreme flood quantiles is challenging due to the relative scarcity of extreme data compared to typical target return periods. Several approaches have been developed over the years to face this challenge, including regional estimation and the use of historical flood data. This paper investigates the combination of both approaches using a multivariate peaks-over-threshold model that allows jointly estimating the intersite dependence structure and the marginal distributions at each site. The joint distribution of extremes at several sites is constructed using a semiparametric Dirichlet Mixture model. The existence of partially missing and censored observations (historical data) is accounted for within a data augmentation scheme. This model is applied to a case study involving four catchments in Southern France, for which historical data are available since 1604. The comparison of marginal estimates from four versions of the model (with or without regionalizing the shape parameter; using or ignoring historical floods) highlights significant differences in terms of return level estimates. Moreover, the availability of historical data on several nearby catchments allows investigating the asymptotic dependence properties of extreme floods. Catchments display a significant amount of asymptotic dependence, calling for adapted multivariate statistical models.
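    The univariate building block of such an analysis is a peaks-over-threshold fit of a Generalized Pareto distribution followed by a return-level calculation. A sketch under simplifying assumptions (method-of-moments fit, invented discharge values; the paper's semiparametric multivariate Dirichlet-mixture machinery and censored historical data are well beyond this single-site example):

```python
import statistics

# Sketch of a univariate peaks-over-threshold (POT) analysis: keep peaks
# exceeding a threshold u, fit a Generalized Pareto distribution (GPD) to
# the excesses by the method of moments, and compute a T-year return level.
# Data are invented; the paper's multivariate, censored-data model is not
# reproduced here.

def fit_gpd_moments(excesses):
    """Method-of-moments GPD fit: returns (shape xi, scale sigma)."""
    m = statistics.mean(excesses)
    v = statistics.variance(excesses)     # sample variance (n - 1)
    xi = 0.5 * (1.0 - m * m / v)
    sigma = 0.5 * m * (1.0 + m * m / v)
    return xi, sigma

def return_level(u, xi, sigma, rate_per_year, T):
    """Level exceeded on average once every T years (assumes xi != 0)."""
    return u + (sigma / xi) * ((rate_per_year * T) ** xi - 1.0)

# Illustrative peak discharges (m^3/s) over a threshold u = 100:
peaks = [112.0, 105.0, 140.0, 103.0, 122.0, 180.0, 108.0, 131.0]
u = 100.0
xi, sigma = fit_gpd_moments([p - u for p in peaks])
print(round(return_level(u, xi, sigma, rate_per_year=0.8, T=100), 1))
```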

  12. Deep sub-threshold Ξ and Λ production in nuclear collisions with the UrQMD transport model

    NASA Astrophysics Data System (ADS)

    Graef, G.; Steinheimer, J.; Li, F.; Bleicher, M.

    2014-12-01

    We present results on deep sub-threshold hyperon production in nuclear collisions with the UrQMD transport model. By introducing anti-kaon + baryon and hyperon + hyperon strangeness exchange reactions, we obtain a good description of experimental data on single-strange hadron production in Ar+KCl reactions at Elab=1.76 A GeV. We find that hyperon strangeness exchange is the dominant process contributing to the Ξ- yield; however, our study falls short of explaining the Ξ-/Λ ratio measured with the HADES experiment. We also discuss possible reasons for the discrepancy with previous studies and the experimental results, finding that many details of the transport simulation may have significant effects on the final Ξ- yield.

  13. Dcx reexpression reduces subcortical band heterotopia and seizure threshold in an animal model of neuronal migration disorder.

    PubMed

    Manent, Jean-Bernard; Wang, Yu; Chang, Yoonjeung; Paramasivam, Murugan; LoTurco, Joseph J

    2009-01-01

    Disorders of neuronal migration can lead to malformations of the cerebral neocortex that greatly increase the risk of seizures. It remains untested whether malformations caused by disorders in neuronal migration can be reduced by reactivating cellular migration and whether such repair can decrease seizure risk. Here we show, in a rat model of subcortical band heterotopia (SBH) generated by in utero RNA interference of the Dcx gene, that aberrantly positioned neurons can be stimulated to migrate by reexpressing Dcx after birth. Restarting migration in this way both reduces neocortical malformations and restores neuronal patterning. We further find that the capacity to reduce SBH continues into early postnatal development. Moreover, intervention after birth reduces the convulsant-induced seizure threshold to a level similar to that in malformation-free controls. These results suggest that disorders of neuronal migration may be eventually treatable by reengaging developmental programs both to reduce the size of cortical malformations and to reduce seizure risk.

  14. Effect of rescattering potential on the high-energy above-threshold ionization of a model-H atom

    NASA Astrophysics Data System (ADS)

    Chen, J.-H.; Wang, G.-L.; Zhang, Z.-R.; Zhao, S.-F.

    2017-01-01

    The high-energy above-threshold ionization of a model H atom (with a 1s state and the same binding energy as the H atom) in a few-cycle laser pulse is investigated using the improved strong-field approximation (ISFA), where a spherical shell potential is used as the rescattering potential. The results obtained from numerically solving the time-dependent Schrödinger equation (TDSE) are regarded as the benchmark. Our results show that the energy distributions in the high-energy region obtained from ISFA calculations using the spherical shell potential may either match or improve upon those from the ISFA using the Yukawa potential and the zero-range potential, for laser pulses with wavelengths of 800 and 1200 nm. In addition, the influence of the rescattering potential on the probability density at different ejection angles is also discussed.

  15. Optimising threshold levels for information transmission in binary threshold networks: Independent multiplicative noise on each threshold

    NASA Astrophysics Data System (ADS)

    Zhou, Bingchang; McDonnell, Mark D.

    2015-02-01

    The problem of optimising the threshold levels in a multilevel threshold system subject to multiplicative Gaussian and uniform noise is considered. As in previous results for additive noise, we find a bifurcation phenomenon in the optimal threshold values as the noise intensity changes. This occurs when the number of threshold units is greater than one. We also study the optimal thresholds for combined additive and multiplicative Gaussian noise, and find that all threshold levels need to be identical to optimise the system when the additive noise intensity is constant. However, this identical value is not equal to the signal mean, unlike the case of additive noise. When the multiplicative noise intensity is instead held constant, the optimal threshold levels are not all identical for small additive noise intensity but are all equal to zero for large additive noise intensity. The model and our results are potentially relevant for sensor network design and for understanding neurobiological sensory neurons, such as those in the peripheral auditory system.
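    A Monte-Carlo sketch of the kind of system studied here: a small pool of binary threshold units, each comparison corrupted by independent multiplicative Gaussian noise, with the mutual information between a discretized input and the population output count estimated from joint counts. The alphabet, noise level, and threshold settings are all illustrative, not the paper's setup:

```python
import math
import random

# Monte-Carlo sketch: N binary units fire when x * n_i > theta_i, with n_i
# independent multiplicative Gaussian noise on each threshold comparison.
# Mutual information I(X; Y) between a discretized input X and the
# population count Y is estimated from joint counts. Parameters are
# illustrative, not the paper's.

def mutual_information(thresholds, noise_sd=0.3, trials=20000, seed=1):
    rng = random.Random(seed)
    xvals = [-1.0, -0.5, 0.5, 1.0]           # discrete input alphabet
    joint = {}                                # (x, y) -> count
    for _ in range(trials):
        x = rng.choice(xvals)
        y = sum(1 for th in thresholds if x * rng.gauss(1.0, noise_sd) > th)
        joint[(x, y)] = joint.get((x, y), 0) + 1
    px, py = {}, {}
    for (x, y), c in joint.items():
        px[x] = px.get(x, 0) + c
        py[y] = py.get(y, 0) + c
    info = 0.0
    for (x, y), c in joint.items():
        info += (c / trials) * math.log2(c * trials / (px[x] * py[y]))
    return info

# Identical thresholds versus spread thresholds for a 3-unit pool:
print(round(mutual_information([0.0, 0.0, 0.0]), 3))
print(round(mutual_information([-0.5, 0.0, 0.5]), 3))
```

    Moving the thresholds apart changes the transmitted information; the paper's question is how the optimal placement bifurcates as the noise intensity varies.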

  16. Performance of the SWEEP model affected by estimates of threshold friction velocity

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Wind Erosion Prediction System (WEPS) is a process-based model and needs to be verified under a broad range of climatic, soil, and management conditions. Occasional failure of the WEPS erosion submodel (Single-event Wind Erosion Evaluation Program or SWEEP) to simulate erosion in the Columbia Pl...

  17. Towards an Epistemically Neutral Curriculum Model for Vocational Education: From Competencies to Threshold Concepts and Practices

    ERIC Educational Resources Information Center

    Hodge, Steven; Atkins, Liz; Simons, Michele

    2016-01-01

    Debate about the benefits and problems with competency-based training (CBT) has not paid sufficient attention to the fact that the model satisfies a unique, contemporary demand for cross-occupational curriculum. The adoption of CBT in the UK and Australia, along with at least some of its problems, can be understood in terms of this demand. We…

  18. Identifying Atomic Structure as a Threshold Concept: Student Mental Models and Troublesomeness

    ERIC Educational Resources Information Center

    Park, Eun Jung; Light, Gregory

    2009-01-01

    Atomic theory or the nature of matter is a principal concept in science and science education. This has, however, been complicated by the difficulty students have in learning the concept and the subsequent construction of many alternative models. To understand better the conceptual barriers to learning atomic structure, this study explores the…

  19. Threshold effects in nonlinear models with an application to the social capital-retirement-health relationship.

    PubMed

    Gannon, Brenda; Harris, David; Harris, Mark

    2014-09-01

    This paper considers the relationship between social capital and health in the years before, at and after retirement. This adds to the current literature that only investigates this relationship in either the population as a whole or two subpopulations, pre-retirement and post-retirement. We now investigate if there are further additional subpopulations in the years to and from retirement. We take an information criteria approach to select the optimal model of subpopulations from a full range of potential models. This approach is similar to that proposed for linear models. Our contribution is to show how this may also be applied to nonlinear models and without the need for estimating subsequent subpopulations conditional on previous fixed subpopulations. Our main finding is that the association of social capital with health diminishes at retirement, and this decreases further 10 years after retirement. We find a strong positive significant association of social capital with health, although this turns negative after 20 years, indicating potential unobserved heterogeneity. The types of social capital may differ in later years (e.g., less volunteering) and hence overall social capital may have less of an influence on health in later years.
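    The information-criteria selection described above can be illustrated with a deliberately simple example: candidate models place different numbers of thresholds in an ordered covariate, and the model minimizing AIC is selected. The data, break points, and piecewise-constant Gaussian form are illustrative, not the paper's nonlinear health model:

```python
import math

# Sketch of information-criteria model selection over threshold models:
# each candidate is a piecewise-constant mean model with given breakpoints,
# scored by AIC = n*ln(RSS/n) + 2k. Data and splits are illustrative.

def rss(segment):
    m = sum(segment) / len(segment)
    return sum((v - m) ** 2 for v in segment)

def aic(y, breakpoints):
    """AIC of a piecewise-constant mean model with the given breakpoints."""
    cuts = [0] + list(breakpoints) + [len(y)]
    total = sum(rss(y[a:b]) for a, b in zip(cuts, cuts[1:]))
    k = len(cuts) - 1                  # one mean parameter per segment
    return len(y) * math.log(total / len(y)) + 2 * k

# A response that shifts at observation 10 (plus small deterministic noise):
y = ([1.0 + 0.1 * (-1) ** i for i in range(10)]
     + [3.0 + 0.1 * (-1) ** i for i in range(10)])
candidates = {"no threshold": [], "split@10": [10], "splits@5,10": [5, 10]}
scores = {name: round(aic(y, bp), 2) for name, bp in candidates.items()}
print(min(scores, key=scores.get), scores)
```

    The extra split at 5 buys almost no fit but costs a parameter, so AIC prefers the single-threshold model, which mirrors how the paper selects the number of subpopulations.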

  20. Bayesian approach to color-difference models based on threshold and constant-stimuli methods.

    PubMed

    Brusola, Fernando; Tortajada, Ignacio; Lengua, Ismael; Jordá, Begoña; Peris, Guillermo

    2015-06-15

    An alternative approach based on statistical Bayesian inference is presented to deal with the development of color-difference models and the precision of parameter estimation. The approach was applied to simulated data and real data, the latter published by selected authors involved with the development of color-difference formulae using traditional methods. Our results show very good agreement between the Bayesian and classical approaches. Among other benefits, our proposed methodology allows one to determine the marginal posterior distribution of each random individual parameter of the color-difference model. In this manner, it is possible to analyze the effect of individual parameters on the statistical significance calculation of a color-difference equation.
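    A minimal sketch of Bayesian inference for a threshold measured by the method of constant stimuli: responses at fixed stimulus levels are Bernoulli with a cumulative-normal psychometric function, and a grid posterior over the threshold is computed under a flat prior. The data, spread parameter, and grid are invented for illustration and are not from the paper:

```python
import math

# Bayesian grid-posterior sketch for a constant-stimuli threshold:
# p(seen at level x) = Phi((x - mu) / s), responses are Bernoulli, and the
# posterior over the threshold mu is computed on a grid with a flat prior.
# Data, s, and the grid are illustrative.

def phi(z):
    """Standard normal CDF via erf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def posterior_mean_mu(levels, seen, total, s=1.0, grid=None):
    """Posterior mean of the threshold mu under a flat prior on the grid."""
    grid = grid or [i * 0.05 for i in range(201)]   # mu in [0, 10]
    logliks = []
    for mu in grid:
        ll = 0.0
        for x, k, n in zip(levels, seen, total):
            p = min(max(phi((x - mu) / s), 1e-12), 1.0 - 1e-12)
            ll += k * math.log(p) + (n - k) * math.log(1.0 - p)
        logliks.append(ll)
    mx = max(logliks)
    w = [math.exp(v - mx) for v in logliks]
    return sum(mu * wi for mu, wi in zip(grid, w)) / sum(w)

# Color-difference magnitudes shown, "difference seen" counts out of 20:
levels = [1.0, 2.0, 3.0, 4.0, 5.0]
seen = [1, 5, 10, 15, 19]
total = [20] * 5
print(round(posterior_mean_mu(levels, seen, total), 2))
```

    Beyond the point estimate, the same grid weights give the full marginal posterior of the threshold, which is the kind of per-parameter distribution the paper's approach provides.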

  1. Computational modeling of interventions and protective thresholds to prevent disease transmission in deploying populations.

    PubMed

    Burgess, Colleen; Peace, Angela; Everett, Rebecca; Allegri, Buena; Garman, Patrick

    2014-01-01

    Military personnel are deployed abroad for missions ranging from humanitarian relief efforts to combat actions; delay or interruption in these activities due to disease transmission can cause operational disruptions, significant economic loss, and stressed or exceeded military medical resources. Deployed troops function in environments favorable to the rapid and efficient transmission of many viruses particularly when levels of protection are suboptimal. When immunity among deployed military populations is low, the risk of vaccine-preventable disease outbreaks increases, impacting troop readiness and achievement of mission objectives. However, targeted vaccination and the optimization of preexisting immunity among deployed populations can decrease the threat of outbreaks among deployed troops. Here we describe methods for the computational modeling of disease transmission to explore how preexisting immunity compares with vaccination at the time of deployment as a means of preventing outbreaks and protecting troops and mission objectives during extended military deployment actions. These methods are illustrated with five modeling case studies for separate diseases common in many parts of the world, to show different approaches required in varying epidemiological settings.
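    Questions about preexisting immunity versus at-deployment vaccination are typically explored with compartmental transmission models. A toy discrete-time SIR sketch (not the paper's five case-study models; R0, recovery rate, and population size are illustrative) showing how initial immunity above the classical herd-immunity level 1 − 1/R0 suppresses an outbreak:

```python
# Toy discrete-time SIR model with a fraction of the deployed population
# immune at day 0 (vaccination or preexisting immunity). Not the paper's
# case-study models; all parameter values are illustrative.

def sir_outbreak_size(pop, immune_frac, r0=3.0, gamma=0.2, days=300):
    """Total infections (including the index case) over the simulation."""
    beta = r0 * gamma                      # transmission rate per day
    s = pop * (1.0 - immune_frac) - 1.0    # susceptibles
    i = 1.0                                # index case
    r = pop * immune_frac                  # immune/recovered
    for _ in range(days):
        new_inf = beta * s * i / pop
        recoveries = gamma * i
        s -= new_inf
        i += new_inf - recoveries
        r += recoveries
    return pop * (1.0 - immune_frac) - s

# Herd-immunity level for R0 = 3 is 1 - 1/3, i.e. about 67% immune:
for f in (0.0, 0.5, 0.7):
    print(f, round(sir_outbreak_size(1000.0, f), 1))
```

    With 70% initially immune the effective reproduction number drops below 1 and only a handful of infections occur, which is the protective-threshold logic the abstract describes.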

  3. Finger Vein Segmentation from Infrared Images Based on a Modified Separable Mumford Shah Model and Local Entropy Thresholding.

    PubMed

    Vlachos, Marios; Dermatas, Evangelos

    2015-01-01

    A novel method for finger vein pattern extraction from infrared images is presented. The method involves four steps: preprocessing, which performs local normalization of the image intensity; image enhancement; image segmentation; and postprocessing for image cleaning. In the image enhancement step, an image that is both smooth and similar to the original is sought. The enhanced image is obtained by minimizing the objective function of a modified separable Mumford Shah Model. Since this minimization procedure is computationally intensive for large images, a local application of the Mumford Shah Model in small window neighborhoods is proposed. The finger veins are located in concave nonsmooth regions, so, to distinguish them from the other tissue parts, the differences between the smooth neighborhoods obtained by the local application of the model and the corresponding windows of the original image are accumulated, sufficiently emphasizing the veins in the enhanced image. An accurate segmentation can then be obtained readily by a local entropy thresholding method. Finally, because the resulting binary image may suffer from some misclassifications, a postprocessing step is performed to extract a robust finger vein pattern.
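    The final segmentation step, entropy thresholding, can be sketched with a Kapur-style maximum-entropy threshold on a gray-level histogram. The paper applies an entropy criterion locally, per window; the tiny one-dimensional "image" below is purely illustrative:

```python
import math

# Kapur-style maximum-entropy thresholding sketch: choose the gray level t
# that maximizes the sum of the entropies of the below- and above-threshold
# parts of the histogram. The paper applies an entropy criterion locally;
# this global version on a tiny synthetic "image" is illustrative.

def entropy_threshold(pixels, levels=256):
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    n = len(pixels)
    p = [h / n for h in hist]
    best_t, best_h = 0, -1.0
    for t in range(levels - 1):
        p0 = sum(p[: t + 1])
        p1 = 1.0 - p0
        if p0 <= 0.0 or p1 <= 0.0:
            continue
        h0 = -sum(q / p0 * math.log(q / p0) for q in p[: t + 1] if q > 0)
        h1 = -sum(q / p1 * math.log(q / p1) for q in p[t + 1:] if q > 0)
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t

# Dark vein-like pixels (~30-60) against brighter tissue (~180-220):
image = [30, 40, 45, 50, 55, 60, 180, 190, 195, 200, 210, 220]
t = entropy_threshold(image)
mask = [1 if v <= t else 0 for v in image]
print(t, mask)
```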

  4. Modeling the impact of spinal cord stimulation paddle lead position on impedance, stimulation threshold, and activation region.

    PubMed

    Min, Xiaoyi; Kent, Alexander R

    2015-08-01

    The effectiveness of spinal cord stimulation (SCS) for chronic pain treatment depends on selection of appropriate stimulation settings, which can be especially challenging following posture change or SCS lead migration. The objective of this work was to investigate the feasibility of using SCS lead impedance for determining the location of an SCS lead and for detecting lead migration, as well as the impact of axial movement and rotation of the St. Jude Medical PENTA™ paddle in the dorsal-ventral or medial-lateral directions on dorsal column (DC) stimulation thresholds and neural activation regions. We used a two-stage computational model, including a finite element method model of field potentials in the spinal cord during stimulation, coupled to a biophysical cable model of mammalian, myelinated nerve fibers to calculate tissue impedance and nerve fiber activation within the DC. We found that SCS lead impedance was highly sensitive to the distance between the lead and the cerebrospinal fluid (CSF) layer. In addition, among all the lead positions studied, medial-lateral movement resulted in the most substantial changes to DC activation regions. These results suggest that impedance can be used for detecting paddle position and lead migration, and therefore for guiding SCS programming.

  5. Pattern of trauma determines the threshold for epileptic activity in a model of cortical deafferentation

    PubMed Central

    Volman, Vladislav; Bazhenov, Maxim; Sejnowski, Terrence J.

    2011-01-01

    Epileptic activity often emerges in the cortex after a latent period following head trauma; this delay has been attributed to the destabilizing influence of homeostatic synaptic scaling and changes in intrinsic properties. However, the impact of the spatial organization of cortical trauma on epileptogenesis is poorly understood. We addressed this question by analyzing the dynamics of a large-scale, biophysically realistic cortical network model subjected to different patterns of trauma. Our results suggest that the spatial pattern of trauma can greatly affect the propensity for developing posttraumatic epileptic activity. For the same fraction of lesioned neurons, spatially compact trauma resulted in stronger posttraumatic elevation of paroxysmal activity than spatially diffuse trauma. In the case of very severe trauma, a diffuse distribution of a small number of surviving intact neurons alleviated posttraumatic epileptogenesis. We suggest that clinical evaluation of the severity of brain trauma should take into account the spatial pattern of the injured cortex. PMID:21896754

  6. Quasi-two-dimensional threshold voltage model for junctionless cylindrical surrounding gate metal-oxide-semiconductor field-effect transistor with dual-material gate

    NASA Astrophysics Data System (ADS)

    Li, Cong; Zhuang, Yi-Qi; Zhang, Li; Jin, Gang

    2014-01-01

    Based on the quasi-two-dimensional (2D) solution of Poisson's equation in two continuous channel regions, an analytical threshold voltage model for the short-channel junctionless dual-material cylindrical surrounding-gate (JLDMCSG) metal-oxide-semiconductor field-effect transistor (MOSFET) is developed. Using the derived model, the channel potential distribution, horizontal electric field distribution, and threshold voltage roll-off of the JLDMCSG MOSFET are investigated. Compared with the junctionless single-material CSG (JLSGCSG) MOSFET, the JLDMCSG MOSFET can effectively suppress short-channel effects and simultaneously improve carrier transport efficiency. It is also revealed that the threshold voltage roll-off of the JLDMCSG can be significantly reduced by adopting both a small oxide thickness and a small silicon channel radius. The model is verified by comparing its calculated results with those obtained from the three-dimensional (3D) numerical device simulator ISE.

  7. Drought Risk Modeling for Thermoelectric Power Plants Siting using an Excess Over Threshold Approach

    SciTech Connect

    Bekera, Behailu B; Francis, Royce A; Omitaomu, Olufemi A

    2014-01-01

    Water availability is among the most important elements of thermoelectric power plant site selection and evaluation criteria. With increased variability and changes in hydrologic statistical stationarity, one concern is the increased occurrence of extreme drought events that may be attributable to climatic changes. As hydrological systems are altered, operators of thermoelectric power plants need to ensure a reliable supply of water for cooling and generation requirements. The effects of climate change are expected to influence hydrological systems at multiple scales, possibly leading to reduced efficiency of thermoelectric power plants. In this paper, we model drought characteristics from a thermoelectric system's operational and regulatory perspective. A systematic approach to characterise a stream environment in relation to extreme drought occurrence, duration and deficit-volume is proposed and demonstrated. This approach can potentially enhance early-stage decisions in identifying candidate sites for a thermoelectric power plant application and allow investigation and assessment of varying degrees of drought risk during more advanced stages of the siting process.

  8. Elaborating on threshold concepts

    NASA Astrophysics Data System (ADS)

    Rountree, Janet; Robins, Anthony; Rountree, Nathan

    2013-09-01

    We propose an expanded definition of Threshold Concepts (TCs) that requires the successful acquisition and internalisation not only of knowledge, but also its practical elaboration in the domains of applied strategies and mental models. This richer definition allows us to clarify the relationship between TCs and Fundamental Ideas, and to account for both the important and the problematic characteristics of TCs in terms of the Knowledge/Strategies/Mental Models Framework defined in previous work.

  9. Using the product threshold model for estimating separately the effect of temperature on male and female fertility.

    PubMed

    Tusell, L; David, I; Bodin, L; Legarra, A; Rafel, O; López-Bejar, M; Piles, M

    2011-12-01

    Animals under environmental thermal stress have reduced fertility due to impairment of mechanisms involved in their reproductive performance, and these mechanisms differ between males and females. As a consequence, the most sensitive periods of time and the magnitude of the effect of temperature on fertility can differ between sexes. The objective of this study was to estimate separately the effect of temperature in different periods around the insemination time on male and on female fertility by using the product threshold model. This model assumes that an observed reproduction outcome is the result of the product of 2 unobserved variables corresponding to the unobserved fertilities of the 2 individuals involved in the mating. A total of 7,625 AI records from rabbits belonging to a line selected for growth rate and indoor daily temperature records were used. The average maximum daily temperature and the proportion of days in which the maximum temperature was greater than 25°C were used as temperature descriptors. These descriptors were calculated for several periods around the day of AI. In the case of males, 4 periods of time covered different stages of spermatogenesis, the transit of the sperm through the epididymis, and the day of AI. For females, 5 periods of time covered the phases of preovulatory follicular maturation including day of AI and ovulation, fertilization and peri-implantational stage of the embryos, embryonic and early fetal periods of gestation, and finally, late gestation until birth. The effect of the different temperature descriptors was estimated in the corresponding male and female liabilities in a set of threshold product models. The temperature on the day of AI seems to be the most relevant temperature descriptor affecting male fertility because greater temperatures on the day of AI caused a decrease in male fertility (-6% in male fertility rate with respect to thermoneutrality). Departures from the thermal zone in temperature
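The core assumption of the product threshold model — an observed mating outcome is the product of two unobserved fertilities — can be sketched with probit links. The liability values used below are hypothetical illustrations, not estimates from the paper.

```python
from math import erf, sqrt

def probit(liability):
    """Standard-normal CDF: the success probability implied by a liability."""
    return 0.5 * (1.0 + erf(liability / sqrt(2.0)))

def mating_success_prob(male_liability, female_liability):
    """Product threshold model: the probability that an AI results in a
    success is the product of the (unobserved) male and female fertilities,
    each mapped from its liability scale by a probit link."""
    return probit(male_liability) * probit(female_liability)

# Hypothetical liabilities: a heat-stressed male (lower liability) mated
# to a female at thermoneutrality.
p = mating_success_prob(0.2, 0.9)
print(round(p, 3))  # → 0.473
```

Temperature descriptors would enter as regression terms on each liability separately, which is what lets the model attribute a fertility drop to the male or the female side.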

  10. Threshold dose for peripheral neuropathy following intraoperative radiotherapy (IORT) in a large animal model

    SciTech Connect

    Kinsella, T.J.; DeLuca, A.M.; Barnes, M.; Anderson, W.; Terrill, R.; Sindelar, W.F. )

    1991-04-01

    Radiation injury to peripheral nerve is a dose-limiting toxicity in the clinical application of intraoperative radiotherapy, particularly for pelvic and retroperitoneal tumors. Intraoperative radiotherapy-related peripheral neuropathy in humans receiving doses of 20-25 Gy is manifested as a mixed motor-sensory deficit beginning 6-9 months following treatment. In a previous experimental study of intraoperative radiotherapy-related neuropathy of the lumbro-sacral plexus, an approximate inverse linear relationship was reported between the intraoperative dose (20-75 Gy range) and the time to onset of hind limb paresis (1-12 mos following intraoperative radiotherapy). The principal histological lesion in irradiated nerve was loss of large nerve fibers and perineural fibrosis without significant vascular injury. Similar histological changes in irradiated nerves were found in humans. To assess peripheral nerve injury to lower doses of intraoperative radiotherapy in this same large animal model, groups of four adult American Foxhounds received doses of 10, 15, or 20 Gy to the right lumbro-sacral plexus and sciatic nerve using 9 MeV electrons. The left lumbro-sacral plexus and sciatic nerve were excluded from the intraoperative field to allow each animal to serve as its own control. Following treatment, a complete neurological exam, electromyogram, and nerve conduction studies were performed monthly for 1 year. Monthly neurological exams were performed in years 2 and 3 whereas electromyogram and nerve conduction studies were performed every 3 months during this follow-up period. With follow-up of greater than or equal to 42 months, no dog receiving 10 or 15 Gy IORT shows any clinical or laboratory evidence of peripheral nerve injury. However, all four dogs receiving 20 Gy developed right hind limb paresis at 8, 9, 9, and 12 mos following intraoperative radiotherapy.

  11. Learning foraging thresholds for lizards

    SciTech Connect

    Goldberg, L.A.; Hart, W.E.; Wilson, D.B.

    1996-01-12

    This work gives a proof of convergence for a randomized learning algorithm that describes how anoles (lizards found in the Caribbean) learn a foraging threshold distance. The model assumes that an anole will pursue a prey item if and only if it is within this threshold distance of the anole's perch. The learning algorithm was proposed by the biologist Roughgarden and his colleagues, who experimentally confirmed that it quickly converges to the foraging threshold predicted by optimal foraging theory. Our analysis provides an analytic confirmation that the learning algorithm converges to this optimal foraging threshold with high probability.
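The abstract does not reproduce Roughgarden's update rule, so the following is only a generic stochastic-approximation sketch of threshold learning (all parameter values are hypothetical): the anole nudges its threshold by the net payoff of each pursuit, and the rule settles where the average payoff of a pursuit is zero.

```python
import random

def learn_threshold(energy=10.0, cost_per_m=2.0, eta=0.01, steps=20000, seed=1):
    """Toy stochastic-approximation sketch of foraging-threshold learning.
    NOT Roughgarden's published algorithm: the anole pursues prey appearing
    within its current threshold and nudges the threshold by the net payoff
    (energy gained minus distance-proportional pursuit cost). For prey
    placed uniformly, the rule settles where the mean pursuit payoff is
    zero, i.e. near 2 * energy / cost_per_m."""
    rng = random.Random(seed)
    r = 1.0  # initial threshold distance (m)
    for _ in range(steps):
        d = rng.uniform(0.0, 15.0)  # prey appears at a random distance
        if d <= r:                  # pursue only prey inside the threshold
            r += eta * (energy - cost_per_m * d)
            r = max(r, 0.1)         # keep the threshold positive
    return r

print(round(learn_threshold(), 1))  # typically close to 10.0 (= 2*energy/cost_per_m)
```

The convergence proved in the paper is of this flavour: a noisy per-encounter update whose drift vanishes only at the optimal threshold.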

  12. Subdural haemorrhages in infants: population based study

    PubMed Central

    Jayawant, S; Rawlinson, A; Gibbon, F; Price, J; Schulte, J; Sharples, P; Sibert, J R; Kemp, A M

    1998-01-01

    Objectives To identify the incidence, clinical outcome, and associated factors of subdural haemorrhage in children under 2 years of age, and to determine how such cases were investigated and how many were due to child abuse. Design Population based case series. Setting South Wales and south west England. Subjects Children under 2 years of age who had a subdural haemorrhage. We excluded neonates who developed subdural haemorrhage during their stay on a neonatal unit and infants who developed a subdural haemorrhage after infection or neurosurgical intervention. Main outcome measures Incidence and clinical outcome of subdural haemorrhage in infants, the number of cases caused by child abuse, the investigations such children received, and associated risk factors. Results Thirty three children (23 boys and 10 girls) were identified with subdural haemorrhage. The incidence was 12.8/100 000 children/year (95% confidence interval 5.4 to 20.2). Twenty eight cases (85%) were under 1 year of age. The incidence of subdural haemorrhage in children under 1 year of age was 21.0/100 000 children/year and was therefore higher than in the older children. The clinical outcome was poor: nine infants died and 15 had profound disability. Only 22 infants had the basic investigations of a full blood count, coagulation screen, computed tomography or magnetic resonance imaging, skeletal survey or bone scan, and ophthalmological examination. In retrospect, 27 cases (82%) were highly suggestive of abuse. Conclusion Subdural haemorrhage is common in infancy and carries a poor prognosis; three quarters of such infants die or have profound disability. Most cases are due to child abuse, but in a few the cause is unknown. Some children with subdural haemorrhage do not undergo appropriate investigations. We believe the clinical investigation of such children should include a full multidisciplinary social assessment, an ophthalmic examination, a skeletal survey supplemented with a bone scan or a

  13. Genetic threshold hypothesis of neocortical spike-and-wave discharges in the rat: An animal model of petit mal epilepsy

    SciTech Connect

    Vadasz, C.; Fleischer, A.; Carpi, D.; Jando, G.

    1995-02-27

    Neocortical high-voltage spike-and-wave discharges (HVS) in the rat are an animal model of petit mal epilepsy. Genetic analysis of total duration of HVS (s/12 hr) in reciprocal F1 and F2 hybrids of F344 and BN rats indicated that the phenotypic variability of HVS cannot be explained by a simple, monogenic Mendelian model. Biometrical analysis suggested the presence of additive, dominance, and sex-linked-epistatic effects, buffering maternal influence, and heterosis. High correlation was observed between average duration (s/episode) and frequency of occurrence of spike-and-wave episodes (n/12 hr) in parental and segregating generations, indicating that common genes affect both duration and frequency of the spike-and-wave pattern. We propose that both genetic and developmental-environmental factors control an underlying quantitative variable, which, above a certain threshold level, precipitates HVS discharges. These findings, together with the recent availability of rat DNA markers for total genome mapping, pave the way to the identification of genes that control the susceptibility of the brain to spike-and-wave discharges. 67 refs., 3 figs., 5 tabs.

  14. Genetic threshold hypothesis of neocortical spike-and-wave discharges in the rat: an animal model of petit mal epilepsy.

    PubMed

    Vadász, C; Carpi, D; Jando, G; Kandel, A; Urioste, R; Horváth, Z; Pierre, E; Vadi, D; Fleischer, A; Buzsáki, G

    1995-02-27

    Neocortical high-voltage spike-and-wave discharges (HVS) in the rat are an animal model of petit mal epilepsy. Genetic analysis of total duration of HVS (s/12 hr) in reciprocal F1 and F2 hybrids of F344 and BN rats indicated that the phenotypic variability of HVS cannot be explained by a simple, monogenic Mendelian model. Biometrical analysis suggested the presence of additive, dominance, and sex-linked-epistatic effects, buffering maternal influence, and heterosis. High correlation was observed between average duration (s/episode) and frequency of occurrence of spike-and-wave episodes (n/12 hr) in parental and segregating generations, indicating that common genes affect both duration and frequency of the spike-and-wave pattern. We propose that both genetic and developmental-environmental factors control an underlying quantitative variable, which, above a certain threshold level, precipitates HVS discharges. These findings, together with the recent availability of rat DNA markers for total genome mapping, pave the way to the identification of genes that control the susceptibility of the brain to spike-and-wave discharges.

  15. Numerical Analysis of Threshold between Laser-Supported Detonation and Combustion Wave Using Thermal Non-Equilibrium and Multi-Charged Ionization Model

    NASA Astrophysics Data System (ADS)

    Shiraishi, Hiroyuki; Kumagai, Yuya

    Laser-supported detonation (LSD), which is one type of laser-supported plasma (LSP), is an important phenomenon because it can generate high pressures and temperatures through laser absorption. In this study, using a thermal non-equilibrium model, we numerically simulate LSPs, which are categorized as either LSDs or laser-supported combustion waves (LSCs). For the analysis model, a two-temperature (heavy-particle and electron temperature) model has been used because the electronic mode is excited first during laser absorption, so a thermal non-equilibrium state easily arises. In the numerical analysis of LSDs, laser absorption models are particularly important. Therefore, a multi-charged ionization model is considered to evaluate precisely the propagation and the structure transition of the LSD waves in the proximity of the LSC-LSD threshold. In the new model, the transition of the LSD structure near the threshold, which is indicated by the ionization delay length, becomes more realistic.

  16. A semi-analytic power balance model for low (L) to high (H) mode transition power threshold

    SciTech Connect

    Singh, R.; Jhang, Hogun; Kaw, P. K.; Diamond, P. H.; Nordman, H.; Bourdelle, C.

    2014-06-15

    We present a semi-analytic model for the low (L) to high (H) mode transition power threshold (P{sub th}). Two main assumptions are made in our study. First, high poloidal mode number drift resistive ballooning modes (high-m DRBM) are assumed to be the dominant turbulence driver in a narrow edge region near the last closed flux surface. Second, the pre-transition edge profile and turbulent diffusivity in the narrow edge region are taken to follow turbulent equipartition. An edge power balance relation is derived by calculating the power flux dissipated through turbulent conduction, turbulent convection, and radiation in the edge region. P{sub th} is obtained by imposing the turbulence quench rule due to sheared E × B rotation. Evaluation of P{sub th} shows good agreement with experimental results from existing machines. The increase of P{sub th} at low density (i.e., the existence of a roll-over density in P{sub th} vs. density) is shown to originate from the scale length of the density profile being longer than that of the temperature profile.

  17. Lowered threshold energy for femtosecond laser induced optical breakdown in a water based eye model by aberration correction with adaptive optics.

    PubMed

    Hansen, Anja; Géneaux, Romain; Günther, Axel; Krüger, Alexander; Ripken, Tammo

    2013-06-01

    In femtosecond laser ophthalmic surgery, tissue dissection is achieved by photodisruption based on laser-induced optical breakdown. In order to minimize collateral damage to the eye, laser surgery systems should be optimized towards the lowest possible energy threshold for photodisruption. However, optical aberrations of the eye and the laser system distort the irradiance distribution from an ideal profile, which causes a rise in breakdown threshold energy even if great care is taken to minimize the aberrations of the system during design and alignment. In this study we used a water chamber with an achromatic focusing lens and a scattering sample as an eye model and determined the breakdown threshold in single-pulse plasma transmission loss measurements. Due to aberrations, the precise lower limit for breakdown threshold irradiance in water is still unknown. Here we show that the threshold energy can be substantially reduced when using adaptive optics to improve the irradiance distribution by spatial beam shaping. We found that for initial aberrations with a root-mean-square wavefront error of only one third of the wavelength, the threshold energy can still be reduced by a factor of three if the aberrations are corrected to the diffraction limit by adaptive optics. The transmitted pulse energy is reduced by 17% at twice the threshold. Furthermore, the gas bubble motions after breakdown for pulse trains at a 5 kHz repetition rate show a more transverse direction in the corrected case compared to the more spherical distribution without correction. Our results demonstrate how both the applied and the transmitted pulse energy could be reduced during ophthalmic surgery when correcting for aberrations. As a consequence, the risk of retinal damage by transmitted energy and the extent of collateral damage to the focal volume could be minimized accordingly when using adaptive optics in fs-laser surgery.

  18. The simcyp population based simulator: architecture, implementation, and quality assurance.

    PubMed

    Jamei, Masoud; Marciniak, Steve; Edwards, Duncan; Wragg, Kris; Feng, Kairui; Barnett, Adrian; Rostami-Hodjegan, Amin

    2013-01-01

    Developing a user-friendly platform that can handle a vast number of complex physiologically based pharmacokinetic and pharmacodynamic (PBPK/PD) models, both for conventional small molecules and for larger biologic drugs, is a substantial challenge. Over the last decade the Simcyp Population Based Simulator has gained popularity in major pharmaceutical companies (70% of the top 40 in terms of R&D spending). Under the Simcyp Consortium's guidance, it has evolved from a simple drug-drug interaction tool to a sophisticated and comprehensive Model Based Drug Development (MBDD) platform that covers a broad range of applications spanning early drug discovery to late drug development. This article provides an update on the latest architectural and implementation developments within the Simulator. Interconnection between peripheral modules, the dynamic model-building process, and compound and population data handling are all described. The Simcyp Data Management (SDM) system, which contains the system and drug databases, helps to implement quality standards through seamless integration and tracking of any changes. This also helps with internal approval procedures, validation, and auto-testing of newly implemented models and algorithms, an area of high interest to regulatory bodies.

  19. A threshold model for opposing actions of acetylcholine on reward behavior: Molecular mechanisms and implications for treatment of substance abuse disorders.

    PubMed

    Grasing, Kenneth

    2016-10-01

    The cholinergic system plays important roles in both learning and addiction. Medications that modify cholinergic tone can have pronounced effects on behaviors reinforced by natural and drug reinforcers. Importantly, enhancing the action of acetylcholine (ACh) in the nucleus accumbens and ventral tegmental area (VTA) dopamine system can either augment or diminish these behaviors. A threshold model is presented that can explain these seemingly contradictory results. Relatively low levels of ACh rise above a lower threshold, facilitating behaviors supported by drugs or natural reinforcers. Further increases in cholinergic tone that rise above a second, upper threshold oppose the same behaviors. Accordingly, cholinesterase inhibitors, or agonists for nicotinic or muscarinic receptors, each have the potential to produce biphasic effects on reward behaviors. Pretreatment with either nicotinic or muscarinic antagonists can block drug- or food-reinforced behavior by maintaining cholinergic tone below its lower threshold. Potential threshold mediators include desensitization of nicotinic receptors and biphasic effects of ACh on the firing of medium spiny neurons. Nicotinic receptors with high and low affinity appear to play greater roles in reward enhancement and inhibition, respectively. Cholinergic inhibition of natural and drug rewards may serve as a mediator of previously described opponent processes. Future studies should evaluate cholinergic agents across a broader range of doses, and include a variety of reinforced behaviors.

  20. Regional accumulation characteristics of cadmium in vegetables: Influencing factors, transfer model and indication of soil threshold content.

    PubMed

    Yang, Yang; Chen, Weiping; Wang, Meie; Peng, Chi

    2016-12-01

    A regional investigation in the Youxian prefecture, southern China, was conducted to analyze the impact of environmental factors, including soil properties and irrigation in conjunction with the use of fertilizers, on the accumulation of Cd in vegetables. The Cd transfer potential from soil to vegetable was quantified by the plant uptake factor (PUF), which varied by three orders of magnitude and was described by a Gaussian distribution model. The soil pH, content of soil organic matter (SOM), concentrations of Zn in the soil, pH of irrigation water, and nitrogenous fertilizers contributed significantly to the PUF variations. A path model analysis, however, revealed that the principal control of the PUF values resulted from the soil pH, soil Zn concentrations, and SOM. Transfer functions were developed using the total soil Cd concentrations, soil pH, and SOM. They explained 56% of the variance for all samples irrespective of the vegetable genotypes. The transfer functions predicted the probability of exceeding the China food safety standard concentrations for Cd in four major consumable vegetables under different soil conditions. Poor production practices in the study area involved usage of soil with pH values ≤ 5.5, especially for the cultivation of Raphanus sativus L., even with soil Cd concentrations below the China soil quality standard. We found that the soil standard Cd concentration for cultivating vegetables was not strict enough for the strongly acidic (pH ≤ 5.5) and SOM-poor (SOM ≤ 10 g kg(-1)) soils present in southern China. It is thus necessary to address the effect of environmental variables to generate a suitable Cd threshold for cultivated soils.

  1. The ethical dilemma of population-based medical decision making.

    PubMed

    Kirsner, R S; Federman, D G

    1998-11-01

    Over the past several years, there has been a growing interest in population-based medicine. Some elements in healthcare have used population-based medicine as a technique to decrease healthcare expenditures. However, in their daily practice of medicine, physicians must grapple with the question of whether they incorporate population-based medicine when making decisions for an individual patient. They therefore may encounter an ethical dilemma. Physicians must remember that the physician-patient relationship is of paramount importance and that even well-conducted research may not be applicable to an individual patient.

  2. Two-dimensional models of threshold voltage and subthreshold current for symmetrical double-material double-gate strained Si MOSFETs

    NASA Astrophysics Data System (ADS)

    Yan-hui, Xin; Sheng, Yuan; Ming-tang, Liu; Hong-xia, Liu; He-cai, Yuan

    2016-03-01

    The two-dimensional models for symmetrical double-material double-gate (DM-DG) strained Si (s-Si) metal-oxide semiconductor field effect transistors (MOSFETs) are presented. The surface potential and the surface electric field expressions have been obtained by solving Poisson’s equation. The models of threshold voltage and subthreshold current are obtained based on the surface potential expression. The surface potential and the surface electric field are compared with those of single-material double-gate (SM-DG) MOSFETs. The effects of different device parameters on the threshold voltage and the subthreshold current are demonstrated. The analytical models give deep insight into the device parameters design. The analytical results obtained from the proposed models show good matching with the simulation results using DESSIS. Project supported by the National Natural Science Foundation of China (Grant Nos. 61376099, 11235008, and 61205003).

  3. A Bayesian threshold-linear model evaluation of perinatal mortality, dystocia, birth weight, and gestation length in a Holstein herd.

    PubMed

    Johanson, J M; Berger, P J; Tsuruta, S; Misztal, I

    2011-01-01

    The objective of this research was to estimate genetic parameters for a multiple-trait evaluation of dystocia (DYS), perinatal mortality (PM), birth weight (BWT), and gestation length (GL) in Holsteins. The data included 5,712 calving records collected between 1968 and 2005 from the Iowa State University dairy breeding herd in Ankeny. The incidence of PM was 8.8% and that of DYS 28.8%; mean BWT was 40.5 kg, and GL was 279 d. A threshold-linear animal model included the effects of year, season, sex of calf, parity, sire group, direct genetic, maternal genetic, and maternal permanent environment. Direct heritabilities for DYS, PM, BWT, and GL were 0.11 (0.04), 0.13 (0.05), 0.26 (0.04), and 0.51 (0.05), respectively. Maternal heritabilities were 0.14 (0.04), 0.15 (0.03), 0.08 (0.01), and 0.08 (0.02), for DYS, PM, BWT, and GL, respectively. The heritabilities are the posterior means of the Gibbs samples with their standard deviations in parentheses. The direct genetic correlation between PM and DYS was estimated at 0.67 (0.19), whereas the maternal genetic correlation was 0.45 (0.16). Direct and maternal PM and DYS are partially controlled by the same genes. Selection on only calving ease is not sufficient to control PM. With moderate genetic correlations between all 4 traits, BWT and GL should be included with DYS and PM in an evaluation of calving performance.

  4. A Population-based Habitable Zone Perspective

    NASA Astrophysics Data System (ADS)

    Zsom, Andras

    2015-11-01

    What can we tell about exoplanet habitability if currently only the stellar properties, planet radius, and the incoming stellar flux are known? A planet is in the habitable zone (HZ) if it harbors liquid water on its surface. The HZ is traditionally conceived as a sharp region around stars because it is calculated for one planet with specific properties. Such an approach is limiting because the planet’s atmospheric and geophysical properties, which influence the presence of liquid water on the surface, are currently unknown but expected to be diverse. A statistical HZ description is outlined that does not favor one planet type. Instead, the stellar and planet properties are treated as random variables, and a continuous range of planet scenarios is considered. Various probability density functions are assigned to each random variable, and a combination of Monte Carlo sampling and climate modeling is used to generate synthetic exoplanet populations with known surface climates. Then, the properties of the subpopulation bearing liquid water is analyzed. Given our current observational knowledge, the HZ takes the form of a weakly constrained but smooth probability function. The HZ has an inner edge, but a clear outer edge is not seen. Currently only optimistic upper limits can be derived for the potentially observable HZ occurrence rate. Finally, we illustrate through an example how future data on exoplanet atmospheres will help to narrow down the probability that an exoplanet harbors liquid water, and we identify the greatest observational challenge in the way of finding a habitable exoplanet.
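The statistical habitable-zone idea — treat planet properties as random variables and report the fraction of a synthetic population with surface liquid water — can be caricatured in a few lines. The albedo and greenhouse-warming ranges and the simple equilibrium-temperature scaling below are illustrative assumptions, not the paper's climate model.

```python
import random

def hz_probability(flux_rel, n=20000, seed=0):
    """Monte Carlo sketch of a population-based HZ (illustrative only):
    for a given stellar flux relative to Earth's, sample a diverse planet
    population (random Bond albedo and greenhouse warming) and return the
    fraction whose surface temperature allows liquid water (273-373 K)."""
    rng = random.Random(seed)
    liquid = 0
    for _ in range(n):
        albedo = rng.uniform(0.0, 0.7)        # assumed albedo range
        greenhouse = rng.uniform(0.0, 60.0)   # assumed warming (K)
        # Equilibrium temperature scaled from Earth's ~255 K (albedo 0.3).
        t_eq = 255.0 * ((1.0 - albedo) / 0.7) ** 0.25 * flux_rel ** 0.25
        if 273.0 <= t_eq + greenhouse <= 373.0:
            liquid += 1
    return liquid / n

print(hz_probability(1.0) > hz_probability(4.0))  # → True
```

Because the answer is a fraction rather than a yes/no, the "edge" of the zone comes out as a smooth probability curve, which is the qualitative point of the abstract.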

  5. Development of a new risk model for predicting cardiovascular events among hemodialysis patients: Population-based hemodialysis patients from the Japan Dialysis Outcome and Practice Patterns Study (J-DOPPS)

    PubMed Central

    Onishi, Yoshihiro; Fukuhara, Shunichi

    2017-01-01

    Background Cardiovascular (CV) events are the primary cause of death and of becoming bedridden among hemodialysis (HD) patients. The Framingham risk score (FRS) is useful for predicting the incidence of CV events in the general population, but is considered unsuitable for predicting the incidence of CV events in HD patients, given the atypical relationships between conventional risk factors and outcomes in this population. We therefore aimed to develop a new prognostic prediction model for prevention and early detection of CV events among hemodialysis patients. Methods We enrolled 3,601 maintenance HD patients based on their data from the Japan Dialysis Outcomes and Practice Patterns Study (J-DOPPS), phases 3 and 4. We longitudinally assessed the association between several potential candidate predictors and composite CV events in the year after study initiation. Potential candidate predictors included the component factors of the FRS and other HD-specific risk factors. We used multivariable logistic regression with backward stepwise selection to develop our new prediction model and generated a calibration plot. Additionally, we performed bootstrapping to assess the internal validity. Results We observed 328 composite CV events during the 1-year follow-up. The final prediction model contained six variables: age, diabetes status, history of CV events, dialysis time per session, and serum phosphorus and albumin levels. The new model showed significantly better discrimination than the FRS, in both men (c-statistics: 0.76 for the new model, 0.64 for the FRS) and women (c-statistics: 0.77 for the new model, 0.60 for the FRS). Additionally, we confirmed the consistency between the observed and predicted results using the calibration plot. Further, we found discrimination and calibration in the bootstrapping cohort similar to those of the derivation model. Conclusions We developed a new risk model consisting of only six predictors. Our new model predicted CV events more accurately than
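The c-statistic used to compare the models is simply the probability that a randomly chosen event case was assigned a higher predicted risk than a randomly chosen non-event case. A minimal sketch, with made-up risks and outcomes rather than J-DOPPS data:

```python
def c_statistic(risks, events):
    """Concordance (c-) statistic: the fraction of (event, non-event)
    pairs in which the event case received the higher predicted risk;
    ties count one half."""
    pairs = concordant = 0.0
    for r1, e1 in zip(risks, events):
        for r2, e2 in zip(risks, events):
            if e1 == 1 and e2 == 0:
                pairs += 1
                if r1 > r2:
                    concordant += 1
                elif r1 == r2:
                    concordant += 0.5
    return concordant / pairs

# Hypothetical predicted 1-year CV risks and observed outcomes.
risks  = [0.05, 0.10, 0.20, 0.40, 0.60]
events = [0, 0, 1, 0, 1]
print(round(c_statistic(risks, events), 3))  # → 0.833
```

A value of 0.5 means the scores are no better than chance; the abstract's 0.76-0.77 for the new model versus 0.60-0.64 for the FRS is a difference on exactly this scale.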

  6. The asymmetric reactions of mean and volatility of stock returns to domestic and international information based on a four-regime double-threshold GARCH model

    NASA Astrophysics Data System (ADS)

    Chen, Cathy W. S.; Yang, Ming Jing; Gerlach, Richard; Jim Lo, H.

    2006-07-01

    In this paper, we investigate the asymmetric reactions of mean and volatility of stock returns in five major markets to their own local news and the US information via linear and nonlinear models. We introduce a four-regime Double-Threshold GARCH (DTGARCH) model, which allows asymmetry in both the conditional mean and variance equations simultaneously by employing two threshold variables, to analyze the stock markets’ reactions to different types of information (good/bad news) generated from the domestic markets and the US stock market. By applying the four-regime DTGARCH model, this study finds that the interaction between the information of domestic and US stock markets leads to the asymmetric reactions of stock returns and their variability. In addition, this research also finds that the positive autocorrelation reported in the previous studies of financial markets may in fact be mis-specified, and actually due to the local market's positive response to the US stock market.
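A four-regime double-threshold GARCH of the kind described can be simulated as follows: the regime is selected by the signs of two threshold variables (the lagged domestic return and the lagged US return), and each regime carries its own mean and variance intercepts. Every parameter value here is an illustrative choice, not an estimate from the paper.

```python
import random

def simulate_dtgarch(n=2000, seed=7):
    """Simulate an illustrative four-regime double-threshold GARCH(1,1):
    the (mean, variance-intercept) pair switches with the signs of the
    lagged domestic return and the lagged US return (bad/good news)."""
    rng = random.Random(seed)
    # regime key: (domestic news, US news) with 0 = bad (< 0), 1 = good (>= 0)
    params = {(0, 0): (-0.05, 0.20), (0, 1): (0.00, 0.12),
              (1, 0): (-0.02, 0.15), (1, 1): (0.05, 0.08)}
    alpha, beta = 0.08, 0.90            # common ARCH / GARCH coefficients
    r_prev = us_prev = eps_prev = 0.0
    h = 0.1                             # initial conditional variance
    returns = []
    for _ in range(n):
        regime = (int(r_prev >= 0), int(us_prev >= 0))
        mu, omega = params[regime]
        h = omega + alpha * eps_prev ** 2 + beta * h   # conditional variance
        eps_prev = rng.gauss(0.0, 1.0) * h ** 0.5      # current shock
        r = mu + eps_prev
        returns.append(r)
        r_prev, us_prev = r, rng.gauss(0.0, 1.0)       # exogenous US proxy
    return returns

rets = simulate_dtgarch()
print(len(rets))  # → 2000
```

Estimation in the paper runs in the opposite direction, of course: given observed domestic and US returns, the regime-specific parameters are fitted, and asymmetry shows up as significant differences across the four parameter sets.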

  7. Bayesian estimation of dose thresholds

    NASA Technical Reports Server (NTRS)

    Groer, P. G.; Carnes, B. A.

    2003-01-01

    An example is described of Bayesian estimation of radiation absorbed dose thresholds (subsequently simply referred to as dose thresholds) using a specific parametric model applied to a data set on mice exposed to 60Co gamma rays and fission neutrons. A Weibull based relative risk model with a dose threshold parameter was used to analyse, as an example, lung cancer mortality and determine the posterior density for the threshold dose after single exposures to 60Co gamma rays or fission neutrons from the JANUS reactor at Argonne National Laboratory. The data consisted of survival, censoring times and cause of death information for male B6CF1 unexposed and exposed mice. The 60Co gamma whole-body doses for the two exposed groups were 0.86 and 1.37 Gy. The neutron whole-body doses were 0.19 and 0.38 Gy. Marginal posterior densities for the dose thresholds for neutron and gamma radiation were calculated with numerical integration and found to have quite different shapes. The density of the threshold for 60Co is unimodal with a mode at about 0.50 Gy. The threshold density for fission neutrons declines monotonically from a maximum value at zero with increasing doses. The posterior densities for all other parameters were similar for the two radiation types.
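The grid-based posterior computation for a threshold dose can be sketched as follows. The linear-excess-risk likelihood and all data values are hypothetical stand-ins for the paper's Weibull relative-risk model and mouse survival data.

```python
import math

def threshold_posterior(doses, cases, at_risk, grid, base_rate=0.1, slope=1.0):
    """Posterior over a dose-threshold parameter t on a grid, assuming a
    flat prior and a binomial likelihood in which the event probability
    rises linearly only above the threshold:
        p(d) = base_rate * (1 + slope * max(0, d - t))."""
    logliks = []
    for t in grid:
        ll = 0.0
        for d, x, n in zip(doses, cases, at_risk):
            p = min(base_rate * (1.0 + slope * max(0.0, d - t)), 0.999)
            ll += x * math.log(p) + (n - x) * math.log(1.0 - p)
        logliks.append(ll)
    m = max(logliks)                          # normalise in log space
    weights = [math.exp(ll - m) for ll in logliks]
    z = sum(weights)
    return [w / z for w in weights]

# Hypothetical dose groups in which excess mortality appears above ~0.5 Gy.
doses, cases, at_risk = [0.0, 0.2, 0.5, 0.9, 1.4], [10, 11, 10, 14, 18], [100] * 5
grid = [0.0, 0.25, 0.5, 0.75, 1.0]
post = threshold_posterior(doses, cases, at_risk, grid)
print(grid[max(range(len(grid)), key=post.__getitem__)])  # → 0.5
```

With data constructed this way the posterior is unimodal with its mode at 0.5 Gy, mimicking the shape the paper reports for the 60Co threshold; a monotonically declining posterior, as found for fission neutrons, would arise if the excess risk persisted down to zero dose.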

  8. Using Generalized Additive Modeling to Empirically Identify Thresholds within the ITERS in Relation to Toddlers' Cognitive Development

    ERIC Educational Resources Information Center

    Setodji, Claude Messan; Le, Vi-Nhuan; Schaack, Diana

    2013-01-01

    Research linking high-quality child care programs and children's cognitive development has contributed to the growing popularity of child care quality benchmarking efforts such as quality rating and improvement systems (QRIS). Consequently, there has been an increased interest in and a need for approaches to identifying thresholds, or cutpoints,…

  9. CARA Risk Assessment Thresholds

    NASA Technical Reports Server (NTRS)

    Hejduk, M. D.

    2016-01-01

    Warning remediation threshold (Red threshold): Pc level at which warnings are issued, and active remediation considered and usually executed. Analysis threshold (Green to Yellow threshold): Pc level at which analysis of event is indicated, including seeking additional information if warranted. Post-remediation threshold: Pc level to which remediation maneuvers are sized in order to achieve event remediation and obviate any need for immediate follow-up maneuvers. Maneuver screening threshold: Pc compliance level for routine maneuver screenings (more demanding than regular Red threshold due to additional maneuver uncertainty).

  10. Experimental and Finite Element Modeling of Near-Threshold Fatigue Crack Growth for the K-Decreasing Test Method

    NASA Technical Reports Server (NTRS)

    Smith, Stephen W.; Seshadri, Banavara R.; Newman, John A.

    2015-01-01

    The experimental methods to determine near-threshold fatigue crack growth rate data are prescribed in ASTM standard E647. To produce near-threshold data at a constant stress ratio (R), the applied stress-intensity factor (K) is decreased as the crack grows based on a specified K-gradient. Consequently, as the fatigue crack growth rate threshold is approached and the crack tip opening displacement decreases, remote crack wake contact may occur due to the plastically deformed crack wake surfaces, shielding the growing crack tip and resulting in a reduced crack tip driving force and non-representative crack growth rate data. If such data are used to life a component, the evaluation could yield highly non-conservative predictions. Although this anomalous behavior has been shown to be affected by K-gradient, starting K level, residual stresses, environmental assisted cracking, specimen geometry, and material type, the specifications within the standard to avoid this effect are limited to a maximum fatigue crack growth rate and a suggestion for the K-gradient value. This paper provides parallel experimental and computational simulations of the K-decreasing method for two materials (an aluminum alloy, AA 2024-T3, and a titanium alloy, Ti 6-2-2-2-2) to aid in establishing a clear understanding of appropriate testing requirements. These simulations investigate the effect of K-gradient, the maximum value of stress-intensity factor applied, and material type. A material-independent term is developed to guide the selection of appropriate test conditions for most engineering alloys. With the use of such a term, near-threshold fatigue crack growth rate tests can be performed at accelerated rates, near-threshold data can be acquired in days instead of weeks without having to establish testing criteria through trial and error, and these data can be acquired for most engineering materials, even those that are produced in relatively small product forms.
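    For reference, the K-decreasing schedule in ASTM E647 reduces the applied stress-intensity factor exponentially with crack length, K(a) = K0 exp(C (a - a0)), where C = (1/K) dK/da is the (negative) normalized K-gradient. The parameter values in this sketch are illustrative, not values from the paper:

```python
import math

def k_decreasing(K0, C, a0, a):
    """Applied stress-intensity factor under an exponential K-decreasing
    schedule: K(a) = K0 * exp(C * (a - a0)), with C < 0."""
    return K0 * math.exp(C * (a - a0))

K0, C, a0 = 15.0, -0.08, 10.0   # MPa*sqrt(m), 1/mm, mm (illustrative values)
for a in (10.0, 15.0, 20.0, 25.0):
    print(f"a = {a:4.1f} mm   K = {k_decreasing(K0, C, a0, a):5.2f} MPa*sqrt(m)")
```

    A steeper (more negative) C reaches near-threshold growth rates in less crack extension, which is precisely where the crack-wake contact artifact discussed above becomes a concern.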

  11. Combined atomistic-continuum model for simulation of laser interaction with metals: application in the calculation of melting thresholds in Ni targets of varying thickness

    NASA Astrophysics Data System (ADS)

    Ivanov, D. S.; Zhigilei, L. V.

    The threshold laser fluence for the onset of surface melting is calculated for Ni films of different thicknesses and for a bulk Ni target using a combined atomistic-continuum computational model. The model combines the classical molecular dynamics (MD) method for simulation of non-equilibrium processes of lattice superheating and fast phase transformations with a continuum description of the laser excitation and subsequent relaxation of the conduction band electrons based on the two-temperature model (TTM). In the hybrid TTM-MD method, MD substitutes the TTM equation for the lattice temperature, and the diffusion equation for the electron temperature is solved simultaneously with MD integration of the equations of motion of atoms. The dependence of the threshold fluence on the film thickness predicted in TTM-MD simulations qualitatively agrees with TTM calculations, while the values of the thresholds for thick films and bulk targets are 10% higher in TTM-MD. The quantitative differences between the predictions of TTM and TTM-MD demonstrate that the kinetics of laser melting as well as the energy partitioning between the thermal energy of atomic vibrations and energy of the collective atomic motion driven by the relaxation of the laser-induced pressure should be taken into account in interpretation of experimental results on surface melting.
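    A minimal one-dimensional explicit finite-difference sketch of the continuum two-temperature model (without the MD part, and with illustrative rather than fitted Ni parameters) looks like this: the electron field absorbs a short surface source S and exchanges energy with the lattice through the coupling constant G.

```python
import math

# Two-temperature model (TTM), explicit 1-D finite differences:
#   Ce dTe/dt = ke d2Te/dx2 - G (Te - Tl) + S(x, t)
#   Cl dTl/dt = G (Te - Tl)
# All parameter values are illustrative, not fitted Ni constants.
nx, dx, dt, steps = 50, 1.0, 0.01, 2000
Ce, Cl, ke, G = 1.0, 10.0, 1.0, 0.5
Te = [300.0] * nx          # electron temperature
Tl = [300.0] * nx          # lattice temperature

for n in range(steps):
    # Surface-absorbed pulse, exponentially decaying into the target, on for t < 1
    S = [1000.0 * math.exp(-i * dx / 5.0) if n * dt < 1.0 else 0.0
         for i in range(nx)]
    Te_new = Te[:]
    for i in range(1, nx - 1):
        lap = (Te[i - 1] - 2 * Te[i] + Te[i + 1]) / dx**2
        Te_new[i] = Te[i] + dt / Ce * (ke * lap - G * (Te[i] - Tl[i]) + S[i])
    Te_new[0], Te_new[-1] = Te_new[1], Te_new[-2]   # insulated boundaries
    Tl = [Tl[i] + dt / Cl * G * (Te[i] - Tl[i]) for i in range(nx)]
    Te = Te_new

print(f"surface lattice temperature after relaxation: {Tl[0]:.1f}")
```

    In the hybrid scheme described in the abstract, the lattice equation above is replaced by MD integration of atomic motion, while the electron diffusion equation is retained.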

  12. Coloring geographical threshold graphs

    SciTech Connect

    Bradonjic, Milan; Percus, Allon; Muller, Tobias

    2008-01-01

    We propose a coloring algorithm for sparse random graphs generated by the geographical threshold graph (GTG) model, a generalization of random geometric graphs (RGG). In a GTG, nodes are distributed in a Euclidean space, and edges are assigned according to a threshold function involving the distance between nodes as well as randomly chosen node weights. The motivation for analyzing this model is that many real networks (e.g., wireless networks, the Internet, etc.) need to be studied by using a 'richer' stochastic model (which in this case includes both a distance between nodes and weights on the nodes). Here, we analyze the GTG coloring algorithm together with the graph's clique number, showing formally that in spite of the differences in structure between GTG and RGG, the asymptotic behavior of the chromatic number is identical: χ = ln n / ln ln n (1 + o(1)). Finally, we consider the leading corrections to this expression, again using the coloring algorithm and clique number to provide bounds on the chromatic number. We show that the gap between the lower and upper bound is within C ln n / (ln ln n)^2, and specify the constant C.
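    A toy construction of a GTG with a greedy coloring pass might look like the following; the interaction form (w_i + w_j) r^(-alpha) >= theta, the weight distribution, and all parameter values are illustrative assumptions, and the greedy pass is a generic heuristic rather than the paper's algorithm:

```python
import random

random.seed(1)

# Geographical threshold graph: random positions and weights; an edge joins
# i and j when (w_i + w_j) * r_ij**(-alpha) >= theta.
n, alpha, theta = 60, 2.0, 40.0
pos = [(random.random(), random.random()) for _ in range(n)]
w = [random.expovariate(1.0) for _ in range(n)]

def connected(i, j):
    r2 = (pos[i][0] - pos[j][0])**2 + (pos[i][1] - pos[j][1])**2
    return r2 > 0 and (w[i] + w[j]) * r2**(-alpha / 2) >= theta

adj = {i: {j for j in range(n) if j != i and connected(i, j)} for i in range(n)}

# Greedy coloring: visit nodes by decreasing degree, give each the smallest
# color unused by its already-colored neighbors.
color = {}
for v in sorted(adj, key=lambda v: -len(adj[v])):
    used = {color[u] for u in adj[v] if u in color}
    color[v] = next(c for c in range(n) if c not in used)

assert all(color[u] != color[v] for v in adj for u in adj[v])  # proper coloring
print("colors used:", max(color.values()) + 1)
```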

  13. Disease management to population-based health: steps in the right direction?

    PubMed

    Sprague, Lisa

    2003-05-16

    This issue brief reviews the evolution of the disease management model and the ways it relates to care coordination and case management approaches. It also looks at examples of population-based disease management programs operating in both the private and public sectors and reviews the evidence of their success. Finally, the paper considers the policy implications of adapting this model to a Medicare fee-for-service population.

  14. Evaluation of Bayesian estimation of a hidden continuous-time Markov chain model with application to threshold violation in water-quality indicators

    USGS Publications Warehouse

    Deviney, Frank A.; Rice, Karen; Brown, Donald E.

    2012-01-01

    Natural resource managers require information concerning the frequency, duration, and long-term probability of occurrence of water-quality indicator (WQI) violations of defined thresholds. The timing of these threshold crossings often is hidden from the observer, who is restricted to relatively infrequent observations. Here, a model for the hidden process is linked with a model for the observations, and the parameters describing duration, return period, and long-term probability of occurrence are estimated using Bayesian methods. A simulation experiment is performed to evaluate the approach under scenarios based on the equivalent of a total monitoring period of 5-30 years and an observation frequency of 1-50 observations per year. Given a constant threshold crossing rate, accuracy and precision of parameter estimates increased with longer total monitoring period and more-frequent observations. Given a fixed monitoring period and observation frequency, accuracy and precision of parameter estimates increased with longer times between threshold crossings. For most cases where the long-term probability of being in violation is greater than 0.10, it was determined that at least 600 observations are needed to achieve precise estimates. An application of the approach is presented using 22 years of quasi-weekly observations of acid-neutralizing capacity from Deep Run, a stream in Shenandoah National Park, Virginia. The time series also was sub-sampled to simulate monthly and semi-monthly sampling protocols. Estimates of the long-term probability of violation were unbiased regardless of sampling frequency; however, the expected duration and return period were over-estimated using the sub-sampled time series with respect to the full quasi-weekly time series.
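    The quantities being estimated (long-term violation probability, mean duration, return period) can be illustrated with a fully observed two-state continuous-time Markov chain. This sketch simulates the chain directly with assumed rates, rather than performing the paper's Bayesian estimation from sparse observations:

```python
import random

random.seed(7)

# Two-state CTMC: 'ok' -> 'violation' at rate lam, 'violation' -> 'ok' at rate mu.
lam, mu, horizon = 0.2, 1.0, 50_000.0   # rates per unit time; total simulated time

t, state, time_in_violation, crossings = 0.0, "ok", 0.0, 0
while t < horizon:
    dwell = random.expovariate(lam if state == "ok" else mu)
    if state == "violation":
        time_in_violation += dwell
    else:
        crossings += 1                  # an ok -> violation threshold crossing
    t += dwell
    state = "violation" if state == "ok" else "ok"

print(f"simulated  P(violation) = {time_in_violation / horizon:.3f}")
print(f"theoretical P(violation) = {lam / (lam + mu):.3f}")
print(f"mean violation duration = 1/mu = {1 / mu:.1f}")
print(f"mean return period = 1/lam + 1/mu = {1 / lam + 1 / mu:.1f}")
```

    The hidden-chain problem in the paper is harder because the state is only glimpsed at observation times, which is why the sub-sampled series biases the duration and return-period estimates.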

  15. Increasing incidence of cataract surgery: Population-based study

    PubMed Central

    Gollogly, Heidrun E.; Hodge, David O.; St. Sauver, Jennifer L.; Erie, Jay C.

    2015-01-01

    PURPOSE To estimate the incidence of cataract surgery in a defined population and to determine longitudinal cataract surgery patterns. SETTING Mayo Clinic, Rochester, Minnesota, USA. DESIGN Cohort study. METHODS Rochester Epidemiology Project (REP) databases were used to identify all incident cataract surgeries in Olmsted County, Minnesota, between January 1, 2005, and December 31, 2011. Age-specific and sex-specific incidence rates were calculated and adjusted to the 2010 United States white population. Data were merged with previous REP data (1980 to 2004) to assess temporal trends in cataract surgery. Change in the incidence over time was assessed by fitting generalized linear models assuming a Poisson error structure. The probability of second-eye cataract surgery was calculated using the Kaplan-Meier method. RESULTS Included were 8012 cataract surgeries from 2005 through 2011. During this time, incident cataract surgery significantly increased (P < .001), peaking in 2011 with a rate of 1100 per 100 000 (95% confidence interval, 1050–1160). The probability of second-eye surgery 3, 12, and 24 months after first-eye surgery was 60%, 76%, and 86%, respectively, a significant increase compared with the same intervals in the previous 7 years (1998 to 2004) (P < .001). When merged with 1980 to 2004 REP data, incident cataract surgery steadily increased over the past 3 decades (P < .001). CONCLUSION Incident cataract surgery steadily increased over the past 32 years and has not leveled off, as reported in Swedish population-based series. Second-eye surgery was performed sooner and more frequently, with 60% of residents having second-eye surgery within 3 months of first-eye surgery. PMID:23820302
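    As an illustration of how an incidence rate and its confidence interval are typically computed, the sketch below applies the standard large-sample normal approximation for a Poisson count to hypothetical event and person-year totals; it is not the study's data or its exact method:

```python
import math

def rate_ci(events, person_years, per=100_000, z=1.96):
    """Incidence rate per `per` person-years with a large-sample
    (normal-approximation) 95% confidence interval."""
    rate = events / person_years * per
    se = math.sqrt(events) / person_years * per   # SE of a Poisson count
    return rate, rate - z * se, rate + z * se

# Hypothetical counts chosen only for illustration.
rate, lo, hi = rate_ci(events=1_517, person_years=138_000)
print(f"rate = {rate:.0f} per 100,000 (95% CI {lo:.0f}-{hi:.0f})")
```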

  16. Distributions of personal VOC exposures: a population-based analysis.

    PubMed

    Jia, Chunrong; D'Souza, Jennifer; Batterman, Stuart

    2008-10-01

    Information regarding the distribution of volatile organic compound (VOC) concentrations and exposures is scarce, and there have been few, if any, studies using population-based samples from which representative estimates can be derived. This study characterizes distributions of personal exposures to ten different VOCs in the U.S. measured in the 1999-2000 National Health and Nutrition Examination Survey (NHANES). Personal VOC exposures were collected for 669 individuals over 2-3 days, and measurements were weighted to derive national-level statistics. Four common exposure sources were identified using factor analyses: gasoline vapor and vehicle exhaust, methyl tert-butyl ether (MTBE) as a gasoline additive, tap water disinfection products, and household cleaning products. Benzene, toluene, ethyl benzene, xylenes, chloroform, and tetrachloroethene were fit to log-normal distributions with reasonably good agreement to observations. 1,4-Dichlorobenzene and trichloroethene were fit to Pareto distributions, and MTBE to a Weibull distribution, but agreement was poor. However, distributions that attempt to match all of the VOC exposure data can lead to incorrect conclusions regarding the level and frequency of the higher exposures. Maximum Gumbel distributions gave generally good fits to extrema; however, they could not fully represent the highest exposures of the NHANES measurements. The analysis suggests that complete models for the distribution of VOC exposures require an approach that combines standard and extreme value distributions, and that carefully identifies outliers. This is the first study to provide national-level and representative statistics regarding VOC exposures, and its results have important implications for risk assessment and probabilistic analyses.
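    Fitting a log-normal by maximum likelihood reduces to taking the mean and standard deviation of the log-transformed values. The sketch below does this on synthetic data rather than the NHANES measurements:

```python
import math
import random

random.seed(3)

# Synthetic "exposure" data drawn from a known log-normal, then refit.
true_mu, true_sigma = 0.5, 1.2
data = [math.exp(random.gauss(true_mu, true_sigma)) for _ in range(5000)]

logs = [math.log(x) for x in data]
mu = sum(logs) / len(logs)                                  # MLE of mu
sigma = math.sqrt(sum((l - mu) ** 2 for l in logs) / len(logs))  # MLE of sigma
median = math.exp(mu)          # geometric mean = median of a log-normal

print(f"fitted mu = {mu:.2f}, sigma = {sigma:.2f}, median = {median:.2f}")
```

    As the abstract notes, a single log-normal fit can still misstate the extreme upper tail, which is why Gumbel-type extreme-value fits were examined separately.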

  17. Young adults' trajectories of Ecstasy use: a population based study.

    PubMed

    Smirnov, Andrew; Najman, Jake M; Hayatbakhsh, Reza; Plotnikova, Maria; Wells, Helene; Legosz, Margot; Kemp, Robert

    2013-11-01

    Young adults' Ecstasy use trajectories have important implications for individual and population-level consequences of Ecstasy use, but little relevant research has been conducted. This study prospectively examines Ecstasy trajectories in a population-based sample. Data are from the Natural History Study of Drug Use, a retrospective/prospective cohort study conducted in Australia. Population screening identified a probability sample of Ecstasy users aged 19-23 years. Complete data for 30 months of follow-up, comprising 4 time intervals, were available for 297 participants (88.4% of sample). Trajectories were derived using cluster analysis based on recent Ecstasy use at each interval. Trajectory predictors were examined using a generalized ordered logit model and included Ecstasy dependence (World Mental Health Composite International Diagnostic Instrument), psychological distress (Hospital Anxiety Depression Scale), aggression (Young Adult Self Report) and contextual factors (e.g. attendance at electronic/dance music events). Three Ecstasy trajectories were identified (low, intermediate and high use). At its peak, the high-use trajectory involved 1-2 days Ecstasy use per week. Decreasing frequency of use was observed for intermediate and high-use trajectories from 12 months, independently of market factors. Intermediate and high-use trajectory membership was predicted by past Ecstasy consumption (>70 pills) and attendance at electronic/dance music events. High-use trajectory members were unlikely to have used Ecstasy for more than 3 years and tended to report consistently positive subjective effects at baseline. Given the social context and temporal course of Ecstasy use, Ecstasy trajectories might be better understood in terms of instrumental rather than addictive drug use patterns.

  18. Stratification of ALS patients' survival: a population-based study.

    PubMed

    Marin, Benoît; Couratier, Philippe; Arcuti, Simona; Copetti, Massimiliano; Fontana, Andrea; Nicol, Marie; Raymondeau, Marie; Logroscino, Giancarlo; Preux, Pierre Marie

    2016-01-01

    The natural history of amyotrophic lateral sclerosis (ALS) and patient risk stratification are areas of considerable research interest. We aimed (1) to describe the survival of a representative cohort of French ALS patients, and (2) to identify covariates associated with various patterns of survival using a risk classification analysis. ALS patients recruited in the FRALim register (2000-2013) were included. Time-to-death analyses were performed using the Kaplan-Meier method and a Cox model. A recursive partitioning and amalgamation (RECPAM) algorithm analysis identified subgroups of patients with different patterns of survival. Among 322 patients, median survival times were 26.2 and 15.6 months from time of onset and of diagnosis, respectively. Four groups of patients were identified, depending on their baseline characteristics and survival: (1) ALSFRS-R slope >0.46/month and definite or probable ALS (median survival time (MST) 10.6 months); (2) ALSFRS-R slope >0.46/month and possible or probable laboratory-supported ALS (MST: 18.1 months); (3) ALSFRS-R slope ≤0.46/month and definite or probable ALS (MST: 22.5 months); and (4) ALSFRS-R slope ≤0.46/month and possible or probable laboratory-supported ALS (MST: 37.6 months). Median survival time is among the shortest ever reported by a worldwide population-based study. This is probably related to the age structure of the patients (the oldest identified to date), driven by the underlying population (30% of subjects older than 60 years). Further research in the field of risk stratification could help physicians better anticipate the prognosis of ALS patients, and help improve the design of randomized controlled trials.
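    The Kaplan-Meier estimator used for the time-to-death analyses can be sketched on toy survival data as follows (the study additionally used a Cox model and the RECPAM algorithm, which are not shown):

```python
# Toy survival data: (time in months, 1 = death observed, 0 = censored).
data = [(3, 1), (6, 1), (6, 0), (10, 1), (15, 1), (15, 1), (26, 1), (27, 0),
        (30, 1), (38, 0)]

# Kaplan-Meier: at each distinct event time, multiply the running survival
# probability by (1 - deaths / number at risk).
times = sorted({t for t, d in data if d == 1})
s, curve = 1.0, []
for t in times:
    at_risk = sum(1 for ti, _ in data if ti >= t)
    deaths = sum(1 for ti, d in data if ti == t and d == 1)
    s *= 1 - deaths / at_risk
    curve.append((t, s))

# Median survival: first time S(t) drops to 0.5 or below.
median = next(t for t, st in curve if st <= 0.5)
print("S(t):", [(t, round(st, 3)) for t, st in curve])
print("median survival:", median, "months")
```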

  19. The threshold vs LNT showdown: Dose rate findings exposed flaws in the LNT model part 2. How a mistake led BEIR I to adopt LNT.

    PubMed

    Calabrese, Edward J

    2017-04-01

    This paper reveals that nearly 25 years after the National Academy of Sciences (NAS), Biological Effects of Ionizing Radiation (BEIR) I Committee (1972) used Russell's dose-rate data to support the adoption of the linear-no-threshold (LNT) dose response model for genetic and cancer risk assessment, Russell acknowledged a significant under-reporting of the mutation rate of the historical control group. This error, which was unknown to BEIR I, had profound implications, leading it to incorrectly adopt the LNT model, a decision that profoundly changed the course of risk assessment for radiation and chemicals to the present.

  20. Threshold Concepts in Biochemistry

    ERIC Educational Resources Information Center

    Loertscher, Jennifer

    2011-01-01

    Threshold concepts can be identified for any discipline and provide a framework for linking student learning to curricular design. Threshold concepts represent a transformed understanding of a discipline, without which the learner cannot progress and are therefore pivotal in learning in a discipline. Although threshold concepts have been…

  1. The use of metabolomics in population-based research.

    PubMed

    Su, L Joseph; Fiehn, Oliver; Maruvada, Padma; Moore, Steven C; O'Keefe, Stephen J; Wishart, David S; Zanetti, Krista A

    2014-11-01

    The NIH has made a significant commitment through the NIH Common Fund's Metabolomics Program to build infrastructure and capacity for metabolomics research, which should accelerate the field. Given this investment, it is the ideal time to start planning strategies to capitalize on the infrastructure being established. An obvious gap in the literature relates to the effective use of metabolomics in large-population studies. Although published reports from population-based studies are beginning to emerge, the number to date remains relatively small. Yet, there is great potential for using metabolomics in population-based studies to evaluate the effects of nutritional, pharmaceutical, and environmental exposures (the "exposome"); conduct risk assessments; predict disease development; and diagnose diseases. Currently, the majority of the metabolomics studies in human populations are in nutrition or nutrition-related fields. This symposium provided a timely venue to highlight the current state-of-science on the use of metabolomics in population-based research. This session provided a forum at which investigators with extensive experience in performing research within large initiatives, multi-investigator grants, and epidemiology consortia could stimulate discussion and ideas for population-based metabolomics research and, in turn, improve knowledge to help devise effective methods of health research.

  2. Modeling of current gain compression in common emitter mode of a transistor laser above threshold base current

    NASA Astrophysics Data System (ADS)

    Basu, Rikmantra; Mukhopadhyay, Bratati; Basu, P. K.

    2012-04-01

    We have obtained expressions for the terminal currents in a heterojunction bipolar transistor laser whose base contains a quantum well (QW). The emitter-base junction is assumed to be abrupt, leading to an abrupt discontinuity in the quasi-Fermi level at the interface. The expressions for the terminal currents as a function of collector-emitter and base-emitter voltages are obtained from the solution of the continuity equation. The current density in the QW located at an arbitrary position in the base is related to the virtual state current density. The threshold current density in the QW is calculated by using the expression for gain obtained from the Fermi golden rule. The plot of collector current (IC) versus collector-emitter voltage (VCE) for different values of base current shows the usual transistor characteristics, i.e., a rising portion after a cut-in VCE, and then a saturation behavior. The dc current gain remains constant. However, as the base current exceeds the threshold, a stimulated recombination rate is added to the spontaneous recombination rate and the plots of collector currents become closer for the same increase in base current. This current gain compression is in agreement with the experimental observation. Our calculated values qualitatively agree with other experimental findings; however, some features, such as the Early effect, do not show up in the calculation.

  3. HMM-ModE – Improved classification using profile hidden Markov models by optimising the discrimination threshold and modifying emission probabilities with negative training sequences

    PubMed Central

    Srivastava, Prashant K; Desai, Dhwani K; Nandi, Soumyadeep; Lynn, Andrew M

    2007-01-01

    Background Profile Hidden Markov Models (HMM) are statistical representations of protein families derived from patterns of sequence conservation in multiple alignments and have been used in identifying remote homologues with considerable success. These conservation patterns arise from fold specific signals, shared across multiple families, and function specific signals unique to the families. The availability of sequences pre-classified according to their function permits the use of negative training sequences to improve the specificity of the HMM, both by optimizing the threshold cutoff and by modifying emission probabilities to minimize the influence of fold-specific signals. A protocol to generate family specific HMMs is described that first constructs a profile HMM from an alignment of the family's sequences and then uses this model to identify sequences belonging to other classes that score above the default threshold (false positives). Ten-fold cross validation is used to optimise the discrimination threshold score for the model. The advent of fast multiple alignment methods enables the use of the profile alignments to align the true and false positive sequences, and the resulting alignments are used to modify the emission probabilities in the original model. Results The protocol, called HMM-ModE, was validated on a set of sequences belonging to six sub-families of the AGC family of kinases. These sequences have an average sequence similarity of 63% among the group though each sub-group has a different substrate specificity. The optimisation of discrimination threshold, by using negative sequences scored against the model improves specificity in test cases from an average of 21% to 98%. Further discrimination by the HMM after modifying model probabilities using negative training sequences is provided in a few cases, the average specificity rising to 99%. Similar improvements were obtained with a sample of G-Protein coupled receptors sub-classified with
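    The discrimination-threshold step can be illustrated as a simple sweep over candidate cutoffs separating positive (true-family) from negative (other-family) score distributions; the scores here are synthetic, and the real protocol performs this inside ten-fold cross-validation on profile HMM scores:

```python
import random

random.seed(2)

# Synthetic scores: family members score higher, on average, than members
# of related but distinct sub-families.
pos = [random.gauss(120, 15) for _ in range(200)]   # true-family sequences
neg = [random.gauss(80, 15) for _ in range(200)]    # negative training sequences

# Sweep every observed score as a candidate cutoff; keep the one that
# maximises balanced accuracy (mean of sensitivity and specificity).
best = None
for cut in sorted(pos + neg):
    sens = sum(s >= cut for s in pos) / len(pos)
    spec = sum(s < cut for s in neg) / len(neg)
    score = (sens + spec) / 2
    if best is None or score > best[0]:
        best = (score, cut, sens, spec)

score, cut, sens, spec = best
print(f"threshold = {cut:.1f}  sensitivity = {sens:.2f}  specificity = {spec:.2f}")
```

    The emission-probability modification described in the abstract goes further, reshaping the model itself so the two score distributions separate more cleanly before any cutoff is chosen.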

  4. The contribution of chromosomal abnormalities to congenital heart defects: a population-based study.

    PubMed

    Hartman, Robert J; Rasmussen, Sonja A; Botto, Lorenzo D; Riehle-Colarusso, Tiffany; Martin, Christa L; Cragan, Janet D; Shin, Mikyong; Correa, Adolfo

    2011-12-01

    We aimed to assess the frequency of chromosomal abnormalities among infants with congenital heart defects (CHDs) in an analysis of population-based surveillance data. We reviewed data from the Metropolitan Atlanta Congenital Defects Program, a population-based birth-defects surveillance system, to assess the frequency of chromosomal abnormalities among live-born infants and fetal deaths with CHDs delivered from January 1, 1994, to December 31, 2005. Among 4430 infants with CHDs, 547 (12.3%) had a chromosomal abnormality. CHDs most likely to be associated with a chromosomal abnormality were interrupted aortic arch (type B and not otherwise specified; 69.2%), atrioventricular septal defect (67.2%), and double-outlet right ventricle (33.3%). The most common chromosomal abnormalities observed were trisomy 21 (52.8%), trisomy 18 (12.8%), 22q11.2 deletion (12.2%), and trisomy 13 (5.7%). In conclusion, in our study, approximately 1 in 8 infants with a CHD had a chromosomal abnormality. Clinicians should have a low threshold at which to obtain testing for chromosomal abnormalities in infants with CHDs, especially those with certain types of CHDs. Use of new technologies that have become recently available (e.g., chromosomal microarray) may increase the identified contribution of chromosomal abnormalities even further.

  5. Population-based public health interventions: innovations in practice, teaching, and management. Part II.

    PubMed

    Keller, Linda Olson; Strohschein, Susan; Schaffer, Marjorie A; Lia-Hoagberg, Betty

    2004-01-01

    The Intervention Wheel is a population-based practice model that encompasses three levels of practice (community, systems, and individual/family) and 17 public health interventions. Each intervention and practice level contributes to improving population health. The Intervention Wheel, previously known as the Public Health Intervention Model, was originally introduced in 1998 by the Minnesota Department of Health, Section of Public Health Nursing (PHN). The model has been widely disseminated and used throughout the United States since that time. The evidence supporting the Intervention Wheel was recently subjected to a rigorous critique by regional and national experts. This critical process, which involved hundreds of public health nurses, resulted in a more robust Intervention Wheel and established the validity of the model. The critique also produced basic steps and best practices for each of the 17 interventions. Part I describes the Intervention Wheel, defines population-based practice, and details the recommended modifications and validation process. Part II provides examples of the innovative ways that the Intervention Wheel is being used in public health/PHN practice, education, and administration. The two articles provide a foundation and vision for population-based PHN practice and direction for improving population health.

  6. Population-based public health interventions: practice-based and evidence-supported. Part I.

    PubMed

    Keller, Linda Olson; Strohschein, Susan; Lia-Hoagberg, Betty; Schaffer, Marjorie A

    2004-01-01

    The Intervention Wheel is a population-based practice model that encompasses three levels of practice (community, systems, and individual/family) and 17 public health interventions. Each intervention and practice level contributes to improving population health. The Intervention Wheel, previously known as the Public Health Intervention Model, was originally introduced in 1998 by the Minnesota Department of Health, Section of Public Health Nursing. The model has been widely disseminated and used throughout the United States since that time. The evidence supporting the Intervention Wheel was recently subjected to a rigorous critique by regional and national experts. This critical process, which involved hundreds of public health nurses, resulted in a more robust Intervention Wheel and established the validity of the model. The critique also produced basic steps and best practices for each of the 17 interventions. Part I describes the Intervention Wheel, defines population-based practice, and details the recommended modifications and validation process. Part II provides examples of the innovative ways that the Intervention Wheel is being used in public health/public health nursing practice, education, and administration. The two articles provide a foundation and vision for population-based public health nursing practice and direction for improving population health.

  7. Convergence between DSM-IV-TR and DSM-5 diagnostic models for personality disorder: evaluation of strategies for establishing diagnostic thresholds.

    PubMed

    Morey, Leslie C; Skodol, Andrew E

    2013-05-01

    The Personality and Personality Disorders Work Group for the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) recommended substantial revisions to the personality disorders (PDs) section of DSM-IV-TR, proposing a hybrid categorical-dimensional model that represented PDs as combinations of core personality dysfunctions and various configurations of maladaptive personality traits. Although the DSM-5 Task Force endorsed the proposal, the Board of Trustees of the American Psychiatric Association (APA) did not, placing the Work Group's model in DSM-5 Section III ("Emerging Measures and Models") with other concepts thought to be in need of additional research. This paper documents the impact of using this alternative model in a national sample of 337 patients as described by clinicians familiar with their cases. In particular, the analyses focus on alternative strategies considered by the Work Group for deriving decision rules, or diagnostic thresholds, with which to assign categorical diagnoses. Results demonstrate that diagnostic rules could be derived that yielded appreciable correspondence between DSM-IV-TR and proposed DSM-5 PD diagnoses, a correspondence greater than that observed in the transition between DSM-III and DSM-III-R PDs. The approach also represents the most comprehensive attempt to date to provide conceptual and empirical justification for diagnostic thresholds utilized within the DSM PDs.

  8. Continuous bilateral infusion of vigabatrin into the subthalamic nucleus: Effects on seizure threshold and GABA metabolism in two rat models.

    PubMed

    Gey, Laura; Gernert, Manuela; Löscher, Wolfgang

    2016-07-01

    The subthalamic nucleus (STN) plays a crucial role as a regulator of basal ganglia outflow but also influences the activity of cortical and limbic structures, so that it is widely used as a therapeutic target in different brain diseases, including epilepsy. In addition to electrical stimulation of the STN, targeted delivery of anti-seizure drugs to the STN may constitute an alternative treatment approach in patients with pharmacoresistant epilepsy. In the present experimental study, we investigated the anti-seizure and adverse effects of chronic infusion of vigabatrin into the STN of rats. Vigabatrin is a clinically approved anti-seizure drug, which acts by increasing brain GABA levels by irreversibly inhibiting GABA-aminotransferase (GABA-T). Based on functional and neurochemical effects of acute STN microinjection, doses for continuous infusion were calculated and administered, using an innovative drug infusion technology. Bilateral infusion of only 10 μg/day vigabatrin over 3 weeks into the STN resulted in an almost complete inhibition of GABA-T and a 4-fold increase in GABA in the target region, which was associated with a significant increase in seizure threshold, determined once weekly by i.v. infusion of pentylenetetrazole (PTZ). Lower doses or unilateral infusion were less effective, both on PTZ seizures and on kindled seizures. Bilateral infusion into the substantia nigra pars reticulata was less effective and more toxic than STN infusion. In a subset of the rats, tolerance to the anti-seizure effect developed. The data demonstrate that chronic administration of very low, nontoxic doses of vigabatrin into the STN is an effective means of increasing local GABA concentrations and seizure threshold.

  9. Oscillatory Threshold Logic

    PubMed Central

    Borresen, Jon; Lynch, Stephen

    2012-01-01

    In the 1940s, the first generation of modern computers used vacuum tube oscillators as their principal components; however, with the development of the transistor, such oscillator-based computers quickly became obsolete. As the demand for faster and lower power computers continues, transistors are themselves approaching their theoretical limit and emerging technologies must eventually supersede them. With the development of optical oscillators and Josephson junction technology, we are again presented with the possibility of using oscillators as the basic components of computers, and it is possible that the next generation of computers will be composed almost entirely of oscillatory devices. Here, we demonstrate how coupled threshold oscillators may be used to perform binary logic in a manner entirely consistent with modern computer architectures. We describe a variety of computational circuitry and demonstrate working oscillator models of both computation and memory. PMID:23173034
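    As a static stand-in for the oscillator dynamics (not the paper's coupled-oscillator implementation), threshold logic itself can be sketched as weighted sums compared against a threshold, McCulloch-Pitts style:

```python
# A threshold gate fires (outputs 1) when the weighted sum of its inputs
# reaches the threshold theta.
def threshold_gate(weights, theta):
    return lambda *x: int(sum(w * xi for w, xi in zip(weights, x)) >= theta)

AND = threshold_gate([1, 1], 2)
OR  = threshold_gate([1, 1], 1)
NOT = threshold_gate([-1], 0)

# Composition: NAND is universal, so any Boolean circuit can be built
# from threshold units alone.
NAND = lambda a, b: NOT(AND(a, b))
XOR  = lambda a, b: AND(OR(a, b), NAND(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b), OR(a, b), XOR(a, b))
```

    In the oscillatory realisation described in the abstract, the weighted sum and threshold are played by coupling strengths and the oscillators' firing threshold, with phase or firing state encoding the binary values.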

  10. Prevalence of microcephaly in Europe: population based study

    PubMed Central

    Rankin, Judith; Garne, Ester; Loane, Maria; Greenlees, Ruth; Addor, Marie-Claude; Arriola, Larraitz; Barisic, Ingeborg; Bergman, Jorieke E H; Csaky-Szunyogh, Melinda; Dias, Carlos; Draper, Elizabeth S; Gatt, Miriam; Khoshnood, Babak; Klungsoyr, Kari; Kurinczuk, Jennifer J; Lynch, Catherine; McDonnell, Robert; Nelen, Vera; Neville, Amanda J; O’Mahony, Mary T; Pierini, Anna; Randrianaivo, Hanitra; Rissmann, Anke; Tucker, David; Verellen-Dumoulin, Christine; de Walle, Hermien E K; Wellesley, Diana; Wiesel, Awi; Dolk, Helen

    2016-01-01

    Objectives To provide contemporary estimates of the prevalence of microcephaly in Europe, determine if the diagnosis of microcephaly is consistent across Europe, and evaluate whether changes in prevalence would be detected using the current European surveillance performed by EUROCAT (the European Surveillance of Congenital Anomalies). Design Questionnaire and population based observational study. Setting 24 EUROCAT registries covering 570 000 births annually in 15 countries. Participants Cases of microcephaly not associated with a genetic condition among live births, fetal deaths from 20 weeks’ gestation, and terminations of pregnancy for fetal anomaly at any gestation. Main outcome measures Prevalence of microcephaly (1 Jan 2003-31 Dec 2012) analysed with random effects Poisson regression models to account for heterogeneity across registries. Results 16 registries responded to the questionnaire, of which 44% (7/16) used the EUROCAT definition of microcephaly (a reduction in the size of the brain with a skull circumference more than 3 SD below the mean for sex, age, and ethnic origin), 19% (3/16) used a 2 SD cut off, 31% (5/16) were reliant on the criteria used by individual clinicians, and one changed criteria between 2003 and 2012. Prevalence of microcephaly in Europe was 1.53 (95% confidence interval 1.16 to 1.96) per 10 000 births, with registries varying from 0.4 (0.2 to 0.7) to 4.3 (3.6 to 5.0) per 10 000 (χ2=338, df=23, I2=93%). Registries with a 3 SD cut off reported a prevalence of 1.74 per 10 000 (0.86 to 2.93) compared with those with the less stringent 2 SD cut off of 1.21 per 10 000 (0.21 to 2.93). The prevalence of microcephaly would need to increase in one year by over 35% in Europe or by over 300% in a single registry to reach statistical significance (P<0.01). Conclusions EUROCAT could detect increases in the prevalence of microcephaly from the Zika virus of a similar magnitude to those observed in Brazil. Because of the rarity

  11. HRS Threshold Adjustment Test

    NASA Astrophysics Data System (ADS)

    Skapik, Joe

    1991-07-01

    This test will determine the optimal, non-standard discriminator thresholds for the few anomalous channels on each HRS detector. A 15 second flat field observation followed by a 210 second dark count is performed at each of 10 discriminator threshold values for each detector. The result of the test will be the optimal threshold values to be entered into the PDB. Edited 4/30/91 to add comments to disable/re-enable cross-talk tables.

  12. Optimizing Systems of Threshold Detection Sensors

    DTIC Science & Technology

    2008-03-01

    mean of the "no event" distribution, we use mathematical nonlinear programming techniques to determine appropriate individual thresholds to maximize...decreases in all thresholds (less than five to ten percent) result in modest nonlinear percentage increases in detection performance (again, less than ten...level. In this thesis, we develop a model using nonlinear mathematical programming techniques to determine appropriate individual thresholds at

  13. Parallel genetic algorithm with population-based sampling approach to discrete optimization under uncertainty

    NASA Astrophysics Data System (ADS)

    Subramanian, Nithya

    the laminate stiffness matrix implements a square fiber model with a fiber volume fraction sample. The calculations to establish the expected values of constraints and fitness values use the Classical Laminate Theory. The non-deterministic constraints enforced include the probability of satisfying the Tsai-Hill failure criterion and the maximum strain limit. The results from a deterministic optimization, optimization under uncertainty using Monte Carlo sampling and Population-Based Sampling are studied. Also, the work investigates the effectiveness of running the fitness analyses in parallel and the sampling scheme in parallel. Overall, the work conducted for this thesis demonstrated the efficacy of the GA with Population-Based Sampling for the focus problem and established improvements over previous implementations of the GA with PBS.
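The combination of a genetic algorithm with sampling-based fitness evaluation under uncertainty can be illustrated with a toy sketch. The composite-laminate/Tsai-Hill objective of the thesis is replaced here by a noisy OneMax problem, and the population size, sample count, and operators are invented for the sketch.

```python
import random

random.seed(1)

# Toy GA whose fitness is the *expected* objective under uncertainty,
# estimated by Monte Carlo sampling (not the thesis implementation).

N_BITS, POP_SIZE, GENERATIONS, N_SAMPLES = 12, 30, 40, 20

def expected_fitness(bits):
    """Monte Carlo estimate of E[sum(bits) + noise]."""
    draws = (sum(bits) + random.gauss(0.0, 0.5) for _ in range(N_SAMPLES))
    return sum(draws) / N_SAMPLES

def evolve():
    pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=expected_fitness, reverse=True)   # rank by sampled fitness
        elite = pop[: POP_SIZE // 2]                    # truncation selection
        offspring = []
        while len(elite) + len(offspring) < POP_SIZE:
            p1, p2 = random.sample(elite, 2)
            cut = random.randrange(1, N_BITS)           # one-point crossover
            child = p1[:cut] + p2[cut:]
            child[random.randrange(N_BITS)] ^= 1        # point mutation
            offspring.append(child)
        pop = elite + offspring
    return max(pop, key=expected_fitness)

best = evolve()
```

Fitness evaluations over the sample set are independent, which is what makes the parallel fitness analysis and parallel sampling investigated in the thesis attractive.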

  14. The novel antiepileptic drug imepitoin compares favourably to other GABA-mimetic drugs in a seizure threshold model in mice and dogs.

    PubMed

    Löscher, Wolfgang; Hoffmann, Katrin; Twele, Friederike; Potschka, Heidrun; Töllner, Kathrin

    2013-11-01

    Recently, the imidazolinone derivative imepitoin has been approved for treatment of canine epilepsy. Imepitoin acts as a low-affinity partial agonist at the benzodiazepine (BZD) site of the GABAA receptor and is the first compound with such a mechanism to have been developed as an antiepileptic drug (AED). This mechanism offers several advantages over full agonists, including less severe adverse effects and a lack of tolerance and dependence liability, as has been demonstrated in rodents, dogs, and nonhuman primates. In clinical trials in epileptic dogs, imepitoin was shown to be an effective and safe AED. Recently, seizures in dogs have been proposed as a translational platform for human therapeutic trials on new epilepsy treatments. In the present study, we compared the anticonvulsant efficacy of imepitoin, phenobarbital and the high-affinity partial BZD agonist abecarnil in the timed i.v. pentylenetetrazole (PTZ) seizure threshold test in dogs and, for comparison, in mice. Furthermore, the adverse effects of the treatments were compared in both species. All drugs dose-dependently increased the PTZ threshold in both species, but anticonvulsant efficacy was higher in dogs than in mice. At the doses selected for this study, imepitoin was slightly less potent than phenobarbital in increasing seizure threshold, but markedly better tolerated in both species. Effective doses of imepitoin in the PTZ seizure model were in the same range as those suppressing spontaneous recurrent seizures in epileptic dogs. The study demonstrates that low-affinity partial agonists at the benzodiazepine site of the GABAA receptor, such as imepitoin, offer advantages as a new category of AEDs.

  15. Application of the predicted heat strain model in development of localized, threshold-based heat stress management guidelines for the construction industry.

    PubMed

    Rowlinson, Steve; Jia, Yunyan Andrea

    2014-04-01

    Existing heat stress risk management guidelines recommended by international standards are not practical for the construction industry, which needs site supervision staff to make instant managerial decisions to mitigate heat risks. The ability of the predicted heat strain (PHS) model [ISO 7933 (2004). Ergonomics of the thermal environment: analytical determination and interpretation of heat stress using calculation of the predicted heat strain. Geneva: International Standard Organisation] to predict the maximum allowable exposure time (Dlim) has now enabled development of localized, action-triggering and threshold-based guidelines for implementation by lay frontline staff on construction sites. This article presents a protocol for development of two heat stress management tools by applying the PHS model to its full potential. One of the tools is developed to facilitate managerial decisions on an optimized work-rest regimen for paced work. The other tool is developed to enable workers' self-regulation during self-paced work.

  16. Reentry and Ectopic Pacemakers Emerge in a Three-Dimensional Model for a Slab of Cardiac Tissue with Diffuse Microfibrosis near the Percolation Threshold

    PubMed Central

    Alonso, Sergio; dos Santos, Rodrigo Weber; Bär, Markus

    2016-01-01

    Arrhythmias in cardiac tissue are generally associated with irregular electrical wave propagation in the heart. Cardiac tissue is formed by a discrete cell network, which is often heterogeneous. Recently, it was shown in simulations of two-dimensional (2D) discrete models of cardiac tissue that a wave crossing a fibrotic, heterogeneous region may produce reentry and transient or persistent ectopic activity provided the fraction of conducting connections is just above the percolation threshold. Here, we investigate the occurrence of these phenomena in three-dimensions by simulations of a discrete model representing a thin slab of cardiac tissue. This is motivated (i) by the necessity to study the relevance and properties of the percolation-related mechanism for the emergence of microreentries in three dimensions and (ii) by the fact that atrial tissue is quite thin in comparison with ventricular tissue. Here, we simplify the model by neglecting details of tissue anatomy, e. g. geometries of atria or ventricles and the anisotropy in the conductivity. Hence, our modeling study is confined to the investigation of the effect of the tissue thickness as well as to the comparison of the dynamics of electrical excitation in a 2D layer with the one in a 3D slab. Our results indicate a strong and non-trivial effect of the thickness even for thin tissue slabs on the probability of microreentries and ectopic beat generation. The strong correlation of the occurrence of microreentry with the percolation threshold reported earlier in 2D layers persists in 3D slabs. Finally, a qualitative agreement of 3D simulated electrograms in the fibrotic region with the experimentally observed complex fractional atrial electrograms (CFAE) as well as strong difference between simulated electrograms in 2D and 3D were found for the cases where reentry and ectopic activity were triggered by the micro-fibrotic region. PMID:27875591

  17. Reentry and Ectopic Pacemakers Emerge in a Three-Dimensional Model for a Slab of Cardiac Tissue with Diffuse Microfibrosis near the Percolation Threshold.

    PubMed

    Alonso, Sergio; Dos Santos, Rodrigo Weber; Bär, Markus

    2016-01-01

    Arrhythmias in cardiac tissue are generally associated with irregular electrical wave propagation in the heart. Cardiac tissue is formed by a discrete cell network, which is often heterogeneous. Recently, it was shown in simulations of two-dimensional (2D) discrete models of cardiac tissue that a wave crossing a fibrotic, heterogeneous region may produce reentry and transient or persistent ectopic activity provided the fraction of conducting connections is just above the percolation threshold. Here, we investigate the occurrence of these phenomena in three-dimensions by simulations of a discrete model representing a thin slab of cardiac tissue. This is motivated (i) by the necessity to study the relevance and properties of the percolation-related mechanism for the emergence of microreentries in three dimensions and (ii) by the fact that atrial tissue is quite thin in comparison with ventricular tissue. Here, we simplify the model by neglecting details of tissue anatomy, e. g. geometries of atria or ventricles and the anisotropy in the conductivity. Hence, our modeling study is confined to the investigation of the effect of the tissue thickness as well as to the comparison of the dynamics of electrical excitation in a 2D layer with the one in a 3D slab. Our results indicate a strong and non-trivial effect of the thickness even for thin tissue slabs on the probability of microreentries and ectopic beat generation. The strong correlation of the occurrence of microreentry with the percolation threshold reported earlier in 2D layers persists in 3D slabs. Finally, a qualitative agreement of 3D simulated electrograms in the fibrotic region with the experimentally observed complex fractional atrial electrograms (CFAE) as well as strong difference between simulated electrograms in 2D and 3D were found for the cases where reentry and ectopic activity were triggered by the micro-fibrotic region.
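The percolation mechanism central to this abstract can be illustrated with a toy bond-percolation model. The sketch below is not the authors' cardiac model; it is a square-lattice spanning test in which each connection conducts with probability p, mimicking whether an excitation wave can cross a fibrotic region.

```python
import random

random.seed(0)

# Bond percolation on an L x L square lattice via union-find: does a cluster
# of conducting links span from the left edge to the right edge?

def find(parent, a):
    while parent[a] != a:
        parent[a] = parent[parent[a]]   # path halving
        a = parent[a]
    return a

def spans(L, p):
    parent = list(range(L * L))
    def union(a, b):
        ra, rb = find(parent, a), find(parent, b)
        if ra != rb:
            parent[ra] = rb
    for r in range(L):
        for c in range(L):
            i = r * L + c
            if c + 1 < L and random.random() < p:   # horizontal bond conducts
                union(i, i + 1)
            if r + 1 < L and random.random() < p:   # vertical bond conducts
                union(i, i + L)
    left = {find(parent, r * L) for r in range(L)}
    right = {find(parent, r * L + L - 1) for r in range(L)}
    return bool(left & right)

def spanning_fraction(p, trials=50):
    return sum(spans(20, p) for _ in range(trials)) / trials
```

The spanning probability jumps sharply near the 2D bond-percolation threshold p_c = 0.5, which is the regime where the paper reports microreentry and ectopic activity become likely.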

  18. An obesity/cardiometabolic risk reduction disease management program: a population-based approach.

    PubMed

    Villagra, Victor G

    2009-04-01

    Obesity is a critical health concern that has captured the attention of public and private healthcare payers who are interested in controlling costs and mitigating the long-term economic consequences of the obesity epidemic. Population-based approaches to obesity management have been proposed that take advantage of a chronic care model (CCM), including patient self-care, the use of community-based resources, and the realization of care continuity through ongoing communications with patients, information technology, and public policy changes. Payer-sponsored disease management programs represent an important conduit to delivering population-based care founded on similar CCM concepts. Disease management is founded on population-based disease identification, evidence-based care protocols, and collaborative practices between clinicians. While substantial clinician training, technology infrastructure commitments, and financial support at the payer level will be needed for the success of disease management programs in obesity and cardiometabolic risk reduction, these barriers can be overcome with the proper commitment. Disease management programs represent an important tool to combat the growing societal risks of overweight and obesity.

  19. Detectability thresholds of general modular graphs

    NASA Astrophysics Data System (ADS)

    Kawamoto, Tatsuro; Kabashima, Yoshiyuki

    2017-01-01

    We investigate the detectability thresholds of various modular structures in the stochastic block model. Our analysis reveals how the detectability threshold is related to the details of the modular pattern, including the hierarchy of the clusters. We show that certain planted structures are impossible to infer regardless of their fuzziness.
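For orientation, the classic detectability condition for the simplest case, the symmetric two-block stochastic block model, can be written down directly (the paper itself treats more general modular patterns, for which the threshold depends on the structure's details).

```python
# Kesten-Stigum condition for the symmetric two-block SBM: with mean
# within-block degree c_in and between-block degree c_out, community
# structure is detectable when (c_in - c_out)^2 > 2 * (c_in + c_out).

def detectable(c_in, c_out):
    return (c_in - c_out) ** 2 > 2 * (c_in + c_out)
```

Below this threshold, the planted two-block structure is provably impossible to infer, echoing the paper's conclusion that certain planted structures cannot be detected regardless of algorithm.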

  20. Thresholds in chemical respiratory sensitisation.

    PubMed

    Cochrane, Stella A; Arts, Josje H E; Ehnes, Colin; Hindle, Stuart; Hollnagel, Heli M; Poole, Alan; Suto, Hidenori; Kimber, Ian

    2015-07-03

    There is a continuing interest in determining whether it is possible to identify thresholds for chemical allergy. Here, allergic sensitisation of the respiratory tract by chemicals is considered in this context. This is an important occupational health problem, being associated with rhinitis and asthma, and it also presents toxicologists and risk assessors with a number of challenges. In common with all forms of allergic disease, chemical respiratory allergy develops in two phases. In the first (induction) phase, exposure to a chemical allergen (by an appropriate route of exposure) causes immunological priming and sensitisation of the respiratory tract. The second (elicitation) phase is triggered if a sensitised subject is subsequently exposed to the same chemical allergen via inhalation. A secondary immune response will be provoked in the respiratory tract, resulting in inflammation and the signs and symptoms of a respiratory hypersensitivity reaction. In this article, attention is focused on the identification of threshold values during the acquisition of sensitisation. Current mechanistic understanding of allergy is such that it can be assumed that the development of sensitisation (and also the elicitation of an allergic reaction) is a threshold phenomenon; there will be levels of exposure below which sensitisation will not be acquired. That is, all immune responses, including allergic sensitisation, have a threshold requirement for the availability of antigen/allergen, below which a response will fail to develop. The issue addressed here is whether there are methods available, or clinical/epidemiological data, that permit the identification of such thresholds. This document briefly reviews relevant human studies of occupational asthma, and experimental models that have been developed (or are being developed) for the identification and characterisation of chemical respiratory allergens. The main conclusion drawn is that although there is evidence that the

  1. Changing roles of population-based cancer registries in Australia.

    PubMed

    Roder, David; Creighton, Nicola; Baker, Deborah; Walton, Richard; Aranda, Sanchia; Currow, David

    2015-09-01

    Registries have key roles in cancer incidence, mortality and survival monitoring and in showing disparities across the population. Incidence monitoring began in New South Wales in 1972 and other jurisdictions soon followed. Registry data are used to evaluate outcomes of preventive, screening, treatment and support services. They have shown decreases in cancer incidence following interventions and have been used for workforce and other infrastructure planning. Crude markers of optimal radiotherapy and chemotherapy exist and registry data are used to show shortfalls against these markers. The data are also used to investigate cancer clusters and environmental concerns. Survival data are used to assess service performance and interval cancer data are used in screening accreditation. Registries enable determination of risk of multiple primary cancers. Clinical quality registries are used for clinical quality improvement. Population-based cancer registries and linked administrative data complement clinical registries by providing high-level system-wide data. The USA Commission on Cancer has long used registries for quality assurance and service accreditation. Increasingly, population-based registry data in Australia are linked with administrative data on service delivery to assess system performance. Addition of tumour stage and other prognostic indicators is important for these analyses and is facilitated by the roll-out of structured pathology reporting. Data linkage with administrative data, following checks on the quality of these data, enables assessment of patterns of care and other performance indicators for health-system monitoring. Australian cancer registries have evolved and increasingly are contributing to broader information networks for health system management.

  2. New population based reference values for spinal mobility measures based on the NHANES 2009–10

    PubMed Central

    Assassi, Shervin; Weisman, Michael H.; Lee, MinJae; Savage, Laurie; Diekman, Laura; Graham, Tiffany A.; Rahbar, Mohammad H.; Schall, Joan I.; Gensler, Lianne S.; Deodhar, Atul A.; Clegg, Daniel O.; Colbert, Robert A.; Reveille, John D.

    2014-01-01

    Objective To report population based percentile reference values for selected spinal mobility measures in a nationally representative sample of 5103 U.S. adults ages 20–69 years examined in the 2009–10 U.S. National Health and Nutrition Examination Survey (NHANES). Methods Occiput-to-Wall Distance (OWD), Thoracic Expansion (TE), and Anterior Lumbar Flexion (ALF – modified Schober test) were measured by trained examiners in a standardized fashion. TE was measured at the xyphosternal level while the lower reference point for ALF was a line marked at the level of the superior margin of the lateral iliac crests. We report reference values based on the 95th percentile of OWD and 5th percentile of TE and ALF measurements, as well as other summary statistics for these measures in the study population. Results An OWD of more than zero was present in 3.8% of participants, while 8.8% of participants had out-of-range values for TE based on the commonly used threshold of 2.5 cm. The 95th percentile of OWD measurement was zero while the 5th percentile measurements for TE and ALF were 1.9 and 2 cm, respectively. The spinal measures were significantly associated with gender, age, ethnicity, height, and body mass index. Exclusion of individuals with severe obesity (BMI > 35) changed the proposed reference values for TE and ALF to 2.2 and 1.9 cm, respectively. Conclusion We verified the reference value of zero for OWD. Using the reported population based percentile values, new reference values for TE and the ALF can be derived. PMID:24782356
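The percentile-based construction of reference values used in this study can be sketched as follows. The data below are synthetic draws, not NHANES measurements, and the nearest-rank percentile rule is one of several conventions.

```python
import random

random.seed(7)

def percentile(values, q):
    """Nearest-rank percentile (q in [0, 100])."""
    s = sorted(values)
    k = max(0, min(len(s) - 1, round(q / 100 * (len(s) - 1))))
    return s[k]

# Hypothetical thoracic-expansion sample (cm); the real study used
# standardized NHANES measurements with covariate adjustment.
te = [random.gauss(4.0, 1.2) for _ in range(5000)]

lower_ref = percentile(te, 5)   # 5th percentile as the lower reference limit
```

For a measure where *larger* values are abnormal (like OWD), the 95th percentile would be taken instead.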

  3. Case definitions for use in population-based surveillance of periodontitis.

    PubMed

    Page, Roy C; Eke, Paul I

    2007-07-01

    Many definitions of periodontitis have been used in the literature for population-based studies, but there is no accepted standard. In early epidemiologic studies, the two major periodontal diseases, gingivitis and periodontitis, were combined and considered to be a continuum. National United States surveys were conducted in 1960 to 1962, 1971 to 1974, 1981, 1985 to 1986, 1988 to 1994, and 1999 to 2000. The case definitions and protocols used in the six national surveys reflect a continuing evolution and improvement over time. Generally, the clinical diagnosis of periodontitis is based on measures of probing depth (PD), clinical attachment level (CAL), the radiographic pattern and extent of alveolar bone loss, gingival inflammation measured as bleeding on probing, or a combination of these measures. Several other patient characteristics are considered, and several factors, such as age, can affect measurements of PD and CAL. Accuracy and reproducibility of measurements of PD and CAL are important because case definitions for periodontitis are based largely on either or both measurements, and relatively small changes in these values can result in large changes in disease prevalence. The classification currently accepted by the American Academy of Periodontology (AAP) was devised by the 1999 International Workshop for a Classification of Periodontal Diseases and Conditions. However, in 2003 the Centers for Disease Control and Prevention and the AAP appointed a working group to develop further standardized clinical case definitions for population-based studies of periodontitis. This classification defines severe periodontitis and moderate periodontitis in terms of PD and CAL to enhance case definitions and further demonstrates the importance of thresholds of PD and CAL and the number of affected sites when determining prevalence.

  4. Prediction model for cadmium transfer from soil to carrot (Daucus carota L.) and its application to derive soil thresholds for food safety.

    PubMed

    Ding, Changfeng; Zhang, Taolin; Wang, Xingxiang; Zhou, Fen; Yang, Yiru; Huang, Guifeng

    2013-10-30

    At present, soil quality standards used for agriculture do not fully consider the influence of soil properties on cadmium (Cd) uptake by crops. This study aimed to develop prediction models for Cd transfer from a wide range of Chinese soils to carrot (Daucus carota L.) using soil properties and the total or available soil Cd content. Path analysis showed soil pH and organic carbon (OC) content were the two most significant properties exhibiting direct effects on Cd uptake factor (ratio of Cd concentration in carrot to that in soil). Stepwise multiple linear regression analysis also showed that total soil Cd, pH, and OC were significant variables contributing to carrot Cd concentration, explaining 90% of the variance across the 21 soils. Soil thresholds for carrot (cultivar New Kuroda) cropping based on added or total Cd were then derived from the food safety standard and were presented as continuous or scenario criteria.
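A hedged sketch of how such a transfer model yields a soil threshold: assume a fitted relationship log10(Cd_carrot) = a + b*log10(Cd_soil) + c*pH + d*OC and invert it at the food-safety limit. All coefficients and the limit below are invented for illustration and are NOT the study's fitted values.

```python
import math

A, B, C, D = -0.5, 0.8, -0.12, -0.05   # assumed regression coefficients
FOOD_LIMIT = 0.1                        # assumed food-safety limit, mg/kg

def predicted_carrot_cd(soil_cd, ph, oc):
    """Empirical transfer model on the log10 scale (coefficients assumed)."""
    return 10 ** (A + B * math.log10(soil_cd) + C * ph + D * oc)

def soil_threshold(ph, oc):
    """Soil Cd at which predicted carrot Cd equals the safety limit."""
    return 10 ** ((math.log10(FOOD_LIMIT) - A - C * ph - D * oc) / B)
```

Because pH and OC enter the model, the derived threshold is a continuous function of soil properties, which is what allows the "continuous or scenario criteria" mentioned in the abstract.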

  5. A model for threshold voltage shift under negative gate bias stress in amorphous InGaZnO thin film transistors

    NASA Astrophysics Data System (ADS)

    Xu, Piao-Rong; Yao, Ruo-He

    2015-12-01

    In amorphous InGaZnO thin film transistors (a-IGZO TFTs) with a high concentration of oxygen vacancies, the energy level of oxygen vacancy-related donor-like states in the a-IGZO film near the gate insulator moves upwards under negative gate bias stress (NGBS). Electrons in the donor-like states above the midgap are emitted to the conduction band, leaving the donor-like states positively charged. These positively charged donor-like states accumulate near the interface between the a-IGZO film and the gate insulator and screen the gate voltage, thus leading to a negative shift of the threshold voltage (Vth) of a-IGZO TFTs. In this article, we establish a physical model of the negative Vth shift under NGBS; the model's results are consistent with experiment.

  6. [The analysis of threshold effect using Empower Stats software].

    PubMed

    Lin, Lin; Chen, Chang-zhong; Yu, Xiao-dan

    2013-11-01

    In many biomedical studies, a factor may have no influence, or a positive effect, on an outcome variable within a certain range; beyond a certain threshold value, the size and/or direction of the effect changes. This is called a threshold effect. Whether a threshold effect exists in the relationship between a factor (x) and an outcome variable (y) can first be assessed by smooth curve fitting, to see whether a piecewise linear relationship is present, and then analysed using a segmented regression model, a likelihood-ratio test (LRT) and bootstrap resampling. The Empower Stats software, developed by X & Y Solutions Inc. (USA), includes a threshold effect analysis module: users can either specify a threshold value for segmented fitting, or let the software determine the optimal threshold automatically and compute its confidence interval.
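A minimal sketch of the segmented-regression idea: grid-search candidate thresholds, fit a separate least-squares line on each side, and keep the threshold minimising the total squared error. The LRT and bootstrap confidence-interval steps of the full procedure are omitted, and the data are synthetic.

```python
import random

random.seed(3)

def line_sse(pts):
    """Fit a least-squares line to (x, y) points and return its SSE."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    b = sum((x - mx) * (y - my) for x, y in pts) / sxx
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in pts)

def best_threshold(pts, grid):
    def total_sse(t):
        lo = [p for p in pts if p[0] <= t]
        hi = [p for p in pts if p[0] > t]
        return line_sse(lo) + line_sse(hi)
    return min(grid, key=total_sse)

# Synthetic data: flat below x = 5, rising above it (true threshold = 5).
data = [(x / 10, (0 if x <= 50 else 2 * (x / 10 - 5)) + random.gauss(0, 0.1))
        for x in range(101)]
t_hat = best_threshold(data, [i / 2 for i in range(4, 17)])  # grid 2.0 .. 8.0
```

In practice the piecewise fit at the chosen threshold would then be compared against a single-line model with an LRT, and the threshold's uncertainty quantified by bootstrap resampling.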

  7. Winner's Curse Correction and Variable Thresholding Improve Performance of Polygenic Risk Modeling Based on Genome-Wide Association Study Summary-Level Data

    PubMed Central

    Shi, Jianxin; Duan, Jubao; Berndt, Sonja T.; Moy, Winton; Yu, Kai; Song, Lei; Wheeler, William; Hua, Xing; Silverman, Debra; Garcia-Closas, Montserrat; Hsiung, Chao Agnes; Figueroa, Jonine D.; Cortessis, Victoria K.; Malats, Núria; Karagas, Margaret R.; Vineis, Paolo; Chang, I-Shou; Lin, Dongxin; Zhou, Baosen; Seow, Adeline; Hong, Yun-Chul; Caporaso, Neil E.; Wolpin, Brian; Jacobs, Eric; Petersen, Gloria M.; Klein, Alison P.; Li, Donghui; Risch, Harvey; Sanders, Alan R.; Hsu, Li; Schoen, Robert E.; Brenner, Hermann; Stolzenberg-Solomon, Rachael; Gejman, Pablo; Lan, Qing; Rothman, Nathaniel; Amundadottir, Laufey T.; Landi, Maria Teresa; Levinson, Douglas F.; Chanock, Stephen J.; Chatterjee, Nilanjan

    2016-01-01

    Recent heritability analyses have indicated that genome-wide association studies (GWAS) have the potential to improve genetic risk prediction for complex diseases based on polygenic risk score (PRS), a simple modelling technique that can be implemented using summary-level data from the discovery samples. We herein propose modifications to improve the performance of PRS. We introduce threshold-dependent winner’s-curse adjustments for marginal association coefficients that are used to weight the single-nucleotide polymorphisms (SNPs) in PRS. Further, as a way to incorporate external functional/annotation knowledge that could identify subsets of SNPs highly enriched for associations, we propose variable thresholds for SNPs selection. We applied our methods to GWAS summary-level data of 14 complex diseases. Across all diseases, a simple winner’s curse correction uniformly led to enhancement of performance of the models, whereas incorporation of functional SNPs was beneficial only for selected diseases. Compared to the standard PRS algorithm, the proposed methods in combination led to notable gain in efficiency (25–50% increase in the prediction R2) for 5 of 14 diseases. As an example, for GWAS of type 2 diabetes, winner’s curse correction improved prediction R2 from 2.29% based on the standard PRS to 3.10% (P = 0.0017) and incorporating functional annotation data further improved R2 to 3.53% (P = 2×10−5). Our simulation studies illustrate why differential treatment of certain categories of functional SNPs, even when shown to be highly enriched for GWAS-heritability, does not lead to proportionate improvement in genetic risk-prediction because of non-uniform linkage disequilibrium structure. PMID:28036406

  8. Winner's Curse Correction and Variable Thresholding Improve Performance of Polygenic Risk Modeling Based on Genome-Wide Association Study Summary-Level Data.

    PubMed

    Shi, Jianxin; Park, Ju-Hyun; Duan, Jubao; Berndt, Sonja T; Moy, Winton; Yu, Kai; Song, Lei; Wheeler, William; Hua, Xing; Silverman, Debra; Garcia-Closas, Montserrat; Hsiung, Chao Agnes; Figueroa, Jonine D; Cortessis, Victoria K; Malats, Núria; Karagas, Margaret R; Vineis, Paolo; Chang, I-Shou; Lin, Dongxin; Zhou, Baosen; Seow, Adeline; Matsuo, Keitaro; Hong, Yun-Chul; Caporaso, Neil E; Wolpin, Brian; Jacobs, Eric; Petersen, Gloria M; Klein, Alison P; Li, Donghui; Risch, Harvey; Sanders, Alan R; Hsu, Li; Schoen, Robert E; Brenner, Hermann; Stolzenberg-Solomon, Rachael; Gejman, Pablo; Lan, Qing; Rothman, Nathaniel; Amundadottir, Laufey T; Landi, Maria Teresa; Levinson, Douglas F; Chanock, Stephen J; Chatterjee, Nilanjan

    2016-12-01

    Recent heritability analyses have indicated that genome-wide association studies (GWAS) have the potential to improve genetic risk prediction for complex diseases based on polygenic risk score (PRS), a simple modelling technique that can be implemented using summary-level data from the discovery samples. We herein propose modifications to improve the performance of PRS. We introduce threshold-dependent winner's-curse adjustments for marginal association coefficients that are used to weight the single-nucleotide polymorphisms (SNPs) in PRS. Further, as a way to incorporate external functional/annotation knowledge that could identify subsets of SNPs highly enriched for associations, we propose variable thresholds for SNPs selection. We applied our methods to GWAS summary-level data of 14 complex diseases. Across all diseases, a simple winner's curse correction uniformly led to enhancement of performance of the models, whereas incorporation of functional SNPs was beneficial only for selected diseases. Compared to the standard PRS algorithm, the proposed methods in combination led to notable gain in efficiency (25-50% increase in the prediction R2) for 5 of 14 diseases. As an example, for GWAS of type 2 diabetes, winner's curse correction improved prediction R2 from 2.29% based on the standard PRS to 3.10% (P = 0.0017) and incorporating functional annotation data further improved R2 to 3.53% (P = 2×10-5). Our simulation studies illustrate why differential treatment of certain categories of functional SNPs, even when shown to be highly enriched for GWAS-heritability, does not lead to proportionate improvement in genetic risk-prediction because of non-uniform linkage disequilibrium structure.
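The baseline p-value-thresholded PRS that these papers build on can be sketched directly: sum risk-allele counts weighted by GWAS effect sizes over the SNPs whose association p-value passes the threshold. The genotypes and summary statistics below are made-up toy values.

```python
def prs(genotypes, betas, pvals, p_threshold):
    """Polygenic risk score: sum beta * allele count over SNPs passing
    the p-value threshold. genotypes are allele counts (0/1/2) per SNP;
    betas and pvals are GWAS summary statistics."""
    return sum(b * g for g, b, p in zip(genotypes, betas, pvals)
               if p < p_threshold)

betas = [0.12, -0.05, 0.30, 0.02]     # toy marginal effect sizes
pvals = [1e-6, 0.40, 1e-8, 0.049]     # toy association p-values
person = [2, 1, 0, 1]                 # toy risk-allele counts

score_strict = prs(person, betas, pvals, 1e-5)   # near genome-wide hits only
score_loose = prs(person, betas, pvals, 0.05)    # looser threshold
```

The paper's modifications act on this baseline: shrinking each beta for winner's curse before weighting, and allowing the p-value threshold to vary by functional-annotation category rather than being uniform.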

  9. Visible Lesion Thresholds and Model Predictions for Q-Switched 1318-nm and 1540-nm Laser Exposures to Porcine Skin

    DTIC Science & Technology

    2006-01-01

    collected using the Yucatan mini-pig ( Sus scrofa domestica) as the in vivo model. The Yucatan mini-pig was selected due to the similarity of its flank...Johnson, M. A. Mitchell, B. H. Saladino, and W. P. Roach, "Median Effective Dose Determination and Histologic Characterization of Porcine ( Sus scrofa domestica

  10. Modeling weather and stocking rate threshold effects on forage and steer production in northern mixed-grass prairie

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Model evaluations of forage production and yearling steer weight gain (SWG) responses to stocking density (SD) and seasonal weather patterns are presented for semi-arid northern mixed-grass prairie. We used the improved Great Plains Framework for Agricultural Resource Management-Range (GPFARM-Range)...

  11. Bayesian inference of the groundwater depth threshold in a vegetation dynamic model: a case study, lower reach, Tarim River

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The responses of eco-hydrological systems to anthropogenic and natural disturbances have attracted much attention in recent years. The coupling and simulating feedback between hydrological and ecological components have been realized in several recently developed eco-hydrological models. However, li...

  12. Comparison of repeatability and multiple trait threshold models for litter size in sheep using observed and simulated data in Bayesian analyses.

    PubMed

    Mekkawy, W; Roehe, R; Lewis, R M; Davies, M H; Bünger, L; Simm, G; Haresign, W

    2010-08-01

    Bayesian analyses were used to estimate genetic parameters on 5580 records of litter size in the first four parities from 1758 Mule ewes. To examine the appropriateness of fitting repeatability (RM) or multiple trait threshold models (MTM) to litter size of different parities, both models were used to estimate genetic parameters on the observed data and were thereafter compared in a simulation study. Posterior means of the heritabilities of litter size in different parities using a MTM ranged from 0.12 to 0.18 and were higher than the heritability based on the RM (0.08). Posterior means of the genetic correlations between litter sizes of different parities were positive and ranged from 0.24 to 0.71. Data sets were simulated based on the same pedigree structure and genetic parameters of the Mule ewe population obtained from both models. The simulation showed that the relative loss in accuracy and increase in mean squared error (MSE) was substantially higher when using the RM, given that the parameters estimated from the observed data using the opposite model are the true parameters. In contrast, the Bayesian information criterion (BIC) selected the RM as the most appropriate model given the data because of the substantial penalty for the higher number of parameters to be estimated in the MTM. In conclusion, when the relative change in accuracy and MSE is of main interest for estimation of breeding values of litter size of different parities, the MTM is recommended for the given population. When reduction in risk of using the wrong model is the main aim, the BIC suggests that the RM is the most appropriate model.
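    The BIC trade-off the authors describe can be made concrete: BIC penalizes log-likelihood by the parameter count times the log of the sample size, so the multiple trait model must fit substantially better before it beats the repeatability model. A small sketch with hypothetical likelihood and parameter values (not the study's actual fits):

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion: k * ln(n) - 2 * ln(L); lower is better."""
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

# hypothetical fits to 5580 litter-size records: the multiple trait
# threshold model (MTM) fits slightly better, but its extra (co)variance
# parameters carry a larger penalty than the repeatability model (RM)
bic_rm = bic(log_likelihood=-4100.0, n_params=3, n_obs=5580)
bic_mtm = bic(log_likelihood=-4095.0, n_params=14, n_obs=5580)
# with these numbers BIC favours the RM despite the MTM's higher likelihood
```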

  13. Porcine Skin Visible Lesion Thresholds for Near-Infrared Lasers Including Modeling at Two Pulse Durations and Spot Sizes

    DTIC Science & Technology

    2006-08-01

    2 Experimental setup for thermal dynamics imaging experiment. Reading of skin exposure sites was performed acutely at one hour, and 24-h...Phoenix model, Indigo Systems, Santa Barbara, California). To acquire reference IR image frames prior to laser exposure, the IR camera was operating in...free-running mode at a frame rate of 100 Hz and image size of 256 X 256 pixels. The IR camera lens was extended to provide spatial resolution

  14. National nephrectomy registries: Reviewing the need for population-based data.

    PubMed

    Pearson, John; Williamson, Timothy; Ischia, Joseph; Bolton, Damien M; Frydenberg, Mark; Lawrentschuk, Nathan

    2015-09-01

    Nephrectomy is the cornerstone therapy for renal cell carcinoma (RCC) and continued refinement of the procedure through research may enhance patient outcomes. A national nephrectomy registry may provide the key information needed to assess the procedure at a national level. The aim of this study was to review nephrectomy data available at a population-based level in Australia and to benchmark these data against data from the rest of the world as an examination of the national nephrectomy registry model. A PubMed search identified records pertaining to RCC nephrectomy in Australia. A similar search identified records relating to established nephrectomy registries internationally and other surgical registries of clinical importance. These records were reviewed to address the stated aims of this article. Population-based data within Australia for nephrectomy were lacking. Key issues identified were the difficulty in benchmarking outcomes and no ongoing monitoring of trends. The care centralization debate, which questions whether small-volume centers provide comparable outcomes to high-volume centers, is ongoing. Patterns of adherence and the effectiveness of existing protocols are uncertain. A review of established international registries demonstrated that the registry model can effectively address issues comparable to those identified in the Australian literature. A national nephrectomy registry could address deficiencies identified in a given nation's nephrectomy field. The model is supported by evidence from international examples and will provide the population-based data needed for studies. Scope exists for possible integration with other registries to develop a more encompassing urological or surgical registry. Need remains for further exploration of the feasibility and practicalities of initiating such a registry including a minimum data set, outcome indicators, and auditing of data.

  15. Population-based case-control association studies.

    PubMed

    Hancock, Dana B; Scott, William K

    2012-07-01

    This unit provides an overview of the design and analysis of population-based case-control studies of genetic risk factors for complex disease. Considerations specific to genetic studies are emphasized. The unit reviews basic study designs differentiating case-control studies from others, presents different genetic association strategies (candidate gene, genome-wide association, and high-throughput sequencing), introduces basic methods of statistical analysis for case-control data and approaches to combining case-control studies, and discusses measures of association and impact. Admixed populations, controlling for confounding (including population stratification), consideration of multiple loci and environmental risk factors, and complementary analyses of haplotypes, genes, and pathways are briefly discussed. Readers are referred to basic texts on epidemiology for more details on general conduct of case-control studies.
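    A core calculation in the case-control designs this unit reviews is the allelic odds ratio from a 2x2 table of allele counts, with a logit-based (Woolf) confidence interval. A minimal sketch (the counts below are hypothetical):

```python
import math

def allelic_odds_ratio(case_alt, case_ref, ctrl_alt, ctrl_ref):
    """Allelic odds ratio and Woolf 95% CI from 2x2 allele counts."""
    or_ = (case_alt * ctrl_ref) / (case_ref * ctrl_alt)
    se_log = math.sqrt(1 / case_alt + 1 / case_ref + 1 / ctrl_alt + 1 / ctrl_ref)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, (lo, hi)

# hypothetical counts: 300/700 alt/ref alleles in cases, 200/800 in controls
odds_ratio, ci = allelic_odds_ratio(300, 700, 200, 800)
```

    Genome-wide association strategies repeat this (or a regression equivalent that can adjust for population stratification) across hundreds of thousands of SNPs, which is why the unit emphasizes multiple-testing and confounding control.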

  16. Predictors of Childhood Anxiety: A Population-Based Cohort Study

    PubMed Central

    2015-01-01

    Background Few studies have explored predictors of early childhood anxiety. Objective To determine the prenatal, postnatal, and early life predictors of childhood anxiety by age 5. Methods Population-based, provincial administrative data (N = 19,316) from Manitoba, Canada were used to determine the association between demographic, obstetrical, psychosocial, medical, behavioral, and infant factors and childhood anxiety. Results Risk factors for childhood anxiety by age 5 included maternal psychological distress from birth to 12 months and 13 months to 5 years post-delivery and an infant 5-minute Apgar score of ≤7. Factors associated with decreased risk included maternal age < 20 years, multiparity, and preterm birth. Conclusion Identifying predictors of childhood anxiety is a key step to early detection and prevention. Maternal psychological distress is an early, modifiable risk factor. Future research should aim to disentangle early life influences on childhood anxiety occurring in the prenatal, postnatal, and early childhood periods. PMID:26158268

  17. Photodissociation spectroscopy of stored CH+ ions: Detection, assignment, and close-coupled modeling of near-threshold Feshbach resonances

    NASA Astrophysics Data System (ADS)

    Hechtfischer, Ulrich; Williams, Carl J.; Lange, Michael; Linkemann, Joachim; Schwalm, Dirk; Wester, Roland; Wolf, Andreas; Zajfman, Daniel

    2002-11-01

    We have measured and theoretically analyzed a photodissociation spectrum of the CH+ molecular ion in which most observed energy levels lie within the fine-structure splitting of the C+ fragment and predissociate, and where the observed irregular line shapes and dipole-forbidden transitions indicate that nonadiabatic interactions lead to multichannel dynamics. The molecules were prepared in low rotational levels J″ = 0-9 of the vibrational ground state X ¹Σ⁺ (v″ = 0) by storing a CH+ beam at 7.1 MeV in the heavy-ion storage ring TSR for up to 30 s, which was sufficient for the ions to rovibrationally thermalize to room temperature by spontaneous infrared emission. The internally cold molecules were irradiated with a dye laser at photon energies between 31 600 and 33 400 cm⁻¹, and the resulting C+ fragments were counted with a particle detector. The photodissociation cross section displays the numerous Feshbach resonances between the two C+ fine-structure states predicted by theory for low rotation. The data are analyzed in two steps. First, from the overall structure of the spectrum, by identifying branches, and by a Le Roy-Bernstein analysis of level spacings, we determine the dissociation energy D₀ = (32 946.7 ± 1.1) cm⁻¹ (with respect to the lower fine-structure limit) and assign the strongest features to the vibrational levels v′ = 11-14 of the dipole-allowed A ¹Π state. The majority of the 66 observed resonances cannot be assigned in this way. Therefore, in a second step, the complete spectrum is simulated with a close-coupling model, starting from recent ab initio Born-Oppenheimer potentials. For the long-range induction, dispersion and exchange energies, we propose an analytical expression and derive the C₆ coefficients. After a systematic variation of just the vibrational defects of the four Born-Oppenheimer potentials involved, the close-coupling model yields a quantitative fit to the measured cross section in all detail, and is used to assign most of

  18. Effective theories and thresholds in particle physics

    SciTech Connect

    Gaillard, M.K.

    1991-06-07

    The role of effective theories in probing a more fundamental underlying theory and in indicating new physics thresholds is discussed, with examples from the standard model and more speculative applications to superstring theory. 38 refs.

  19. Roots at the Percolation Threshold

    NASA Astrophysics Data System (ADS)

    Kroener, E.; Ahmed, M. A.; Kaestner, A.; Vontobel, P.; Zarebanadkouki, M.; Carminati, A.

    2014-12-01

    Much of the carbon assimilated by plants during photosynthesis is lost to the soil via rhizodeposition. One component of rhizodeposition is mucilage, a hydrogel that dramatically alters the soil physical properties. Mucilage was assumed to explain unexpectedly low rhizosphere rewetting rates during irrigation (Carminati et al. 2010) and temporary water repellency in the rhizosphere after severe drying (Moradi et al. 2012). Here, we present an experimental and theoretical study of the rewetting behaviour of a soil mixed with mucilage, which was used as an analogue of the rhizosphere. Our samples were made of two layers of untreated soils separated by a thin layer (ca. 1 mm) of soil treated with mucilage. We prepared soil columns of varying particle size, mucilage concentration and height of the middle layer above the water table. The dry soil columns were re-wetted by capillary rise from the bottom. The rewetting of the middle layer showed a distinct dual behaviour. For mucilage concentrations lower than a certain threshold, water could cross the thin layer almost immediately after rewetting of the bulk soil. At slightly higher mucilage concentrations, the thin layer was almost impermeable. The mucilage concentration at the threshold strongly depended on particle size: the smaller the particle size, the larger the soil specific surface and the more mucilage was needed to cover the entire particle surface and to induce water repellency. We applied a classic pore network model to simulate the experimental observations. In the model a certain fraction of nodes were randomly disconnected to reproduce the effect of mucilage in temporarily blocking the flow. The percolation model could qualitatively reproduce the threshold characteristics of the experiments. Our experiments, together with former observations of water dynamics in the rhizosphere, suggest that the rhizosphere is near the percolation threshold, where small variations in mucilage concentration sensitively
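    The pore-network idea maps onto classic site percolation: randomly block a fraction of nodes and test whether a connected open path still crosses the layer. A minimal 2D sketch (the grid size, blocked fractions, and seed are illustrative, not the study's model):

```python
from collections import deque

import numpy as np

def percolates(open_mask):
    """BFS from the bottom row through 4-connected open sites;
    True if water can reach the top row."""
    rows, cols = open_mask.shape
    seen = np.zeros_like(open_mask, dtype=bool)
    queue = deque()
    for c in range(cols):
        if open_mask[0, c]:
            seen[0, c] = True
            queue.append((0, c))
    while queue:
        r, c = queue.popleft()
        if r == rows - 1:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and open_mask[nr, nc] and not seen[nr, nc]:
                seen[nr, nc] = True
                queue.append((nr, nc))
    return False

rng = np.random.default_rng(1)
mostly_open = rng.random((50, 50)) > 0.2     # ~80% open: typically percolates
mostly_blocked = rng.random((50, 50)) > 0.8  # ~20% open: typically does not
```

    Near the site-percolation threshold (roughly 59% open sites on a 2D square lattice), small changes in the blocked fraction flip the layer between permeable and impermeable, mirroring the sharp mucilage-concentration threshold observed in the experiments.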

  20. SU-D-9A-02: Relative Effects of Threshold Choice and Spatial Resolution Modeling On SUV and Volume Quantification in F18-FDG PET Imaging of Anal Cancer Patients

    SciTech Connect

    Zhao, F; Bowsher, J; Palta, M; Czito, B; Willett, C; Yin, F

    2014-06-01

    Purpose: PET imaging with F18-FDG is utilized for treatment planning, treatment assessment, and prognosis. A region of interest (ROI) encompassing the tumor may be determined on the PET image, often by a threshold T on the PET standard uptake values (SUVs). Several studies have shown prognostic value for relevant ROI properties including maximum SUV value (SUVmax), metabolic tumor volume (MTV), and total glycolytic activity (TGA). The choice of threshold T may affect mean SUV value (SUVmean), MTV, and TGA. Recently spatial resolution modeling (SRM) has been introduced on many PET systems. SRM may also affect these ROI properties. The purpose of this work is to investigate the relative influence of SRM and threshold choice T on SUVmean, MTV, TGA, and SUVmax. Methods: For 9 anal cancer patients, 18F-FDG PET scans were performed prior to treatment. PET images were reconstructed by 2 iterations of Ordered Subsets Expectation Maximization (OSEM), with and without SRM. ROI contours were generated by 5 different SUV threshold values T: 2.5, 3.0, 30%, 40%, and 50% of SUVmax. Paired-samples t tests were used to compare SUVmean, MTV, and TGA (a) for SRM on versus off and (b) between each pair of threshold values T. SUVmax was also compared for SRM on versus off. Results: For almost all (57/60) comparisons of 2 different threshold values, SUVmean, MTV, and TGA showed statistically significant variation. For comparison of SRM on versus off, there were no statistically significant changes in SUVmax and TGA, but there were statistically significant changes in MTV for T=2.5 and T=3.0 and in SUVmean for all T. Conclusion: The near-universal statistical significance of threshold choice T suggests that, regarding harmonization across sites, threshold choice may be a greater concern than choice of SRM. However, broader study is warranted, e.g. other iterations of OSEM should be considered.
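    The ROI quantities compared in this study follow directly from the thresholding step. A minimal sketch on a toy SUV array (the `>=` convention, voxel volume, and values are illustrative assumptions, not the study's implementation):

```python
import numpy as np

def roi_metrics(suv, threshold, voxel_volume_ml=1.0):
    """SUVmax, SUVmean, MTV (ml) and TGA for a threshold-defined ROI.

    threshold may be absolute (e.g. T = 2.5) or relative
    (e.g. 0.4 * suv.max() for a 40%-of-SUVmax contour).
    """
    roi = suv >= threshold
    suvmax = float(suv.max())
    suvmean = float(suv[roi].mean())
    mtv = float(roi.sum()) * voxel_volume_ml
    tga = suvmean * mtv
    return suvmax, suvmean, mtv, tga

suv = np.array([[1.0, 2.0],
                [3.0, 5.0]])
abs_metrics = roi_metrics(suv, 2.5)              # T = 2.5
rel_metrics = roi_metrics(suv, 0.4 * suv.max())  # T = 40% of SUVmax
```

    Even on this toy array the two threshold choices give different SUVmean, MTV, and TGA while SUVmax is unchanged, which is the pattern of threshold sensitivity the study reports.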

  1. An in vitro Corneal Model with a Laser Damage Threshold at 2 Micrometers That is Similar to That in the Rabbit

    DTIC Science & Technology

    2007-11-01

    data in the literature. 15. SUBJECT TERMS corneal organotypic culture, laser, threshold, thermography, Probit 16. SECURITY CLASSIFICATION OF...literature. Keywords: corneal organotypic culture, laser, threshold, thermography, Probit 1. INTRODUCTION Use of lasers has become commonplace...temperature increases from exposure to the 2-µm laser were measured using the IR camera during laser exposure to membranes that were dry, wetted from

  2. Towards thresholds of disaster management performance under demographic change: exploring functional relationships using agent-based modeling

    NASA Astrophysics Data System (ADS)

    Dressler, Gunnar; Müller, Birgit; Frank, Karin; Kuhlicke, Christian

    2016-10-01

    Effective disaster management is a core feature for the protection of communities against natural disasters such as floods. Disaster management organizations (DMOs) are expected to contribute to ensuring this protection. However, what happens when their resources to cope with a flood are at stake or the intensity and frequency of the event exceeds their capacities? Many cities in the Free State of Saxony, Germany, were strongly hit by several floods in the last years and are additionally challenged by demographic change, with an ageing society and out-migration leading to population shrinkage in many parts of Saxony. Disaster management, which is mostly volunteer-based in Germany, is particularly affected by this change, leading to a loss of members. We propose an agent-based simulation model that acts as a "virtual lab" to explore the impact of various changes on disaster management performance. Using different scenarios we examine the impact of changes in personal resources of DMOs, their access to operation relevant information, flood characteristics as well as differences between geographic regions. A loss of DMOs and associated manpower caused by demographic change has the most profound impact on the performance. Especially in rural, upstream regions population decline in combination with very short lead times can put disaster management performance at risk.

  3. Mitochondrial threshold effects.

    PubMed Central

    Rossignol, Rodrigue; Faustin, Benjamin; Rocher, Christophe; Malgat, Monique; Mazat, Jean-Pierre; Letellier, Thierry

    2003-01-01

    The study of mitochondrial diseases has revealed dramatic variability in the phenotypic presentation of mitochondrial genetic defects. To attempt to understand this variability, different authors have studied energy metabolism in transmitochondrial cell lines carrying different proportions of various pathogenic mutations in their mitochondrial DNA. The same kinds of experiments have been performed on isolated mitochondria and on tissue biopsies taken from patients with mitochondrial diseases. The results have shown that, in most cases, phenotypic manifestation of the genetic defect occurs only when a threshold level is exceeded, and this phenomenon has been named the 'phenotypic threshold effect'. Subsequently, several authors showed that it was possible to inhibit considerably the activity of a respiratory chain complex, up to a critical value, without affecting the rate of mitochondrial respiration or ATP synthesis. This phenomenon was called the 'biochemical threshold effect'. More recently, quantitative analysis of the effects of various mutations in mitochondrial DNA on the rate of mitochondrial protein synthesis has revealed the existence of a 'translational threshold effect'. In this review these different mitochondrial threshold effects are discussed, along with their molecular bases and the roles that they play in the presentation of mitochondrial diseases. PMID:12467494

  4. Conflict Resolution as Near-Threshold Decision-Making: A Spiking Neural Circuit Model with Two-Stage Competition for Antisaccadic Task.

    PubMed

    Lo, Chung-Chuan; Wang, Xiao-Jing

    2016-08-01

    Automatic responses enable us to react quickly and effortlessly, but they often need to be inhibited so that an alternative, voluntary action can take place. To investigate the brain mechanism of controlled behavior, we investigated a biologically-based network model of spiking neurons for inhibitory control. In contrast to a simple race between pro- versus anti-response, our model incorporates a sensorimotor remapping module, and an action-selection module endowed with a "Stop" process through tonic inhibition. Both are under the modulation of rule-dependent control. We tested the model by applying it to the well known antisaccade task in which one must suppress the urge to look toward a visual target that suddenly appears, and shift the gaze diametrically away from the target instead. We found that the two-stage competition is crucial for reproducing the complex behavior and neuronal activity observed in the antisaccade task across multiple brain regions. Notably, our model demonstrates two types of errors: fast and slow. Fast errors result from failing to inhibit the quick automatic responses and therefore exhibit very short response times. Slow errors, in contrast, are due to incorrect decisions in the remapping process and exhibit long response times comparable to those of correct antisaccade responses. The model thus reveals a circuit mechanism for the empirically observed slow errors and broad distributions of erroneous response times in antisaccade. Our work suggests that selecting between competing automatic and voluntary actions in behavioral control can be understood in terms of near-threshold decision-making, sharing a common recurrent (attractor) neural circuit mechanism with discrimination in perception.

  5. Conflict Resolution as Near-Threshold Decision-Making: A Spiking Neural Circuit Model with Two-Stage Competition for Antisaccadic Task

    PubMed Central

    Wang, Xiao-Jing

    2016-01-01

    Automatic responses enable us to react quickly and effortlessly, but they often need to be inhibited so that an alternative, voluntary action can take place. To investigate the brain mechanism of controlled behavior, we investigated a biologically-based network model of spiking neurons for inhibitory control. In contrast to a simple race between pro- versus anti-response, our model incorporates a sensorimotor remapping module, and an action-selection module endowed with a “Stop” process through tonic inhibition. Both are under the modulation of rule-dependent control. We tested the model by applying it to the well known antisaccade task in which one must suppress the urge to look toward a visual target that suddenly appears, and shift the gaze diametrically away from the target instead. We found that the two-stage competition is crucial for reproducing the complex behavior and neuronal activity observed in the antisaccade task across multiple brain regions. Notably, our model demonstrates two types of errors: fast and slow. Fast errors result from failing to inhibit the quick automatic responses and therefore exhibit very short response times. Slow errors, in contrast, are due to incorrect decisions in the remapping process and exhibit long response times comparable to those of correct antisaccade responses. The model thus reveals a circuit mechanism for the empirically observed slow errors and broad distributions of erroneous response times in antisaccade. Our work suggests that selecting between competing automatic and voluntary actions in behavioral control can be understood in terms of near-threshold decision-making, sharing a common recurrent (attractor) neural circuit mechanism with discrimination in perception. PMID:27551824

  6. Scaling behavior of threshold epidemics

    NASA Astrophysics Data System (ADS)

    Ben-Naim, E.; Krapivsky, P. L.

    2012-05-01

    We study the classic Susceptible-Infected-Recovered (SIR) model for the spread of an infectious disease. In this stochastic process, there are two competing mechanisms: infection and recovery. Susceptible individuals may contract the disease from infected individuals, while infected ones recover from the disease at a constant rate and are never infected again. Our focus is the behavior at the epidemic threshold where the rates of the infection and recovery processes balance. In the infinite population limit, we establish analytically scaling rules for the time-dependent distribution functions that characterize the sizes of the infected and the recovered sub-populations. Using heuristic arguments, we also obtain scaling laws for the size and duration of the epidemic outbreaks as a function of the total population. We perform numerical simulations to verify the scaling predictions and discuss the consequences of these scaling laws for near-threshold epidemic outbreaks.
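    The critical balance described here is easy to simulate with a jump chain for the stochastic SIR model, run at R0 = beta/gamma = 1. A minimal sketch (only the final outbreak size is tracked; the population size and parameters are illustrative):

```python
import random

def sir_outbreak(n, beta, gamma, seed=None):
    """Final size of a stochastic SIR outbreak in a well-mixed
    population of n individuals, starting from one infected."""
    rng = random.Random(seed)
    s, i, r = n - 1, 1, 0
    while i > 0:
        infect = beta * s * i / n   # rate of new infections
        recover = gamma * i         # rate of recoveries
        # choose the next event in proportion to its rate
        if rng.random() < infect / (infect + recover):
            s, i = s - 1, i + 1
        else:
            i, r = i - 1, r + 1
    return r

# at the threshold beta = gamma, outbreak sizes are broadly distributed
sizes = [sir_outbreak(1000, 1.0, 1.0, seed=k) for k in range(200)]
```

    Repeating this while scaling the population n up is one way to check the near-threshold size and duration scaling laws the authors derive.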

  7. A threshold voltage model of short-channel fully-depleted recessed-source/drain (Re-S/D) SOI MOSFETs with high-k dielectric

    NASA Astrophysics Data System (ADS)

    Gopi Krishna, Saramekala; Sarvesh, Dubey; Pramod, Kumar Tiwari

    2015-10-01

    In this paper, a surface potential based threshold voltage model of fully-depleted (FD) recessed-source/drain (Re-S/D) silicon-on-insulator (SOI) metal-oxide semiconductor field-effect transistor (MOSFET) is presented while considering the effects of high-k gate-dielectric material induced fringing-field. The two-dimensional (2D) Poisson’s equation is solved in a channel region in order to obtain the surface potential under the assumption of the parabolic potential profile in the transverse direction of the channel with appropriate boundary conditions. The accuracy of the model is verified by comparing the model’s results with the 2D simulation results from ATLAS over a wide range of channel lengths and other parameters, including the dielectric constant of gate-dielectric material. The author, Pramod Kumar Tiwari, was supported by the Science and Engineering Research Board (SERB), Department of Science and Technology, Ministry of Human Resource and Development, Government of India under Young Scientist Research (Grant No. SB/FTP/ETA-415/2012).

  8. Space-time resolved simulation of femtosecond nonlinear light-matter interactions using a holistic quantum atomic model: application to near-threshold harmonics.

    PubMed

    Kolesik, M; Wright, E M; Andreasen, J; Brown, J M; Carlson, D R; Jones, R J

    2012-07-02

    We introduce a new computational approach for femtosecond pulse propagation in the transparency region of gases that permits full resolution in three space dimensions plus time while fully incorporating quantum coherent effects such as high-harmonic generation and strong-field ionization in a holistic fashion. This is achieved by utilizing a one-dimensional model atom with a delta-function potential which allows for a closed-form solution for the nonlinear optical response due to ground-state to continuum transitions. It side-steps evaluation of the wave function, and offers more than one hundred-fold reduction in computation time in comparison to direct solution of the atomic Schrödinger equation. To illustrate the capability of our new computational approach, we apply it to the example of near-threshold harmonic generation in xenon, and we also present a qualitative comparison between our model and results from an in-house experiment on extreme ultraviolet generation in a femtosecond enhancement cavity.

  9. A population-based study of quantitative sensory testing in adolescents with and without chronic pain.

    PubMed

    Tham, See Wan; Palermo, Tonya M; Holley, Amy Lewandowski; Zhou, Chuan; Stubhaug, Audun; Furberg, Anne-Sofie; Nielsen, Christopher Sivert

    2016-12-01

    Quantitative sensory testing (QST) has been used to characterize pain sensitivity in individuals with and without pain conditions. Research remains limited in pediatric populations, hindering the ability to expand the utility of QST toward its potential application in clinical settings and clinical predictive value. The aims of this study were to examine pain sensitivity using QST in adolescents with chronic pain compared to adolescents without chronic pain and identify predictors of pain sensitivity. A population-based study conducted from 2010 to 2011 provided data on 941 adolescents, 197 were classified as having chronic pain and 744 were classified without chronic pain. Self-reported data on pain characteristics, psychological functioning, and QST responses were examined. The findings revealed lower pressure pain threshold and tolerance on the trapezius (P's = 0.03) in adolescents with chronic pain compared to adolescents without chronic pain, but no differences on heat or cold-pressor pain tasks. Female sex (P's = 0.02) and poorer psychological functioning (P's = 0.02) emerged as significant predictors of greater pain sensitivity across all pain modalities. Exploratory analyses revealed several associations between clinical pain characteristics and QST responses within the chronic pain cohort. Findings from this large pediatric sample provide comprehensive data that could serve as normative data on QST responses in adolescents with and without chronic pain. These findings lay the groundwork toward developing future QST research and study protocols in pediatric populations, taking into consideration sex and psychological distress.

  10. Estimating glomerular filtration rate in a population-based study

    PubMed Central

    Shankar, Anoop; Lee, Kristine E; Klein, Barbara EK; Muntner, Paul; Brazy, Peter C; Cruickshanks, Karen J; Nieto, F Javier; Danforth, Lorraine G; Schubert, Carla R; Tsai, Michael Y; Klein, Ronald

    2010-01-01

    Background: Glomerular filtration rate (GFR)-estimating equations are used to determine the prevalence of chronic kidney disease (CKD) in population-based studies. However, it has been suggested that since the commonly used GFR equations were originally developed from samples of patients with CKD, they underestimate GFR in healthy populations. Few studies have made side-by-side comparisons of the effect of various estimating equations on the prevalence estimates of CKD in a general population sample. Patients and methods: We examined a population-based sample comprising adults from Wisconsin (age, 43–86 years; 56% women). We compared the prevalence of CKD, defined as a GFR of <60 mL/min per 1.73 m2 estimated from serum creatinine, by applying various commonly used equations including the modification of diet in renal disease (MDRD) equation, Cockcroft–Gault (CG) equation, and the Mayo equation. We compared the performance of these equations against the CKD definition of cystatin C >1.23 mg/L. Results: We found that the prevalence of CKD varied widely among different GFR equations. Although the prevalence of CKD was 17.2% with the MDRD equation and 16.5% with the CG equation, it was only 4.8% with the Mayo equation. Only 24% of those identified to have GFR in the range of 50–59 mL/min per 1.73 m2 by the MDRD equation had cystatin C levels >1.23 mg/L; their mean cystatin C level was only 1 mg/L (interquartile range, 0.9–1.2 mg/L). This finding was similar for the CG equation. For the Mayo equation, 62.8% of those patients with GFR in the range of 50–59 mL/min per 1.73 m2 had cystatin C levels >1.23 mg/L; their mean cystatin C level was 1.3 mg/L (interquartile range, 1.2–1.5 mg/L). The MDRD and CG equations showed a false-positive rate of >10%. Discussion: We found that the MDRD and CG equations, the current standard to estimate GFR, appeared to overestimate the prevalence of CKD in a general population sample. PMID:20730018
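    The estimating equations compared above are closed-form functions of serum creatinine and demographics. A sketch of two of them as commonly published (assay calibration changes the MDRD leading constant, so treat these as illustrative rather than as the study's exact implementation):

```python
def egfr_mdrd(scr_mg_dl, age, female, black):
    """4-variable MDRD study equation, mL/min per 1.73 m^2.
    Uses the original constant 186; IDMS-traceable assays use 175."""
    egfr = 186.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

def crcl_cockcroft_gault(scr_mg_dl, age, weight_kg, female):
    """Cockcroft-Gault creatinine clearance, mL/min (not BSA-normalized)."""
    crcl = (140 - age) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

# the study's CKD definition: estimated GFR < 60 mL/min per 1.73 m^2
is_ckd = egfr_mdrd(1.2, 65, female=True, black=False) < 60
```

    Because the equations differ in functional form and calibration, the same creatinine value can fall on opposite sides of the 60 mL/min per 1.73 m^2 cutoff, which is why the estimated CKD prevalence varies so widely across equations.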

  11. Population-based prevention of influenza in Dutch general practice.

    PubMed Central

    Hak, E; Hermens, R P; van Essen, G A; Kuyvenhoven, M M; de Melker, R A

    1997-01-01

    BACKGROUND: Although the effectiveness of influenza vaccination in high-risk groups has been proven, vaccine coverage continues to be less than 50% in The Netherlands. To improve vaccination rates, data on the organizational factors, which should be targeted in population-based prevention of influenza, is essential. AIM: To assess the organizational factors in Dutch general practice, which were associated with the influenza vaccination rate in 1994. METHOD: A retrospective questionnaire study was undertaken in 1586 of the 4758 Dutch general practices, which were randomly selected. A total of 1251 (79%) practices returned a questionnaire. The items verified were practice profile, urbanization, delegation index, use of computer-based patient records, influenza vaccination characteristics and influenza vaccination rate. RESULTS: No differences were found with regard to the percentage of single-handed practices (65%), practices situated in urban area (38%), practices with a pharmacy (12%), patients insured by the National Health Service (59%) and use of computer-based patient records (57%) when compared with national statistics. The mean overall influenza vaccination rate was 9.0% (SD 4.0%). Using a logistic regression analysis, a high vaccination rate (> or = 9%) was associated with the use of personal reminders (odds ratio (OR) 1.7, 1.3-2.2), monitoring patient compliance (OR 1.8, 1.3-2.4), marking risk patients in computer-based patient records (OR 1.3, 1.0-1.6), a small number of patients per full-time practice assistant (OR 1.5, 1.1-1.9), urban areas (OR 1.6, 1.3-2.1) and single-handed practices (OR 1.5, 1.1-1.9). CONCLUSION: Improvement of vaccination rates in high-risk patients may be achievable by promoting the use of personal reminders and computer-based patient records, as well as monitoring patient compliance. In addition, the role of practice assistants with regard to preventive activities should be developed further. 
Practices situated in rural areas and

  12. Diversity of threshold phenomena in geophysical media

    NASA Astrophysics Data System (ADS)

    Guglielmi, A. V.

    2017-01-01

    A sample analysis of threshold phenomena in the lithosphere, atmosphere, and magnetosphere is conducted. Phenomena due to the flow of electric current and pore fluid in rocks are considered, a scenario for wind-driven generation of atmospheric electricity is suggested, and a model of the geomagnetic storm-time Dst variation is analyzed. An important general conclusion is that geophysical media host a wide class of threshold phenomena akin to phase transitions of the second kind. These phenomena are also related to critical transitions in self-oscillatory systems with soft self-excitation. An integral representation of bifurcation diagrams for threshold phenomena is suggested, which provides a simple way to take into account the influence of fluctuations on the transition of a system through the threshold. Fluctuations remove the singularity at the threshold point and generally shift the threshold somewhat. The question of hard transition through the threshold and several aspects of modeling the blow-up instability, which is presumed to occasionally develop in geophysical media, are discussed.

  13. Provider communication on perinatal depression: a population-based study.

    PubMed

    Farr, Sherry L; Ko, Jean Y; Burley, Kim; Gupta, Seema

    2016-02-01

    Women's lack of knowledge on symptoms of perinatal depression and treatment resources is a barrier to receiving care. We sought to estimate the prevalence and predictors of discussing depression with a prenatal care provider. We used the 2011 population-based data from 24 sites participating in the Pregnancy Risk Assessment Monitoring System (n = 32,827 women with recent live births) to examine associations between maternal characteristics and a woman's report that a prenatal care provider discussed with her what to do if feeling depressed during or after pregnancy. Overall, 71.9% of women reported discussing perinatal depression with their prenatal care provider (range 60.7% in New York City to 85.6% in Maine). Women were more likely to report a discussion of perinatal depression with their provider if they were 18-29 years of age rather than over 35 years of age (adjusted prevalence ratio [aPR] 18 to 19 y = 1.08, 20 to 24 y = 1.10, 25 to 29 y = 1.09), unmarried (aPR = 1.07) compared to married, had <12 years of education (aPR = 1.05) compared to >12 years, and had no previous live births (aPR = 1.03) compared to ≥1 live births. Research is needed on effective ways to educate women about perinatal depression and whether increased knowledge of perinatal depression results in higher rates of treatment and shorter duration of symptoms.

  14. Psoriasis and dyslipidaemia: a population-based study.

    PubMed

    Dreiher, Jacob; Weitzman, Dahlia; Davidovici, Batya; Shapiro, Jonathan; Cohen, Arnon D

    2008-01-01

    Previous reports demonstrated an association between psoriasis and the metabolic syndrome. The aim of this study was to elucidate the association between psoriasis and dyslipidaemia. A cross-sectional study was performed utilizing a population-based database. Psoriasis patients were compared with enrollees without psoriasis regarding the prevalence of dyslipidaemia and lipid levels. Comparison of lipid levels was performed on a "low-risk" subset of subjects without diabetes, hypertension and cardiovascular disease. The study included 10,669 psoriasis patients and 22,996 subjects without psoriasis. The prevalence of dyslipidaemia was significantly higher in psoriasis patients (odds ratio (OR) = 1.48, 95% confidence interval (CI) 1.40-1.55). The association remained significant after controlling for confounders (OR = 1.19, 95% CI 1.12-1.26, p < 0.001). In multivariate analysis of the "low-risk" subset, triglyceride levels were higher in psoriasis patients and high-density lipoprotein cholesterol levels were lower. This study supports previous reports of an association between psoriasis and lipid abnormalities.

  15. Optimal inverse functions created via population-based optimization.

    PubMed

    Jennings, Alan L; Ordóñez, Raúl

    2014-06-01

    Finding optimal inputs for a multiple-input, single-output system is taxing for a system operator. Population-based optimization is used to create sets of functions that produce a locally optimal input based on a desired output. An operator or higher level planner could use one of the functions in real time. For the optimization, each agent in the population uses the cost and output gradients to take steps lowering the cost while maintaining their current output. When an agent reaches an optimal input for its current output, additional agents are generated in the output gradient directions. The new agents then settle to the local optima for the new output values. The set of associated optimal points forms an inverse function, via spline interpolation, from a desired output to an optimal input. In this manner, multiple locally optimal functions can be created. These functions are naturally clustered in input and output spaces allowing for a continuous inverse function. The operator selects the best cluster over the anticipated range of desired outputs and adjusts the set point (desired output) while maintaining optimality. This reduces the demand from controlling multiple inputs to controlling a single set point with no loss in performance. Results are demonstrated on a sample set of functions and on a robot control problem.
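The per-agent step, lowering cost while leaving the current output unchanged, amounts to removing the component of the cost gradient that lies along the output gradient. A toy sketch with an assumed quadratic cost f(u) = u1² + u2² and linear output g(u) = u1 + u2 (illustrative stand-ins, not the paper's actual system):

```python
def optimal_input_for_output(y_target, steps=500, lr=0.05):
    """One 'agent' of the population: minimize cost f(u) = u1^2 + u2^2
    while holding output g(u) = u1 + u2 fixed at y_target, by stepping
    along the cost gradient with its output-changing component
    projected out."""
    u = [y_target, 0.0]               # any input achieving the output
    for _ in range(steps):
        gf = [2 * u[0], 2 * u[1]]     # cost gradient
        gg = [1.0, 1.0]               # output gradient
        dot = gf[0] * gg[0] + gf[1] * gg[1]
        nrm = gg[0] ** 2 + gg[1] ** 2
        # remove the component along gg, then step downhill on cost
        u[0] -= lr * (gf[0] - dot / nrm * gg[0])
        u[1] -= lr * (gf[1] - dot / nrm * gg[1])
    return u
```

Running one such agent per target output and spline-interpolating the resulting optima gives the continuous inverse function described in the abstract.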

  16. Stress and dysmenorrhoea: a population based prospective study

    PubMed Central

    Wang, L; Wang, X; Wang, W; Chen, C; Ronnennberg, A; Guang, W; Huang, A; Fang, Z; Zang, T; Wang, L; Xu, X

    2004-01-01

    Background: Dysmenorrhoea is the most common gynaecological disorder in women of reproductive age. Despite the association between stress and pregnancy outcomes, few studies have examined the possible link between stress and dysmenorrhoea. Aims and Methods: Using a population based cohort of Chinese women, the independent effect of women's perceived stress in the preceding menstrual cycle on the incidence of dysmenorrhoea in the subsequent cycle was investigated prospectively. The analysis included 1160 prospectively observed menstrual cycles from 388 healthy, nulliparous, newly married women who intended to conceive. The perception of stress and the occurrence of dysmenorrhoea in each menstrual cycle were determined from daily diaries recorded by the women. Results: After adjustment for important covariates, the risk of dysmenorrhoea was more than twice as great among women with high stress compared to those with low stress in the preceding cycle (OR = 2.4; 95% CI 1.4 to 4.3). The risk of dysmenorrhoea was greatest among women with both high stress and a history of dysmenorrhoea compared to women with low stress and no history of dysmenorrhoea (OR = 10.4, 95% CI 4.9 to 22.3). Stress in the follicular phase of the preceding cycles had a stronger association with dysmenorrhoea than stress in the luteal phase of the preceding cycles. Conclusion: This study shows a significant association between stress and the incidence of dysmenorrhoea, which is even stronger among women with a history of dysmenorrhoea. PMID:15550609

  17. Unbiased methods for population-based association studies.

    PubMed

    Devlin, B; Roeder, K; Bacanu, S A

    2001-12-01

    Large, population-based samples and large-scale genotyping are being used to evaluate disease/gene associations. A substantial drawback to such samples is the fact that population substructure can induce spurious associations between genes and disease. We review two methods, called genomic control (GC) and structured association (SA), that obviate many of the concerns about population substructure by using the features of the genomes present in the sample to correct for stratification. The GC approach exploits the fact that population substructure generates "overdispersion" of statistics used to assess association. By testing multiple polymorphisms throughout the genome, only some of which are pertinent to the disease of interest, the degree of overdispersion generated by population substructure can be estimated and taken into account. The SA approach assumes that the sampled population, although heterogeneous, is composed of subpopulations that are themselves homogeneous. By using multiple polymorphisms throughout the genome, this "latent class method" estimates the probability that sampled individuals derive from each of these latent subpopulations. GC has the advantage of robustness, simplicity, and wide applicability, even to experimental designs such as DNA pooling. SA is a bit more complicated but has the advantage of greater power in some realistic settings, such as admixed populations or when association varies widely across subpopulations. It, too, is widely applicable. Both also have weaknesses, as elaborated in our review.
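The GC correction itself is a short computation: estimate the inflation factor lambda from the genome-wide test statistics and deflate each statistic by it. A minimal sketch (0.455 is the median of the 1-df chi-square distribution; clipping lambda at 1 is a common convention assumed here, not stated in the abstract):

```python
import statistics

def genomic_control_correction(chi2_stats):
    """Genomic control for 1-df chi-square association statistics.

    Population substructure inflates the statistics roughly uniformly;
    lambda = median(chi2) / 0.455 estimates that inflation, and
    dividing each statistic by lambda restores the nominal null
    distribution."""
    lam = max(statistics.median(chi2_stats) / 0.455, 1.0)
    return lam, [s / lam for s in chi2_stats]
```

Because only a minority of tested markers are pertinent to the disease, the median is dominated by null markers, which is what makes lambda a robust estimate of the overdispersion.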

  18. Central poststroke pain: a population-based study.

    PubMed

    Klit, Henriette; Finnerup, Nanna Brix; Andersen, Grethe; Jensen, Troels Staehelin

    2011-04-01

    Central poststroke pain (CPSP) is a specific pain condition arising as a direct consequence of a cerebrovascular lesion. There is limited knowledge about the epidemiology and clinical characteristics of this often neglected but important consequence of stroke. In this population-based study, a questionnaire was sent out to all (n=964) stroke patients identified through the Danish National Indicator Project Stroke Database in Aarhus County, Denmark, between March 2004 and February 2005. All surviving patients who fulfilled 4 questionnaire criteria for possible CPSP (n=51) were selected for further clinical examination, and their pain was classified by using stringent and well-defined criteria and a detailed, standardized clinical examination. The minimum prevalence of definite or probable CPSP in this population is 7.3% and the prevalence of CPSP-like dysesthesia or pain is 8.6%. Pinprick hyperalgesia was present in 57%, cold allodynia in 40%, and brush-evoked dysesthesia in 51% of patients with CPSP. Because of its negative impact on quality of life and rehabilitation, pain is an important symptom to assess in stroke survivors.

  19. Existence of a light intensity threshold for photoconversion processes

    SciTech Connect

    Gregg, B.A.; Nozik, A.J.

    1993-12-23

    Two models of the mechanism of photoinduced electron transfer at semiconductor surfaces have long been differentiated by their prediction, or their denial, of the existence of a light intensity threshold for fuel-forming photoconversion processes. We attempt to clarify this problem by making a distinction between two possible types of thresholds: a threshold for incipient product formation and a threshold for product formation in a specified state, such as its standard state. A light intensity threshold for incipient product formation appears to be forbidden by molecular electron-transfer theory and has apparently never been observed. Conversely, a light intensity threshold for product formation in its standard state must always occur, simply because the product concentration must first build up from its equilibrium value to its standard-state value. Since the former threshold is forbidden, while the latter is unavoidable, the existence of a threshold cannot be used to distinguish between the models. 20 refs.
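The distinction between the two thresholds can be restated with the standard free-energy relation (a textbook sketch, not an equation taken from the paper):

```latex
\Delta G = \Delta G^{\circ} + RT \ln Q
```

At incipient product formation the reaction quotient Q starts at its equilibrium value, where ΔG = 0, so arbitrarily weak illumination can in principle drive some net product, consistent with the absence of the first kind of threshold. Reaching the standard state means driving Q up to 1, which stores the full ΔG°; the photogeneration rate must exceed the back-reaction rate while the product concentration builds, which is why the second kind of threshold is unavoidable.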

  20. Defining the biomechanical and biological threshold of murine mild traumatic brain injury using CHIMERA (Closed Head Impact Model of Engineered Rotational Acceleration).

    PubMed

    Namjoshi, Dhananjay R; Cheng, Wai Hang; Bashir, Asma; Wilkinson, Anna; Stukas, Sophie; Martens, Kris M; Whyte, Tom; Abebe, Zelalem A; McInnes, Kurt A; Cripton, Peter A; Wellington, Cheryl L

    2017-03-05

    CHIMERA (Closed Head Impact Model of Engineered Rotational Acceleration) is a recently described animal model of traumatic brain injury (TBI) that primarily produces diffuse axonal injury (DAI) characterized by white matter inflammation and axonal damage. CHIMERA was specifically designed to reliably generate a variety of TBI severities using precise and quantifiable biomechanical inputs in a nonsurgical user-friendly platform. The objective of this study was to define the lower limit of single impact mild TBI (mTBI) using CHIMERA by characterizing the dose-response relationship between biomechanical input and neurological, behavioral, neuropathological and biochemical outcomes. Wild-type male mice were subjected to a single CHIMERA TBI using six impact energies ranging from 0.1 to 0.7 J, and post-TBI outcomes were assessed over an acute period of 14 days. Here we report that single TBI using CHIMERA induces injury dose- and time-dependent changes in behavioral and neurological deficits, axonal damage, white matter tract microgliosis and astrogliosis. Impact energies of 0.4 J or below produced no significant phenotype (subthreshold), 0.5 J led to significant changes for one or more phenotypes (threshold), and 0.6 and 0.7 J resulted in significant changes in all outcomes assessed (mTBI). We further show that linear head kinematics are the most robust predictors of duration of unconsciousness, severity of neurological deficits, white matter injury, and microgliosis following single TBI. Our data extend the validation of CHIMERA as a biofidelic animal model of DAI and establish working parameters to guide future investigations of the mechanisms underlying axonal pathology and inflammation induced by mechanical trauma.

  1. Long-term daily vibration exposure alters current perception threshold (CPT) sensitivity and myelinated axons in a rat-tail model of vibration-induced injury.

    PubMed

    Krajnak, Kristine; Raju, Sandya G; Miller, G Roger; Johnson, Claud; Waugh, Stacey; Kashon, Michael L; Riley, Danny A

    2016-01-01

    Repeated exposure to hand-transmitted vibration through the use of powered hand tools may result in pain and progressive reductions in tactile sensitivity. The goal of the present study was to use an established animal model of vibration-induced injury to characterize changes in sensory nerve function and cellular mechanisms associated with these alterations. Sensory nerve function was assessed weekly using the current perception threshold test and tail-flick analgesia test in male Sprague-Dawley rats exposed to 28 d of tail vibration. After 28 d of exposure, Aβ fiber sensitivity was reduced. This reduction in sensitivity was partly attributed to structural disruption of myelin. In addition, the decrease in sensitivity was also associated with a reduction in myelin basic protein and 2',3'-cyclic nucleotide phosphodiesterase (CNPase) staining in tail nerves, and an increase in circulating calcitonin gene-related peptide (CGRP) concentrations. Changes in Aβ fiber sensitivity and CGRP concentrations may serve as early markers of vibration-induced injury in peripheral nerves. It is conceivable that these markers may be utilized to monitor sensorineural alterations in workers exposed to vibration to potentially prevent additional injury.

  2. Real external predictivity of QSAR models. Part 2. New intercomparable thresholds for different validation criteria and the need for scatter plot inspection.

    PubMed

    Chirico, Nicola; Gramatica, Paola

    2012-08-27

    The evaluation of regression QSAR model performance, in fitting, robustness, and external prediction, is of pivotal importance. Over the past decade, different external validation parameters have been proposed: Q^2_F1, Q^2_F2, Q^2_F3, r^2_m, and the Golbraikh-Tropsha method. Recently, the concordance correlation coefficient (CCC, Lin), which simply verifies how small the differences are between experimental data and external data set predictions, independently of their range, was proposed by our group as an external validation parameter for use in QSAR studies. In our preliminary work, we demonstrated with thousands of simulated models that CCC is in good agreement with the compared validation criteria (except r^2_m) using the cutoff values normally applied for the acceptance of QSAR models as externally predictive. In this new work, we have studied and compared the general trends of the various criteria relative to different possible biases (scale and location shifts) in external data distributions, using a wide range of different simulated scenarios. This study, further supported by visual inspection of experimental vs predicted data scatter plots, has highlighted problems related to some criteria. Indeed, if based on the cutoff suggested by its proponent, r^2_m could also accept non-predictive models in two of the possible biases (location, location plus scale), while in the case of scale shift bias, it appears to be the most restrictive. Moreover, Q^2_F1 and Q^2_F2 showed some problems in one of the possible biases (scale shift). This analysis also allowed us to propose recalibrated new thresholds for each criterion, intercomparable for the same data scatter, in defining a QSAR model as really externally predictive in a more precautionary approach. An analysis of the results revealed that the scatter plot of experimental vs predicted external data must always be evaluated to support the statistical criteria values: in some cases high
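Lin's CCC, the criterion the authors advocate, is straightforward to compute: it is a Pearson-type agreement measure additionally penalized for location and scale shifts between observed and predicted values. A plain-Python sketch:

```python
def concordance_cc(y_obs, y_pred):
    """Lin's concordance correlation coefficient.

    Unlike Pearson's r, CCC drops below 1 when predictions are shifted
    or rescaled relative to the observations, which is exactly the
    kind of external-set bias the paper examines."""
    n = len(y_obs)
    mx = sum(y_obs) / n
    my = sum(y_pred) / n
    sx = sum((x - mx) ** 2 for x in y_obs) / n
    sy = sum((y - my) ** 2 for y in y_pred) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(y_obs, y_pred)) / n
    return 2.0 * sxy / (sx + sy + (mx - my) ** 2)
```

For identical vectors CCC is 1; a pure location shift leaves Pearson's r at 1 but lowers CCC, which is why the criterion detects biased external predictions that correlation alone would pass.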

  3. Statin Use Reduces Prostate Cancer All-Cause Mortality: A Nationwide Population-Based Cohort Study.

    PubMed

    Sun, Li-Min; Lin, Ming-Chia; Lin, Cheng-Li; Chang, Shih-Ni; Liang, Ji-An; Lin, I-Ching; Kao, Chia-Hung

    2015-09-01

    Studies have suggested that statin use is related to cancer risk and prostate cancer mortality. We conducted a population-based cohort study to determine whether using statins in prostate cancer patients is associated with reduced all-cause mortality rates. Data were obtained from the Taiwan National Health Insurance Research Database. The study cohort comprised 5179 patients diagnosed with prostate cancer who used statins for at least 6 months between January 1, 1998 and December 31, 2010. To form a comparison group, each patient was randomly frequency-matched (according to age and index date) with a prostate cancer patient who did not use any type of statin-based drugs during the study period. The study endpoint was mortality. The hazard ratio (HR) and 95% confidence interval (CI) were estimated using Cox regression models. Among prostate cancer patients, statin use was associated with significantly decreased all-cause mortality (adjusted HR = 0.65; 95% CI = 0.60-0.71). This phenomenon was observed among various types of statin, age groups, and treatment methods. Analyzing the defined daily dose of statins indicated that both low- and high-dose groups exhibited significantly decreased death rates compared with nonusers, suggesting a dose-response relationship. The results of this population-based cohort study suggest that using statins reduces all-cause mortality among prostate cancer patients, and a dose-response relationship may exist.

  4. On estimation of time-dependent attributable fraction from population-based case-control studies.

    PubMed

    Zhao, Wei; Chen, Ying Qing; Hsu, Li

    2017-01-18

    Population attributable fraction (PAF) is widely used to quantify the disease burden associated with a modifiable exposure in a population. It has been extended to a time-varying measure that provides additional information on when and how the exposure's impact varies over time for cohort studies. However, there is no estimation procedure for PAF using data that are collected from population-based case-control studies, which, because of time and cost efficiency, are commonly used for studying genetic and environmental risk factors of disease incidences. In this article, we show that time-varying PAF is identifiable from a case-control study and develop a novel estimator of PAF. Our estimator combines odds ratio estimates from logistic regression models and density estimates of the risk factor distribution conditional on failure times in cases from a kernel smoother. The proposed estimator is shown to be consistent and asymptotically normal with asymptotic variance that can be estimated empirically from the data. Simulation studies demonstrate that the proposed estimator performs well in finite sample sizes. Finally, the method is illustrated by a population-based case-control study of colorectal cancer.
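For orientation, the classical time-constant PAF from case-control data (Miettinen's case-load formula, which the paper's time-varying estimator generalizes) is a one-liner; here p_c is the exposure prevalence among cases and the odds ratio stands in for the relative risk under the rare-disease assumption:

```python
def attributable_fraction_cases(p_exposed_cases, odds_ratio):
    """Miettinen's case-load PAF from case-control data:
    PAF = p_c * (OR - 1) / OR, where p_c is the exposure prevalence
    among cases and OR approximates the relative risk (rare-disease
    assumption). The paper replaces these constants with time-varying
    estimates from Cox/logistic models and kernel densities."""
    return p_exposed_cases * (odds_ratio - 1.0) / odds_ratio
```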

  5. The odderon versus a new threshold

    SciTech Connect

    Kang, K.; White, A.R.

    1991-10-01

    We show that a new threshold model with a threshold close to but below the UA4 energy is compatible with all forward elastic scattering data at high energies, including the widely known UA4 measurement of the forward real part of the elastic pp̄ scattering amplitude and the recent Fermilab Tevatron Collider measurements of the pp̄ total cross-section. 14 refs.

  6. Medullary carcinoma of the large intestine: a population based analysis.

    PubMed

    Thirunavukarasu, Pragatheeshwar; Sathaiah, Magesh; Singla, Smit; Sukumar, Shyam; Karunamurthy, Arivarasan; Pragatheeshwar, Kothai Divya; Lee, Kenneth K W; Zeh, Herbert; Kane, Kevin M; Bartlett, David L

    2010-10-01

    Medullary carcinoma (MC) of the colorectum is a relatively new histological type of adenocarcinoma characterized by poor glandular differentiation and intraepithelial lymphocytic infiltrate. To date, there has been no epidemiological study of this rare tumor type, which has now been incorporated as a separate entity in the World Health Organization (WHO) classification of colorectal cancers. We used the population-based registries of the Surveillance, Epidemiology and End Results (SEER) database to identify all cases of colorectal MC between 1973 and 2006 and compared them to poorly and undifferentiated colonic adenocarcinomas (PDA and UDA, respectively). We observed that MCs were rare tumors, constituting approximately 5-8 cases for every 10,000 colon cancers diagnosed, with a mean annual incidence of 3.47 (+/-0.75) per 10 million population. Mean age at diagnosis was 69.3 (+/-12.5) years, with incidence increasing with age. MCs were twice as common in females, who presented at a later age, with a lower stage and a trend towards favorable prognosis. MCs were extremely rare among African-Americans. MCs were most common in the proximal colon (74%), where they present at a later age than in the sigmoid colon. There were no cases reliably identified in the rectum or appendix. Serum carcinoembryonic antigen (CEA) levels were elevated prior to the first course of treatment in 40% of the patients. MCs were more commonly poorly differentiated (72%), with 22% being undifferentiated. MCs commonly presented with Stage II disease, with 10% presenting with metastases. Only one patient presented with N2b disease (>7 positive nodes). Early outcome analyses showed that MCs have 1- and 2-year relative survival rates of 92.7% and 73.8%, respectively. Although MCs showed a trend towards better early overall survival, undifferentiated MCs present more commonly with Stage III, with comparatively worse early outcomes.

  7. Recurrence of hyperemesis gravidarum across generations: population based cohort study

    PubMed Central

    Skjærven, Rolv; Grjibovski, Andrej M; Gunnes, Nina; Vangen, Siri; Magnus, Per

    2010-01-01

    Objective To estimate the risk of hyperemesis gravidarum (hyperemesis) according to whether the daughters and sons under study were born after pregnancies complicated by hyperemesis. Design Population based cohort study. Setting Registry data from Norway. Participants Linked generational data from the medical birth registry of Norway (1967-2006): 544 087 units of mother and childbearing daughter and 399 777 units of mother and child-producing son. Main outcome measure Hyperemesis in daughters in mother and childbearing daughter units and hyperemesis in female partners of sons in mother and child-producing son units. Results Daughters who were born after a pregnancy complicated by hyperemesis had a 3% risk of having hyperemesis in their own pregnancy, while women who were born after an unaffected pregnancy had a risk of 1.1% (unadjusted odds ratio 2.9, 95% confidence interval 2.4 to 3.6). Female partners of sons who were born after pregnancies complicated by hyperemesis had a risk of 1.2% (1.0, 0.7 to 1.6). Daughters born after a pregnancy not complicated by hyperemesis had an increased risk of the condition if the mother had hyperemesis in a previous or subsequent pregnancy (3.2 (1.6 to 6.4) if hyperemesis had occurred in one of the mother’s previous pregnancies and 3.7 (1.5 to 9.1) if it had occurred in a later pregnancy). Adjustment for maternal age at childbirth, period of birth, and parity did not change the estimates. Restrictions to firstborns did not influence the results. Conclusions Hyperemesis gravidarum is more strongly influenced by the maternal genotype than the fetal genotype, though environmental influences along the maternal line cannot be excluded as contributing factors. PMID:21030362

  8. Network problem threshold

    NASA Technical Reports Server (NTRS)

    Gejji, Raghvendra R.

    1992-01-01

    Network transmission errors such as collisions, CRC errors, misalignment, etc. are statistical in nature. Although errors can vary randomly, a high level of errors does indicate specific network problems, e.g. equipment failure. In this project, we have studied the random nature of collisions theoretically as well as by gathering statistics, and established a numerical threshold above which a network problem is indicated with high probability.
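If collision counts per interval are modeled as Poisson, a numerical alarm threshold of the kind described can be computed directly. A sketch; the Poisson model and the false-alarm level are illustrative assumptions here, not the statistics actually gathered in the project:

```python
import math

def collision_threshold(mean_rate, false_alarm=1e-3):
    """Smallest count k with P(Poisson(mean_rate) >= k) < false_alarm.

    Observing k or more collisions in an interval is then unlikely to
    be random fluctuation, so counts at or above k indicate a probable
    network problem (e.g., equipment failure) with high probability."""
    k, cum = 0, 0.0
    while True:
        cum += math.exp(-mean_rate) * mean_rate ** k / math.factorial(k)
        k += 1
        if 1.0 - cum < false_alarm:
            return k
```

A lower false-alarm rate pushes the threshold higher, trading missed problems for fewer spurious alarms.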

  9. Vision thresholds revisited

    NASA Astrophysics Data System (ADS)

    Garstang, R. H.

    1999-05-01

    During and just after World War II there was intense interest in the threshold for seeing faint sources against illuminated backgrounds. Knoll, Tousey and Hulburt (1946, 1948) determined the threshold for (effectively) point sources seen against backgrounds ranging in brightness from darkness to subdued daylight. Blackwell (1946) gave contrast ratios for sources of various sizes ranging from point sources up to circular disks of 6 degrees diameter, all seen against the same range of brightnesses, and determined by a very large number of visual observations made by a team of observers. I have combined the two sets of results, and represented them by an improvement on the theoretical formula for threshold illuminance as a function of background brightness which was suggested by Hecht (1934). My formula agrees very well with the observations, and is very suitable for incorporation into computer programs. Applications have been made to problems where the background brightness is caused by light pollution, and the source size is determined by the seeing. These include the optimum magnification and limiting magnitude of telescopes, and the analysis of visual limiting magnitudes determined by Bowen (1947) to determine the night sky brightness at Mount Wilson in 1947.
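Hecht-type formulas of the kind improved on here express threshold illuminance as a dark-adapted floor plus a square-root dependence on background brightness. A generic sketch with placeholder coefficients (not the fitted values from the paper):

```python
import math

def threshold_illuminance(background, c1=1.0, c2=1.0):
    """Hecht-type threshold law: i(B) = c1 * (1 + sqrt(c2 * B))**2.

    c1 is the dark-adapted threshold illuminance and c2 controls how
    quickly the threshold rises with background luminance B. Both
    coefficients are placeholders, not Garstang's fitted values."""
    return c1 * (1.0 + math.sqrt(c2 * background)) ** 2
```

At B = 0 the formula reduces to the dark-adapted threshold c1, and for large B it approaches the Weber-like proportionality to background brightness, which is the shape the observational data demand.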

  10. Deterministic estimation of hydrological thresholds for shallow landslide initiation and slope stability models: case study from the Somma-Vesuvius area of southern Italy

    USGS Publications Warehouse

    Baum, Rex L.; Godt, Jonathan W.; De Vita, P.; Napolitano, E.

    2012-01-01

    interrupted. These results lead to the identification of a comprehensive hydrogeomorphological model of susceptibility to initial landslides that links morphological, stratigraphical and hydrological conditions. The calculation of intensities and durations of rainfall necessary for slope instability allowed the identification of deterministic hydrological thresholds that account for uncertainty in properties and observed rainfall intensities.

  11. Threshold altitude resulting in decompression sickness

    NASA Technical Reports Server (NTRS)

    Kumar, K. V.; Waligora, James M.; Calkins, Dick S.

    1990-01-01

    A review of case reports, hypobaric chamber training data, and experimental evidence indicated that the threshold for incidence of altitude decompression sickness (DCS) was influenced by various factors such as prior denitrogenation, exercise or rest, and period of exposure, in addition to individual susceptibility. Fitting these data with appropriate statistical models makes it possible to examine the influence of various factors on the threshold for DCS. This approach was illustrated by logistic regression analysis on the incidence of DCS below 9144 m. Estimations using these regressions showed that, under a no-prebreathe, 6-h exposure, simulated EVA profile, the threshold for symptoms occurred at approximately 3353 m, while under a no-prebreathe, 2-h exposure profile with knee-bends exercise, the threshold occurred at 7925 m.
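Once a logistic regression has been fitted, the altitude threshold for a chosen incidence level follows by inverting the logit. A sketch with hypothetical coefficients (the study's fitted values are not given in the abstract):

```python
import math

def altitude_at_incidence(beta0, beta1, p_threshold):
    """Invert a fitted logistic model of DCS incidence,
    P(DCS | altitude) = 1 / (1 + exp(-(beta0 + beta1 * altitude))),
    to find the altitude at which predicted incidence reaches
    p_threshold. beta0 and beta1 are hypothetical placeholders."""
    logit = math.log(p_threshold / (1.0 - p_threshold))
    return (logit - beta0) / beta1
```

Separate fits for each exposure profile (prebreathe, duration, exercise) would yield the profile-specific thresholds quoted above.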

  12. Population-Based Incidence and Prevalence of Systemic Lupus Erythematosus

    PubMed Central

    Somers, Emily C.; Marder, Wendy; Cagnoli, Patricia; Lewis, Emily E.; DeGuire, Peter; Gordon, Caroline; Helmick, Charles G.; Wang, Lu; Wing, Jeffrey J.; Dhar, J. Patricia; Leisen, James; Shaltis, Diane; McCune, W. Joseph

    2014-01-01

    Objective To estimate the incidence and prevalence of systemic lupus erythematosus (SLE) in a sociodemographically diverse southeastern Michigan source population of 2.4 million people. Methods SLE cases fulfilling the American College of Rheumatology classification criteria (primary case definition) or meeting rheumatologist-judged SLE criteria (secondary definition) and residing in Wayne or Washtenaw Counties during 2002–2004 were included. Case finding was performed from 6 source types, including hospitals and private specialists. Age-standardized rates were computed, and capture–recapture was performed to estimate underascertainment of cases. Results The overall age-adjusted incidence and prevalence (ACR definition) per 100,000 persons were 5.5 (95% confidence interval [95% CI] 5.0–6.1) and 72.8 (95% CI 70.8–74.8). Among females, the incidence was 9.3 per 100,000 persons and the prevalence was 128.7 per 100,000 persons. Only 7 cases were estimated to have been missed by capture–recapture, adjustment for which did not materially affect the rates. SLE prevalence was 2.3-fold higher in black persons than in white persons, and 10-fold higher in females than in males. Among incident cases, the mean ± SD age at diagnosis was 39.3 ± 16.6 years. Black SLE patients had a higher proportion of renal disease and end-stage renal disease (ESRD) (40.5% and 15.3%, respectively) as compared to white SLE patients (18.8% and 4.5%, respectively). Black patients with renal disease were diagnosed as having SLE at younger age than white patients with renal disease (mean ± SD 34.4 ± 14.9 years versus 41.9 ± 21.3 years; P = 0.05). Conclusion SLE prevalence was higher than has been described in most other population-based studies and reached 1 in 537 among black female persons. There were substantial racial disparities in the burden of SLE, with black patients experiencing earlier age at diagnosis, >2-fold increases in SLE incidence and prevalence, and increased
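The capture-recapture adjustment used to gauge underascertainment can be illustrated in its simplest two-source form (Chapman's nearly unbiased estimator; the counts in the example are hypothetical, not the study's data, and the study itself combined six source types):

```python
def chapman_estimate(n1, n2, m):
    """Chapman's two-source capture-recapture estimator.

    n1, n2: cases found by each source; m: cases found by both.
    Returns the estimated total case count and the implied number of
    cases missed by both sources."""
    n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    observed = n1 + n2 - m
    return n_hat, n_hat - observed
```

The smaller the estimated number of missed cases relative to the observed count, the less the adjustment moves the incidence and prevalence rates, which is what the authors report (only 7 estimated missed cases).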

  13. Cyberbullying among Finnish adolescents – a population-based study

    PubMed Central

    2012-01-01

    Background Cyberbullying, threatening or harassing another via the internet or mobile phones, does not cause physical harm, and thus its consequences are less visible. Little research has been performed on the occurrence of cyberbullying among adolescents or the perception of its seriousness. Only a few population-based studies have been published, none of which included research on the witnessing of cyberbullying. Here, we examined exposure to cyberbullying during the last year, and its frequency and perceived seriousness among 12 to 18-year-old adolescents in Finland. We studied four dimensions of cyberbullying: being a victim, bully, or both victim and bully of cyberbullying, and witnessing the cyberbullying of friends. Methods Self-administered questionnaires, including four questions on cyberbullying, were mailed to a representative sample of 12-, 14-, 16-, and 18-year-old Finns in 2009 (the Adolescent Health and Lifestyle Survey). The respondents could answer via the internet or paper questionnaire. Results The number of respondents was 5516 and the response rate was 56%. Girls more often than boys reported experiencing at least one dimension of cyberbullying during the last year. The proportion was highest among 14-year-olds and lowest among 18-year-olds of both sexes. Among girls, the most commonly encountered dimension was witnessing the cyberbullying of friends (16%); and being a victim was slightly more common than being a bully (11% vs. 9%). Among boys, an equal proportion, approximately 10%, had been a victim, a bully, or had witnessed cyberbullying. The proportion of bully-victims was 4%. Serious and disruptive cyberbullying was experienced by 2% of respondents and weekly cyberbullying by 1%; only 0.5% of respondents had been bullied weekly and considered bullying serious and disruptive. Conclusions Adolescents are commonly exposed to cyberbullying, but it is rarely frequent or considered serious or disruptive. 
Cyberbullying exposure differed between

  14. A systematic review of economic evaluations of population-based sodium reduction interventions

    PubMed Central

    Hope, Silvia F.; Webster, Jacqui; Trieu, Kathy; Pillay, Arti; Ieremia, Merina; Bell, Colin; Snowdon, Wendy; Neal, Bruce; Moodie, Marj

    2017-01-01

    Objective To summarise evidence describing the cost-effectiveness of population-based interventions targeting sodium reduction. Methods A systematic search of published and grey literature databases and websites was conducted using specified key words. Characteristics of identified economic evaluations were recorded, and included studies were appraised for reporting quality using the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) checklist. Results Twenty studies met the study inclusion criteria and received a full paper review. Fourteen studies were identified as full economic evaluations in that they included both costs and benefits associated with an intervention measured against a comparator. Most studies were modelling exercises based on scenarios for achieving salt reduction and assumed effects on health outcomes. All 14 studies concluded that their specified intervention(s) targeting reductions in population sodium consumption were cost-effective, and in the majority of cases, were cost saving. Just over half the studies (8/14) were assessed as being of ‘excellent’ reporting quality, five studies fell into the ‘very good’ quality category and one into the ‘good’ category. All of the identified evaluations were based on modelling, whereby inputs for all the key parameters including the effect size were either drawn from published datasets, existing literature or based on expert advice. Conclusion Despite a clear increase in evaluations of salt reduction programs in recent years, this review identified relatively few economic evaluations of population salt reduction interventions. None of the studies were based on actual implementation of intervention(s) and the associated collection of new empirical data. The studies universally showed that population-based salt reduction strategies are likely to be cost effective or cost saving. However, given the reliance on modelling, there is a need for the effectiveness of new

  15. Threshold Concepts in Research Education and Evidence of Threshold Crossing

    ERIC Educational Resources Information Center

    Kiley, Margaret; Wisker, Gina

    2009-01-01

Most work on threshold concepts has hitherto related to discipline-specific undergraduate education; however, the idea of generic doctoral-level threshold concepts appeared to us to provide a strong and useful framework to support research learning and teaching at the graduate level. The early work regarding research-level threshold concepts is…

  16. Thresholded Power law Size Distributions of Instabilities in Astrophysics

    NASA Astrophysics Data System (ADS)

    Aschwanden, Markus J.

    2015-11-01

Power-law-like size distributions are ubiquitous in astrophysical instabilities. There are at least four natural effects that cause deviations from ideal power law size distributions, which we model here in a generalized way: (1) a physical threshold of an instability; (2) incomplete sampling of the smallest events below a threshold x0; (3) contamination by an event-unrelated background xb; and (4) truncation effects at the largest events due to a finite system size. These effects can be modeled in the simplest terms with a “thresholded power law” distribution function (also called generalized Pareto [type II] or Lomax distribution), N(x) dx ∝ (x + x0)^(−a) dx, where x0 > 0 corresponds to a threshold effect, while x0 < 0 corresponds to background contamination. We analytically derive the functional shape of this thresholded power law distribution function from an exponential growth evolution model, which produces avalanches only when a disturbance exceeds a critical threshold x0. We apply the thresholded power law distribution function to terrestrial, solar (HXRBS, BATSE, RHESSI), and stellar flare (Kepler) data sets. We find that the thresholded power law model provides an adequate fit to most of the observed data. Major advantages of this model are the automated choice of the power law fitting range, diagnostics of background contamination, physical instability thresholds, instrumental detection thresholds, and finite system size limits. When testing self-organized criticality models that predict ideal power laws, we suggest including these natural truncation effects.
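For readers who want to experiment with the distribution described above, the sketch below (an illustration under stated assumptions, not the authors' code) samples a Lomax/thresholded power law by inverse transform and recovers the shape parameter with the closed-form maximum-likelihood estimator. Here alpha is the survival-function exponent, S(x) = (1 + x/x0)^(-alpha), so the density exponent a in the abstract corresponds to alpha + 1.

```python
import math
import random

def sample_thresholded_power_law(n, x0, alpha, rng):
    """Draw n samples from a Lomax (Pareto type II) distribution with
    survival function S(x) = (1 + x/x0)^(-alpha), x >= 0, via inverse
    transform: x = x0 * (u^(-1/alpha) - 1) for u ~ Uniform(0, 1)."""
    return [x0 * (rng.random() ** (-1.0 / alpha) - 1.0) for _ in range(n)]

def mle_alpha(xs, x0):
    """Closed-form MLE of the shape alpha when the threshold x0 is known:
    alpha_hat = n / sum(ln(1 + x_i/x0))."""
    return len(xs) / sum(math.log(1.0 + x / x0) for x in xs)

rng = random.Random(42)
xs = sample_thresholded_power_law(50000, x0=2.0, alpha=1.5, rng=rng)
alpha_hat = mle_alpha(xs, x0=2.0)
print(round(alpha_hat, 2))  # close to the true shape 1.5
```

With 50,000 samples the estimator's standard error is roughly alpha/sqrt(n) ≈ 0.007, so the recovered shape sits very close to the true value.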

  17. Can we infer the magma overpressure threshold before an eruption? Insights from ground deformation time series and numerical modeling of reservoir failure.

    NASA Astrophysics Data System (ADS)

Albino, F.; Gregg, P. M.; Amelung, F.

    2015-12-01

Overpressure within a magma chamber is a key parameter for understanding the onset of an eruption. Recent investigations indicate that surface inflation at a volcanic edifice does not always precede eruption (Chaussard and Amelung, 2012; Biggs et al., 2014), suggesting that the overpressure threshold may differ between volcanoes. To understand the failure conditions of a magma reservoir, mechanical models were developed to quantify the range of overpressure a reservoir can sustain in a given situation. Although the choice of failure criterion is still debated, most investigators agree that the overpressure required to fail the magma reservoir is, to first order, a function of the crustal stress field and the shape of the magma reservoir. Radar interferometry (InSAR) provides a large dataset of ground deformation worldwide, but many of these InSAR studies continue to use point or dislocation sources (Mogi, Okada) to explain deformation on volcanoes. Although these simple solutions often fit the data and estimate the depth and the volume change of the source of deformation, key parameters such as the magma overpressure or the mechanical properties of the rocks cannot be derived. We use mechanical numerical models of reservoir failure combined with ground deformation data. It has been observed that volume change before an eruption can easily range over one to two orders of magnitude, from 1 to 100 × 10^6 m^3. The first goal of this study is to understand which parameter(s) control the critical volume changes just before the failure of the reservoir. First, a parametric study is performed to quantify the effect of the geometry of the reservoir (radius, depth), the local stress (compressive/extensional) and the crust rheology (elastic/viscoelastic). We then compare modeling results with several active volcanoes where long time series of volume change are available: Okmok and Westdahl in Alaska, Sinabung and Agung in Indonesia, and the Galápagos. For each case, the maximum
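As a point of reference for the Mogi point sources mentioned above, the sketch below evaluates the classic Mogi prediction for vertical surface displacement (assuming a Poisson ratio of 1/4; the depth and volume-change values are illustrative, not taken from the study):

```python
import math

def mogi_uz(r, depth, dV):
    """Vertical surface displacement (m) at radial distance r (m) from a
    Mogi point source at depth `depth` (m) with volume change dV (m^3),
    for Poisson ratio 1/4:
        uz(r) = 3 * dV * depth / (4 * pi * (depth^2 + r^2)^(3/2))"""
    return 3.0 * dV * depth / (4.0 * math.pi * (depth**2 + r**2) ** 1.5)

# Illustrative case: volume change of 10 x 10^6 m^3 at 4 km depth.
peak = mogi_uz(0.0, 4000.0, 10e6)  # peak uplift directly above the source
print(f"peak uplift: {peak:.3f} m")
```

Such a forward model constrains only depth and volume change; as the abstract notes, recovering overpressure or rock rheology requires a finite-source mechanical model instead.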

  18. Percolation Threshold in Polycarbonate Nanocomposites

    NASA Astrophysics Data System (ADS)

    Ahuja, Suresh

    2014-03-01

Nanocomposites have unique mechanical, electrical, magnetic, optical and thermal properties. Many methods can be applied to prepare polymer-inorganic nanocomposites, such as sol-gel processing, in-situ polymerization, particle in-situ formation, blending, and radiation synthesis. Analytical composite models that have been put forth include the Voigt and Reuss bounds. Polymer nanocomposites offer the possibility of substantial improvements in material properties such as shear and bulk modulus, yield strength, toughness, film scratch resistance, optical properties, electrical conductivity, and gas and solvent transport, with only very small amounts of nanoparticles. Experimental results are compared against composite models of the Hashin and Shtrikman bounds, the Halpin-Tsai model, the Cox model, and various Mori-Tanaka models. Examples of numerical modeling are molecular dynamics modeling and finite element modeling of reduced modulus and hardness that take into account the modulus of the components and the effect of the interface between the hard filler and the relatively soft polymer, polycarbonate. Higher nanoparticle concentration results in poor dispersion and adhesion to the polymer matrix, which results in lower modulus and hardness and departure from the existing composite models. As the level of silica increases beyond a threshold level, aggregates form, which weakens the structure. The polymer-silica interface is found to be weak, as non-interacting silica promotes interfacial slip at silica-matrix junctions. Our experimental results compare favorably with those of polyester nanocomposites, where the effect of nanoclay on composite hardness and modulus depended on the dispersion of nanoclay in the polyester.
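To illustrate the kind of prediction the abstract compares against, here is a minimal sketch of the Halpin-Tsai estimate for an ideally dispersed composite (the moduli and the shape factor ζ ≈ 2 for near-spherical particles are illustrative assumptions); real nanocomposites depart from this once aggregation sets in, as the abstract notes:

```python
def halpin_tsai_modulus(E_m, E_f, phi, zeta=2.0):
    """Halpin-Tsai estimate of the composite modulus E_c from the matrix
    modulus E_m, filler modulus E_f, filler volume fraction phi, and shape
    factor zeta (zeta ~ 2 for roughly spherical particles):
        E_c = E_m * (1 + zeta*eta*phi) / (1 - eta*phi),
        eta = (E_f/E_m - 1) / (E_f/E_m + zeta)"""
    ratio = E_f / E_m
    eta = (ratio - 1.0) / (ratio + zeta)
    return E_m * (1.0 + zeta * eta * phi) / (1.0 - eta * phi)

# Illustrative: polycarbonate matrix (~2.3 GPa) with silica filler (~70 GPa)
for phi in (0.0, 0.01, 0.05):
    print(phi, round(halpin_tsai_modulus(2.3, 70.0, phi), 3))
```

The model predicts a monotonic stiffening with filler fraction; the experimentally observed drop in modulus at high loadings is exactly the departure from such ideal-dispersion models that the abstract attributes to aggregation and weak interfaces.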

  19. An in vitro corneal model with a laser damage threshold at 2 μm that is similar to that in the rabbit

    NASA Astrophysics Data System (ADS)

    Foltz, Michael S.; Denton, Michael L.; Schuster, Kurt J.; Estlack, Larry E.; Kumru, Semih S.

    2008-02-01

Corneal organotypic cultures were generated as per existing methods, which included growth on polycarbonate inserts and air-lifting for one week. The corneal simulant cultures were exposed, with real-time IR imaging, to the 2-μm wavelength output of a thulium fiber laser with a 4-mm beam diameter for 0.25 seconds in a thermally controlled environment and then assayed for damage. The in vitro threshold (ED50 value of 12.5 W/cm2) and peak temperature (74.5 °C) at threshold irradiance are compared with rabbit corneal data in the literature.

  20. Waiting time disparities in breast cancer diagnosis and treatment: a population-based study in France.

    PubMed

    Molinié, F; Leux, C; Delafosse, P; Ayrault-Piault, S; Arveux, P; Woronoff, A S; Guizard, A V; Velten, M; Ganry, O; Bara, S; Daubisse-Marliac, L; Tretarre, B

    2013-10-01

Waiting times are key indicators of a health system's performance, but are not routinely available in France. We studied waiting times for diagnosis and treatment according to patient characteristics, tumour characteristics and medical management options in a sample of 1494 breast cancers recorded in population-based registries. The median waiting time from first imaging detection to treatment initiation was 34 days. Older age, co-morbidity, smaller tumour size, detection by organised screening, biopsy, increasing number of specimens removed, multidisciplinary consulting meetings and surgery as initial treatment were related to increased waiting times in multivariate models. Many of these factors were related to good practice guidelines. However, the strong influence of the organised screening programme and the disparity of waiting times according to geographical area were of concern. Better scheduling of diagnostic tests and treatment propositions should improve waiting times in the management of breast cancer in France.

  1. Optical thresholding and Max Operation

    DTIC Science & Technology

Thresholding and Max operations are essential elements in the implementation of neural networks. Although there have been several optical implementations of neural networks, the thresholding functions are performed electronically. Optical thresholding and Max operations have the advantages of… We propose and study the properties of self-oscillation in nonlinear optical (NLO) four-wave mixing (FWM) and NLO resonators for parallel optical thresholding and Max operation.

  2. Thermal pulse damage thresholds in cadmium telluride.

    PubMed

    Slattery, J E; Thompson, J S; Schroeder, J B

    1975-09-01

A model is presented for predicting the temperature rise in an opaque material during the absorption of a moderately short pulse of energy. Experimental verification of the model, employing a pulsed ruby laser and a cadmium telluride plate, is described. Two distinct damage thresholds were noted: (1) at modest energy levels, plastic deformation occurred, and (2) at higher energies, surface melting occurred.

  3. Interaction of a Cannabinoid-2 Agonist With Tramadol on Nociceptive Thresholds and Immune Responses in a Rat Model of Incisional Pain.

    PubMed

    Stachtari, Chrysoula C; Thomareis, Olympia N; Tsaousi, Georgia G; Karakoulas, Konstantinos A; Chatzimanoli, Foteini I; Chatzopoulos, Stavros A; Vasilakos, Dimitrios G

The aim of this study was to elucidate the antinociceptive interaction between cannabinoids and tramadol and their impact on proinflammatory response, in terms of serum interleukin-6 (IL-6) and interleukin-2 (IL-2) release, in a rat model of incisional pain. Prospective randomized trial assessing the individual or combined application of intraperitoneal tramadol (10 mg/kg) and the selective cannabinoid-2 (CB-2) agonist (R,S)-AM1241 (1 mg/kg), applied after a surgical stress stimulus. Pharmacological specificity was established by antagonizing tramadol with naloxone (0.3 mg/kg) and (R,S)-AM1241 with SR144528 (1 mg/kg). Thermal allodynia was assessed by hot plate test 30 (T30), 60 (T60), and 120 (T120) minutes after incision. Blood samples for plasma IL-6 and IL-2 level determination were obtained 2 hours after incision. Data from 42 rats were included in the final analyses. Significant augmentation of the thermal threshold was observed at all time points after administration of either tramadol or (R,S)-AM1241 compared with the control group (P = 0.004 and P = 0.015, respectively). The combination of (R,S)-AM1241 plus tramadol markedly enhanced the induced antinociception compared with the control (P = 0.002) and (R,S)-AM1241 (P = 0.022) groups. Although the antiallodynic effect produced by tramadol was partially reversed by naloxone 30 and 60 minutes after incision (P = 0.028 and P = 0.016, respectively), SR144528 significantly blocked the effects of (R,S)-AM1241 administration (P = 0.001) at all time points. Similarly, naloxone plus SR144528 also blocked the effects of the combination of (R,S)-AM1241 with tramadol at all time points (P < 0.001). The IL-6 level in the (R,S)-AM1241 plus tramadol group was significantly attenuated compared with the control group (P < 0.001). Nevertheless, IL-2 levels remained unchanged in all experimental groups. It seems that the concomitant administration of a selective CB-2 agonist with tramadol in an incisional pain model may

  4. Initiation Pressure Thresholds from Three Sources

    SciTech Connect

    Souers, P C; Vitello, P

    2007-02-28

Pressure thresholds are minimum pressures needed to start explosive initiation that ends in detonation. We obtain pressure thresholds from three sources. Run-to-detonation times are the poorest source, but fitting a function gives rough results. Flyer-induced initiation gives the best results because the initial conditions are the best known. However, very thick flyers are needed to give the lowest, asymptotic pressure thresholds used in modern models, and this kind of data is rarely available. Gap test data is in much larger supply, but the various test sizes and materials are confusing. We find that explosive pressures are almost the same if the distances in the gap test spacers are expressed in units of the donor explosive radius. Calculated half-width time pulses in the spacers may be used to create a pressure-time curve similar to that of the flyers. The very large Eglin gap tests give asymptotic thresholds comparable to extrapolated flyer results. The three sources are assembled into a much-expanded set of near-asymptotic pressure thresholds. These thresholds vary greatly with density: for TATB/LX-17/PBX 9502, we find values of 4.9 and 8.7 GPa at 1.80 and 1.90 g/cm³, respectively.

  5. Laser threshold magnetometry

    NASA Astrophysics Data System (ADS)

    Jeske, Jan; Cole, Jared H.; Greentree, Andrew D.

    2016-01-01

We propose a new type of sensor, which uses diamond containing the optically active nitrogen-vacancy (NV-) centres as a laser medium. The magnetometer can be operated at room temperature and generates light that can be readily fibre coupled, thereby permitting use in industrial applications and remote sensing. By combining laser pumping with a radio-frequency Rabi-drive field, an external magnetic field changes the fluorescence of the NV- centres. We use this change in fluorescence level to push the laser above threshold, turning it on with an intensity controlled by the external magnetic field, which provides a coherent amplification of the readout signal with very high contrast. This mechanism is qualitatively different from conventional NV--based magnetometers, which use fluorescence measurements based on incoherent photon emission. We term our approach laser threshold magnetometry (LTM). We predict that an NV--based LTM with a volume of 1 mm³ can achieve a shot-noise-limited dc sensitivity of 1.86 fT/√Hz and an ac sensitivity of 3.97 fT/√Hz.

  6. High-Resolution Association Mapping of Quantitative Trait Loci: A Population-Based Approach

    PubMed Central

    Fan, Ruzong; Jung, Jeesun; Jin, Lei

    2006-01-01

    In this article, population-based regression models are proposed for high-resolution linkage disequilibrium mapping of quantitative trait loci (QTL). Two regression models, the “genotype effect model” and the “additive effect model,” are proposed to model the association between the markers and the trait locus. The marker can be either diallelic or multiallelic. If only one marker is used, the method is similar to a classical setting by Nielsen and Weir, and the additive effect model is equivalent to the haplotype trend regression (HTR) method by Zaykin et al. If two/multiple marker data with phase ambiguity are used in the analysis, the proposed models can be used to analyze the data directly. By analytical formulas, we show that the genotype effect model can be used to model the additive and dominance effects simultaneously; the additive effect model takes care of the additive effect only. On the basis of the two models, F-test statistics are proposed to test association between the QTL and markers. By a simulation study, we show that the two models have reasonable type I error rates for a data set of moderate sample size. The noncentrality parameter approximations of F-test statistics are derived to make power calculation and comparison. By a simulation study, it is found that the noncentrality parameter approximations of F-test statistics work very well. Using the noncentrality parameter approximations, we compare the power of the two models with that of the HTR. In addition, a simulation study is performed to make a comparison on the basis of the haplotype frequencies of 10 SNPs of angiotensin-1 converting enzyme (ACE) genes. PMID:16172503
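For a single marker, the additive effect model's F-test reduces to the familiar simple-regression F statistic comparing an intercept-only fit against intercept plus allele dosage. A self-contained stdlib sketch with simulated dosages and a simulated trait (an illustration, not the authors' code):

```python
import random

def additive_effect_f(dosage, trait):
    """F statistic (1, n-2 df) testing an additive allele-dosage effect:
    intercept-only vs. intercept + dosage regression.  For simple
    regression this equals (n-2) * r^2 / (1 - r^2), where r is the
    Pearson correlation between dosage and trait."""
    n = len(trait)
    mx = sum(dosage) / n
    my = sum(trait) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(dosage, trait))
    sxx = sum((x - mx) ** 2 for x in dosage)
    syy = sum((y - my) ** 2 for y in trait)
    r2 = sxy * sxy / (sxx * syy)
    return (n - 2) * r2 / (1.0 - r2)

rng = random.Random(7)
dosage = [rng.choice([0.0, 1.0, 2.0]) for _ in range(200)]        # genotypes 0/1/2
trait = [1.0 + 0.5 * x + rng.gauss(0.0, 1.0) for x in dosage]     # additive QTL effect
print(round(additive_effect_f(dosage, trait), 1))
```

With a true additive effect of 0.5 and 200 individuals, the noncentrality parameter is large, so the F statistic comes out far above the usual significance cutoff; the genotype effect model of the abstract would add a dominance column to the design matrix.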

  7. Predictors of Cerebral Palsy in Very Preterm Infants: The EPIPAGE Prospective Population-Based Cohort Study

    ERIC Educational Resources Information Center

    Beaino, Ghada; Khoshnood, Babak; Kaminski, Monique; Pierrat, Veronique; Marret, Stephane; Matis, Jacqueline; Ledesert, Bernard; Thiriez, Gerard; Fresson, Jeanne; Roze, Jean-Christophe; Zupan-Simunek, Veronique; Arnaud, Catherine; Burguet, Antoine; Larroque, Beatrice; Breart, Gerard; Ancel, Pierre-Yves

    2010-01-01

    Aim: The aim of this study was to assess the independent role of cerebral lesions on ultrasound scan, and several other neonatal and obstetric factors, as potential predictors of cerebral palsy (CP) in a large population-based cohort of very preterm infants. Method: As part of EPIPAGE, a population-based prospective cohort study, perinatal data…

  8. A dual-threshold radar detection system

    NASA Astrophysics Data System (ADS)

    Hammerle, K. J.

    It is known that the beam agility of a phased-array radar can be utilized to enhance target detection capability as compared to a radar which has the same power but which radiates its energy uniformly over the solid angle being surveilled. A dual-threshold approach for realizing this enhancement is examined. Quantitative results are presented parametrically for four signal fluctuation models. The study also identifies the optimum combination of dual-threshold design parameters for each target model under a wide range of imposed system constraints such as the allowed number of false alarms per beam position. It is shown that under certain imposed constraints, no enhancement is possible.
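The single-threshold building block behind such detection analyses can be sketched for one classic fluctuation model (Swerling I target, square-law detector, single pulse). This is a textbook illustration under those assumptions, not the dual-threshold scheme of the study itself:

```python
import math

def detection_threshold(pfa):
    """Normalized square-law detector threshold (noise-only power
    exponentially distributed with unit mean) for a given false-alarm
    probability: Pfa = exp(-T)  =>  T = -ln(Pfa)."""
    return -math.log(pfa)

def pd_swerling1(pfa, snr):
    """Single-pulse detection probability for a Swerling I (slowly
    fluctuating) target: Pd = Pfa ** (1 / (1 + SNR)), SNR in linear units."""
    return pfa ** (1.0 / (1.0 + snr))

pfa = 1e-6
snr = 10 ** (20.0 / 10.0)  # 20 dB mean SNR
print(round(detection_threshold(pfa), 2))   # ~13.82
print(round(pd_swerling1(pfa, snr), 3))     # ~0.872
```

A dual-threshold design, as studied in the report, would trade off a lower first threshold (more revisits) against a confirming second threshold so the overall false alarms per beam position stay within the imposed constraint.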

  9. Bowel Obstruction in Elderly Ovarian Cancer Patients: A Population-Based Study

    PubMed Central

    Mooney, Stephen J.; Winner, Megan; Hershman, Dawn L.; Wright, Jason D.; Feingold, Daniel L.; Allendorf, John D.; Neugut, Alfred I.

    2013-01-01

    PURPOSE Bowel obstruction is a common pre-terminal event in abdominal/pelvic cancer that has mainly been described in small single-institution studies. We used a large, population-based database to investigate the incidence, management, and outcomes of obstruction in ovarian cancer patients. PATIENTS AND METHODS We identified patients with stages IC-IV ovarian cancer, aged 65 years or older, in the Surveillance, Epidemiology and End Results (SEER)-Medicare database diagnosed between January 1, 1991 and December 31, 2005. We modeled predictors of inpatient hospitalization for bowel obstruction after cancer diagnosis, categorized management of obstruction, and analyzed the associations between treatment for obstruction and outcomes. RESULTS Of 8607 women with ovarian cancer, 1518 (17.6%) were hospitalized for obstruction subsequent to cancer diagnosis. Obstruction at cancer diagnosis (HR=2.17, 95% CI: 1.86–2.52) and mucinous tumor histology (HR=1.45, 95% CI: 1.15–1.83) were associated with increased risk of subsequent obstruction. Surgical management of obstruction was associated with lower 30-day mortality (13.4% in women managed surgically vs. 20.2% in women managed non-surgically), but equivalent survival after 30 days and equivalent rates of post-obstruction chemotherapy. Median post-obstruction survival was 382 days in women with obstructions of adhesive origin and 93 days in others. CONCLUSION In this large-scale, population-based assessment of patients with advanced ovarian cancer, nearly 20% of women developed bowel obstruction after cancer diagnosis. While obstruction due to adhesions did not signal the end of life, all other obstructions were pre-terminal events for the majority of patients regardless of treatment. PMID:23274561

  10. Hip Fracture in People with Erectile Dysfunction: A Nationwide Population-Based Cohort Study

    PubMed Central

    Wu, Chieh-Hsin; Tung, Yi-Ching; Lin, Tzu-Kang; Chai, Chee-Yin; Su, Yu-Feng; Tsai, Tai-Hsin; Tsai, Cheng-Yu; Lu, Ying-Yi; Lin, Chih-Lung

    2016-01-01

The aims of this study were to investigate the risk of hip fracture and contributing factors in patients with erectile dysfunction (ED). This population-based study was performed using the Taiwan National Health Insurance Research Database. The analysis included 4636 patients aged ≥ 40 years who had been diagnosed with ED (International Classification of Diseases, Ninth Revision, Clinical Modification codes 302.72, 607.84) during 1996–2010. The control group included 18,544 randomly selected age-matched patients without ED (1:4 ratio). The association between ED and hip fracture risk was estimated using a Cox proportional hazard regression model. During the follow-up period, 59 (1.27%) patients in the ED group and 140 (0.75%) patients in the non-ED group developed hip fracture. After adjusting for covariates, the overall incidence of hip fracture was 3.74 times higher in the ED group than in the non-ED group (2.03 vs. 0.50 per 1000 person-years, respectively). The difference in the overall incidence of hip fracture was largest during the 3-year follow-up period (hazard ratio = 7.85; 95% confidence interval = 2.94–20.96; P < 0.0001). To the best of our knowledge, this nationwide population-based study is the first to investigate the relationship between ED and subsequent hip fracture in an Asian population. The results showed that ED patients had a higher risk of developing hip fracture. Patients with ED, particularly those aged 40–59 years, should undergo bone mineral density examinations as early as possible and should take measures to reduce the risk of falls. PMID:27078254
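The crude incidence rates quoted above follow directly from event counts and person-years. In the sketch below the person-year denominators are back-calculated from the reported rates purely for illustration; note that the resulting crude rate ratio (about 4.06) differs from the covariate-adjusted estimate of 3.74 in the abstract, which is exactly the effect of adjustment in the Cox model:

```python
def rate_per_1000_py(events, person_years):
    """Crude incidence rate per 1000 person-years."""
    return 1000.0 * events / person_years

# Person-years below are back-calculated from the reported rates
# (59 events at 2.03/1000 py; 140 events at 0.50/1000 py), for illustration.
ed_rate = rate_per_1000_py(59, 29064)
non_ed_rate = rate_per_1000_py(140, 280000)
print(round(ed_rate, 2), round(non_ed_rate, 2))   # 2.03 0.5
print(round(ed_rate / non_ed_rate, 2))            # crude rate ratio, 4.06
```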

  11. Organization of population-based cancer control programs: Europe and the world.

    PubMed

    Otter, Renée; Qiao, You-Lin; Burton, Robert; Samiei, Massoud; Parkin, Max; Trapido, Edward; Weller, David; Magrath, Ian; Sutcliffe, Simon

    2009-01-01

    As cancer is to a large extent avoidable and treatable, a cancer control program should be able to reduce mortality and morbidity and improve the quality of life of cancer patients and their families. However, the extent to which the goals of a cancer control program can be achieved will depend on the resource constraints a country faces. Such population-based cancer control plans should prioritize effective interventions and programs that are beneficial to the largest part of the population, and should include activities devoted to prevention, screening and early detection, treatment, palliation and end-of-life care, and rehabilitation. In order to develop a successful cancer control program, leadership and the relevant stakeholders, including patient organizations, need to be identified early on in the process so that all partners can take ownership and responsibility for the program. Various tools have been developed to aid them in the planning and implementation process. However, countries developing a national cancer control program would benefit from a discussion of different models for planning and delivery of population-based cancer control in settings with differing levels of resource commitment, in order to determine how best to proceed given their current level of commitment, political engagement and resources. As the priority assigned to different components of cancer control will differ depending on available resources and the burden and pattern of cancer, it is important to consider the relative roles of prevention, early detection, diagnosis, treatment, rehabilitation and palliative care in a cancer control program, as well as how to align available resources to meet prioritized needs. Experiences from countries with differing levels of resources are presented and serve to illustrate the difficulties in developing and implementing cancer control programs, as well as the innovative strategies that are being used to maximize available resources and

  12. Preoperative risk score predicting 90-day mortality after liver resection in a population-based study.

    PubMed

    Chang, Chun-Ming; Yin, Wen-Yao; Su, Yu-Chieh; Wei, Chang-Kao; Lee, Cheng-Hung; Juang, Shiun-Yang; Chen, Yi-Ting; Chen, Jin-Cherng; Lee, Ching-Chih

    2014-09-01

The impact of important preexisting comorbidities, such as liver and renal disease, on the outcome of liver resection remains unclear. Identification of patients at risk of mortality will aid in improving preoperative preparations. The purpose of this study is to develop and validate a population-based score, based on available preoperative and predictable parameters, predicting 90-day mortality after liver resection, using data from a hepatitis-endemic country. We identified 13,159 patients who underwent liver resection between 2002 and 2006 in the Taiwan National Health Insurance Research Database. In a randomly selected half of the total patients, multivariate logistic regression analysis was used to develop a prediction score for estimating the risk of 90-day mortality by patient demographics, preoperative liver disease and comorbidities, indication for surgery, and procedure type. The score was validated with the remaining half of the patients. Overall 90-day mortality was 3.9%. Predictive characteristics included in the model were age, preexisting cirrhosis-related complications, ischemic heart disease, heart failure, cerebrovascular disease, renal disease, malignancy, and procedure type. Four risk groups were stratified by mortality scores of 1.1%, 2.2%, 7.7%, and 15%. Preexisting renal disease and cirrhosis-related complications were the strongest predictors. The score discriminated well in both the derivation and validation sets, with c-statistics of 0.75 and 0.75, respectively. This population-based score could identify patients at risk of 90-day mortality before liver resection. Preexisting renal disease and cirrhosis-related complications had the strongest influence on mortality. This score enables preoperative risk stratification, decision-making, quality assessment, and counseling for individual patients.
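The c-statistic used to assess the score's discrimination is simply the concordance over all (death, survivor) pairs. A minimal sketch with hypothetical risk scores (not data from the study):

```python
def c_statistic(event_scores, nonevent_scores):
    """Concordance (c-statistic / AUC): the fraction of (event, non-event)
    pairs in which the event carries the higher risk score; ties count 0.5."""
    concordant = 0.0
    for e in event_scores:
        for n in nonevent_scores:
            if e > n:
                concordant += 1.0
            elif e == n:
                concordant += 0.5
    return concordant / (len(event_scores) * len(nonevent_scores))

# Hypothetical scores for patients who died vs. survived within 90 days
died = [8, 6, 7, 5]
survived = [3, 4, 5, 2, 1, 6]
print(round(c_statistic(died, survived), 3))  # -> 0.917
```

A value of 0.5 means no discrimination and 1.0 perfect discrimination, so the study's c-statistic of 0.75 in both halves indicates moderately good, stable discrimination.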

  13. Cost burden of type 2 diabetes in Germany: results from the population-based KORA studies

    PubMed Central

    Ulrich, Susanne; Holle, Rolf; Wacker, Margarethe; Stark, Renee; Icks, Andrea; Thorand, Barbara; Peters, Annette

    2016-01-01

Objective To examine the impact of type 2 diabetes on direct and indirect costs and to describe the effect of relevant diabetes-related factors, such as type of treatment or glycaemic control, on direct costs. Design Bottom-up excess cost analysis from a societal perspective based on population-based survey data. Participants 9160 observations from 6803 individuals aged 31–96 years (9.6% with type 2 diabetes) from the population-based KORA (Cooperative Health Research in the Region of Augsburg) studies in Southern Germany. Outcome measures Healthcare usage, productivity losses, and resulting direct and indirect costs. Methods Information on diabetes status, biomedical/sociodemographic variables, medical history and on healthcare usage and productivity losses was assessed in standardised interviews and examinations. Healthcare usage and productivity losses were costed with reference to unit prices, and excess costs of type 2 diabetes were calculated using generalised linear models. Results Individuals with type 2 diabetes had 1.81 (95% CI 1.56 to 2.11) times higher direct (€3352 vs €1849) and 2.07 (1.51 to 2.84) times higher indirect (€4103 vs €1981) annual costs than those without diabetes. Cardiovascular complications, a long diabetes duration and treatment with insulin were significantly associated with increased direct costs; however, glycaemic control was only weakly and non-significantly associated with costs. Conclusions This study illustrates the substantial direct and indirect societal cost burden of type 2 diabetes in Germany. Strong effort is needed to optimise care to avoid progression of the disease and costly complications. PMID:27872118

  14. A Nationwide Population-Based Cohort Study of Migraine and Organic-Psychogenic Erectile Dysfunction

    PubMed Central

    Wu, Szu-Hsien; Chuang, Eric; Chuang, Tien-Yow; Lin, Cheng-Li; Lin, Ming-Chia; Yen, Der-Jen; Kao, Chia-Hung

    2016-01-01

Abstract As chronic illnesses and chronic pain are related to erectile dysfunction (ED), migraine, a prevalent chronic disorder affecting many people worldwide, may negatively affect quality of life as well as sexual function. However, large-scale population-based research on erectile dysfunction and other comorbidities in patients with migraine is quite limited. This longitudinal cohort study aimed to estimate the association between migraine and ED using a nationwide population-based database in Taiwan. The data used for this cohort study were retrieved from the Longitudinal Health Insurance Database 2000 in Taiwan. We identified 5015 patients with migraine and frequency-matched 20,060 controls without migraine from 2000 to 2011. The occurrence of ED was followed up until the end of 2011. We used Cox proportional hazard regression models to analyze the risks of ED. The overall incidence of ED was 1.78-fold greater in the migraine cohort than in the comparison cohort (23.3 vs 10.5 per 10,000 person-years; 95% confidence interval [CI] = 1.31–2.41). Furthermore, patients with migraine were 1.75-fold more likely to develop organic ED (95% CI = 1.27–2.41) than were the comparison cohort. Migraine patients with anxiety had a 3.6-fold higher hazard of being diagnosed with ED than did comparison subjects without anxiety (95% CI, 2.10–6.18). The results support that patients with migraine have a higher incidence of being diagnosed with ED, particularly those with the comorbidity of anxiety. PMID:26962838

  15. Correlation between systemic lupus erythematosus and malignancies: a cross-sectional population-based study.

    PubMed

    Azrielant, Shir; Tiosano, Shmuel; Watad, Abdulla; Mahroum, Naim; Whitby, Aaron; Comaneshter, Doron; Cohen, Arnon D; Amital, Howard

    2017-01-14

    Autoimmune conditions reflect dysregulation of the immune system; this may be of clinical significance in the development of several malignancies. Previous studies show an association between systemic lupus erythematosus (SLE) and the development of malignancies; however, their investigations into the development of specific malignancies are inconsistent, and their external validity may be questionable. The main objective of this study is to investigate the association between the presence of SLE and various malignancies, in a large-scale population-based study. Data for this study was collected from Clalit Health Services, the largest state-mandated health service organization in Israel. All adult members diagnosed with SLE were included (n = 5018) and their age and sex-matched controls (n = 25,090), creating a cross-sectional population-based study. Medical records of all subjects were analyzed for documentation of malignancies. Logistic regression models were built separately for each malignant condition, controlling for age, gender, BMI, smoking, and socioeconomic status. Diagnosis of malignancy (of any type) was more prevalent in the SLE population (odds ratio [OR] 3.35, 95% confidence interval [CI] 3.02-3.72). SLE diagnosis was also found to be independently associated with higher proportions of non-Hodgkin lymphoma (OR 3.02, 95% CI 2.72-3.33), Hodgkin lymphoma (OR 2.43, 95% CI 1.88-2.99), multiple myeloma (OR 2.57, 95% CI 1.85-3.28), cervix uteri malignancies (OR 1.65, 95% CI 1.10-2.20), and genital organ malignancies (OR 2.32, 95% CI 1.42-3.22), after adjustment for confounding variables. The presence of an SLE diagnosis was found to be independently associated with higher proportions of malignancies, particularly hematologic malignancies. These findings should be considered while treating SLE patients, and possibly supplement their screening routine.
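
The logistic regression models above yield adjusted odds ratios; the unadjusted version of such an association can be illustrated from a simple 2×2 table. A minimal sketch, with a hypothetical helper and purely illustrative counts (not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Illustrative counts only: OR = (10*40)/(20*5) = 4.0
print(odds_ratio_ci(10, 20, 5, 40))
```

Adjusted ORs such as those reported here additionally control for age, gender, BMI, smoking, and socioeconomic status via the regression model.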

  16. Threshold pion photoproduction and chiral symmetry

    SciTech Connect

    Bernstein, A.M.; Guillian, E.

    1992-12-01

    Experiments on the γp → π⁰p threshold reaction (performed at Saclay and Mainz) have attracted considerable attention because they test low energy, QCD related, predictions. The latest analyses of these data have indicated that the threshold value for the (s wave) electric dipole amplitude (E₀₊) is in agreement with "low energy theorems" based on current algebra (PCAC). However there was a strong energy dependence for this amplitude which makes it problematical to compare theory and experiment at only one point, the π⁰ threshold. All of the previous analyses made model dependent assumptions about the p wave multipoles. The authors have performed, for the first time, a model independent analysis of the total and differential cross section data. In agreement with their previous analysis, and with the PCAC prediction, they obtain a threshold value of E₀₊ = (2.0 ± 0.2) × 10⁻³/m_π. However the slope of this amplitude does not vary rapidly with energy, which makes the question of at what energy to compare the threshold values with theory less of a problem. A comparison with theory and previous analyses will be presented.

  17. Estimating the personal cure rate of cancer patients using population-based grouped cancer survival data.

    PubMed

    Yu, Binbing; Tiwari, Ram C; Feuer, Eric J

    2011-06-01

    Cancer patients are subject to multiple competing risks of death and may die from causes other than the cancer diagnosed. The probability of not dying from the cancer diagnosed, which is one of the patients' main concerns, is sometimes called the 'personal cure' rate. Two approaches of modelling competing-risk survival data, namely the cause-specific hazards approach and the mixture model approach, have been used to model competing-risk survival data. In this article, we first show the connection and differences between crude cause-specific survival in the presence of other causes and net survival in the absence of other causes. The mixture survival model is extended to population-based grouped survival data to estimate the personal cure rate. Using the colorectal cancer survival data from the Surveillance, Epidemiology and End Results Programme, we estimate the probabilities of dying from colorectal cancer, heart disease, and other causes by age at diagnosis, race and American Joint Committee on Cancer stage.
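
Under the strong simplifying assumption of constant cause-specific hazards, the crude probability of eventually dying from each competing cause reduces to a ratio of hazards, and the "personal cure" rate is its complement. A toy sketch of that idea (not the article's mixture model, which handles grouped survival data and covariates):

```python
def crude_death_probabilities(hazards):
    """Given constant cause-specific hazards (only their ratios matter),
    the crude probability of dying from cause k is h_k / sum(h).
    1 minus the crude probability for the diagnosed cancer is the
    'personal cure' rate under this simplification."""
    total = sum(hazards.values())
    return {cause: h / total for cause, h in hazards.items()}

# Illustrative relative hazards, not SEER estimates:
p = crude_death_probabilities({"colorectal": 5, "heart": 2, "other": 3})
print(p)                      # colorectal 0.5, heart 0.2, other 0.3
print(1 - p["colorectal"])    # 'personal cure' rate = 0.5
```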

  18. Epidemic thresholds for bipartite networks

    NASA Astrophysics Data System (ADS)

    Hernández, D. G.; Risau-Gusman, S.

    2013-11-01

    It is well known that sexually transmitted diseases (STD) spread across a network of human sexual contacts. This network is most often bipartite, as most STD are transmitted between men and women. Even though network models in epidemiology have quite a long history now, there are few general results about bipartite networks. One of them is the simple dependence, predicted using the mean field approximation, between the epidemic threshold and the average and variance of the degree distribution of the network. Here we show that going beyond this approximation can lead to qualitatively different results that are supported by numerical simulations. One of the new features, that can be relevant for applications, is the existence of a critical value for the infectivity of each population, below which no epidemics can arise, regardless of the value of the infectivity of the other population.
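
The mean-field dependence on the average and variance of the degree distribution can be sketched as follows, assuming the standard condition that an outbreak requires the product of the two populations' transmissibilities to exceed the reciprocal of the product of their mean excess degrees. The function names and this exact form are illustrative; the paper's point is precisely that corrections beyond this approximation matter:

```python
def moments(degrees):
    """First and second moments of a degree sequence."""
    n = len(degrees)
    k1 = sum(degrees) / n
    k2 = sum(d * d for d in degrees) / n
    return k1, k2

def bipartite_threshold_product(deg_m, deg_f):
    """Mean-field epidemic condition for a bipartite contact network:
    spreading requires T_m * T_f to exceed the returned critical value,
    1 / (q_m * q_f), where q = <k(k-1)>/<k> is the mean excess degree
    of each population."""
    k1m, k2m = moments(deg_m)
    k1f, k2f = moments(deg_f)
    qm = (k2m - k1m) / k1m
    qf = (k2f - k1f) / k1f
    return 1.0 / (qm * qf)

# Regular bipartite network of degree 2 on both sides: critical T_m*T_f = 1.0
print(bipartite_threshold_product([2, 2, 2], [2, 2, 2]))
```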

  19. Increment Threshold Functions in Retinopathy of Prematurity

    PubMed Central

    Hansen, Ronald M.; Moskowitz, Anne; Bush, Jennifer N.; Fulton, Anne B.

    2016-01-01

    Purpose To assess scotopic background adaptation in subjects with a history of preterm birth and retinopathy of prematurity (ROP). Retinopathy of prematurity is known to have long-term effects on rod photoreceptor and rod-mediated postreceptor retinal function. Methods Rod-mediated thresholds for detection of 3° diameter, 50 ms stimuli presented 20° from fixation were measured using a spatial forced-choice method in 36 subjects (aged 9–17 years) with a history of preterm birth and 11 age-similar term-born subjects. Thresholds were measured first in the dark-adapted condition and then in the presence of 6 steady background lights (−2.8 to +2.0 log scot td). A model of the increment threshold function was fit to each subject's thresholds to estimate the dark-adapted threshold (TDA) and the Eigengrau (A0, the background that elevates threshold 0.3 log unit above TDA). Results In subjects with a history of severe ROP, both TDA and A0 were significantly elevated relative to those in former preterms who never had ROP and term-born control subjects. Subjects who had mild ROP had normal TDA but elevated A0. Neither TDA nor A0 differed significantly between former preterms who never had ROP and term-born controls. Conclusions The results suggest that in severe ROP, threshold is affected at a preadaptation site, possibly the rod outer segment. In mild ROP, changes in the Eigengrau may reflect increased intrinsic noise in the photoreceptor or postreceptor circuitry or both. PMID:27145476
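
The abstract's definition of the Eigengrau (the background that elevates threshold 0.3 log unit, i.e. a factor of 2, above the dark-adapted threshold) is consistent with the classic Weber-type increment threshold function. A minimal sketch assuming that form (the exact model fitted in the paper may differ):

```python
import math

def increment_threshold(background, t_da, a0):
    """Classic increment-threshold (Weber) function:
    T(B) = T_DA * (1 + B / A0),
    where t_da is the dark-adapted threshold and a0 (the Eigengrau)
    is the background that doubles the threshold."""
    return t_da * (1.0 + background / a0)

# At B = A0 the threshold is exactly twice T_DA, i.e. 0.301 log unit higher:
t = increment_threshold(background=1.0, t_da=0.01, a0=1.0)
print(math.log10(t / 0.01))  # ~0.301 log units above T_DA
```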

  20. Predictors of Colorectal Cancer Survival in Golestan, Iran: A Population-based Study

    PubMed Central

    Aryaie, Mohammad; Roshandel, Gholamreza; Semnani, Shahryar; Asadi-Lari, Mohsen; Aarabi, Mohsen; Vakili, Mohammad Ali; Kazemnejhad, Vahideh; Sedaghat, Seyed Mehdi

    2013-01-01

    OBJECTIVES We aimed to investigate factors associated with colorectal cancer survival in Golestan, Iran. METHODS We used a population-based cancer registry to recruit study subjects. All patients registered since 2004 were contacted and data were collected using structured questionnaires and trained interviewers. All existing evidence for determining cancer stage was also collected. The time from first diagnosis to death was compared in patients according to their stage of cancer using the Kaplan-Meier method. A Cox proportional hazard model was built to examine their survival experience by taking into account other covariates. RESULTS Out of a total of 345 subjects, 227 were traced. Median age of the subjects was 54 and more than 42% were under 50 years old. We found 132 deaths among these patients, 5 of which were non-colorectal related deaths. The median survival time for the entire cohort was 3.56 years. A borderline significant difference in survival experience was detected for ethnicity (log rank test, p=0.053). Using Cox proportional hazard modeling, only cancer stage remained significantly associated with time of death in the final model. CONCLUSIONS Colorectal cancer occurs at a younger age among people living in Golestan province. A very young age at presentation and what appears to be a high proportion of patients presenting with late-stage disease in this area suggest this population might benefit substantially from early diagnosis through age-adapted screening programs. PMID:23807907
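
The Kaplan-Meier estimate used in studies like this one can be written in a few lines. A toy implementation (illustrative, not the authors' analysis code) that handles tied event times and right-censoring:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate from (time, event) pairs, where
    event is 1 for a death and 0 for a censored observation.
    Returns the step curve as a list of (time, S(t)) at each death time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = at_t = 0
        while i < len(data) and data[i][0] == t:   # group tied times
            at_t += 1
            deaths += data[i][1]
            i += 1
        if deaths:
            surv *= 1.0 - deaths / n_at_risk
            curve.append((t, surv))
        n_at_risk -= at_t
    return curve

# Toy data: deaths at t=1 and t=2, censorings at t=1.5 and t=3
print(kaplan_meier([1, 1.5, 2, 3], [1, 0, 1, 0]))  # [(1, 0.75), (2, 0.375)]
```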

  1. Probabilistic Threshold Criterion

    SciTech Connect

    Gresshoff, M; Hrousis, C A

    2010-03-09

    The Probabilistic Shock Threshold Criterion (PSTC) Project at LLNL develops phenomenological criteria for estimating safety or performance margin on high explosive (HE) initiation in the shock initiation regime, creating tools for safety assessment and design of initiation systems and HE trains in general. Until recently, there has been little foundation for probabilistic assessment of HE initiation scenarios. This work attempts to use probabilistic information that is available from both historic and ongoing tests to develop a basis for such assessment. Current PSTC approaches start with the functional form of the James Initiation Criterion as a backbone, and generalize to include varying areas of initiation and provide a probabilistic response based on test data for 1.8 g/cc (Ultrafine) 1,3,5-triamino-2,4,6-trinitrobenzene (TATB) and LX-17 (92.5% TATB, 7.5% Kel-F 800 binder). Application of the PSTC methodology is presented investigating the safety and performance of a flying plate detonator and the margin of an Ultrafine TATB booster initiating LX-17.

  2. Upper threshold of extracellular neural stimulation

    PubMed Central

    Pangratz-Fuehrer, Susanne; Suh, Bongsoo; Mathieson, Keith; Naik, Natasha; Palanker, Daniel

    2012-01-01

    It is well known that spiking neurons produce action potentials in response to extracellular stimulation above a certain threshold. It is widely assumed that there is no upper limit to somatic stimulation, except for cellular or electrode damage. Here we demonstrate that there is an upper stimulation threshold, above which no action potential can be elicited, and that it lies below the threshold of cellular damage. The existence of this upper stimulation threshold was confirmed in retinal ganglion cells (RGCs) at pulse durations ranging from 5 to 500 μs. The ratio of the upper to lower stimulation thresholds varied typically from 1.7 to 7.6, depending on pulse duration. Computational modeling of extracellular RGC stimulation explained the upper limit by sodium current reversal on the depolarized side of the cell membrane. This was further confirmed by experiments in a medium with a low concentration of sodium. The limited width of the stimulation window may have important implications for the design of electro-neural interfaces, including neural prosthetics. PMID:22993266

  3. Cost-effectiveness thresholds: pros and cons.

    PubMed

    Bertram, Melanie Y; Lauer, Jeremy A; De Joncheere, Kees; Edejer, Tessa; Hutubessy, Raymond; Kieny, Marie-Paule; Hill, Suzanne R

    2016-12-01

    Cost-effectiveness analysis is used to compare the costs and outcomes of alternative policy options. Each resulting cost-effectiveness ratio represents the magnitude of additional health gained per additional unit of resources spent. Cost-effectiveness thresholds allow cost-effectiveness ratios that represent good or very good value for money to be identified. In 2001, the World Health Organization's Commission on Macroeconomics in Health suggested cost-effectiveness thresholds based on multiples of a country's per-capita gross domestic product (GDP). In some contexts, in choosing which health interventions to fund and which not to fund, these thresholds have been used as decision rules. However, experience with the use of such GDP-based thresholds in decision-making processes at country level shows them to lack country specificity and this - in addition to uncertainty in the modelled cost-effectiveness ratios - can lead to the wrong decision on how to spend health-care resources. Cost-effectiveness information should be used alongside other considerations - e.g. budget impact and feasibility considerations - in a transparent decision-making process, rather than in isolation based on a single threshold value. Although cost-effectiveness ratios are undoubtedly informative in assessing value for money, countries should be encouraged to develop a context-specific process for decision-making that is supported by legislation, has stakeholder buy-in, for example the involvement of civil society organizations and patient groups, and is transparent, consistent and fair.
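
The GDP-based decision rule the article critiques can be stated in a few lines. A sketch with hypothetical numbers, using the WHO-CHOICE style bands of 1× and 3× GDP per capita per DALY averted:

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit
    of health effect (e.g., per DALY averted or QALY gained)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

def gdp_based_verdict(icer_value, gdp_per_capita):
    """The rule-of-thumb thresholds criticized in the article:
    < 1x GDP/capita = 'highly cost-effective',
    < 3x GDP/capita = 'cost-effective', else 'not cost-effective'."""
    if icer_value < gdp_per_capita:
        return "highly cost-effective"
    if icer_value < 3 * gdp_per_capita:
        return "cost-effective"
    return "not cost-effective"

# Hypothetical intervention: 80,000 extra cost for 40 extra DALYs averted
r = icer(120000, 60.0, 40000, 20.0)   # 2000 per DALY averted
print(r, gdp_based_verdict(r, gdp_per_capita=1500))
```

The article's argument is that such a verdict should never stand alone: budget impact, feasibility, and a transparent country-specific process matter as much as the ratio itself.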

  4. Upper threshold of extracellular neural stimulation.

    PubMed

    Boinagrov, David; Pangratz-Fuehrer, Susanne; Suh, Bongsoo; Mathieson, Keith; Naik, Natasha; Palanker, Daniel

    2012-12-01

    It is well known that spiking neurons produce action potentials in response to extracellular stimulation above a certain threshold. It is widely assumed that there is no upper limit to somatic stimulation, except for cellular or electrode damage. Here we demonstrate that there is an upper stimulation threshold, above which no action potential can be elicited, and that it lies below the threshold of cellular damage. The existence of this upper stimulation threshold was confirmed in retinal ganglion cells (RGCs) at pulse durations ranging from 5 to 500 μs. The ratio of the upper to lower stimulation thresholds varied typically from 1.7 to 7.6, depending on pulse duration. Computational modeling of extracellular RGC stimulation explained the upper limit by sodium current reversal on the depolarized side of the cell membrane. This was further confirmed by experiments in a medium with a low concentration of sodium. The limited width of the stimulation window may have important implications for the design of electro-neural interfaces, including neural prosthetics.

  5. Threshold concepts in finance: student perspectives

    NASA Astrophysics Data System (ADS)

    Hoadley, Susan; Kyng, Tim; Tickle, Leonie; Wood, Leigh N.

    2015-10-01

    Finance threshold concepts are the essential conceptual knowledge that underpin well-developed financial capabilities and are central to the mastery of finance. In this paper we investigate threshold concepts in finance from the point of view of students, by establishing the extent to which students are aware of threshold concepts identified by finance academics. In addition, we investigate the potential of a framework of different types of knowledge to differentiate the delivery of the finance curriculum and the role of modelling in finance. Our purpose is to identify ways to improve curriculum design and delivery, leading to better student outcomes. Whilst we find that there is significant overlap between what students identify as important in finance and the threshold concepts identified by academics, much of this overlap is expressed by indirect reference to the concepts. Further, whilst different types of knowledge are apparent in the student data, there is evidence that students do not necessarily distinguish conceptual from other types of knowledge. As well as investigating the finance curriculum, the research demonstrates the use of threshold concepts to compare and contrast student and academic perceptions of a discipline and, as such, is of interest to researchers in education and other disciplines.

  6. Threshold phenomena in soft matter

    NASA Astrophysics Data System (ADS)

    Huang, Zhibin

    Although two different fields are covered, this thesis focuses mainly on threshold behaviors in both liquid crystal and fluid dynamics systems. A method of rubbed polyimide is used to obtain pretilt. Sufficiently strong rubbing of a polyimide (SE-1211) results in a large polar pretilt of the liquid crystal director with respect to the homeotropic orientation. There exists a threshold rubbing strength required to induce nonzero pretilt. For the homologous liquid crystal series alkyl-cyanobiphenyl, we found that the threshold rubbing strength is a monotonic function of the number of methylene units. A dual easy axis model is then used to explain the results. Freedericksz transition measurements have been used to determine the quadratic and quartic coefficients associated with the molecules' tilt with respect to the layer normal in surface-induced smectic layers in the nematic phase above the smectic-A-nematic phase transition temperature. Both the quadratic and quartic coefficients are consistent with the scaling relationship predicted in theory, and their ratio is approximately constant. A Rayleigh-Taylor instability experiment is performed by using a magnetic field gradient to draw down a low-density but highly paramagnetic fluid below a more dense fluid in a Hele-Shaw cell. When the magnetic field is turned off, the RT instability occurs in situ and the growth of the most unstable wavevector is measured as a function of time. The wavelength of the RT instability along with the growth rate was measured as a function of capillary number (which is related to the density difference and interfacial tension between the two fluids). A theory for the instability that permits different viscosities for the two immiscible fluids was developed, and good agreement was found with the experimental results. The technique of magnetic levitation promises to broaden significantly the accessible parameter space of gravitational interfacial instability experiments. A method is

  7. Differential associations of plasma lipids with incident dementia and dementia subtypes in the 3C Study: A longitudinal, population-based prospective cohort study

    PubMed Central

    Schilling, Sabrina; Tzourio, Christophe; Soumaré, Aïcha; Kaffashian, Sara; Dartigues, Jean-François; Ancelin, Marie-Laure; Dufouil, Carole; Debette, Stéphanie

    2017-01-01

    Background Vascular risk factors have been proposed as important targets for the prevention of dementia. As lipid fractions represent easily modifiable targets, we examined the longitudinal relationship of baseline lipid fractions with 13-y incident dementia and its subtypes (Alzheimer disease [AD] and mixed or vascular dementia) in older community-dwelling persons. Methods and findings Non-institutionalized persons aged 65+ y (n = 9,294) were recruited for the Three-City Study (3C Study), a population-based cohort study from the electoral rolls of the cities of Dijon, Bordeaux, and Montpellier, France, between March 1999 and March 2001. Follow-up examinations were performed every 2 y after the baseline assessment. The final study sample comprised 7,470 participants from the 3C Study (mean age ± standard deviation [SD] 73.8 ± 5.3 y, 61.0% women) who were prospectively followed up for up to 13 y. Fasting lipid fractions (triglycerides [TGs], high-density lipoprotein cholesterol [HDL-C], low-density lipoprotein cholesterol [LDL-C], total cholesterol [TC]) were studied as continuous variables, and results are reported per SD increase of each lipid fraction. Incident dementia and its subtypes were studied as censored variables using Cox models with age as time scale. Analyses were adjusted for sex, study center, and educational level, as well as vascular risk factors and apolipoprotein E (APOE) ε4 genotype. We corrected for multiple testing, yielding a significance threshold of 0.0169. p-Values above the significance threshold but less than 0.05 were considered nominally significant. During a mean (± SD) follow-up period of 7.9 ± 3.6 y, 779 participants developed incident dementia (n = 532 AD and n = 154 mixed or vascular dementia). Higher LDL-C and TC concentrations at baseline were associated with an increased risk of AD (hazard ratio [HR] per SD increase = 1.13 [95% CI 1.04–1.22], p = 0.0045, and HR = 1.12 [1.03–1.22], p = 0.0072, respectively). These

  8. Brown Norway rat asthma model of diphenylmethane-4,4'-diisocyanate (MDI): determination of the elicitation threshold concentration after inhalation sensitization.

    PubMed

    Pauluhn, Jürgen; Poole, Alan

    2011-03-15

    Occupational exposure to polymeric diphenylmethane-diisocyanate (MDI), a known human asthmagen, can be attributed to two potential routes: the skin and the respiratory tract. While the skin as the route of sensitization was the focus of a previous investigation (Pauluhn, 2008), this paper describes a modified sensitization protocol using a 5-day inhalation exposure (days 0-4) of Brown Norway (BN) rats to concentration × exposure time (C × t) relationships of 1000, 5000, and 10,000 mg MDI/m³ × min at exposure durations of either 10 or 360 min. Apart from the differences in the induction protocol, all other experimental variables remained identical. This was followed by four 30-min inhalation challenges to 40 mg MDI/m³ on target days 20, 25, 50, and 65. After the last challenge, changes in breathing patterns delayed in onset were recorded and allergic lung inflammation was probed by bronchoalveolar lavage (BAL). In a subsequent study, groups of rats were sensitized using the 10-min C × t protocol and challenged three times at 40 mg MDI/m³. At the fourth challenge a dose-escalation regimen was used to determine the elicitation threshold in 'asthmatic' rats. Consistent with the skin-sensitization protocol, the most sensitive endpoints characterizing an allergic pulmonary inflammation were again BAL neutrophils and physiological measurements showing respiratory changes delayed in onset. The dose-escalation challenge yielded an elicitation threshold of 5 mg MDI-aerosol/m³ at 30-min challenge duration. In topically sensitized rats this threshold was estimated to be 3 mg/m³. In summary, these data suggest the C × t product of MDI-aerosol that triggers an elicitation response in 'asthmatic' rats is slightly below that causing acute pulmonary irritation in naïve rats. The high concentration delivered to the respiratory tract during the 10-min exposure period elicited a more vigorous response than the similar C × t at 360 min. Therefore, short high-level exposure

  9. Combined analysis of near-threshold production of ω and φ mesons in nucleon-nucleon collisions within an effective meson-nucleon model

    NASA Astrophysics Data System (ADS)

    Kaptari, L. P.; Kämpfer, B.

    2005-02-01

    Vector meson (V = ω, φ) production in near-threshold elementary nucleon-nucleon collisions pp → ppV, pn → pnV and pn → dV is studied within an effective meson-nucleon theory. It is shown that a set of effective parameters can be established to describe fairly well the available experimental data of angular distributions and the energy dependence of the total cross-sections without explicit implementation of the Okubo-Zweig-Iizuka rule violation. Isospin effects are considered in detail and compared with experimental data whenever available.

  10. Passive-aggressive (negativistic) personality disorder: a population-based twin study.

    PubMed

    Czajkowski, Nikolai; Kendler, Kenneth S; Jacobson, Kristen C; Tambs, Kristian; Røysamb, Espen; Reichborn-Kjennerud, Ted

    2008-02-01

    The objective of this study was to investigate the familial aggregation of passive aggressive personality disorder (PAPD), and explore issues regarding PAPD raised by the DSM-IV Personality Disorder Work Group. Two thousand seven hundred and ninety-four Norwegian twins from the population-based Norwegian Institute of Public Health Twin Panel were interviewed with the Structured Interview for DSM-IV Personality (SIDP-IV). Because of the rarity of the twins meeting full diagnostic criteria for PAPD a dimensional representation of the disorder was used for the analyses. Overlap with other axis II disorders was assessed by polychoric correlations, while familial aggregation was explored by structural equation twin models. Overlap was highest with paranoid (r = 0.52) and borderline personality disorder (r = 0.53), and lowest with schizoid (r = 0.26). Significant familial aggregation was found for PAPD. The twin correlations and parameter estimates in the full model indicated genetic and shared environmental effects for females, and only shared environmental effects for males, but the prevalence of endorsed PAPD criteria in this community sample was too low to permit us to conclude with confidence regarding the relative influence of genetic and shared environmental factors on the familial aggregation of PAPD.

  11. Associations of childhood eczema severity: A US population based study

    PubMed Central

    Silverberg, Jonathan I.; Simpson, Eric L.

    2014-01-01

    Little is known about predictors of eczema severity in the US population. We sought to determine the distribution and associations of childhood eczema severity in the US. We analyzed data from the 2007 National Survey of Children's Health, a prospective questionnaire-based study of a nationally representative sample of 91,642 children (0-17yr). The prevalence of childhood eczema was 12.97% (95% confidence interval [95% CI]=12.42–13.53); 67.0% (95% CI: 64.8–69.2) had mild, 26.0% (95% CI: 23.9–28.1) moderate and 7.0% (95% CI: 5.8–8.3) severe disease. There was significant statewide-variation of the distribution of eczema severity (Rao-Scott chi square, P=0.004), with highest rates of severe disease in Northeastern and Midwestern states. In univariate models, eczema severity was increased with older age, African-American and Hispanic race/ethnicity, lower household income, oldest child in the family, home with a single mother, lower paternal/maternal education level, maternal general health, maternal/paternal emotional health, dilapidated housing and garbage on the streets. In multivariate survey logistic regression models using stepwise and backward selection, moderate–severe eczema was associated with older age, lower household income and fair or poor maternal health, but inversely associated with birthplace outside the US. These data indicate that environmental and/or lifestyle factors play an important role in eczema severity. PMID:24819283

  12. Propranolol Reduces Cancer Risk: A Population-Based Cohort Study.

    PubMed

    Chang, Ping-Ying; Huang, Wen-Yen; Lin, Cheng-Li; Huang, Tzu-Chuan; Wu, Yi-Ying; Chen, Jia-Hong; Kao, Chia-Hung

    2015-07-01

    β-Blockers have been reported to exhibit potential anticancer effects in cancer cell lines and animal models. However, clinical studies have yielded inconsistent results regarding cancer outcomes and cancer risk when β-blockers were used. This study investigated the association between propranolol and cancer risk. Between January 1, 2000 and December 31, 2011, a patient cohort was extracted from the Longitudinal Health Insurance Database 2000, a subset of the Taiwan National Health Insurance Research Database. A propranolol cohort (propranolol usage >6 months) and a nonpropranolol cohort were matched using a propensity score. Cox proportional hazard models were used to estimate the hazard ratio (HR) and 95% confidence intervals (CIs) of cancer associated with propranolol treatment. The study sample comprised 24,238 patients. After a 12-year follow-up period, the cumulative incidence of developing cancer was lower in the propranolol cohort than in the nonpropranolol cohort (HR: 0.75; 95% CI: 0.67-0.85; P < 0.001). Patients with propranolol treatment exhibited significantly lower risks of cancers of the head and neck (HR: 0.58; 95% CI: 0.35-0.95), esophagus (HR: 0.35; 95% CI: 0.13-0.96), stomach (HR: 0.54; 95% CI: 0.30-0.98), colon (HR: 0.68; 95% CI: 0.49-0.93), and prostate (HR: 0.52; 95% CI: 0.33-0.83). The protective effect of propranolol for head and neck, stomach, colon, and prostate cancers was most substantial when exposure duration exceeded 1000 days. This study supports the proposition that propranolol can reduce the risk of head and neck, esophagus, stomach, colon, and prostate cancers. Further prospective study is necessary to confirm these findings.
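
The propensity-score matching of the two cohorts can be illustrated with a greedy nearest-neighbour sketch. The scores, caliper, and algorithm here are hypothetical; the abstract states only that the cohorts were "matched using a propensity score":

```python
def greedy_nearest_match(treated, controls, caliper=0.05):
    """Greedy 1:1 nearest-neighbour matching on propensity scores.
    treated/controls map subject id -> score.  Each control is used at
    most once; a treated subject whose nearest available control lies
    outside the caliper is left unmatched."""
    pairs = []
    available = dict(controls)
    for tid, score in sorted(treated.items(), key=lambda kv: kv[1]):
        if not available:
            break
        best = min(available, key=lambda k: abs(available[k] - score))
        if abs(available[best] - score) <= caliper:
            pairs.append((tid, best))
            del available[best]
    return pairs

# Toy scores (hypothetical):
t = {"t1": 0.30, "t2": 0.70}
c = {"c1": 0.32, "c2": 0.69, "c3": 0.10}
print(greedy_nearest_match(t, c))  # [('t1', 'c1'), ('t2', 'c2')]
```

After matching, outcomes in the paired cohorts can be compared with the Cox models described in the abstract.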

  13. Life below the threshold.

    PubMed

    Castro, C

    1991-01-01

    This article explains that malnutrition, poor health, and limited educational opportunities plague Philippine children -- especially female children -- from families living below the poverty threshold. Nearly 70% of households in the Philippines do not meet the required daily level of nutritional intake. Because it is often -- and incorrectly -- assumed that women's nutritional requirements are lower than men's, women suffer higher rates of malnutrition and poor health. A 1987 study revealed that 11.7% of all elementary students were underweight and 13.9% had stunted growth. Among elementary-school girls, 17% were malnourished and 40% suffered from anemia (among lactating mothers, more than 1/2 are anemic). A 1988 Program for Decentralized Educational Development study showed that grade VI students learn only about 1/2 of what they are supposed to learn. 30% of the children enrolled in grade school drop out before they reach their senior year. The Department of Education, Culture and Sports estimates that some 2.56 million students dropped out of school in 1989. That same year, some 3.7 million children were counted as part of the labor force. In Manila alone, some 60,000 children work the streets, whether doing odd jobs or begging, or turning to crime or prostitution. The article tells the story of a 12-year-old girl named Ging, a 4th grader at a public school and the oldest child in a poor family of 6 children. The undernourished Ging dreams of a good future for her family and sees education as a way out of poverty; unfortunately, her time after school is spent working in the streets or looking after her family. She considers herself luckier than many of the other children working in the streets, since she at least has a family.

  14. Outcome-Driven Thresholds for Home Blood Pressure Measurement

    PubMed Central

    Niiranen, Teemu J.; Asayama, Kei; Thijs, Lutgarde; Johansson, Jouni K.; Ohkubo, Takayoshi; Kikuya, Masahiro; Boggia, José; Hozawa, Atsushi; Sandoya, Edgardo; Stergiou, George S.; Tsuji, Ichiro; Jula, Antti M.; Imai, Yutaka; Staessen, Jan A.

    2013-01-01

    The lack of outcome-driven operational thresholds limits the clinical application of home blood pressure (BP) measurement. Our objective was to determine an outcome-driven reference frame for home BP measurement. We measured home and clinic BP in 6470 participants (mean age, 59.3 years; 56.9% women; 22.4% on antihypertensive treatment) recruited in Ohasama, Japan (n=2520); Montevideo, Uruguay (n=399); Tsurugaya, Japan (n=811); Didima, Greece (n=665); and nationwide in Finland (n=2075). In multivariable-adjusted analyses of individual subject data, we determined home BP thresholds, which yielded 10-year cardiovascular risks similar to those associated with stages 1 (120/80 mm Hg) and 2 (130/85 mm Hg) prehypertension, and stages 1 (140/90 mm Hg) and 2 (160/100 mm Hg) hypertension on clinic measurement. During 8.3 years of follow-up (median), 716 cardiovascular end points, 294 cardiovascular deaths, 393 strokes, and 336 cardiac events occurred in the whole cohort; in untreated participants these numbers were 414, 158, 225, and 194, respectively. In the whole cohort, outcome-driven systolic/diastolic thresholds for the home BP corresponding with stages 1 and 2 prehypertension and stages 1 and 2 hypertension were 121.4/77.7, 127.4/79.9, 133.4/82.2, and 145.4/86.8 mm Hg; in 5018 untreated participants, these thresholds were 118.5/76.9, 125.2/79.7, 131.9/82.4, and 145.3/87.9 mm Hg, respectively. Rounded thresholds for stages 1 and 2 prehypertension and stages 1 and 2 hypertension amounted to 120/75, 125/80, 130/85, and 145/90 mm Hg, respectively. Population-based outcome-driven thresholds for home BP are slightly lower than those currently proposed in hypertension guidelines. Our current findings could inform guidelines and help clinicians in diagnosing and managing patients. PMID:23129700

  15. Threshold Hypothesis: Fact or Artifact?

    ERIC Educational Resources Information Center

    Karwowski, Maciej; Gralewski, Jacek

    2013-01-01

    The threshold hypothesis (TH) assumes the existence of complex relations between creative abilities and intelligence: linear associations below 120 points of IQ and weaker or lack of associations above the threshold. However, diverse results have been obtained over the last six decades--some confirmed the hypothesis and some rejected it. In this…

  16. The Nature of Psychological Thresholds

    ERIC Educational Resources Information Center

    Rouder, Jeffrey N.; Morey, Richard D.

    2009-01-01

    Following G. T. Fechner (1966), thresholds have been conceptualized as the amount of intensity needed to transition between mental states, such as between states of unconsciousness and consciousness. With the advent of the theory of signal detection, however, discrete-state theory and the corresponding notion of threshold have been discounted.…

  17. Health Literacy in Taiwan: A Population-Based Study.

    PubMed

    Duong, Van Tuyen; Lin, I-Feng; Sorensen, Kristine; Pelikan, Jürgen M; Van Den Broucke, Stephan; Lin, Ying-Chin; Chang, Peter Wushou

    2015-11-01

    Data on health literacy (HL) in the population is limited for Asian countries. This study aimed to test the validity of the Mandarin version of the European Health Literacy Survey Questionnaire (HLS-EU-Q) for use in the general public in Taiwan. Multistage stratification random sampling resulted in a sample of 2989 people aged 15 years and above. The HLS-EU-Q was validated by confirmatory factor analysis with excellent model data fit indices. The general HL of the Taiwanese population was 34.4 ± 6.6 on a scale of 50. Multivariate regression analysis showed that higher general HL is significantly associated with the higher ability to pay for medication, higher self-perceived social status, higher frequency of watching health-related TV, and community involvement but associated with younger age. HL is also associated with health status, health behaviors, and health care accessibility and use. The HLS-EU-Q was found to be a useful tool to assess HL and its associated factors in the general population.

  18. A Population Based Twin Study of DSM-5 Maladaptive Personality Domains.

    PubMed

    South, Susan C; Krueger, Robert F; Knudsen, Gun Peggy; Ystrom, Eivind; Czajkowski, Nikolai; Aggen, Steven H; Neale, Michael C; Gillespie, Nathan A; Kendler, Kenneth S; Reichborn-Kjennerud, Ted

    2016-10-31

    Personality disorders (PDs) can be partly captured by dimensional traits, a viewpoint reflected in the most recent Diagnostic and Statistical Manual of Mental Disorders-Fifth Edition (DSM-5) Alternative (Section III) Model for PD classification. The current study adds to the literature on the Alternative Model by examining the magnitude of genetic and environmental influences on 6 domains of maladaptive personality: negative emotionality, detachment, antagonism, disinhibition, compulsivity, and psychoticism. In a large, population-based sample (N = 2,293) of Norwegian male and female twin pairs, we investigated (a) if the domains demonstrated measurement invariance across gender at the phenotypic level, meaning that the relationships between the items and the latent factor were equivalent in men and women; and (b) if genetic and environmental influences on variation in these domains were equivalent across gender. Multiple group confirmatory factor modeling provided evidence that all 6 domain scale measurement models were gender-invariant. The best fitting biometric model for 4 of the 6 domains (negative emotionality, detachment, disinhibition, and compulsivity) was one in which genetic and environmental influences could be set invariant across gender. Evidence for sex differences in psychoticism was mixed, but the only clear evidence for quantitative sex differences was for the antagonism scale, with greater genetic influences found for men than women. Genetic influences across domains were moderate overall (19-37%), in line with previous research using symptom-based measures of PDs. This study adds to the very limited knowledge currently existing on the etiology of maladaptive personality traits. (PsycINFO Database Record

  19. Public assistance, drug testing, and the law: the limits of population-based legal analysis.

    PubMed

    Player, Candice T

    2014-01-01

    In Populations, Public Health and the Law, legal scholar Wendy Parmet urges courts to embrace population-based legal analysis, a public health inspired approach to legal reasoning. Parmet contends that population-based legal analysis offers a way to analyze legal issues--not unlike law and economics--as well as a set of values from which to critique contemporary legal discourse. Population-based analysis has been warmly embraced by the health law community as a bold new way of analyzing legal issues. Still, population-based analysis is not without its problems. At times, Parmet claims too much territory for the population perspective. Moreover, Parmet urges courts to recognize population health as an important norm in legal reasoning. What should we do when the insights of public health and conventional legal reasoning conflict? Still in its infancy, population-based analysis offers little in the way of answers to these questions. This Article applies population-based legal analysis to the constitutional problems that arise when states condition public assistance benefits on passing a drug test, thereby highlighting the strengths of the population perspective and exposing its weaknesses.

  20. Thresholds for Cenozoic bipolar glaciation.

    PubMed

    Deconto, Robert M; Pollard, David; Wilson, Paul A; Pälike, Heiko; Lear, Caroline H; Pagani, Mark

    2008-10-02

    The long-standing view of Earth's Cenozoic glacial history calls for the first continental-scale glaciation of Antarctica in the earliest Oligocene epoch (approximately 33.6 million years ago), followed by the onset of northern-hemispheric glacial cycles in the late Pliocene epoch, about 31 million years later. The pivotal early Oligocene event is characterized by a rapid shift of 1.5 parts per thousand in deep-sea benthic oxygen-isotope values (Oi-1) within a few hundred thousand years, reflecting a combination of terrestrial ice growth and deep-sea cooling. The apparent absence of contemporaneous cooling in deep-sea Mg/Ca records, however, has been argued to reflect the growth of more ice than can be accommodated on Antarctica; this, combined with new evidence of continental cooling and ice-rafted debris in the Northern Hemisphere during this period, raises the possibility that Oi-1 represents a precursory bipolar glaciation. Here we test this hypothesis using an isotope-capable global climate/ice-sheet model that accommodates both the long-term decline of Cenozoic atmospheric CO(2) levels and the effects of orbital forcing. We show that the CO(2) threshold below which glaciation occurs in the Northern Hemisphere (approximately 280 p.p.m.v.) is much lower than that for Antarctica (approximately 750 p.p.m.v.). Therefore, the growth of ice sheets in the Northern Hemisphere immediately following Antarctic glaciation would have required rapid CO(2) drawdown within the Oi-1 timeframe, to levels lower than those estimated by geochemical proxies and carbon-cycle models. Instead of bipolar glaciation, we find that Oi-1 is best explained by Antarctic glaciation alone, combined with deep-sea cooling of up to 4 degrees C and Antarctic ice that is less isotopically depleted (-30 to -35 per thousand) than previously suggested. Proxy CO(2) estimates remain above our model's northern-hemispheric glaciation threshold of approximately 280 p.p.m.v. until approximately 25 Myr

  1. Raising the mode instability thresholds of fiber amplifiers

    NASA Astrophysics Data System (ADS)

    Smith, Arlee V.; Smith, Jesse J.

    2014-03-01

    We use our numerical model of mode instability to analyze the influences of spontaneous thermal Rayleigh scattering (sTRS) and laser gain saturation on instability threshold powers. sTRS is stronger than the quantum noise used as the seed power for stimulated thermal Rayleigh scattering in previous studies, so the threshold is reduced by 15-25% with sTRS seeding. Gain saturation is strong in any efficient amplifier and we show how it can be exploited to raise instability thresholds by a factor of two or more while staying below the stimulated Brillouin threshold.

  2. Assessing the Validity of a Stage Measure on Physical Activity in a Population-Based Sample of Individuals with Type 1 or Type 2 Diabetes

    ERIC Educational Resources Information Center

    Plotnikoff, Ronald C.; Lippke, Sonia; Reinbold-Matthews, Melissa; Courneya, Kerry S.; Karunamuni, Nandini; Sigal, Ronald J.; Birkett, Nicholas

    2007-01-01

    This study was designed to test the validity of a transtheoretical model's physical activity (PA) stage measure with intention and different intensities of behavior in a large population-based sample of adults living with diabetes (Type 1 diabetes, n = 697; Type 2 diabetes, n = 1,614) and examine different age groups. The overall…

  3. Factors Affecting Perceptual Threshold in Argus II Retinal Prosthesis Subjects

    PubMed Central

    Ahuja, A. K.; Yeoh, J.; Dorn, J. D.; Caspi, A.; Wuyyuru, V.; McMahon, M. J.; Humayun, M. S.; Greenberg, R. J.; daCruz, L.

    2013-01-01

    Purpose The Argus II epiretinal prosthesis has been developed to provide partial restoration of vision to subjects blinded from outer retinal degenerative disease. Participants were surgically implanted with the system in the United States and Europe in a single arm, prospective, multicenter clinical trial. The purpose of this investigation was to determine which factors affect electrical thresholds in order to inform surgical placement of the device. Methods Electrode–retina and electrode–fovea distances were determined using SD-OCT and fundus photography, respectively. Perceptual threshold to electrical stimulation of electrodes was measured using custom developed software, in which current amplitude was varied until the threshold was found. Full field stimulus light threshold was measured using the Espion D-FST test. Relationships between electrical threshold and these three explanatory variables (electrode–retina distance, electrode–fovea distance, and monocular light threshold) were quantified using regression. Results Regression analysis showed a significant correlation between electrical threshold and electrode–retina distance (R2 = 0.50, P = 0.0002; n = 703 electrodes). 90.3% of electrodes in contact with the macula (n = 207) elicited percepts at charge densities less than 1 mC/cm2/phase. These threshold data also correlated well with ganglion cell density profile (P = 0.03). A weaker, but still significant, inverse correlation was found between light threshold and electrical threshold (R2 < 0.52, P = 0.01). Multivariate modeling indicated that electrode–retina distance and light threshold are highly predictive of electrode threshold (R2 = 0.87; P < 0.0005). Conclusions Taken together, these results suggest that while light threshold should be used to inform patient selection, macular contact of the array is paramount. Translational Relevance Reported Argus II clinical study results are in good agreement with prior in vitro and in vivo studies
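    The regression step described in this abstract (relating perceptual threshold to electrode–retina distance and reporting an R²) can be illustrated with a toy ordinary-least-squares fit. This is a sketch only: the `ols` helper and the data points below are invented for illustration and are not the study's measurements.

    ```python
    # Minimal ordinary-least-squares fit, illustrating how a slope and an
    # R^2 like those reported might be computed. All data here are invented.

    def ols(xs, ys):
        """Return (slope, intercept, r_squared) for a simple linear fit."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sxx = sum((x - mx) ** 2 for x in xs)
        slope = sxy / sxx
        intercept = my - slope * mx
        ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
        ss_tot = sum((y - my) ** 2 for y in ys)
        return slope, intercept, 1 - ss_res / ss_tot

    # Hypothetical electrode-retina distances and thresholds:
    distances = [0, 50, 100, 150, 200]
    thresholds = [20, 35, 45, 70, 80]
    slope, intercept, r2 = ols(distances, thresholds)
    ```

    A positive slope here would correspond to the reported finding that thresholds rise as the electrode lifts away from the retina.
    
    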

  4. Energy Switching Threshold for Climatic Benefits

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Cao, L.; Caldeira, K.

    2013-12-01

    Climate change is one of the great challenges facing humanity currently and in the future. Its most severe impacts may still be avoided if efforts are made to transform current energy systems (1). A transition from the global system of high Greenhouse Gas (GHG) emission electricity generation to low GHG emission energy technologies is required to mitigate climate change (2). Natural gas is increasingly seen as a transitional choice on the way to renewable sources. However, recent research in energy and climate has raised questions about the climate implications of relying more heavily on natural gas. On one hand, a shift to natural gas is promoted as climate mitigation because it has lower carbon per unit energy than coal (3). On the other hand, the effect of switching to natural gas on the development of nuclear power and other renewable energies may offset the benefits of fuel switching (4). Cheap natural gas is causing both coal plants and nuclear plants to close in the US. The objective of this study is to measure and evaluate the threshold of energy switching for climatic benefits. We hypothesized that the threshold ratio of energy switching for climatic benefits is related to the GHG emission factors of energy technologies, but that the relation is not linear. A model was developed to study the fuel-switching threshold for greenhouse gas emission reduction, and the transition from coal and nuclear electricity generation to natural gas electricity generation was analyzed as a case study. The results showed that: (i) the threshold ratio of multi-energy switching for climatic benefits changes with the GHG emission factors of energy technologies; (ii) the mathematical relation between the threshold ratio of energy switching and the GHG emission factors of energies is a curved-surface function; and (iii) the analysis of energy switching thresholds for climatic benefits can be used for energy and climate policy decision support.
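    The basic break-even logic behind such a switching threshold can be sketched in a few lines. This is not the study's model: the emission factors are placeholder values, and the linear break-even condition below is a simplified illustration of why the threshold ratio depends on the emission factors of the technologies being displaced.

    ```python
    # Illustrative fuel-switching break-even calculation. Emission factors
    # (gCO2-eq per kWh) are placeholder assumptions, not the study's data.
    E_COAL = 1000.0   # assumed lifecycle emission factor for coal
    E_GAS = 500.0     # assumed lifecycle emission factor for natural gas
    E_NUCLEAR = 15.0  # assumed lifecycle emission factor for nuclear

    def switching_threshold(e_new, e_high, e_low):
        """Minimum fraction r of displaced generation that must come from the
        high-emission source (the rest from the low-emission source) for a
        switch to the new fuel to cut emissions:
            r*e_high + (1 - r)*e_low > e_new
            =>  r > (e_new - e_low) / (e_high - e_low)
        """
        return (e_new - e_low) / (e_high - e_low)

    # Gas must displace at least this share of coal (vs. nuclear) to help:
    r = switching_threshold(E_GAS, E_COAL, E_NUCLEAR)
    ```

    With these placeholder factors, gas helps the climate only if roughly half or more of the generation it replaces would otherwise have come from coal rather than nuclear, which is the qualitative point the abstract makes about gas displacing nuclear capacity.
    
    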

  5. Ambient Fine Particulate Matter and Mortality among Survivors of Myocardial Infarction: Population-Based Cohort Study

    PubMed Central

    Chen, Hong; Burnett, Richard T.; Copes, Ray; Kwong, Jeffrey C.; Villeneuve, Paul J.; Goldberg, Mark S.; Brook, Robert D.; van Donkelaar, Aaron; Jerrett, Michael; Martin, Randall V.; Brook, Jeffrey R.; Kopp, Alexander; Tu, Jack V.

    2016-01-01

    Background: Survivors of acute myocardial infarction (AMI) are at increased risk of dying within several hours to days following exposure to elevated levels of ambient air pollution. Little is known, however, about the influence of long-term (months to years) air pollution exposure on survival after AMI. Objective: We conducted a population-based cohort study to determine the impact of long-term exposure to fine particulate matter ≤ 2.5 μm in diameter (PM2.5) on post-AMI survival. Methods: We assembled a cohort of 8,873 AMI patients who were admitted to 1 of 86 hospital corporations across Ontario, Canada in 1999–2001. Mortality follow-up for this cohort extended through 2011. Cumulative time-weighted exposures to PM2.5 were derived from satellite observations based on participants’ annual residences during follow-up. We used standard and multilevel spatial random-effects Cox proportional hazards models and adjusted for potential confounders. Results: Between 1999 and 2011, we identified 4,016 nonaccidental deaths, of which 2,147 were from any cardiovascular disease, 1,650 from ischemic heart disease, and 675 from AMI. For each 10-μg/m3 increase in PM2.5, the adjusted hazard ratio (HR10) of nonaccidental mortality was 1.22 [95% confidence interval (CI): 1.03, 1.45]. The association with PM2.5 was robust to sensitivity analyses and appeared stronger for cardiovascular-related mortality: ischemic heart (HR10 = 1.43; 95% CI: 1.12, 1.83) and AMI (HR10 = 1.64; 95% CI: 1.13, 2.40). We estimated that 12.4% of nonaccidental deaths (or 497 deaths) could have been averted if the lowest measured concentration in an urban area (4 μg/m3) had been achieved at all locations over the course of the study. Conclusions: Long-term air pollution exposure adversely affects the survival of AMI patients. Citation: Chen H, Burnett RT, Copes R, Kwong JC, Villeneuve PJ, Goldberg MS, Brook RD, van Donkelaar A, Jerrett M, Martin RV, Brook JR, Kopp A, Tu JV. 2016. Ambient fine

  6. Public Verifiable Multi-sender Identity Based Threshold Signcryption

    NASA Astrophysics Data System (ADS)

    Chen, Wen; Lei, Feiyu; Guo, Fang; Chen, Guang

    In this paper, we present a new identity based signcryption scheme with public verifiability using quadratic residues and pairings over elliptic curves, and give a security proof of the original scheme in the random oracle model. Furthermore, this paper focuses on a multi-sender (t, n) identity based threshold signcryption. Finally, we prove that the threshold scheme is as secure as the original scheme.

  7. Physical Trauma and Amyotrophic Lateral Sclerosis: A Population-Based Study Using Danish National Registries

    PubMed Central

    Seals, Ryan M.; Hansen, Johnni; Gredal, Ole; Weisskopf, Marc G.

    2016-01-01

    Prior studies have suggested that physical trauma might be associated with the development of amyotrophic lateral sclerosis (ALS). We conducted a population-based, individually matched case-control study in Denmark to assess whether hospitalization for trauma is associated with a higher risk of developing ALS. There were 3,650 incident cases of ALS in the Danish National Patient Register from 1982 to 2009. We used risk-set sampling to match each case to 100 age- and sex-matched population controls alive on the date of the case's diagnosis. Odds ratios and 95% confidence intervals were calculated using a conditional logistic regression model. History of trauma diagnosis was also obtained from the Danish Patient Register. When traumas in the 5 years prior to the index date were excluded, there was a borderline association between any trauma and ALS (odds ratio (OR) = 1.09, 95% confidence interval (CI): 0.99, 1.19). A first trauma before age 55 years was associated with ALS (OR = 1.22, 95% CI: 1.08, 1.37), whereas first traumas at older ages were not (OR = 0.97, 95% CI: 0.85, 1.10). Our data suggest that physical trauma at earlier ages is associated with ALS risk. Age at first trauma could help explain discrepancies in results of past studies of trauma and ALS. PMID:26825926

  8. Cognitive functioning in children with internalising, externalising and dysregulation problems: a population-based study.

    PubMed

    Blanken, Laura M E; White, Tonya; Mous, Sabine E; Basten, Maartje; Muetzel, Ryan L; Jaddoe, Vincent W V; Wals, Marjolein; van der Ende, Jan; Verhulst, Frank C; Tiemeier, Henning

    2017-04-01

    Psychiatric symptoms in childhood are closely related to neurocognitive deficits. However, it is unclear whether internalising and externalising symptoms are associated with general or distinct cognitive problems. We examined the relation between different types of psychiatric symptoms and neurocognitive functioning in a population-based sample of 1177 school-aged children. Internalising and externalising behaviour was studied both continuously and categorically. For continuous, variable-centred analyses, broadband scores of internalising and externalising symptoms were used. However, these measures are strongly correlated, which may prevent identification of distinct cognitive patterns. To distinguish groups of children with relatively homogeneous symptom patterns, a latent profile analysis of symptoms at age 6 yielded four exclusive groups of children: a class of children with predominantly internalising symptoms, a class with externalising symptoms, a class with co-occurring internalising and externalising symptoms that resembles the CBCL dysregulation profile, and a class with no problems. Five domains of neurocognitive ability were tested: attention/executive functioning, language, memory and learning, sensorimotor functioning, and visuospatial processing. Consistently, these two different modelling approaches demonstrated that children with internalising and externalising symptoms show distinct cognitive profiles. Children with more externalising symptoms performed lower in the attention/executive functioning domain, while children with more internalising symptoms showed impairment in verbal fluency and memory. In the most severely affected class of children with internalising and externalising symptoms, we found specific impairment in the sensorimotor domain. This study illustrates the specific interrelation of internalising and externalising symptoms and cognition in young children.

  9. Firearm and Nonfirearm Homicide in 5 South African Cities: A Retrospective Population-Based Study

    PubMed Central

    Thompson, Mary Lou; Myers, Jonathan E.

    2014-01-01

    Objective. We assessed the effectiveness of South Africa’s Firearm Control Act (FCA), passed in 2000, on firearm homicide rates compared with rates of nonfirearm homicide across 5 South African cities from 2001 to 2005. Methods. We conducted a retrospective population-based study of 37 067 firearm and nonfirearm homicide cases. Generalized linear models helped estimate and compare time trends of firearm and nonfirearm homicides, adjusting for age, sex, race, day of week, city, year of death, and population size. Results. There was a statistically significant decreasing trend regarding firearm homicides from 2001, with an adjusted year-on-year homicide rate ratio of 0.864 (95% confidence interval [CI] = 0.848, 0.880), representing a decrease of 13.6% per annum. The year-on-year decrease in nonfirearm homicide rates was also significant, but considerably lower at 0.976 (95% CI = 0.954, 0.997). Results suggest that 4585 (95% CI = 4427, 4723) lives were saved across 5 cities from 2001 to 2005 because of the FCA. Conclusions. Strength, timing and consistent decline suggest stricter gun control mediated by the FCA accounted for a significant decrease in homicide overall, and firearm homicide in particular, during the study period. PMID:24432917
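    The year-on-year rate ratios quoted in this abstract compound multiplicatively over the study window; a minimal sketch, using only the ratios reported above:

    ```python
    # A year-on-year rate ratio compounds multiplicatively: an adjusted
    # ratio of 0.864 means each year's rate is 86.4% of the previous
    # year's, i.e. a 13.6% annual decrease.

    def cumulative_ratio(annual_rr, years):
        """Overall rate ratio after `years` year-on-year steps."""
        return annual_rr ** years

    firearm_2001_to_2005 = cumulative_ratio(0.864, 4)      # ~0.56
    nonfirearm_2001_to_2005 = cumulative_ratio(0.976, 4)   # ~0.91
    ```

    Over the four year-on-year steps from 2001 to 2005 this implies roughly a 44% cumulative drop in firearm homicide rates versus about 9% for nonfirearm homicides, consistent with the lives-saved contrast the authors draw.
    
    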

  10. Multiple myeloma and infections: a population-based study on 9253 multiple myeloma patients.

    PubMed

    Blimark, Cecilie; Holmberg, Erik; Mellqvist, Ulf-Henrik; Landgren, Ola; Björkholm, Magnus; Hultcrantz, Malin; Kjellander, Christian; Turesson, Ingemar; Kristinsson, Sigurdur Y

    2015-01-01

    Infections are a major cause of morbidity and mortality in patients with multiple myeloma. To estimate the risk of bacterial and viral infections in multiple myeloma patients, we used population-based data from Sweden to identify all multiple myeloma patients (n=9253) diagnosed from 1988 to 2004 with follow up to 2007 and 34,931 matched controls. Cox proportional hazard models were used to estimate the risk of infections. Overall, multiple myeloma patients had a 7-fold (hazard ratio = 7.1; 95% confidence interval = 6.8-7.4) risk of developing any infection compared to matched controls. The increased risk of developing a bacterial infection was 7-fold (7.1; 6.8-7.4), and for viral infections 10-fold (10.0; 8.9-11.4). Multiple myeloma patients diagnosed in the more recent calendar periods had significantly higher risk of infections compared to controls (P<0.001). At one year of follow up, infection was the underlying cause in 22% of deaths in multiple myeloma patients. Mortality due to infections remained constant during the study period. Our findings confirm that infections represent a major threat to multiple myeloma patients. The effect on infectious complications due to novel drugs introduced in the treatment of multiple myeloma needs to be established and trials on prophylactic measures are needed.

  11. Associating optical measurements of MEO and GEO objects using Population-Based Meta-Heuristic methods

    NASA Astrophysics Data System (ADS)

    Zittersteijn, M.; Vananti, A.; Schildknecht, T.; Dolado Perez, J. C.; Martinot, V.

    2016-11-01

    Currently several thousands of objects are being tracked in the MEO and GEO regions through optical means. The problem faced in this framework is that of Multiple Target Tracking (MTT). The MTT problem quickly becomes an NP-hard combinatorial optimization problem. This means that the effort required to solve the MTT problem increases exponentially with the number of tracked objects. In an attempt to find an approximate solution of sufficient quality, several Population-Based Meta-Heuristic (PBMH) algorithms are implemented and tested on simulated optical measurements. These first results show that one of the tested algorithms, namely the Elitist Genetic Algorithm (EGA), consistently displays the desired behavior of finding good approximate solutions before reaching the optimum. The results further suggest that the algorithm possesses a polynomial time complexity, as the computation times are consistent with a polynomial model. With the advent of improved sensors and a heightened interest in the problem of space debris, it is expected that the number of tracked objects will grow by an order of magnitude in the near future. This research aims to provide a method that can treat the association and orbit determination problems simultaneously, and is able to efficiently process large data sets with minimal manual intervention.
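    The elitist strategy described above (carry the best individuals unchanged into the next generation while breeding the rest) can be sketched on a toy measurement-to-object association problem. Everything below is an illustrative assumption: the cost matrix, operators, and parameters are not the paper's setup, only a minimal example of the EGA pattern.

    ```python
    import random

    # Toy association problem: assign each measurement i to an object
    # perm[i] so that total cost is minimized. A candidate solution is a
    # permutation, which keeps the association one-to-one.
    COST = [  # COST[i][j]: invented cost of pairing measurement i with object j
        [1.0, 9.0, 7.0],
        [8.0, 2.0, 6.0],
        [5.0, 7.0, 1.5],
    ]
    N = len(COST)

    def fitness(perm):
        # Lower total association cost -> higher fitness.
        return -sum(COST[i][perm[i]] for i in range(N))

    def crossover(a, b):
        # Cut-and-fill crossover: child stays a valid permutation.
        cut = random.randrange(1, N)
        head = a[:cut]
        return head + [g for g in b if g not in head]

    def mutate(perm, rate=0.2):
        # Occasional swap of two assignments.
        if random.random() < rate:
            i, j = random.sample(range(N), 2)
            perm[i], perm[j] = perm[j], perm[i]
        return perm

    def ega(pop_size=20, generations=50, elite=2):
        pop = [random.sample(range(N), N) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            nxt = [p[:] for p in pop[:elite]]  # elitism: best survive unchanged
            while len(nxt) < pop_size:
                a, b = random.sample(pop[:pop_size // 2], 2)  # truncation selection
                nxt.append(mutate(crossover(a, b)))
            pop = nxt
        return max(pop, key=fitness)

    best = ega()
    ```

    Elitism is what gives the "good approximate solutions before reaching the optimum" behavior noted in the abstract: the best association found so far can never be lost between generations.
    
    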

  12. Metformin use and survival from lung cancer: A population-based cohort study.

    PubMed

    Menamin, Úna C Mc; Cardwell, Chris R; Hughes, Carmel M; Murray, Liam M

    2016-04-01

    Preclinical evidence suggests that metformin, a widely prescribed anti-diabetic drug, may inhibit lung cancer progression. We investigated whether metformin use was associated with decreased risk of cancer-specific mortality in lung cancer patients. This study included newly diagnosed lung cancer patients (identified from English National Cancer Data Repository, 1998-2009) with type 2 diabetes (based on UK Clinical Practice Research Datalink prescriptions and diagnosis records). Lung cancer deaths occurring up to 2012 were identified using Office of National Statistics mortality data and the association between metformin use (before and after diagnosis) and risk of lung cancer-specific mortality was calculated using Cox regression models. In analysis of 533 patients, we found a weak non-significant reduction in lung cancer-specific mortality with metformin use after diagnosis (adjusted HR, 0.86; 95% CI, 0.68-1.09). No association was evident for metformin use before diagnosis and cancer-specific mortality in analysis of 1350 patients (adjusted HR, 0.97; 95% CI, 0.86, 1.11). Associations were similar by duration of use. In addition, after adjustment for potential confounders, there was little evidence of an association between the use of other anti-diabetic medications (either before or after diagnosis) and lung cancer-specific mortality; including sulfonylureas, insulin or other anti-diabetic medications (such as thiazolidinediones). Overall, the results from this population-based study provide little evidence of a protective association between metformin use and cancer mortality in lung cancer patients.

  13. Population-based survival-cure analysis of ER-negative breast cancer.

    PubMed

    Huang, Lan; Johnson, Karen A; Mariotto, Angela B; Dignam, James J; Feuer, Eric J

    2010-08-01

    This study investigated the trends over time in age and stage specific population-based survival of estrogen receptor negative (ER-) breast cancer patients by examining the fraction of cured patients and the median survival time for uncured patients. Cause-specific survival data from the Surveillance, Epidemiology, and End Results program for cases diagnosed during 1992-1998 were used in mixed survival cure models to evaluate the cure fraction and the extension in survival for uncured patients. Survival trends were compared with adjuvant chemotherapy data available from an overlapping patterns-of-care study. For stage II N+ disease, the largest increase in cure fraction was 44-60% (P = 0.0257) for women aged ≥70 in contrast to a 7-8% point increase for women aged <50 or 50-69 (P = 0.056 and 0.038, respectively). For women with stage III disease, the increases in the cure fraction were not statistically significant, although women aged 50-69 had a 10% point increase (P = 0.103). Increases in cure fraction correspond with increases in the use of adjuvant chemotherapy, particularly for the oldest age group. In this article, for the first time, we estimate the cure fraction for ER- patients. We notice that at age ≥70, the accelerated increase in cure fraction from 1992 to 1998 for women with stage II N+ compared with stage III suggests a selective benefit for chemotherapy in the lower stage group.
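    The mixture cure model used in analyses like this one splits the population into a cured fraction, which never experiences the event, and an uncured fraction following its own survival curve. A minimal sketch, assuming an exponential survival curve for the uncured group and illustrative numbers rather than the study's estimates:

    ```python
    import math

    # Mixture cure survival: S(t) = pi + (1 - pi) * S_u(t), where pi is the
    # cure fraction and S_u the survival curve of uncured patients. The
    # exponential form and the numbers below are illustrative assumptions.

    def mixture_cure_survival(t, cure_fraction, median_uncured):
        rate = math.log(2) / median_uncured  # exponential hazard for the uncured
        return cure_fraction + (1 - cure_fraction) * math.exp(-rate * t)

    # At t equal to the uncured median, uncured survival is 0.5 by
    # construction, so overall survival is pi + (1 - pi) * 0.5:
    s = mixture_cure_survival(3.0, cure_fraction=0.44, median_uncured=3.0)
    ```

    This structure is why the curve plateaus at the cure fraction for large t: the exponential term vanishes, leaving only the cured patients.
    
    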

  14. Predicting successful aging in a population-based sample of Georgia centenarians.

    PubMed

    Arnold, Jonathan; Dai, Jianliang; Nahapetyan, Lusine; Arte, Ankit; Johnson, Mary Ann; Hausman, Dorothy; Rodgers, Willard L; Hensley, Robert; Martin, Peter; Macdonald, Maurice; Davey, Adam; Siegler, Ilene C; Jazwinski, S Michal; Poon, Leonard W

    2010-01-01

    Used a population-based sample (Georgia Centenarian Study, GCS) to determine proportions of centenarians reaching 100 years as (1) survivors (43%) of chronic diseases first experienced between 0 and 80 years of age, (2) delayers (36%) with chronic diseases first experienced between 80 and 98 years of age, or (3) escapers (17%) with chronic diseases only at 98 years of age or older. Diseases fall into two morbidity profiles of 11 chronic diseases; one including cardiovascular disease, cancer, anemia, and osteoporosis, and another including dementia. Centenarians at risk for cancer in their lifetime tended to be escapers (73%), while those at risk for cardiovascular disease tended to be survivors (24%), delayers (39%), or escapers (32%). Approximately half (43%) of the centenarians did not experience dementia. Psychiatric disorders were positively associated with dementia, but prevalence of depression, anxiety, and psychoses did not differ significantly between centenarians and an octogenarian control group. However, centenarians were higher on the Geriatric Depression Scale (GDS) than octogenarians. Consistent with our model of developmental adaptation in aging, distal life events contribute to predicting survivorship outcome in which health status as survivor, delayer, or escaper appears as adaptation variables late in life.

  15. Inverse Association of Parkinson Disease With Systemic Lupus Erythematosus: A Nationwide Population-based Study.

    PubMed

    Liu, Feng-Cheng; Huang, Wen-Yen; Lin, Te-Yu; Shen, Chih-Hao; Chou, Yu-Ching; Lin, Cheng-Li; Lin, Kuen-Tze; Kao, Chia-Hung

    2015-11-01

    The effects of the inflammatory mediators involved in systemic lupus erythematosus (SLE) on subsequent Parkinson disease have been reported, but no relevant studies have focused on the association between the 2 diseases. This nationwide population-based study evaluated the risk of Parkinson disease in patients with SLE. We identified 12,817 patients in the Taiwan National Health Insurance database diagnosed with SLE between 2000 and 2010 and compared the incidence rate of Parkinson disease among these patients with that among 51,268 randomly selected age- and sex-matched non-SLE patients. A Cox multivariable proportional-hazards model was used to evaluate the risk factors of Parkinson disease in the SLE cohort. We observed an inverse association between a diagnosis of SLE and the risk of subsequent Parkinson disease, with the crude hazard ratio (HR) being 0.60 (95% confidence interval 0.45-0.79) and adjusted HR being 0.68 (95% confidence interval 0.51-0.90). The cumulative incidence of Parkinson disease was 0.83% lower in the SLE cohort than in the non-SLE cohort. The adjusted HR of Parkinson disease decreased as the follow-up duration increased and was decreased among older lupus patients with comorbidity. We determined that patients with SLE had a decreased risk of subsequent Parkinson disease. Further research is required to elucidate the underlying mechanism.

  16. Noise Annoyance in Urban Children: A Cross-Sectional Population-Based Study

    PubMed Central

    Grelat, Natacha; Houot, Hélène; Pujol, Sophie; Levain, Jean-Pierre; Defrance, Jérôme; Mariet, Anne-Sophie; Mauny, Frédéric

    2016-01-01

    Acoustical and non-acoustical factors influencing noise annoyance in adults have been well-documented in recent years; however, similar knowledge is lacking in children. The aim of this study was to quantify the annoyance caused by chronic ambient noise at home in children and to assess the relationship between these children's noise annoyance level and individual and contextual factors in the surrounding urban area. A cross-sectional population-based study was conducted including 517 children attending primary school in a European city. Noise annoyance was measured using a self-report questionnaire adapted for children. Six noise exposure level indicators were built at different locations at increasing distances from the child's bedroom window using a validated strategic noise map. Multilevel logistic models were constructed to investigate factors associated with noise annoyance in children. Noise indicators in front of the child's bedroom (p ≤ 0.01), family residential satisfaction (p ≤ 0.03) and socioeconomic characteristics of the individuals and their neighbourhood (p ≤ 0.05) remained associated with child annoyance. These findings illustrate the complex relationships between our environment, how we may perceive it, social factors and health. Better understanding of these relationships will undoubtedly allow us to more effectively quantify the actual effect of noise on human health. PMID:27801858

  17. Incidence of hidradenitis suppurativa and associated factors: a population-based study of Olmsted County, Minnesota.

    PubMed

    Vazquez, Benjamin G; Alikhan, Ali; Weaver, Amy L; Wetter, David A; Davis, Mark D

    2013-01-01

    There are no population-based incidence studies of hidradenitis suppurativa (HS). Using the medical record linkage system of the Rochester Epidemiology Project, we sought to determine the incidence of the disease, as well as other associations and characteristics, among HS patients diagnosed in Olmsted County, Minnesota, between 1968 and 2008. Incidence was estimated using the decennial census data for the county. Logistic regression models were fit to evaluate associations between patient characteristics and disease severity. A total of 268 incident cases were identified, with an overall annual age- and sex-adjusted incidence of 6.0 per 100,000. Age-adjusted incidence was significantly higher in women than in men (8.2 (95% confidence interval (CI), 7.0-9.3) vs. 3.8 (95% CI, 3.0-4.7) per 100,000). The highest incidence was among young women aged 20-29 years (18.4 per 100,000). The incidence has risen over the past four decades, particularly among women. Women were more likely to have axillary and upper anterior torso involvement, whereas men were more likely to have perineal or perianal disease. In addition, 54.9% (140/255) of patients were obese; 70.2% were current or former smokers; 42.9% carried a diagnosis of depression; 36.2% carried a diagnosis of acne; and 6% had pilonidal disease. Smoking and gender were significantly associated with more severe disease.
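    As a hedged illustration of how a crude incidence rate of this kind is computed (the study's own figures are age- and sex-adjusted against decennial census data, which this sketch does not attempt), a rate per 100,000 with an approximate log-normal 95% CI looks like:

```python
import math

def incidence_rate(cases, person_years, per=100_000):
    """Crude incidence rate with an approximate 95% CI
    (log-normal approximation around the case count)."""
    rate = cases / person_years * per
    half = 1.96 / math.sqrt(cases)
    return rate, rate * math.exp(-half), rate * math.exp(half)

# Illustrative only: 268 cases over a hypothetical 4.47 million
# person-years reproduces a crude rate of about 6.0 per 100,000.
rate, lo, hi = incidence_rate(268, 4_470_000)
print(round(rate, 1), round(lo, 1), round(hi, 1))
```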

  18. Insomnia and the Risk of Atrial Fibrillation: A Population-Based Cohort Study

    PubMed Central

    Lee, Hsiu-Hao; Chen, Yueh-Chung; Chen, Jien-Jiun; Lo, Shih-Hsiang; Guo, Yue-Liang; Hu, Hsiao-Yun

    2017-01-01

    Background Although advancements in the treatment of atrial fibrillation have improved patient prognosis for this persistent condition, interest in the development of atrial fibrillation is growing. In particular, attention is increasingly focused on the accompanying effect of insomnia. The aim of the study was to investigate the effects of insomnia on the risk of atrial fibrillation development. Methods This was a nationwide population-based retrospective cohort study using data from the Taiwan National Health Insurance Research Database. We analyzed 64,421 insomnia cases and 128,842 matched controls without insomnia from January 1, 2000, to December 31, 2010. A Cox regression model was used to estimate the adjusted hazard ratios (HRs) and 95% confidence intervals (CI) for atrial fibrillation development. Results During the follow-up period, the incidence of atrial fibrillation development was significantly higher in the insomnia cases than in the comparison cohort (2.6% vs. 2.3%, p < 0.001). Insomnia was associated with an increased risk of atrial fibrillation (HR = 1.08, 95% CI: 1.01-1.14). Male patients, those > 65 years of age, and patients with peripheral artery disease who had insomnia showed a higher rate of atrial fibrillation development. Conclusions The findings of this nationwide analysis support the hypothesis that insomnia is associated with a significant risk of atrial fibrillation development. PMID:28344420

  19. Sleep and academic performance in later adolescence: results from a large population-based study.

    PubMed

    Hysing, Mari; Harvey, Allison G; Linton, Steven J; Askeland, Kristin G; Sivertsen, Børge

    2016-06-01

    The aim of the current study was to assess the association between sleep duration and sleep patterns and academic performance in 16-19-year-old adolescents using registry-based academic grades. A large population-based study from Norway conducted in 2012, the youth@hordaland-survey, surveyed 7798 adolescents aged 16-19 years (53.5% girls). The survey was linked with objective outcome data on school performance. Self-reported sleep measures provided information on sleep duration, sleep efficiency, sleep deficit and bedtime differences between weekday and weekend. School performance [grade point average (GPA)] was obtained from official administrative registries. Most sleep parameters were associated with increased risk for poor school performance. After adjusting for sociodemographic information, short sleep duration and sleep deficit were the sleep measures with the highest odds of poor GPA (lowest quartile). Weekday bedtime was associated significantly with GPA, with adolescents going to bed between 22:00 and 23:00 hours having the best GPA. A delayed sleep schedule during weekends was also associated with poor academic performance. The associations were somewhat reduced after additional adjustment for non-attendance at school, but remained significant in the fully adjusted models. In conclusion, the demonstrated relationship between sleep problems and poor academic performance suggests that careful assessment of sleep is warranted when adolescents are underperforming at school. Future studies are needed on the association between impaired sleep in adolescence and later functioning in adulthood.

  20. Population-based 3D genome structure analysis reveals driving forces in spatial genome organization

    PubMed Central

    Li, Wenyuan; Kalhor, Reza; Dai, Chao; Hao, Shengli; Gong, Ke; Zhou, Yonggang; Li, Haochen; Zhou, Xianghong Jasmine; Le Gros, Mark A.; Larabell, Carolyn A.; Chen, Lin; Alber, Frank

    2016-01-01

    Conformation capture technologies (e.g., Hi-C) chart physical interactions between chromatin regions on a genome-wide scale. However, the structural variability of the genome between cells poses a great challenge to interpreting ensemble-averaged Hi-C data, particularly for long-range and interchromosomal interactions. Here, we present a probabilistic approach for deconvoluting Hi-C data into a model population of distinct diploid 3D genome structures, which facilitates the detection of chromatin interactions likely to co-occur in individual cells. Our approach incorporates the stochastic nature of chromosome conformations and allows a detailed analysis of alternative chromatin structure states. For example, we predict and experimentally confirm the presence of large centromere clusters with distinct chromosome compositions varying between individual cells. The stability of these clusters varies greatly with their chromosome identities. We show that these chromosome-specific clusters can play a key role in the overall chromosome positioning in the nucleus and stabilizing specific chromatin interactions. By explicitly considering genome structural variability, our population-based method provides an important tool for revealing novel insights into the key factors shaping the spatial genome organization. PMID:26951677

  1. Long-term air pollution exposure and diabetes in a population-based Swiss cohort.

    PubMed

    Eze, Ikenna C; Schaffner, Emmanuel; Fischer, Evelyn; Schikowski, Tamara; Adam, Martin; Imboden, Medea; Tsai, Ming; Carballo, David; von Eckardstein, Arnold; Künzli, Nino; Schindler, Christian; Probst-Hensch, Nicole

    2014-09-01

    Air pollution is an important risk factor for the global burden of disease. There has been recent interest in its possible role in the etiology of diabetes mellitus. Experimental evidence is suggestive, but epidemiological evidence is limited and mixed. We therefore explored the association between air pollution and prevalent diabetes in a population-based Swiss cohort. We did cross-sectional analyses of 6392 participants of the Swiss Cohort Study on Air Pollution and Lung and Heart Diseases in Adults [SAPALDIA], aged between 29 and 73 years. We used estimates of average individual home outdoor PM10 [particulate matter <10 μm in diameter] and NO2 [nitrogen dioxide] exposure over the 10 years preceding the survey. Their association with diabetes was modeled using mixed logistic regression models, including participants' study area as random effect, with incremental adjustment for confounders. There were 315 cases of diabetes (prevalence: 5.5% [95% confidence interval (CI): 2.8, 7.2%]). Both PM10 and NO2 were associated with prevalent diabetes with respective odds ratios of 1.40 [95% CI: 1.17, 1.67] and 1.19 [95% CI: 1.03, 1.38] per 10 μg/m³ increase in the average home outdoor level. Associations with PM10 were generally stronger than with NO2, even in the two-pollutant model. There was some indication that beta blockers mitigated the effect of PM10. The associations remained stable across different sensitivity analyses. Our study adds to the evidence that long-term air pollution exposure is associated with diabetes mellitus. PM10 appears to be a useful marker of aspects of air pollution relevant for diabetes. This association can be observed at concentrations below air quality guidelines.
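    Per-increment odds ratios like these can be rescaled to other exposure increments under the log-linearity assumption built into the logistic model. A minimal sketch (illustrative, not the study's code):

```python
import math

def rescale_or(or_reported, from_incr, to_incr):
    """Rescale an odds ratio reported per `from_incr` exposure units
    to a different increment, assuming a log-linear dose-response."""
    beta_per_unit = math.log(or_reported) / from_incr
    return math.exp(beta_per_unit * to_incr)

# PM10 OR of 1.40 per 10 ug/m3 (from the abstract), per 5 ug/m3 instead.
print(round(rescale_or(1.40, 10, 5), 2))
```

    Note that halving the increment gives the square root of the OR, which is why per-increment choices matter when comparing studies.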

  2. A Population-Based Genomic Study of Inherited Metabolic Diseases Detected Through Newborn Screening

    PubMed Central

    Park, Kyoung-Jin; Park, Seungman; Lee, Eunhee; Park, Jong-Ho; Park, June-Hee; Park, Hyung-Doo; Lee, Soo-Youn

    2016-01-01

    Background A newborn screening (NBS) program has been utilized to detect asymptomatic newborns with inherited metabolic diseases (IMDs). There have been some bottlenecks such as false-positives and imprecision in the current NBS tests. To overcome these issues, we developed a multigene panel for IMD testing and investigated the utility of our integrated screening model in a routine NBS environment. We also evaluated the genetic epidemiologic characteristics of IMDs in a Korean population. Methods In total, 269 dried blood spots with positive results from current NBS tests were collected from 120,700 consecutive newborns. We screened 97 genes related to NBS in Korea and detected IMDs, using an integrated screening model based on biochemical tests and next-generation sequencing (NGS) called NewbornSeq. Haplotype analysis was conducted to detect founder effects. Results The overall positive rate of IMDs was 20%. We identified 10 additional newborns with preventable IMDs that would not have been detected prior to the implementation of our NGS-based platform NewbornSeq. The incidence of IMDs was approximately 1 in 2,235 births. Haplotype analysis demonstrated founder effects in p.Y138X in DUOXA2, p.R885Q in DUOX2, p.Y439C in PCCB, p.R285Pfs*2 in SLC25A13, and p.R224Q in GALT. Conclusions Through a population-based study in the NBS environment, we highlight the screening and epidemiological implications of NGS. The integrated screening model will effectively contribute to public health by enabling faster and more accurate IMD detection through NBS. This study suggested founder mutations as an explanation for recurrent IMD-causing mutations in the Korean population. PMID:27578510

  3. The absolute threshold of cone vision

    PubMed Central

    Koenig, Darran; Hofer, Heidi

    2013-01-01

    We report measurements of the absolute threshold of cone vision, which has been previously underestimated due to sub-optimal conditions or overly strict subjective response criteria. We avoided these limitations by using optimized stimuli and experimental conditions while having subjects respond within a rating scale framework. Small (1′ fwhm), brief (34 msec), monochromatic (550 nm) stimuli were foveally presented at multiple intensities in dark-adapted retina for 5 subjects. For comparison, 4 subjects underwent similar testing with rod-optimized stimuli. Cone absolute threshold, that is, the minimum light energy for which subjects were just able to detect a visual stimulus with any response criterion, was 203 ± 38 photons at the cornea, ∼0.47 log units lower than previously reported. Two-alternative forced-choice measurements in a subset of subjects yielded consistent results. Cone thresholds were less responsive to criterion changes than rod thresholds, suggesting a limit to the stimulus information recoverable from the cone mosaic in addition to the limit imposed by Poisson noise. Results were consistent with expectations for detection in the face of stimulus uncertainty. We discuss implications of these findings for modeling the first stages of human cone vision and interpreting psychophysical data acquired with adaptive optics at the spatial scale of the receptor mosaic. PMID:21270115
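    The Poisson-noise limit discussed above is commonly modeled with a counting criterion: a stimulus is "seen" when the number of absorbed photons reaches some threshold k. A small illustrative sketch (parameters hypothetical, not fitted to these data):

```python
import math

def p_detect(mean_photons, criterion_k):
    """Probability of at least k photon absorptions when the absorbed
    count is Poisson-distributed with the given mean (the classic
    threshold-of-seeing model)."""
    p_below = sum(math.exp(-mean_photons) * mean_photons ** n / math.factorial(n)
                  for n in range(criterion_k))
    return 1.0 - p_below

# Example: mean of 6 absorbed photons, detection criterion of >= 4.
print(round(p_detect(6.0, 4), 3))
```

    Sweeping `mean_photons` traces out a frequency-of-seeing curve whose steepness is limited by Poisson noise, which is the comparison point the abstract's criterion analysis builds on.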

  4. Roots at the percolation threshold.

    PubMed

    Kroener, Eva; Ahmed, Mutez Ali; Carminati, Andrea

    2015-04-01

    The rhizosphere is the layer of soil around the roots where complex and dynamic interactions between plants and soil affect the capacity of plants to take up water. The physical properties of the rhizosphere are affected by mucilage, a gel exuded by roots. Mucilage can absorb large volumes of water, but it becomes hydrophobic after drying. We use a percolation model to describe the rewetting of dry rhizosphere. We find that at a critical mucilage concentration the rhizosphere becomes impermeable. The critical mucilage concentration depends on the radius of the soil particles. Capillary rise experiments with neutron radiography show that for concentrations below the critical mucilage concentration water could easily cross the rhizosphere, while above the critical concentration water could no longer percolate through it. Our studies, together with former observations of water dynamics in the rhizosphere, suggest that the rhizosphere is near the percolation threshold, where small variations in mucilage concentration sensitively alter the soil hydraulic conductivity. Is mucilage exudation a plant mechanism to efficiently control the rhizosphere conductivity and the access to water?
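    The percolation picture can be illustrated with a toy site-percolation simulation (a generic 2D lattice model, not the authors' rhizosphere model): below a critical open-site fraction there is no connected path for water, above it a spanning path appears.

```python
import random

def percolates(n, p, seed=0):
    """Does an n x n site-percolation grid, each site open with
    probability p, contain a top-to-bottom connected path? For a
    2D square lattice the critical fraction is about 0.593."""
    rng = random.Random(seed)
    open_ = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    parent = list(range(n * n + 2))       # union-find with two virtual nodes
    TOP, BOT = n * n, n * n + 1

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for r in range(n):
        for c in range(n):
            if not open_[r][c]:
                continue
            i = r * n + c
            if r == 0:
                union(i, TOP)
            if r == n - 1:
                union(i, BOT)
            for dr, dc in ((-1, 0), (0, -1)):  # up and left neighbours
                rr, cc = r + dr, c + dc
                if 0 <= rr and 0 <= cc and open_[rr][cc]:
                    union(i, rr * n + cc)
    return find(TOP) == find(BOT)

print(percolates(50, 0.8), percolates(50, 0.3))
```

    Near the critical fraction, small changes in p flip the system between permeable and impermeable, which mirrors the sensitivity to mucilage concentration described in the abstract.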

  5. Roots at the percolation threshold

    NASA Astrophysics Data System (ADS)

    Kroener, Eva; Ahmed, Mutez Ali; Carminati, Andrea

    2015-04-01

    The rhizosphere is the layer of soil around the roots where complex and dynamic interactions between plants and soil affect the capacity of plants to take up water. The physical properties of the rhizosphere are affected by mucilage, a gel exuded by roots. Mucilage can absorb large volumes of water, but it becomes hydrophobic after drying. We use a percolation model to describe the rewetting of dry rhizosphere. We find that at a critical mucilage concentration the rhizosphere becomes impermeable. The critical mucilage concentration depends on the radius of the soil particles. Capillary rise experiments with neutron radiography show that for concentrations below the critical mucilage concentration water could easily cross the rhizosphere, while above the critical concentration water could no longer percolate through it. Our studies, together with former observations of water dynamics in the rhizosphere, suggest that the rhizosphere is near the percolation threshold, where small variations in mucilage concentration sensitively alter the soil hydraulic conductivity. Is mucilage exudation a plant mechanism to efficiently control the rhizosphere conductivity and the access to water?

  6. Prevalence of Hidradenitis Suppurativa (HS): A Population-Based Study in Olmsted County, Minnesota

    PubMed Central

    Shahi, Varun; Alikhan, Ali; Vazquez, Benjamin G.; Weaver, Amy L.; Davis, Mark D.

    2014-01-01

    BACKGROUND/AIMS Hidradenitis suppurativa (HS) is a follicular occlusion disorder occurring in apocrine-rich regions of the skin. Estimates of the prevalence of this disorder have not been population-based. We sought to provide population-based information on the prevalence of HS in Olmsted County, Minnesota as of 1/1/2009. METHODS The Rochester Epidemiology Project, a records-linkage infrastructure that combines and makes accessible all medical records in Olmsted County since the 1960s, was used to collect population-based data on the prevalence of HS. RESULTS We identified 178 confirmed cases of HS, comprising 135 females and 43 males, and estimated the total sex- and age-adjusted prevalence in Olmsted County to be 127.8 per 100,000, or 0.13%. The total prevalence was significantly higher among women than men. CONCLUSION This study represents the first population-based investigation of the prevalence of HS. In this population-based cohort, HS was less prevalent than previous reports have suggested. PMID:25228133

  7. Do non-targeted effects increase or decrease low dose risk in relation to the linear-non-threshold (LNT) model?

    PubMed

    Little, M P

    2010-05-01

    In this paper we review the evidence for departure from linearity for malignant and non-malignant disease and in the light of this assess likely mechanisms, and in particular the potential role for non-targeted effects. Excess cancer risks observed in the Japanese atomic bomb survivors and in many medically and occupationally exposed groups exposed at low or moderate doses are generally statistically compatible. For most cancer sites the dose-response in these groups is compatible with linearity over the range observed. The available data on biological mechanisms do not provide general support for the idea of a low dose threshold or hormesis. This large body of evidence does not suggest, indeed is not statistically compatible with, any very large threshold in dose for cancer, or with possible hormetic effects, and there is little evidence of the sorts of non-linearity in response implied by non-DNA-targeted effects. There are also excess risks of various types of non-malignant disease in the Japanese atomic bomb survivors and in other groups. In particular, elevated risks of cardiovascular disease, respiratory disease and digestive disease are observed in the A-bomb data. In contrast with cancer, there is much less consistency in the patterns of risk between the various exposed groups; for example, radiation-associated respiratory and digestive diseases have not been seen in these other (non-A-bomb) groups. Cardiovascular risks have been seen in many exposed populations, particularly in medically exposed groups, but in contrast with cancer there is much less consistency in risk between studies: risks per unit dose in epidemiological studies vary over at least two orders of magnitude, possibly a result of confounding and effect modification by well known (but unobserved) risk factors. In the absence of a convincing mechanistic explanation of epidemiological evidence that is, at present, less than persuasive, a cause-and-effect interpretation of the reported

  8. Threshold photodissociation of Cr2+

    NASA Astrophysics Data System (ADS)

    Lessen, D. E.; Asher, R. L.; Brucat, P. J.

    1991-08-01

    A one-photon photodissociation threshold for supersonically cooled Cr2+ is determined to be 2.13 eV. This threshold provides a strict upper limit to the adiabatic binding energy of the ground state of the chromium dimer cation if the initial internal energy of the parent ion may be neglected. From the difference in the IPs of the chromium atom and dimer, an upper limit to the dissociation energy of Cr2 is placed at 1.77 eV.
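    The 1.77 eV figure follows from a simple thermochemical cycle, D0(Cr2) = D0(Cr2+) + IP(Cr2) - IP(Cr). A sketch with an illustrative dimer IP: the atomic IP (~6.77 eV) is standard, but the dimer IP used here is an assumed value chosen only to reproduce the quoted bound.

```python
# Thermochemical cycle relating neutral and cation dissociation energies:
#   D0(Cr2) = D0(Cr2+) + IP(Cr2) - IP(Cr)
d0_cation_upper = 2.13   # eV, from the measured photodissociation threshold
ip_atom = 6.77           # eV, IP of atomic Cr (well established)
ip_dimer = 6.41          # eV, illustrative dimer IP (assumption)
d0_neutral_upper = d0_cation_upper + ip_dimer - ip_atom
print(round(d0_neutral_upper, 2))
```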

  9. Use of BPPV processes in Emergency Department Dizziness Presentations: A Population-Based Study

    PubMed Central

    Kerber, Kevin A.; Burke, James F.; Skolarus, Lesli E.; Meurer, William J.; Callaghan, Brian C.; Brown, Devin L.; Lisabeth, Lynda D.; McLaughlin, Thomas J.; Fendrick, A. Mark; Morgenstern, Lewis B.

    2013-01-01

    Objective A common cause of dizziness, benign paroxysmal positional vertigo (BPPV), is effectively diagnosed and cured with the Dix-Hallpike test (DHT) and the canalith repositioning maneuver (CRM). We aimed to describe the use of these processes in Emergency Departments (ED), to assess for trends in use over time, and to determine provider level variability in use. Design Prospective population-based surveillance study Setting EDs in Nueces County, Texas, January 15, 2008 to January 14, 2011 Subjects and Methods Adult patients discharged from EDs with dizziness, vertigo, or imbalance documented at triage. Clinical information was abstracted from source documents. A hierarchical logistic regression model adjusting for patient and provider characteristics was used to estimate trends in DHT use and provider level variability. Results 3,522 visits for dizziness were identified. A DHT was documented in 137 visits (3.9%). A CRM was documented in 8 visits (0.2%). Among patients diagnosed with BPPV, a DHT was documented in only 21.8% (34 of 156) and a CRM in 3.9% (6 of 156). In the hierarchical model (c statistic = 0.93), DHT was less likely to be used over time (odds ratio, 0.97, 95% CI [0.95, 0.99]) and the provider level explained 50% (ICC, 0.50) of the variance in the probability of DHT use. Conclusion BPPV is seldom examined for, and when diagnosed, infrequently treated in this ED population. DHT use is decreasing over time, and varies substantially by provider. Implementation research focused on BPPV care may be an opportunity to optimize management in ED dizziness presentations. PMID:23264119
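    For a random-intercept logistic model like the one above, the reported ICC has a closed form on the latent scale, with the within-cluster residual variance fixed at pi^2/3. A minimal sketch (the standard formula, not the study's code):

```python
import math

def icc_logistic(var_between):
    """Intraclass correlation for a random-intercept logistic model:
    between-cluster variance over total latent variance, where the
    within-cluster residual variance is fixed at pi^2 / 3 (~3.29)."""
    return var_between / (var_between + math.pi ** 2 / 3)

# An ICC of 0.50 (as reported for provider-level variation in DHT use)
# corresponds to a between-provider variance equal to pi^2 / 3.
print(round(icc_logistic(math.pi ** 2 / 3), 2))
```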

  10. Radiotherapy and Survival in Prostate Cancer Patients: A Population-Based Study

    SciTech Connect

    Zhou, Esther H.; Ellis, Rodney J.; Cherullo, Edward; Colussi, Valdir; Xu Fang; Chen Weidong; Gupta, Sanjay; Whalen, Christopher C.; Bodner, Donald; Resnick, Martin I.; Rimm, Alfred A.

    2009-01-01

    Purpose: To investigate the association of overall and disease-specific survival with the five standard treatment modalities for prostate cancer (CaP): radical prostatectomy (RP), brachytherapy (BT), external beam radiotherapy, androgen deprivation therapy, and no treatment (NT) within 6 months after CaP diagnosis. Methods and Materials: The study population included 10,179 men aged 65 years and older with incident CaP diagnosed between 1999 and 2001. Using the linked Ohio Cancer Incidence Surveillance System, Medicare, and death certificate files, overall and disease-specific survival through 2005 among the five clinically accepted therapies were analyzed. Results: Disease-specific survival rates were 92.3% and 23.9% for patients with localized vs. distant disease at 7 years, respectively. Controlling for age, race, comorbidities, stage, and Gleason score, results from the Cox multiple regression models indicated that the risk of CaP-specific death was significantly reduced in patients receiving RP or BT, compared with NT. For localized disease, compared with NT, in the monotherapy cohort, RP and BT were associated with reduced hazard ratios (HRs) of 0.25 and 0.45 (95% confidence intervals 0.13-0.48 and 0.23-0.87, respectively), whereas in the combination therapy cohort, HRs were 0.40 (0.17-0.94) and 0.46 (0.27-0.80), respectively. Conclusions: The present population-based study indicates that RP and BT are associated with improved survival outcomes. Further studies are warranted to improve clinical determinants in the selection of appropriate management of CaP and to improve predictive modeling for which patient subsets may benefit most from definitive therapy vs. conservative management and/or observation.

  11. Direct costs in impaired glucose regulation: results from the population-based Heinz Nixdorf Recall study

    PubMed Central

    Bächle, C; Claessen, H; Andrich, S; Brüne, M; Dintsios, C M; Slomiany, U; Roggenbuck, U; Jöckel, K H; Moebus, S; Icks, A

    2016-01-01

    Objective For the first time, this population-based study sought to analyze healthcare utilization and associated costs in people with normal fasting glycemia (NFG), impaired fasting glycemia (IFG), previously undetected diabetes, and previously diagnosed diabetes, linking data from the prospective German Heinz Nixdorf Recall (HNR) study with individual claims data from German statutory health insurance funds. Research design and methods A total of 1709 participants of the HNR 5-year follow-up (mean age (SD) 64.9 (7.5) years, 44.5% men) were included in the study. Age-standardized and sex-standardized healthcare utilization and associated costs (reported in € for the year 2008, from the perspective of the statutory health insurance) were stratified by diabetes stage, defined by the participants' self-report and fasting plasma glucose values. Cost ratios (CRs) were estimated using two-part regression models, adjusting for age, sex, sociodemographic variables and comorbidity. Results The mean total direct healthcare costs for previously diagnosed diabetes, previously undetected diabetes, IFG, and NFG were €2761 (95% CI 2378 to 3268), €2210 (1483 to 4279), €2035 (1732 to 2486) and €1810 (1634 to 2035), respectively. Corresponding age-adjusted and sex-adjusted CRs were 1.53 (1.30 to 1.80), 1.16 (0.91 to 1.47), and 1.09 (0.95 to 1.25) (reference: NFG). Inpatient, outpatient and medication costs varied in order between people with IFG and those with previously undetected diabetes. Conclusions The study provides claims-based detailed cost data in well-defined glucose metabolism subgroups. CRs of individuals with IFG and previously undetected diabetes were surprisingly low. These data are important for the model-based evaluation of screening programs and of interventions aimed either at preventing diabetes onset or at improving diabetes therapy. PMID:27252871

  12. Automatic Threshold Detector Techniques

    DTIC Science & Technology

    1976-07-15

    Contract No. DAAH01-76-C-0363, ER76-4208, 15 July 1976. Prepared for: Headquarters, U.S. Army Missile Command, Redstone Arsenal, Alabama 35809. Surviving abstract fragments define the radar cross-section variables assigned to each FFT filter (MUT, MUCL, KCL, MUN, MUCLF(2), MU1, MUW1, MUW2), where MUT is the target cross section (m2) and MUCL is the total ground clutter cross section; since the program has no point-clutter model yet, MUCL represents only the fluctuating clutter component.

  13. Usefulness of data from magnetic resonance imaging to improve prediction of dementia: population based cohort study

    PubMed Central

    Stephan, Blossom C M; Tzourio, Christophe; Auriacombe, Sophie; Amieva, Hélène; Dufouil, Carole; Alpérovitch, Annick

    2015-01-01

    Objective To determine whether the addition of data derived from magnetic resonance imaging (MRI) of the brain to a model incorporating conventional risk variables improves prediction of dementia over 10 years of follow-up. Design Population based cohort study of individuals aged ≥65. Setting The Dijon magnetic resonance imaging study cohort from the Three-City Study, France. Participants 1721 people without dementia who underwent an MRI scan at baseline and with known dementia status over 10 years’ follow-up. Main outcome measure Incident dementia (all cause and Alzheimer’s disease). Results During 10 years of follow-up, there were 119 confirmed cases of dementia, 84 of which were Alzheimer’s disease. The conventional risk model incorporated age, sex, education, cognition, physical function, lifestyle (smoking, alcohol use), health (cardiovascular disease, diabetes, systolic blood pressure), and the apolipoprotein genotype (C statistic for discrimination performance was 0.77, 95% confidence interval 0.71 to 0.82). No significant differences were observed in the discrimination performance of the conventional risk model compared with models incorporating data from MRI including white matter lesion volume (C statistic 0.77, 95% confidence interval 0.72 to 0.82; P=0.48 for difference of C statistics), brain volume (0.77, 0.72 to 0.82; P=0.60), hippocampal volume (0.79, 0.74 to 0.84; P=0.07), or all three variables combined (0.79, 0.75 to 0.84; P=0.05). Inclusion of hippocampal volume or all three MRI variables combined in the conventional model did, however, lead to significant improvement in reclassification measured by using the integrated discrimination improvement index (P=0.03 and P=0.04) and showed increased net benefit in decision curve analysis. Similar results were observed when the outcome was restricted to Alzheimer’s disease. Conclusions Data from MRI do not significantly improve discrimination performance in the prediction of all-cause dementia.
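    The C statistic compared throughout is the probability that a randomly chosen case is ranked above a randomly chosen non-case by the model. A small stdlib sketch of this Mann-Whitney formulation (illustrative data, not the cohort's):

```python
def c_statistic(scores, labels):
    """C statistic (ROC AUC): the probability that a randomly chosen
    case (label 1) scores higher than a randomly chosen non-case
    (label 0), counting ties as 1/2."""
    cases = [s for s, y in zip(scores, labels) if y == 1]
    controls = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if c > d else 0.5 if c == d else 0.0
               for c in cases for d in controls)
    return wins / (len(cases) * len(controls))

# Tiny illustrative data: higher score = higher predicted risk.
print(c_statistic([0.9, 0.8, 0.7, 0.4, 0.3, 0.2], [1, 1, 0, 1, 0, 0]))
```

    As the abstract illustrates, two models can have nearly identical C statistics yet differ on reclassification measures such as the integrated discrimination improvement, since the C statistic only reflects ranking.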

  14. Identifying Thresholds for Ecosystem-Based Management

    PubMed Central

    Samhouri, Jameal F.; Levin, Phillip S.; Ainsworth, Cameron H.

    2010-01-01

    Background One of the greatest obstacles to moving ecosystem-based management (EBM) from concept to practice is the lack of a systematic approach to defining ecosystem-level decision criteria, or reference points that trigger management action. Methodology/Principal Findings To assist resource managers and policymakers in developing EBM decision criteria, we introduce a quantitative, transferable method for identifying utility thresholds. A utility threshold is the level of human-induced pressure (e.g., pollution) at which small changes produce substantial improvements toward the EBM goal of protecting an ecosystem's structural (e.g., diversity) and functional (e.g., resilience) attributes. The analytical approach is based on the detection of nonlinearities in relationships between ecosystem attributes and pressures. We illustrate the method with a hypothetical case study of (1) fishing and (2) nearshore habitat pressure using an empirically-validated marine ecosystem model for British Columbia, Canada, and derive numerical threshold values in terms of the density of two empirically-tractable indicator groups, sablefish and jellyfish. We also describe how to incorporate uncertainty into the estimation of utility thresholds and highlight their value in the context of understanding EBM trade-offs. Conclusions/Significance For any policy scenario, an understanding of utility thresholds provides insight into the amount and type of management intervention required to make significant progress toward improved ecosystem structure and function. The approach outlined in this paper can be applied in the context of single or multiple human-induced pressures, to any marine, freshwater, or terrestrial ecosystem, and should facilitate more effective management. PMID:20126647
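    One generic way to locate a nonlinearity of the kind described, without reproducing the authors' ecosystem model, is a two-segment broken-stick fit: try every split point and keep the one minimising the total squared error of two separate line fits. A toy sketch on synthetic pressure-attribute data:

```python
def fit_line(xs, ys):
    """Least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return slope, my - slope * mx

def sse(xs, ys):
    """Sum of squared errors around the best-fit line."""
    if len(xs) < 2:
        return 0.0
    a, b = fit_line(xs, ys)
    return sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))

def find_threshold(xs, ys, min_seg=3):
    """Candidate utility threshold: the split point minimising the
    combined squared error of two straight-line segment fits."""
    best = None
    for i in range(min_seg, len(xs) - min_seg + 1):
        total = sse(xs[:i], ys[:i]) + sse(xs[i:], ys[i:])
        if best is None or total < best[0]:
            best = (total, xs[i])
    return best[1]

# Synthetic data: ecosystem attribute flat up to pressure = 5,
# then a steep linear decline (a hockey-stick nonlinearity).
xs = list(range(11))
ys = [10.0] * 6 + [8.0, 6.0, 4.0, 2.0, 0.0]
print(find_threshold(xs, ys))
```

    Real analyses would add the uncertainty treatment the abstract mentions, e.g. bootstrapping the breakpoint location.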

  15. Cost–effectiveness thresholds: pros and cons

    PubMed Central

    Lauer, Jeremy A; De Joncheere, Kees; Edejer, Tessa; Hutubessy, Raymond; Kieny, Marie-Paule; Hill, Suzanne R

    2016-01-01

    Abstract Cost–effectiveness analysis is used to compare the costs and outcomes of alternative policy options. Each resulting cost–effectiveness ratio represents the magnitude of additional health gained per additional unit of resources spent. Cost–effectiveness thresholds allow cost–effectiveness ratios that represent good or very good value for money to be identified. In 2001, the World Health Organization’s Commission on Macroeconomics in Health suggested cost–effectiveness thresholds based on multiples of a country’s per-capita gross domestic product (GDP). In some contexts, in choosing which health interventions to fund and which not to fund, these thresholds have been used as decision rules. However, experience with the use of such GDP-based thresholds in decision-making processes at country level shows them to lack country specificity and this – in addition to uncertainty in the modelled cost–effectiveness ratios – can lead to the wrong decision on how to spend health-care resources. Cost–effectiveness information should be used alongside other considerations – e.g. budget impact and feasibility considerations – in a transparent decision-making process, rather than in isolation based on a single threshold value. Although cost–effectiveness ratios are undoubtedly informative in assessing value for money, countries should be encouraged to develop a context-specific process for decision-making that is supported by legislation, has stakeholder buy-in, for example the involvement of civil society organizations and patient groups, and is transparent, consistent and fair. PMID:27994285
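    The GDP-multiple rule discussed above, which the article cautions against using in isolation, can be stated in a few lines. A sketch with illustrative numbers (the 1x/3x cut-offs follow the Commission on Macroeconomics in Health convention):

```python
def cost_effectiveness_verdict(icer, gdp_per_capita):
    """GDP-based rule of thumb: 'highly cost-effective' below 1x GDP
    per capita per unit of health gained, 'cost-effective' below 3x.
    This is exactly the single-threshold decision rule the article
    argues should not be used on its own."""
    if icer < gdp_per_capita:
        return "highly cost-effective"
    if icer < 3 * gdp_per_capita:
        return "cost-effective"
    return "not cost-effective"

# Illustrative numbers only: ICER of 4500 against GDP per capita of 3000.
print(cost_effectiveness_verdict(4_500, 3_000))
```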

  16. Density Threshold for Edge Poloidal Flow Generation

    NASA Astrophysics Data System (ADS)

    Daniels, N.; Ware, A. S.; Newman, D. E.; Hidalgo, C.

    2004-11-01

    A numerical transport model is used to examine a density threshold for the onset of an edge poloidal velocity shear layer in toroidal devices. This work is motivated by recent experimental results from the TJ-II stellarator which indicate a critical density threshold for the development of an edge poloidal velocity shear layer [1]. Edge shear-flow layers are commonly observed in toroidal confinement devices, even in L-mode discharges. The numerical transport model has been used to examine internal transport barriers and front propagation of internal transport barriers [2]. The transport model couples together density, ion temperature, electron temperature, poloidal flow, toroidal flow, radial electric field, and a fluctuation envelope equation which includes a shear-suppression factor. In this work, we present results from a series of cases using parameters that are typical of TJ-II discharges. The dependence of the critical density threshold on flow damping and Reynolds stress drive is investigated. [1] C. Hidalgo, M. A. Pedrosa, L. Garcia, and A. Ware, "Direct experimental evidence of coupling between sheared flows development and increasing in level of turbulence in the TJ-II stellarator", submitted to Phys. Rev. E. [2] D. E. Newman, B. A. Carreras, D. Lopez-Bruna, P. H. Diamond, and V. B. Lebedev, Phys. Plasmas 5, 938 (1998).

  17. Unstable particles near threshold

    NASA Astrophysics Data System (ADS)

    Chway, Dongjin; Jung, Tae Hyun; Kim, Hyung Do

    2016-07-01

    We explore the physics of unstable particles when the mother particle's mass is approximately the sum of the masses of its daughter particles. In this case, the conventional wave function renormalization factor used for the narrow width approximation is ill-defined. We propose a simple resolution of the problem that allows the use of the narrow width approximation by defining the wave function renormalization factor and the branching ratio in terms of the spectral density. We test the new definitions by calculating the cross section in the Higgs portal model and obtain a significant improvement. In this regime, no single decay width can be assigned to the unstable particle, and non-exponential decay occurs at all time scales.

  18. Femtosecond damage threshold of multilayer metal films

    NASA Astrophysics Data System (ADS)

    Ibrahim, Wael M. G.; Elsayed-Ali, Hani E.; Shinn, Michelle D.; Bonner, Carl E.

    2003-05-01

    With the availability of terawatt laser systems delivering subpicosecond pulses, laser damage to optical components has become the limiting factor for further increases in output peak power. Evaluating different material structures according to their suitability for high-power laser systems is therefore essential. Multi-shot damage experiments, using 110 fs laser pulses at 800 nm, were conducted on polycrystalline single-layer gold films and multilayer (gold-vanadium and gold-titanium) films. The incident laser fluence was varied, in both cases, from 0.1 to 0.6 J/cm2. No evidence of surface damage was apparent in the gold sample up to a fluence of 0.3 J/cm2, whereas the multilayer sample showed the onset of surface damage at the lowest fluence used, 0.1 J/cm2. These damage results contrast with time-resolved ultrafast thermoreflectivity measurements, which revealed a reduction of the thermoreflectivity signal for the multilayer films. This decrease signifies a reduction in the surface electron temperature that should translate into a lower lattice temperature at a later stage; hence, one would expect a higher damage threshold for the multilayer samples. A comparison of the experimental results with the predictions of the two-temperature model (TTM) is presented. The damage threshold of the single-layer gold film corresponds to the melting threshold predicted by the model. In contrast, the multilayer sample damaged at almost one third of the threshold predicted by the TTM. Possible damage mechanisms leading to the early onset of damage in the multilayer films are discussed.
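    The two-temperature model referenced above is commonly written, in its standard one-dimensional form (not necessarily the exact parameterization used in this study), as coupled equations for the electron and lattice temperatures:

```latex
C_e(T_e)\,\frac{\partial T_e}{\partial t}
  = \frac{\partial}{\partial z}\!\left(k_e\,\frac{\partial T_e}{\partial z}\right)
    - G\,(T_e - T_l) + S(z,t),
\qquad
C_l\,\frac{\partial T_l}{\partial t} = G\,(T_e - T_l),
```

    where $C_e$ and $C_l$ are the electron and lattice heat capacities, $k_e$ the electron thermal conductivity, $G$ the electron-phonon coupling constant, and $S(z,t)$ the absorbed laser source term. Damage (melting) is predicted when the lattice temperature reaches the melting point.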

  19. The effect of static pressure on the inertial cavitation threshold.

    PubMed

    Bader, Kenneth B; Raymond, Jason L; Mobley, Joel; Church, Charles C; Felipe Gaitan, D

    2012-08-01

    The amplitude of the acoustic pressure required to nucleate a gas or vapor bubble in a fluid, and to have that bubble undergo an inertial collapse, is termed the inertial cavitation threshold. The magnitude of the inertial cavitation threshold is typically limited by mechanisms other than homogeneous nucleation such that the theoretical maximum is never achieved. However, the onset of inertial cavitation can be suppressed by increasing the static pressure of the fluid. The inertial cavitation threshold was measured in ultrapure water at static pressures up to 30 MPa (300 bars) by exciting a radially symmetric standing wave field in a spherical resonator driven at a resonant frequency of 25.5 kHz. The threshold was found to increase linearly with the static pressure; an exponentially decaying temperature dependence was also found. The nature and properties of the nucleating mechanisms were investigated by comparing the measured thresholds to an independent analysis of the particulate content and available models for nucleation.
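    The reported dependences (linear growth with static pressure, an exponentially decaying temperature term) can be captured in a toy functional form. This is a sketch only: the coefficients below are hypothetical placeholders, not the fitted values from the measurements.

```python
import math

def threshold_model(p_static_mpa, temp_c, a0=1.0, a1=1.3, b=5.0, tau=25.0):
    """Illustrative functional form matching only the reported trends:
    the inertial cavitation threshold rises linearly with static pressure
    and carries an exponentially decaying temperature dependence.
    All coefficients are hypothetical placeholders."""
    return a0 + a1 * p_static_mpa + b * math.exp(-temp_c / tau)

# threshold rises with static pressure and falls with temperature
print(threshold_model(0.1, 20), threshold_model(30.0, 20))
```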

  20. Population-based analysis of Alzheimer’s disease risk alleles implicates genetic interactions

    PubMed Central

    Ebbert, Mark T. W.; Ridge, Perry G.; Wilson, Andrew R.; Sharp, Aaron R.; Bailey, Matthew; Norton, Maria C.; Tschanz, JoAnn T.; Munger, Ronald G.; Corcoran, Christopher D.; Kauwe, John S. K.

    2013-01-01

    Background Reported odds ratios and population attributable fractions (PAF) for late-onset Alzheimer’s disease (LOAD) risk loci (BIN1, ABCA7, CR1, MS4A4E, CD2AP, PICALM, MS4A6A, CD33, and CLU) come from clinically ascertained samples. Little is known about the combined PAF for these LOAD risk alleles and the utility of these combined markers for case-control prediction. Here we evaluate these loci in a large population-based sample to estimate PAF and explore the effects of additive and non-additive interactions on LOAD status prediction performance. Methods 2,419 samples from the Cache County Memory Study were genotyped for APOE and nine LOAD risk loci from AlzGene.org. We used logistic regression and ROC analysis to assess the LOAD status prediction performance of these loci using additive and non-additive models, and compared ORs and PAFs between AlzGene.org and Cache County. Results Odds ratios were comparable between Cache County and AlzGene.org when identical SNPs were genotyped. PAFs from AlzGene.org ranged from 2.25% to 37%; those from Cache County ranged from 0.05% to 20%. Including non-APOE alleles significantly improved LOAD status prediction performance (AUC = 0.80) over APOE alone (AUC = 0.78) when not constrained to an additive relationship (p < 0.03). We identified potential allelic interactions (p-values uncorrected): CD33-MS4A4E (synergy factor [SF] = 5.31; p < 0.003) and CLU-MS4A4E (SF = 3.81; p < 0.016). Conclusions While non-additive interactions between loci significantly improve diagnostic ability, the improvement does not reach the sensitivity or specificity required for clinical use. Nevertheless, these results suggest that understanding gene-gene interactions may be important in resolving Alzheimer’s disease etiology. PMID:23954108
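    Two of the quantities used in this record have simple closed forms. A hedged sketch of Levin's population attributable fraction (treating the odds ratio as an approximation to relative risk, reasonable for rare outcomes) and of a synergy factor measured against a multiplicative no-interaction baseline; the example numbers are illustrative, not the study's estimates.

```python
def paf(prevalence, odds_ratio):
    """Levin's population attributable fraction: the share of cases
    attributable to an exposure, given its prevalence and effect size."""
    excess = prevalence * (odds_ratio - 1.0)
    return excess / (1.0 + excess)

def synergy_factor(or_joint, or_a, or_b):
    """Ratio of the observed joint odds ratio to that expected under a
    multiplicative (no-interaction) model; SF > 1 suggests synergy."""
    return or_joint / (or_a * or_b)

print(round(paf(0.25, 2.0), 3))        # a doubling of odds at 25% prevalence
print(round(synergy_factor(6.0, 1.5, 1.2), 3))
```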

  1. Early Cognitive Deficits in Type 2 Diabetes: A Population-Based Study

    PubMed Central

    Marseglia, Anna; Fratiglioni, Laura; Laukka, Erika J.; Santoni, Giola; Pedersen, Nancy L.; Bäckman, Lars; Xu, Weili

    2016-01-01

    Evidence links type 2 diabetes to dementia risk. However, our knowledge of the initial cognitive deficits in diabetic individuals and the factors that might promote such deficits is still limited. This study aimed to identify the cognitive domains initially impaired by diabetes and the factors that play a role in this first stage. Within the population-based Swedish National Study on Aging and Care–Kungsholmen, 2305 cognitively intact participants aged ≥60 y were identified. Attention/working memory, perceptual speed, category fluency, letter fluency, semantic memory, and episodic memory were assessed. Diabetes (controlled and uncontrolled) and prediabetes were ascertained by clinicians, who also collected information on vascular disorders (hypertension, heart diseases, and stroke) and vascular risk factors (VRFs, including smoking and overweight/obesity). Data were analyzed with linear regression models. Overall, 196 participants (8.5%) had diabetes, of which 144 (73.5%) had elevated glycaemia (uncontrolled diabetes); 571 (24.8%) persons had prediabetes. Diabetes, mainly uncontrolled, was related to lower performance in perceptual speed (β = −1.10 [95% CI −1.98, −0.23]), category fluency (β = −1.27 [95% CI −2.52, −0.03]), and digit span forward (β = −0.35 [95% CI −0.54, −0.17]). Critically, these associations were present only among APOE ɛ4 non-carriers. The associations of diabetes with perceptual speed and category fluency were present only among participants with VRFs or vascular disorders. Diabetes, especially uncontrolled diabetes, is associated with poorer performance in perceptual speed, category fluency, and attention/primary memory. VRFs, vascular disorders, and APOE status play a role in these associations. PMID:27314527

  2. Physical comorbidities in men with mood and anxiety disorders: a population-based study

    PubMed Central

    2013-01-01

    Background The mind-body nexus has been a topic of growing interest. Further data are however required to understand the specific relationship between mood and anxiety disorders and individual physical health conditions, and to verify whether these psychiatric disorders are linked to overall medical burden. Methods This study examined data collected from 942 men, 20 to 97 years old, participating in the Geelong Osteoporosis Study. A lifetime history of mood and anxiety disorders was identified using the Structured Clinical Interview for DSM-IV-TR Research Version, Non-patient edition (SCID-I/NP). The presence of medical conditions (lifetime) was self-reported and confirmed by medical records, medication use or clinical data. Anthropometric measurements and socioeconomic status (SES) were determined and information on medication use and lifestyle was obtained via questionnaire. Logistic regression models were used to test the associations. Results After adjustment for age, socioeconomic status, and health risk factors (body mass index, physical activity and smoking), mood disorders were associated with gastro oesophageal reflux disease (GORD), recurrent headaches, blackouts and/or epilepsy, liver disorders and pulmonary disease in older people, whilst anxiety disorders were significantly associated with thyroid, GORD and other gastrointestinal disorders, and psoriasis. Increased odds of high medical burden were associated with both mood and anxiety disorders. Conclusions Our study provides further population-based evidence supporting the link between mental and physical illness in men. Understanding these associations is not only necessary for individual management, but also to inform the delivery of health promotion messages and health care. PMID:23618390

  3. Healthcare Costs Attributable to Hypertension: Canadian Population-Based Cohort Study.

    PubMed

    Weaver, Colin G; Clement, Fiona M; Campbell, Norm R C; James, Matthew T; Klarenbach, Scott W; Hemmelgarn, Brenda R; Tonelli, Marcello; McBrien, Kerry A

    2015-09-01

    Accurately documenting the current and future costs of hypertension is required to fully understand the potential economic impact of currently available and future interventions to prevent and treat hypertension. The objective of this work was to calculate the healthcare costs attributable to hypertension in Canada and to project these costs to 2020. Using population-based administrative data for the province of Alberta, Canada (>3 million residents) from 2002 to 2010, we identified individuals with and without diagnosed hypertension. We calculated their total healthcare costs and estimated costs attributable to hypertension using a regression model adjusting for comorbidities and sociodemographic factors. We then extrapolated hypertension-attributable costs to the rest of Canada and projected costs to the year 2020. Twenty-one percent of adults in Alberta had diagnosed hypertension in 2010, with a projected increase to 27% by 2020. The average individual with hypertension had annual healthcare costs of $5768, of which $2341 (41%) were attributed to hypertension. In Alberta, the healthcare costs attributable to hypertension were $1.4 billion in 2010. In Canada, the hypertension-attributable costs were estimated to be $13.9 billion in 2010, rising to $20.5 billion by 2020. The increase was ascribed to demographic changes (52%), increasing prevalence (16%), and increasing per-patient costs (32%). Hypertension accounts for a significant proportion of healthcare spending (10.2% of the Canadian healthcare budget) and is projected to rise even further. Interventions to prevent and treat hypertension may play a role in limiting this cost growth.
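    The regression-adjustment idea, in which the coefficient on a hypertension indicator estimates the per-patient attributable cost after controlling for other factors, can be sketched on synthetic data. The $2,341 figure from the abstract is reused only to seed the simulation; the model, covariates, and data are otherwise invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
age = rng.uniform(30, 80, n)
htn = rng.random(n) < 0.21                 # ~21% prevalence, as reported for Alberta 2010
# synthetic annual costs: baseline + age gradient + $2,341 attributable to hypertension
cost = 1_000 + 40 * age + 2_341 * htn + rng.normal(0, 500, n)

# regression adjustment: the coefficient on the hypertension indicator is
# the estimated attributable cost per patient, net of the other covariates
X = np.column_stack([np.ones(n), htn.astype(float), age])
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
per_patient = beta[1]
print(round(per_patient))                  # close to 2341 by construction
```

    The study's actual model additionally adjusts for comorbidities and sociodemographic factors before extrapolating to population totals.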

  4. The changing face of thyroid cancer in a population-based cohort.

    PubMed

    Pathak, K Alok; Leslie, William D; Klonisch, Thomas C; Nason, Richard W

    2013-08-01

    In North America, the incidence of thyroid cancer is increasing by over 6% per year. We studied the trends and factors influencing thyroid cancer incidence, its clinical presentation, and treatment outcome during 1970-2010 in a population-based cohort of 2306 consecutive thyroid cancers in Canada, followed up for a median of 10.5 years. Disease-specific survival (DSS) and disease-free survival were estimated by the Kaplan-Meier method, and the independent influence of various prognostic factors was evaluated by Cox proportional hazard models. Cumulative incidence of deaths resulting from thyroid cancer was calculated by competing risk analysis. A P-value <0.05 was considered to indicate statistical significance. The age-standardized incidence of thyroid cancer, calculated by the direct method, increased from 2.52/100,000 (1970) to 9.37/100,000 (2010). Age at diagnosis, gender distribution, tumor size, and initial tumor stage did not change significantly during this period. The proportion of papillary thyroid cancers increased significantly (P < 0.001) from 58% (1970-1980) to 85.9% (2000-2010), while that of anaplastic cancer fell from 5.7% to 2.1% (P < 0.001). Ten-year DSS improved from 85.4% to 95.6%, and was adversely influenced by anaplastic histology (hazard ratio [HR] = 8.7; P < 0.001), male gender (HR = 1.8; P = 0.001), TNM stage IV (HR = 8.4; P = 0.001), incomplete surgical resection (HR = 2.4; P = 0.002), and age at diagnosis (HR = 1.05 per year; P < 0.001). There was a 373% increase in the incidence of thyroid cancer in Manitoba, with a marked improvement in thyroid cancer-specific survival that was independent of changes in patient demographics, tumor stage, or treatment practices, and is largely attributed to the declining proportion of anaplastic thyroid cancers.
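    Age standardization by the direct method, as used for the incidence trend above, weights age-specific rates by a standard population's age distribution so that rates from different years are comparable despite demographic change. A minimal sketch with hypothetical age bands and weights:

```python
def direct_standardized_rate(counts, person_years, std_pop_weights):
    """Directly age-standardized rate per 100,000: age-specific rates
    (cases / person-years) weighted by a standard population's age
    distribution (weights sum to 1)."""
    rate = 0.0
    for cases, py, w in zip(counts, person_years, std_pop_weights):
        rate += (cases / py) * w
    return 100_000 * rate

# hypothetical three age bands; weights are the standard population's shares
print(direct_standardized_rate([5, 20, 40],
                               [100_000, 150_000, 80_000],
                               [0.5, 0.3, 0.2]))
```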

  5. Pediatric Sleep Disorders and Special Educational Need at 8 Years: A Population-Based Cohort Study

    PubMed Central

    Rao, Trupti; Xu, Linzhi

    2012-01-01

    OBJECTIVES: To examine associations between sleep-disordered breathing (SDB) and behavioral sleep problems (BSPs) through 5 years of age and special educational need (SEN) at 8 years. METHODS: Parents in the Avon Longitudinal Study of Parents and Children reported on children’s snoring, witnessed apnea, and mouth-breathing at 6, 18, 30, 42, and 57 months, from which SDB symptom trajectories, or clusters, were derived. BSPs were based on report of ≥5 of 7 sleep behaviors at each of the 18-, 30-, 42-, and 57-month questionnaires. Parent report of SEN (yes/no) at 8 years was available for 11 049 children with SDB data and 11 467 children with BSP data. Multivariable logistic regression models were used to predict SEN outcome by SDB cluster and by cumulative report of SEN. RESULTS: Controlling for 16 putative confounders, previous history of SDB and BSPs was significantly associated with an SEN. BSPs were associated with a 7% increased odds of SEN (95% confidence interval [CI] 1.01–1.15), for each ∼1-year interval at which a BSP was reported. SDB, overall, was associated with a near 40% increased odds of SEN (95% CI 1.18–1.62). Children in the worst symptom cluster were 60% more likely to have an SEN (95% CI 1.23–2.08). CONCLUSIONS: In this population-based longitudinal study, history of either SDB or BSPs in the first 5 years of life was associated with increased likelihood of SEN at 8 years of age. Findings highlight the need for pediatric sleep disorder screening by early interventionists, early childhood educators, and health professionals. PMID:22945405

  6. Medication use and survival in diabetic patients with kidney cancer: A population-based cohort study.

    PubMed

    Nayan, Madhur; Macdonald, Erin M; Juurlink, David N; Austin, Peter C; Finelli, Antonio; Kulkarni, Girish S; Hamilton, Robert J

    2016-11-01

    Survival rates in kidney cancer have improved little over time, and diabetes may be an independent risk factor for poor survival in kidney cancer. We sought to determine whether medications with putative anti-neoplastic properties (statins, metformin and non-steroidal anti-inflammatory drugs (NSAIDs)) are associated with survival in diabetics with kidney cancer. We conducted a population-based cohort study utilizing linked healthcare databases in Ontario, Canada. Patients were aged 66 or older with newly diagnosed diabetes and a subsequent diagnosis of incident kidney cancer. Receipt of metformin, statins or NSAIDs was defined using prescription claims. The primary outcome was all-cause mortality and the secondary outcome was cancer-specific mortality. We used multivariable Cox proportional hazard regression, with medication use modeled with time-varying and cumulative exposure analyses to account for intermittent use. During the 14-year study period, we studied 613 patients. Current statin use was associated with a markedly reduced risk of death from any cause (adjusted hazard ratio 0.74; 95% CI 0.59-0.91) and death due to kidney cancer (adjusted hazard ratio 0.71; 95% CI 0.51-0.97). However, survival was not associated with current use of metformin or NSAIDs, or cumulative exposure to any of the medications studied. Among diabetic patients with kidney cancer, survival outcomes are associated with active statin use, rather than total cumulative use. These findings support the use of randomized trials to confirm whether diabetics with kidney cancer should be started on a statin at the time of cancer diagnosis to improve survival outcomes.

  7. Incidence and Mortality of Obstructive Lung Disease in Rheumatoid Arthritis: A Population-Based Study

    PubMed Central

    Nannini, Carlotta; Medina-Velasquez, Yimy F.; Achenbach, Sara J.; Crowson, Cynthia S.; Ryu, Jay H.; Vassallo, Robert; Gabriel, Sherine E.; Matteson, Eric L.; Bongartz, Tim

    2014-01-01

    OBJECTIVE Pulmonary disease represents an important extra-articular manifestation of rheumatoid arthritis (RA). While the association of RA and interstitial lung disease is widely acknowledged, obstructive lung disease (OLD) in RA is less well understood. We therefore aimed to assess the incidence, risk factors, and mortality of OLD in patients with RA. METHODS We examined a population-based incident cohort of patients with RA and a comparison cohort of individuals without RA. OLD was defined using a strict composite criterion. Cox proportional hazards models were used to compare OLD incidence between the RA and comparator cohorts, to investigate risk factors, and to explore the impact of OLD on patient survival. RESULTS 594 patients with RA and 596 subjects without RA were followed for a mean of 16.3 and 19.4 years, respectively. The lifetime risk of developing OLD was 9.6% for RA patients and 6.2% for subjects without RA; hazard ratio (HR) 1.54 (95% CI 1.01 to 2.34). The risk of developing OLD was higher among male patients, current or former smokers, and individuals with more severe RA. Survival of RA patients diagnosed with OLD was worse compared to those without OLD (HR 2.09, 95% CI 1.47 to 2.97). CONCLUSION Patients with RA are at higher risk of developing OLD, which is significantly associated with premature mortality. Effective diagnostic and therapeutic strategies to detect and manage OLD in patients with RA may help to improve survivorship in these patients. PMID:23436637

  8. Recurrent Clostridium difficile infection among Medicare patients in nursing homes: A population-based cohort study.

    PubMed

    Zilberberg, Marya D; Shorr, Andrew F; Jesdale, William M; Tjia, Jennifer; Lapane, Kate

    2017-03-01

    We explored the epidemiology and outcomes of Clostridium difficile infection (CDI) recurrence among Medicare patients in a nursing home (NH) whose CDI originated in acute care hospitals. We conducted a retrospective, population-based matched cohort study combining Medicare claims with Minimum Data Set 3.0, including all hospitalized patients age ≥65 years transferred to an NH after hospitalization with CDI 1/2011-11/2012. Incident CDI was defined as ICD-9-CM code 008.45 with no others in the prior 60 days. CDI recurrence was defined as (within 60 days of the last day of CDI treatment): oral metronidazole, oral vancomycin, or fidaxomicin for ≥3 days in the Part D file; or an ICD-9-CM code for CDI (008.45) during a rehospitalization. Cox proportional hazards and linear models, adjusted for age, gender, race, and comorbidities, examined mortality within 60 days and excess hospital days and costs in patients with recurrent CDI compared to those without. Among 14,472 survivors of an index CDI hospitalization discharged to an NH, 4775 suffered a recurrence. Demographics and clinical characteristics at baseline were similar, as was the risk of death (24.2% with vs 24.4% without). The median number of hospitalizations was 2 (IQR 1-3) among those with and 0 (IQR 0-1) among those without recurrence. Adjusted excess hospital days per patient were 20.3 (95% CI 19.1-21.4) and Medicare reimbursements $12,043 (95% CI $11,469-$12,617) in the group with a recurrence. Although recurrent CDI did not increase the risk of death, it was associated with a far higher risk of rehospitalization, excess hospital days, and costs to Medicare.

  9. Long-term sedative use among community-dwelling adults: a population-based analysis

    PubMed Central

    Weymann, Deirdre; Gladstone, Emilie J.; Smolina, Kate; Morgan, Steven G.

    2017-01-01

    Background: Chronic use of benzodiazepines and benzodiazepine-like sedatives (z-drugs) presents substantial risks to people of all ages. We sought to assess trends in long-term sedative use among community-dwelling adults in British Columbia. Methods: Using population-based linked administrative databases, we examined longitudinal trends in age-standardized rates of sedative use among different age groups of community-dwelling adults (age ≥ 18 yr), from 2004 to 2013. For each calendar year, we classified adults as nonusers, short-term users, or long-term users of sedatives based on their patterns of sedative dispensation. For calendar year 2013, we applied cross-sectional analysis and estimated logistic regression models to identify health and socioeconomic risk factors associated with long-term sedative use. Results: More than half (53.4%) of long-term users of sedatives in British Columbia are between ages 18 and 64 years (young and middle-aged adults). From 2004 to 2013, long-term sedative