Science.gov

Sample records for population-based threshold model

  1. Investigations on hydrogen isotope ratios of endogenous urinary steroids: reference-population-based thresholds and proof-of-concept.

    PubMed

    Piper, Thomas; Thomas, Andreas; Thevis, Mario; Saugy, Martial

    2012-09-01

    Carbon isotope ratio (CIR) analysis has been routinely and successfully used in sports drug testing for many years to uncover the misuse of endogenous steroids. One limitation of the method is the availability of steroid preparations exhibiting CIRs equal to endogenous steroids. To overcome this problem, hydrogen isotope ratios (HIR) of endogenous urinary steroids were investigated as a potential complement; results obtained from a reference population of 67 individuals are presented herein. An established sample preparation method was modified and improved to enable separate measurements of each analyte of interest where possible. From the fraction of glucuronidated steroids, pregnanediol, 16-androstenol, 11-ketoetiocholanolone, androsterone (A), etiocholanolone (E), dehydroepiandrosterone (D), 5α- and 5β-androstanediol, testosterone and epitestosterone were included. In addition, sulfate conjugates of A, E, D, epiandrosterone and 17α- and 17β-androstenediol were considered and analyzed after acidic solvolysis. The obtained results enabled the calculation of the first reference-population-based thresholds for HIR of urinary steroids that can readily be applied to routine doping control samples. Proof-of-concept was accomplished by investigating urine specimens collected after a single oral application of testosterone undecanoate. The HIR of most testosterone metabolites were found to be significantly influenced by the exogenous steroid beyond the established threshold values. Additionally, one regular doping control sample with an extraordinary testosterone/epitestosterone ratio of 100 without suspicious CIR was subjected to the complementary methodology of HIR analysis. The HIR data eventually provided evidence for the exogenous origin of urinary testosterone metabolites. Despite further investigations on HIR being advisable to corroborate the presented reference-population-based thresholds, the developed method proved to be a new tool supporting modern

  3. Threshold models in radiation carcinogenesis

    SciTech Connect

    Hoel, D.G.; Li, P.

    1998-09-01

    Cancer incidence and mortality data from the atomic bomb survivors cohort have been analyzed to allow for the possibility of a threshold dose response. The same dose-response models as used in the original papers were fit to the data. The estimated cancer incidence from the fitted models over-predicted the observed cancer incidence in the lowest exposure group. This is consistent with a threshold or nonlinear dose-response at low doses. Thresholds were added to the dose-response models and the range of possible thresholds is shown for both solid tumor cancers as well as the different leukemia types. This analysis suggests that the A-bomb cancer incidence data agree more with a threshold or nonlinear dose-response model than with a purely linear model, although the linear model is statistically equivalent. This observation is not found with the mortality data. For both the incidence data and the mortality data, the addition of a threshold term significantly improves the fit to the linear or linear-quadratic dose response for both total leukemias and also for the leukemia subtypes of ALL, AML, and CML.
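
    A short, self-contained sketch of this kind of analysis (the dose groups, person-years and case counts below are invented for illustration and are not the A-bomb data; the model is a Poisson excess-relative-risk form with an optional dose threshold):

        import numpy as np
        from scipy.optimize import minimize

        # Hypothetical grouped data: mean dose (Sv), person-years at risk,
        # and observed cancer cases per dose group.
        dose  = np.array([0.005, 0.05, 0.15, 0.35, 0.75, 1.50, 2.50])
        pyr   = np.array([4.0e5, 2.5e5, 1.2e5, 6.0e4, 3.0e4, 1.5e4, 5.0e3])
        cases = np.array([815,   515,   252,   133,    74,    44,    19])

        def neg_loglik(params, threshold):
            """Poisson negative log-likelihood with expected cases
            mu = lam0 * PY * (1 + beta * max(dose - threshold, 0))."""
            lam0, beta = params
            mu = lam0 * pyr * (1.0 + beta * np.maximum(dose - threshold, 0.0))
            return np.sum(mu - cases * np.log(mu))

        def fitted_deviance(threshold):
            res = minimize(neg_loglik, x0=[cases.sum() / pyr.sum(), 0.5],
                           args=(threshold,), method="L-BFGS-B",
                           bounds=[(1e-9, None), (0.0, None)])
            return 2.0 * res.fun

        # Profile candidate thresholds against the purely linear model (t = 0 Sv).
        d_linear = fitted_deviance(0.0)
        for t in (0.0, 0.05, 0.10, 0.20, 0.30):
            print(f"threshold = {t:.2f} Sv, deviance change vs linear = "
                  f"{d_linear - fitted_deviance(t):+.2f}")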

  4. Universal Screening for Emotional and Behavioral Problems: Fitting a Population-Based Model

    ERIC Educational Resources Information Center

    Schanding, G. Thomas, Jr.; Nowell, Kerri P.

    2013-01-01

    Schools have begun to adopt a population-based method to conceptualizing assessment and intervention of students; however, little empirical evidence has been gathered to support this shift in service delivery. The present study examined the fit of a population-based model in identifying students' behavioral and emotional functioning using a…

  5. Population based models of cortical drug response: insights from anaesthesia

    PubMed Central

    Bojak, Ingo; Liley, David T. J.

    2008-01-01

    A great explanatory gap lies between the molecular pharmacology of psychoactive agents and the neurophysiological changes they induce, as recorded by neuroimaging modalities. Causally relating the cellular actions of psychoactive compounds to their influence on population activity is experimentally challenging. Recent developments in the dynamical modelling of neural tissue have attempted to span this explanatory gap between microscopic targets and their macroscopic neurophysiological effects via a range of biologically plausible dynamical models of cortical tissue. Such theoretical models allow exploration of neural dynamics, in particular their modification by drug action. The ability to theoretically bridge scales is due to a biologically plausible averaging of cortical tissue properties. In the resulting macroscopic neural field, individual neurons need not be explicitly represented (as in neural networks). The following paper aims to provide a non-technical introduction to the mean field population modelling of drug action and its recent successes in modelling anaesthesia. PMID:19003456

  6. Estimating and modeling the cure fraction in population-based cancer survival analysis.

    PubMed

    Lambert, Paul C; Thompson, John R; Weston, Claire L; Dickman, Paul W

    2007-07-01

    In population-based cancer studies, cure is said to occur when the mortality (hazard) rate in the diseased group of individuals returns to the same level as that expected in the general population. The cure fraction (the proportion of patients cured of disease) is of interest to patients and is a useful measure to monitor trends in survival of curable disease. There are 2 main types of cure fraction model, the mixture cure fraction model and the non-mixture cure fraction model, with most previous work concentrating on the mixture cure fraction model. In this paper, we extend the parametric non-mixture cure fraction model to incorporate background mortality, thus providing estimates of the cure fraction in population-based cancer studies. We compare the estimates of relative survival and the cure fraction between the 2 types of model and also investigate the importance of modeling the ancillary parameters in the selected parametric distribution for both types of model.
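
    For readers unfamiliar with the two classes, the usual formulations can be stated compactly (the notation here is generic shorthand, not necessarily the paper's):

        mixture:      \( S(t) = S^{*}(t)\,[\pi + (1 - \pi)\,S_{u}(t)] \)
        non-mixture:  \( S(t) = S^{*}(t)\,\pi^{F(t)} \)

    where \( S^{*}(t) \) is the expected (background) survival, \( \pi \) the cure fraction, \( S_{u}(t) \) the survival of the uncured, and \( F(t) \) a proper distribution function; in both cases the modelled relative survival tends to \( \pi \) as \( t \to \infty \).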

  7. Models of population-based analyses for data collected from large extended families

    PubMed Central

    Lee, Elisa T.; Howard, Barbara V.; Fabsitz, Richard R.; Devereux, Richard B.; MacCluer, Jean W.; Laston, Sandra; Comuzzie, Anthony G.; Shara, Nawar M.; Welty, Thomas K.

    2014-01-01

    Large studies of extended families usually collect valuable phenotypic data that may have scientific value for purposes other than testing genetic hypotheses if the families were not selected in a biased manner. These purposes include assessing population-based associations of diseases with risk factors/covariates and estimating population characteristics such as disease prevalence and incidence. Relatedness among participants, however, violates the traditional assumption of independent observations in these classic analyses. The commonly used adjustment method for relatedness in population-based analyses is to use marginal models, in which clusters (families) are assumed to be independent (unrelated) with a simple and identical covariance (family) structure such as those called independent, exchangeable and unstructured covariance structures. However, using these simple covariance structures may not be optimally appropriate for outcomes collected from large extended families, and may under- or over-estimate the variances of estimators and thus lead to uncertainty in inferences. Moreover, the assumption that families are unrelated with an identical family structure in a marginal model may not be satisfied for family studies with large extended families. The aim of this paper is to propose models incorporating marginal model approaches with a covariance structure for assessing population-based associations of diseases with their risk factors/covariates and estimating population characteristics for epidemiological studies while adjusting for the complicated relatedness among outcomes (continuous/categorical, normally/non-normally distributed) collected from large extended families. We also discuss theoretical issues of the proposed models and show that the proposed models and covariance structure are appropriate for and capable of achieving the aim. PMID:20882324

  8. Mixture models for cancer survival analysis: application to population-based data with covariates.

    PubMed

    De Angelis, R; Capocaccia, R; Hakulinen, T; Soderman, B; Verdecchia, A

    1999-02-28

    The interest in estimating the probability of cure has been increasing in cancer survival analysis as the curability of many cancer diseases is becoming a reality. Mixture survival models provide a way of modelling time to death when cure is possible, simultaneously estimating the death hazard of fatal cases and the proportion of cured cases. In this paper we propose an application of a parametric mixture model to relative survival rates of colon cancer patients from the Finnish population-based cancer registry, including major survival determinants as explanatory covariates. Disentangling survival into two different components greatly facilitates the analysis and the interpretation of the role of prognostic factors on survival patterns. For example, age plays a different role in determining, on the one hand, the probability of cure and, on the other, the life expectancy of fatal cases. The results support the hypothesis that observed survival trends are due to a real prognostic gain for more recently diagnosed patients.

  9. Differential equation models for sharp threshold dynamics.

    PubMed

    Schramm, Harrison C; Dimitrov, Nedialko B

    2014-01-01

    We develop an extension to differential equation models of dynamical systems to allow us to analyze probabilistic threshold dynamics that fundamentally and globally change system behavior. We apply our novel modeling approach to two cases of interest: a model of infectious disease modified for malware where a detection event drastically changes dynamics by introducing a new class in competition with the original infection; and the Lanchester model of armed conflict, where the loss of a key capability drastically changes the effectiveness of one of the sides. We derive and demonstrate a step-by-step, repeatable method for applying our novel modeling approach to an arbitrary system, and we compare the resulting differential equations to simulations of the system's random progression. Our work leads to a simple and easily implemented method for analyzing probabilistic threshold dynamics using differential equations. PMID:24184349
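
    A minimal sketch of the general idea using scipy's ODE solver (the SIR-type malware model, parameter values and detection threshold below are illustrative choices of this note, not the authors' equations): integrate until a detection event fires when the infected fraction crosses a threshold, then restart with an added "patched" class that competes with the original infection.

        import numpy as np
        from scipy.integrate import solve_ivp

        beta, gamma = 0.5, 0.1   # infection and recovery rates (illustrative)
        delta = 0.4              # patching rate once the malware is detected
        I_DETECT = 0.05          # detection threshold on the infected fraction

        def pre_detection(t, y):
            s, i = y
            return [-beta * s * i, beta * s * i - gamma * i]

        def detection_event(t, y):
            return y[1] - I_DETECT
        detection_event.terminal = True
        detection_event.direction = 1

        # Phase 1: ordinary spread until the threshold is crossed.
        sol1 = solve_ivp(pre_detection, (0, 200), [0.999, 0.001],
                         events=detection_event)

        def post_detection(t, y):
            # After detection, susceptibles are also patched (class p), which
            # competes with the original infection for hosts.
            s, i, p = y
            return [-beta * s * i - delta * s,
                    beta * s * i - gamma * i,
                    delta * s]

        t_star = sol1.t[-1]
        s_star, i_star = sol1.y[:, -1]
        sol2 = solve_ivp(post_detection, (t_star, 200), [s_star, i_star, 0.0])

        print(f"detection at t = {t_star:.1f}, infected fraction = {i_star:.3f}")
        print(f"final infected fraction = {sol2.y[1, -1]:.4f}")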

  11. Scalable Entity-Based Modeling of Population-Based Systems, Final LDRD Report

    SciTech Connect

    Cleary, A J; Smith, S G; Vassilevska, T K; Jefferson, D R

    2005-01-27

    The goal of this project has been to develop tools, capabilities and expertise in the modeling of complex population-based systems via scalable entity-based modeling (EBM). Our initial focal application domain has been the dynamics of large populations exposed to disease-causing agents, a topic of interest to the Department of Homeland Security in the context of bioterrorism. In the academic community, discrete simulation technology based on individual entities has shown initial success, but the technology has not been scaled to the problem sizes or computational resources of LLNL. Our developmental emphasis has been on the extension of this technology to parallel computers and maturation of the technology from an academic to a lab setting.

  12. Comprehensive, Population-Based Sensitivity Analysis of a Two-Mass Vocal Fold Model

    PubMed Central

    Robertson, Daniel; Zañartu, Matías; Cook, Douglas

    2016-01-01

    Previous vocal fold modeling studies have generally focused on generating detailed data regarding a narrow subset of possible model configurations. These studies can be interpreted to be the investigation of a single subject under one or more vocal conditions. In this study, a broad population-based sensitivity analysis is employed to examine the behavior of a virtual population of subjects and to identify trends between virtual individuals as opposed to investigating a single subject or model instance. Four different sensitivity analysis techniques were used in accomplishing this task. Influential relationships between model input parameters and model outputs were identified, and an exploration of the model’s parameter space was conducted. Results indicate that the behavior of the selected two-mass model is largely dominated by complex interactions, and that few input-output pairs have a consistent effect on the model. Results from the analysis can be used to increase the efficiency of optimization routines of reduced-order models used to investigate voice abnormalities. Results also demonstrate the types of challenges and difficulties to be expected when applying sensitivity analyses to more complex vocal fold models. Such challenges are discussed and recommendations are made for future studies. PMID:26845452

  13. A threshold model of investor psychology

    NASA Astrophysics Data System (ADS)

    Cross, Rod; Grinfeld, Michael; Lamba, Harbir; Seaman, Tim

    2005-08-01

    We introduce a class of agent-based market models founded upon simple descriptions of investor psychology. Agents are subject to various psychological tensions induced by market conditions and endowed with a minimal ‘personality’. This personality consists of a threshold level for each of the tensions being modeled, and the agent reacts whenever a tension threshold is reached. This paper considers an elementary model including just two such tensions. The first is ‘cowardice’, which is the stress caused by remaining in a minority position with respect to overall market sentiment and leads to herding-type behavior. The second is ‘inaction’, which is the increasing desire to act or re-evaluate one's investment position. There is no inductive learning by agents and they are only coupled via the global market price and overall market sentiment. Even incorporating just these two psychological tensions, important stylized facts of real market data, including fat-tails, excess kurtosis, uncorrelated price returns and clustered volatility over the timescale of a few days are reproduced. By then introducing an additional parameter that amplifies the effect of externally generated market noise during times of extreme market sentiment, long-time volatility correlations can also be recovered.
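
    A toy simulation in the spirit of such threshold-based agent models (only the "cowardice" tension is implemented, and the update rules, parameter values and price map are simplifications of this note, not the authors' specification):

        import numpy as np

        rng = np.random.default_rng(0)
        N, T = 500, 2000
        position  = rng.choice([-1, 1], size=N)        # each agent's market position
        threshold = rng.uniform(2.0, 10.0, size=N)     # per-agent tension threshold
        tension   = np.zeros(N)
        returns   = []

        for _ in range(T):
            sentiment = position.mean()
            # Agents in the minority accumulate "cowardice" tension.
            majority_sign = np.sign(sentiment) if sentiment != 0 else 1
            in_minority = position != majority_sign
            tension[in_minority] += abs(sentiment)
            tension[~in_minority] = 0.0
            # Agents whose tension reaches their threshold switch sides.
            switch = tension >= threshold
            position[switch] *= -1
            tension[switch] = 0.0
            # Price return: change in sentiment plus exogenous noise.
            returns.append(0.1 * (position.mean() - sentiment)
                           + 0.01 * rng.standard_normal())

        r = np.array(returns)
        excess_kurtosis = ((r - r.mean()) ** 4).mean() / r.var() ** 2 - 3.0
        print(f"excess kurtosis of simulated returns: {excess_kurtosis:.2f}")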

  14. Intra-individual variation of GH-dependent markers in athletes: comparison of population based and individual thresholds for detection of GH abuse in sports.

    PubMed

    Kniess, Astrid; Ziegler, Eckart; Thieme, Detlef; Müller, R Klaus

    2013-10-01

    The GH-2000 discriminant functions, using insulin-like growth factor I (IGF-I) and the N-terminal propeptide of type III procollagen (PIIINP), enabled the detection of growth hormone (GH) doping despite the broad inter-individual normal range of both peptides. The sensitivity of the discriminant function-based methodology may perhaps be further increased in future by applying individual athlete profiles. The purpose of the present study was to evaluate the intra-individual variability of IGF-I, PIIINP and the GH-2000 scores in athletes. For this purpose a total of eight blood samples were taken from each of fifty male and female elite athletes over a period of up to 18 months. The IGF-I and PIIINP levels, we found, lay predominantly within the reference range for elite athletes. The intra-individual variability for IGF-I ranged between 6 and 26%, while that for PIIINP ranged between 6 and 33%. The intra-individual variations of both parameters were higher in female than in male subjects and were found to be mostly moderate. We found that the intra-individual variations of the GH-2000 test scores, expressed as CV, ranged from 4 to 36% and were in most of the subjects markedly smaller than the inter-individual variation. Individual cut-offs for the GH-2000 scores would be lower than population based ones in most of the cases.
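
    The contrast between population-based and individual cut-offs can be illustrated in a few lines (scores are simulated, and the one-sided multiplier 2.33 for a ~99% limit under normality is an assumption of this sketch, not the GH-2000 specification):

        import numpy as np

        rng = np.random.default_rng(1)

        # Simulated GH-2000-type scores: 50 athletes x 8 serial samples.
        subject_mean = rng.normal(5.0, 1.0, size=50)      # inter-individual spread
        scores = subject_mean[:, None] + rng.normal(0.0, 0.25, size=(50, 8))

        # Population-based cut-off: overall mean + z * overall SD.
        pop_cutoff = scores.mean() + 2.33 * scores.std(ddof=1)

        # Individual cut-offs: each athlete's own mean + z * own SD.
        ind_cutoff = scores.mean(axis=1) + 2.33 * scores.std(axis=1, ddof=1)

        # Intra-individual variability expressed as CV (%).
        cv = 100.0 * scores.std(axis=1, ddof=1) / scores.mean(axis=1)

        print(f"population cut-off: {pop_cutoff:.2f}")
        print(f"individual cut-offs below the population one: "
              f"{np.mean(ind_cutoff < pop_cutoff):.0%}")
        print(f"median intra-individual CV: {np.median(cv):.1f}%")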

  15. On the two steps threshold selection for over-threshold modelling of extreme events

    NASA Astrophysics Data System (ADS)

    Bernardara, Pietro; Mazas, Franck; Weiss, Jerome; Andreewsky, Marc; Kergadallan, Xavier; Benoit, Michel; Hamm, Luc

    2013-04-01

    The estimation of the probability of occurrence of extreme events is traditionally achieved by fitting a probability distribution to a sample of extreme observations. In particular, the extreme value theory (EVT) states that values exceeding a given threshold converge to a Generalized Pareto Distribution (GPD) if the original sample is composed of independent and identically distributed values. However, the temporal series of sea and ocean variables usually show strong temporal autocorrelation. Traditionally, in order to select independent events for the following statistical analysis, the concept of a physical threshold is introduced: events that exceed that threshold are defined as "extreme events". This is the so-called "Peak Over Threshold (POT)" sampling, widespread in the literature and currently used for engineering applications among many others. In the past, the threshold for the statistical sampling of extreme values asymptotically converging toward the GPD and the threshold for the physical selection of independent extreme events were confused, as the same threshold was used both to sample data and to meet the hypothesis of extreme value convergence, leading to some inconsistencies. In particular, if the two steps are performed simultaneously, the number of peaks over the threshold can increase but also decrease when the threshold decreases. This is logical from a physical point of view, since the definition of the sample of "extreme events" changes, but it is not coherent with the statistical theory. We introduce a two-step threshold selection for over-threshold modelling, aiming to discriminate (i) a physical threshold for the selection of extreme and independent events, and (ii) a statistical threshold for the optimization of the coherence with the hypothesis of the EVT. The former is a physical event identification procedure (also called "declustering") aiming at selecting independent extreme events. The latter is a purely statistical optimization
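
    A minimal sketch of the two-step idea with scipy (synthetic data; the 48-step separation rule, the 95th-percentile physical threshold and the statistical threshold choice are placeholders of this note, not the paper's recommendations):

        import numpy as np
        from scipy.stats import genpareto

        rng = np.random.default_rng(2)

        # Synthetic autocorrelated series (AR(1) with heavy-tailed innovations).
        n = 20000
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = 0.8 * x[t - 1] + rng.standard_t(df=4)

        # Step 1 (physical threshold + declustering): one peak per independent
        # event; a new event starts when exceedances are more than 48 steps apart.
        u_phys = np.quantile(x, 0.95)
        idx = np.flatnonzero(x > u_phys)
        peaks, cluster = [], [idx[0]]
        for i in idx[1:]:
            if i - cluster[-1] > 48:
                peaks.append(x[cluster].max())
                cluster = [i]
            else:
                cluster.append(i)
        peaks.append(x[cluster].max())
        peaks = np.array(peaks)

        # Step 2 (statistical threshold): fit a GPD to peak excesses over u_stat,
        # chosen at least as high as u_phys and tuned for convergence to the GPD.
        u_stat = np.quantile(peaks, 0.50)
        excess = peaks[peaks > u_stat] - u_stat
        shape, loc, scale = genpareto.fit(excess, floc=0.0)
        print(f"{peaks.size} independent events, {excess.size} used in the GPD fit")
        print(f"GPD shape = {shape:.2f}, scale = {scale:.2f}")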

  16. How are population-based funding formulae for healthcare composed? A comparative analysis of seven models

    PubMed Central

    2013-01-01

    Background Population-based funding formulae act as an important means of promoting equitable health funding structures. To evaluate how policy makers in different jurisdictions construct health funding formulae and build an understanding of contextual influences underpinning formula construction, we carried out a comparative analysis of key components of funding formulae across seven high-income and predominantly publicly financed health systems: New Zealand, England, Scotland, the Netherlands, the state of New South Wales in Australia, the Canadian province of Ontario, and the city of Stockholm, Sweden. Methods Core components from each formula were summarised and key similarities and differences evaluated from a compositional perspective. We categorised approaches to constructing funding formulae under three main themes: identifying factors which predict differential need amongst populations; adjusting for cost factors outside of needs factors; and engaging in normative correction of allocations for ‘unmet’ need. Results We found significant congruence in the factors used to guide need and cost adjustments. However, there is considerable variation in interpretation and implementation of these factors. Conclusion Despite broadly similar frameworks, there are distinct differences in the composition of the formulae across the seven health systems. Ultimately, the development of funding formulae is a dynamic process, subject to availability of data reflecting health needs, the influence of wider socio-political objectives and health system determinants. PMID:24209410

  17. Simulation of population-based commuter exposure to NO₂ using different air pollution models.

    PubMed

    Ragettli, Martina S; Tsai, Ming-Yi; Braun-Fahrländer, Charlotte; de Nazelle, Audrey; Schindler, Christian; Ineichen, Alex; Ducret-Stich, Regina E; Perez, Laura; Probst-Hensch, Nicole; Künzli, Nino; Phuleria, Harish C

    2014-05-12

    We simulated commuter routes and long-term exposure to traffic-related air pollution during commute in a representative population sample in Basel (Switzerland), and evaluated three air pollution models with different spatial resolution for estimating commute exposures to nitrogen dioxide (NO2) as a marker of long-term exposure to traffic-related air pollution. Our approach includes spatially and temporally resolved data on actual commuter routes, travel modes and three air pollution models. Annual mean NO2 commuter exposures were similar between models. However, we found more within-city and within-subject variability in annual mean (±SD) NO2 commuter exposure with a high resolution dispersion model (40 ± 7 µg m(-3), range: 21-61) than with a dispersion model with a lower resolution (39 ± 5 µg m(-3); range: 24-51), and a land use regression model (41 ± 5 µg m(-3); range: 24-54). Highest median cumulative exposures were calculated along motorized transport and bicycle routes, and the lowest for walking. For estimating commuter exposure within a city and being interested also in small-scale variability between roads, a model with a high resolution is recommended. For larger scale epidemiological health assessment studies, models with a coarser spatial resolution are likely sufficient, especially when study areas include suburban and rural areas.

  18. Simulation of Population-Based Commuter Exposure to NO2 Using Different Air Pollution Models

    PubMed Central

    Ragettli, Martina S.; Tsai, Ming-Yi; Braun-Fahrländer, Charlotte; de Nazelle, Audrey; Schindler, Christian; Ineichen, Alex; Ducret-Stich, Regina E.; Perez, Laura; Probst-Hensch, Nicole; Künzli, Nino; Phuleria, Harish C.

    2014-01-01

    We simulated commuter routes and long-term exposure to traffic-related air pollution during commute in a representative population sample in Basel (Switzerland), and evaluated three air pollution models with different spatial resolution for estimating commute exposures to nitrogen dioxide (NO2) as a marker of long-term exposure to traffic-related air pollution. Our approach includes spatially and temporally resolved data on actual commuter routes, travel modes and three air pollution models. Annual mean NO2 commuter exposures were similar between models. However, we found more within-city and within-subject variability in annual mean (±SD) NO2 commuter exposure with a high resolution dispersion model (40 ± 7 µg m−3, range: 21–61) than with a dispersion model with a lower resolution (39 ± 5 µg m−3; range: 24–51), and a land use regression model (41 ± 5 µg m−3; range: 24–54). Highest median cumulative exposures were calculated along motorized transport and bicycle routes, and the lowest for walking. For estimating commuter exposure within a city and being interested also in small-scale variability between roads, a model with a high resolution is recommended. For larger scale epidemiological health assessment studies, models with a coarser spatial resolution are likely sufficient, especially when study areas include suburban and rural areas. PMID:24823664

  19. Uncertainties in the Modelled CO2 Threshold for Antarctic Glaciation

    NASA Technical Reports Server (NTRS)

    Gasson, E.; Lunt, D. J.; DeConto, R.; Goldner, A.; Heinemann, M.; Huber, M.; LeGrande, A. N.; Pollard, D.; Sagoo, N.; Siddall, M.; Winguth, A.; Valdes, P. J.

    2014-01-01

    The frequently cited atmospheric CO2 threshold for the onset of Antarctic glaciation of approximately 780 parts per million by volume is based on the study of DeConto and Pollard (2003) using an ice sheet model and the GENESIS climate model. Proxy records suggest that atmospheric CO2 concentrations passed through this threshold across the Eocene-Oligocene transition approximately 34 million years ago. However, atmospheric CO2 concentrations may have been close to this threshold earlier than this transition, which is used by some to suggest the possibility of Antarctic ice sheets during the Eocene. Here we investigate the climate model dependency of the threshold for Antarctic glaciation by performing offline ice sheet model simulations using the climate from 7 different climate models with Eocene boundary conditions (HadCM3L, CCSM3, CESM1.0, GENESIS, FAMOUS, ECHAM5 and GISS_ER). These climate simulations are sourced from a number of independent studies, and as such the boundary conditions, which are poorly constrained during the Eocene, are not identical between simulations. The results of this study suggest that the atmospheric CO2 threshold for Antarctic glaciation is highly dependent on the climate model used and the climate model configuration. A large discrepancy between the climate model and ice sheet model grids for some simulations leads to a strong sensitivity to the lapse rate parameter.

  20. The adverse effect of spasticity on 3-month poststroke outcome using a population-based model.

    PubMed

    Belagaje, S R; Lindsell, C; Moomaw, C J; Alwell, K; Flaherty, M L; Woo, D; Dunning, K; Khatri, P; Adeoye, O; Kleindorfer, D; Broderick, J; Kissela, B

    2014-01-01

    Several devices and medications have been used to address poststroke spasticity. Yet, spasticity's impact on outcomes remains controversial. Using data from a cohort of 460 ischemic stroke patients, we previously published a validated multivariable regression model for predicting 3-month modified Rankin Score (mRS) as an indicator of functional outcome. Here, we tested whether including spasticity improved model fit and estimated the effect spasticity had on the outcome. Spasticity was defined by a positive response to the question "Did you have spasticity following your stroke?" on direct interview at 3 months from stroke onset. Patients who had expired by 90 days (n = 30) or did not have spasticity data available (n = 102) were excluded. Spasticity affected the 3-month functional status (β = 0.420, 95% CI = 0.194 to 0.645) after accounting for age, diabetes, leukoaraiosis, and retrospective NIHSS. Using spasticity as a covariable, the model's R2 changed from 0.599 to 0.622. In our model, the presence of spasticity in the cohort was associated with a worsened 3-month mRS by an average of 0.4 after adjusting for known covariables. This significant adverse effect on functional outcomes adds predictive value beyond previously established factors. PMID:25147752
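
    The modelling step described here, adding a binary spasticity indicator to a linear model of the 3-month mRS and comparing R2, looks roughly like the following (all values are simulated and the variable names are illustrative, not the study's data):

        import numpy as np

        rng = np.random.default_rng(3)
        n = 328   # cohort size after exclusions, per the abstract

        # Simulated covariates: age, diabetes, leukoaraiosis, retrospective NIHSS,
        # plus a binary spasticity indicator.
        age   = rng.normal(68, 12, n)
        dm    = rng.integers(0, 2, n)
        leuko = rng.integers(0, 2, n)
        nihss = rng.poisson(6, n)
        spast = rng.integers(0, 2, n)
        mrs   = (0.02 * age + 0.3 * dm + 0.3 * leuko + 0.15 * nihss
                 + 0.42 * spast + rng.normal(0, 0.8, n))

        def fit_r2(columns, y):
            X = np.column_stack([np.ones(len(y)), *columns])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            return 1.0 - resid.var() / y.var(), beta

        r2_base, _    = fit_r2([age, dm, leuko, nihss], mrs)
        r2_full, beta = fit_r2([age, dm, leuko, nihss, spast], mrs)
        print(f"R^2 without spasticity: {r2_base:.3f}, with: {r2_full:.3f}")
        print(f"estimated spasticity coefficient: {beta[-1]:.3f}")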

  1. The Adverse Effect of Spasticity on 3-Month Poststroke Outcome Using a Population-Based Model

    PubMed Central

    Belagaje, S. R.; Lindsell, C.; Moomaw, C. J.; Alwell, K.; Flaherty, M. L.; Woo, D.; Dunning, K.; Khatri, P.; Adeoye, O.; Kleindorfer, D.; Broderick, J.; Kissela, B.

    2014-01-01

    Several devices and medications have been used to address poststroke spasticity. Yet, spasticity's impact on outcomes remains controversial. Using data from a cohort of 460 ischemic stroke patients, we previously published a validated multivariable regression model for predicting 3-month modified Rankin Score (mRS) as an indicator of functional outcome. Here, we tested whether including spasticity improved model fit and estimated the effect spasticity had on the outcome. Spasticity was defined by a positive response to the question “Did you have spasticity following your stroke?” on direct interview at 3 months from stroke onset. Patients who had expired by 90 days (n = 30) or did not have spasticity data available (n = 102) were excluded. Spasticity affected the 3-month functional status (β = 0.420, 95% CI = 0.194 to 0.645) after accounting for age, diabetes, leukoaraiosis, and retrospective NIHSS. Using spasticity as a covariable, the model's R2 changed from 0.599 to 0.622. In our model, the presence of spasticity in the cohort was associated with a worsened 3-month mRS by an average of 0.4 after adjusting for known covariables. This significant adverse effect on functional outcomes adds predictive value beyond previously established factors. PMID:25147752

  2. Genetic variance of tolerance and the toxicant threshold model.

    PubMed

    Tanaka, Yoshinari; Mano, Hiroyuki; Tatsuta, Haruki

    2012-04-01

    A statistical genetics method is presented for estimating the genetic variance (heritability) of tolerance to pollutants on the basis of a standard acute toxicity test conducted on several isofemale lines of cladoceran species. To analyze the genetic variance of tolerance in the case when the response is measured as a few discrete states (quantal endpoints), the authors attempted to apply the threshold character model in quantitative genetics to the threshold model separately developed in ecotoxicology. The integrated threshold model (toxicant threshold model) assumes that the response of a particular individual occurs at a threshold toxicant concentration and that the individual tolerance characterized by the individual's threshold value is determined by genetic and environmental factors. As a case study, the heritability of tolerance to p-nonylphenol in the cladoceran species Daphnia galeata was estimated by using the maximum likelihood method and nested analysis of variance (ANOVA). Broad-sense heritability was estimated to be 0.199 ± 0.112 by the maximum likelihood method and 0.184 ± 0.089 by ANOVA; both results implied that the species examined had the potential to acquire tolerance to this substance by evolutionary change.
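
    The variance-components route to broad-sense heritability can be sketched with a one-way ANOVA on simulated isofemale-line data (the liability-threshold likelihood used in the paper is not reproduced here; line and replicate counts are illustrative):

        import numpy as np

        rng = np.random.default_rng(4)

        n_lines, n_rep = 12, 10
        line_effect = rng.normal(0.0, 0.5, n_lines)      # between-line (genetic) effects
        y = line_effect[:, None] + rng.normal(0.0, 1.0, (n_lines, n_rep))

        # One-way ANOVA mean squares.
        grand_mean = y.mean()
        ms_between = n_rep * ((y.mean(axis=1) - grand_mean) ** 2).sum() / (n_lines - 1)
        ms_within  = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (n_lines * (n_rep - 1))

        # Variance components and the intraclass correlation, which for clonal or
        # isofemale lines estimates broad-sense heritability of the (liability) trait.
        var_between = max((ms_between - ms_within) / n_rep, 0.0)
        H2 = var_between / (var_between + ms_within)
        print(f"broad-sense heritability estimate: {H2:.3f}")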

  3. Validation and extension of the PREMM1,2 model in a population-based cohort of colorectal cancer patients

    PubMed Central

    Balaguer, Francesc; Balmaña, Judith; Castellví-Bel, Sergi; Steyerberg, Ewout W.; Andreu, Montserrat; Llor, Xavier; Jover, Rodrigo; Syngal, Sapna; Castells, Antoni

    2008-01-01

    Summary Background and aims Early recognition of patients at risk for Lynch syndrome is critical but often difficult. Recently, a predictive algorithm, the PREMM1,2 model, has been developed to quantify the risk of carrying a germline mutation in the mismatch repair (MMR) genes, MLH1 and MSH2. However, its performance in an unselected, population-based colorectal cancer population as well as its performance in combination with tumor MMR testing are unknown. Methods We included all colorectal cancer cases from the EPICOLON study, a prospective, multicenter, population-based cohort (n=1,222). All patients underwent tumor microsatellite instability analysis and immunostaining for MLH1 and MSH2, and those with MMR deficiency (n=91) underwent tumor BRAF V600E mutation analysis and MLH1/MSH2 germline testing. Results The PREMM1,2 model with a ≥5% cut-off had a sensitivity, specificity and positive predictive value (PPV) of 100%, 68% and 2%, respectively. The use of a higher PREMM1,2 cut-off provided a higher specificity and PPV, at the expense of a lower sensitivity. The combination of a ≥5% cut-off with tumor MMR testing maintained 100% sensitivity with an increased specificity (97%) and PPV (21%). The PPV of a PREMM1,2 score ≥20% alone (16%) approached the PPV obtained with PREMM1,2 score ≥5% combined with tumor MMR testing. In addition, a PREMM1,2 score of <5% was associated with a high likelihood of a BRAF V600E mutation. Conclusions The PREMM1,2 model is useful to identify MLH1/MSH2 mutation carriers among unselected colorectal cancer patients. Quantitative assessment of the genetic risk might be useful to decide on subsequent tumor MMR and germline testing. PMID:18061181
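
    The performance figures quoted above follow from a 2x2 table of carrier status against the rule "score >= cutoff"; a generic helper (example numbers are fabricated, not the EPICOLON data) would be:

        import numpy as np

        def screen_performance(score, carrier, cutoff):
            """Sensitivity, specificity and PPV of the rule score >= cutoff."""
            flagged = score >= cutoff
            tp = np.sum(flagged & carrier)
            fp = np.sum(flagged & ~carrier)
            fn = np.sum(~flagged & carrier)
            tn = np.sum(~flagged & ~carrier)
            return tp / (tp + fn), tn / (tn + fp), tp / (tp + fp)

        # Illustrative data: 1000 patients, ~1% MLH1/MSH2 mutation carriers whose
        # PREMM-type scores tend to be higher than those of non-carriers.
        rng = np.random.default_rng(5)
        carrier = np.arange(1000) < 12
        score = np.where(carrier, rng.uniform(5, 60, 1000), rng.uniform(0, 20, 1000))

        for cutoff in (5.0, 20.0):
            sens, spec, ppv = screen_performance(score, carrier, cutoff)
            print(f"cutoff {cutoff:>4.0f}%: sens={sens:.2f} spec={spec:.2f} ppv={ppv:.2f}")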

  4. Mathematical model for adaptive evolution of populations based on a complex domain

    PubMed Central

    Ibrahim, Rabha W.; Ahmad, M.Z.; Al-Janaby, Hiba F.

    2015-01-01

    Mutation is ultimately essential for adaptive evolution in all populations. It arises continually, but is mostly repaired by enzymes. Most consider the mechanism of evolution to be natural selection acting on variation among organisms that arises from random changes in their DNA, and the evidence for this is overwhelming. A mutation is an alteration in the structure of a gene, producing a variant form that may be transmitted to succeeding generations, caused by the modification of single base units in DNA or by the deletion, insertion, or rearrangement of larger units of chromosomes or genes. In this paper, a mathematical model of this process is introduced. The model describes the time and space of the evolution, with the space represented by a complex domain. We show that the evolution is distributed according to the hypergeometric function, and boundedness of the evolution is imposed by utilizing the Koebe function. PMID:26858564

  5. Octave-Band Thresholds for Modeled Reverberant Fields

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Wenzel, Elizabeth M.; Tran, Laura L.; Anderson, Mark R.; Trejo, Leonard J. (Technical Monitor)

    1998-01-01

    Auditory thresholds for 10 subjects were obtained for speech stimuli in reverberation. The reverberation was produced and manipulated by 3-D audio modeling based on an actual room. The independent variables were octave-band-filtering (bypassed, 0.25 - 2.0 kHz Fc) and reverberation time (0.2 - 1.1 sec). An ANOVA revealed significant effects (threshold range: -19 to -35 dB re 60 dB SRL).

  6. Associations between five-factor model traits and perceived job strain: a population-based study.

    PubMed

    Törnroos, Maria; Hintsanen, Mirka; Hintsa, Taina; Jokela, Markus; Pulkki-Råback, Laura; Hutri-Kähönen, Nina; Keltikangas-Järvinen, Liisa

    2013-10-01

    This study examined the association between Five-Factor Model personality traits and perceived job strain. The sample consisted of 758 women and 614 men (aged 30-45 years in 2007) participating in the Young Finns study. Personality was assessed with the Neuroticism, Extraversion, Openness, Five-Factor Inventory (NEO-FFI) questionnaire and work stress according to Karasek's demand-control model of job strain. The associations between personality traits and job strain and its components were measured by linear regression analyses where the traits were first entered individually and then simultaneously. The results for the associations between individually entered personality traits showed that high neuroticism, low extraversion, low openness, low conscientiousness, and low agreeableness were associated with high job strain. High neuroticism, high openness, and low agreeableness were related to high demands, whereas high neuroticism, low extraversion, low openness, low conscientiousness, and low agreeableness were associated with low control. In the analyses for the simultaneously entered traits, high neuroticism, low openness, and low conscientiousness were associated with high job strain. In addition, high neuroticism was related to high demands and low control, whereas low extraversion was related to low demands and low control. Low openness and low conscientiousness were also related to low control. This study suggests that personality is related to perceived job strain. Perceptions of work stressors and decision latitude are not only indicators of structural aspects of work but also indicate that there are individual differences in how individuals experience their work environment.

  7. Analysis of amyotrophic lateral sclerosis as a multistep process: a population-based modelling study

    PubMed Central

    Al-Chalabi, Ammar; Calvo, Andrea; Chio, Adriano; Colville, Shuna; Ellis, Cathy M; Hardiman, Orla; Heverin, Mark; Howard, Robin S; Huisman, Mark H B; Keren, Noa; Leigh, P Nigel; Mazzini, Letizia; Mora, Gabriele; Orrell, Richard W; Rooney, James; Scott, Kirsten M; Scotton, William J; Seelen, Meinie; Shaw, Christopher E; Sidle, Katie S; Swingler, Robert; Tsuda, Miho; Veldink, Jan H; Visser, Anne E; van den Berg, Leonard H; Pearce, Neil

    2014-01-01

    Summary Background Amyotrophic lateral sclerosis shares characteristics with some cancers, such as onset being more common in later life, progression usually being rapid, the disease affecting a particular cell type, and showing complex inheritance. We used a model originally applied to cancer epidemiology to investigate the hypothesis that amyotrophic lateral sclerosis is a multistep process. Methods We generated incidence data by age and sex from amyotrophic lateral sclerosis population registers in Ireland (registration dates 1995–2012), the Netherlands (2006–12), Italy (1995–2004), Scotland (1989–98), and England (2002–09), and calculated age and sex-adjusted incidences for each register. We regressed the log of age-specific incidence against the log of age with least squares regression. We did the analyses within each register, and also did a combined analysis, adjusting for register. Findings We identified 6274 cases of amyotrophic lateral sclerosis from a catchment population of about 34 million people. We noted a linear relationship between log incidence and log age in all five registers: England r2=0·95, Ireland r2=0·99, Italy r2=0·95, the Netherlands r2=0·99, and Scotland r2=0·97; overall r2=0·99. All five registers gave similar estimates of the linear slope ranging from 4·5 to 5·1, with overlapping confidence intervals. The combination of all five registers gave an overall slope of 4·8 (95% CI 4·5–5·0), with similar estimates for men (4·6, 4·3–4·9) and women (5·0, 4·5–5·5). Interpretation A linear relationship between the log incidence and log age of onset of amyotrophic lateral sclerosis is consistent with a multistage model of disease. The slope estimate suggests that amyotrophic lateral sclerosis is a six-step process. Identification of these steps could lead to preventive and therapeutic avenues. Funding UK Medical Research Council; UK Economic and Social Research Council; Ireland Health Research Board; The
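
    The core calculation, regressing log incidence on log age and reading the number of steps off the slope, fits in a few lines (the incidence values below are invented; under the multistep argument a log-log slope of k corresponds to roughly k + 1 steps):

        import numpy as np

        # Hypothetical age-specific incidence (per 100,000 person-years) at the
        # mid-points of 5-year age bands.
        age = np.array([47.5, 52.5, 57.5, 62.5, 67.5, 72.5])
        incidence = np.array([1.1, 1.9, 3.2, 5.0, 7.6, 11.0])

        slope, intercept = np.polyfit(np.log(age), np.log(incidence), deg=1)
        r2 = np.corrcoef(np.log(age), np.log(incidence))[0, 1] ** 2

        print(f"slope = {slope:.2f}  (r^2 = {r2:.3f})")
        print(f"implied number of steps ~ slope + 1 = {slope + 1:.1f}")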

  8. A population-based model to describe geometrical uncertainties in radiotherapy: applied to prostate cases

    NASA Astrophysics Data System (ADS)

    Budiarto, E.; Keijzer, M.; Storchi, P. R.; Hoogeman, M. S.; Bondar, L.; Mutanga, T. F.; de Boer, H. C. J.; Heemink, A. W.

    2011-02-01

    Local motions and deformations of organs between treatment fractions introduce geometrical uncertainties into radiotherapy. These uncertainties are generally taken into account in the treatment planning by enlarging the radiation target by a margin around the clinical target volume. However, a practical method to fully include these uncertainties is still lacking. This paper proposes a model based on principal component analysis to describe the patient-specific local probability distributions of voxel motions so that the average values and variances of the dose distribution can be calculated and fully used later in inverse treatment planning. As usually only a very limited amount of data is available for new patients, in this paper the analysis is extended to use population data. A basic assumption (which is justified retrospectively in this paper) is that general movements and deformations of a specific organ are similar despite variations in the shapes of the organ over the population. A proof of principle of the method for deformations of the prostate and the seminal vesicles is presented.
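
    The principal component machinery itself is compact; a sketch of building a motion model from per-fraction displacement vectors (array sizes, the 90% variance criterion and the synthetic low-rank data are assumptions of this note):

        import numpy as np

        rng = np.random.default_rng(6)

        # Illustrative data: 100 observed fractions (pooled over the population
        # after alignment), each giving the 3-D displacement of 500 surface
        # points relative to planning, flattened to length-1500 vectors.
        n_frac, n_pts = 100, 500
        latent = rng.normal(0.0, 1.0, (n_frac, 5))           # 5 underlying motion modes
        basis  = rng.normal(0.0, 1.0, (5, 3 * n_pts))
        disp = latent @ (3.0 * basis) + rng.normal(0.0, 0.5, (n_frac, 3 * n_pts))

        mean_motion = disp.mean(axis=0)
        centered = disp - mean_motion

        # Principal components of the displacement field via SVD.
        U, s, Vt = np.linalg.svd(centered, full_matrices=False)
        explained = s**2 / np.sum(s**2)
        n_modes = int(np.searchsorted(np.cumsum(explained), 0.90)) + 1

        # Any plausible organ configuration is then mean + sum_k c_k * mode_k; the
        # variances of the c_k feed the dose averaging described in the abstract.
        mode_std = s[:n_modes] / np.sqrt(n_frac - 1)
        print(f"{n_modes} modes explain 90% of the motion variance")
        print("per-mode standard deviations:", np.round(mode_std, 2))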

  10. Setting conservation management thresholds using a novel participatory modeling approach.

    PubMed

    Addison, P F E; de Bie, K; Rumpff, L

    2015-10-01

    We devised a participatory modeling approach for setting management thresholds that show when management intervention is required to address undesirable ecosystem changes. This approach was designed to be used when management thresholds: must be set for environmental indicators in the face of multiple competing objectives; need to incorporate scientific understanding and value judgments; and will be set by participants with limited modeling experience. We applied our approach to a case study where management thresholds were set for a mat-forming brown alga, Hormosira banksii, in a protected area management context. Participants, including management staff and scientists, were involved in a workshop to test the approach, and set management thresholds to address the threat of trampling by visitors to an intertidal rocky reef. The approach involved trading off the environmental objective, to maintain the condition of intertidal reef communities, with social and economic objectives to ensure management intervention was cost-effective. Ecological scenarios, developed using scenario planning, were a key feature that provided the foundation for where to set management thresholds. The scenarios developed represented declines in percent cover of H. banksii that may occur under increased threatening processes. Participants defined 4 discrete management alternatives to address the threat of trampling and estimated the effect of these alternatives on the objectives under each ecological scenario. A weighted additive model was used to aggregate participants' consequence estimates. Model outputs (decision scores) clearly expressed uncertainty, which can be considered by decision makers and used to inform where to set management thresholds. This approach encourages a proactive form of conservation, where management thresholds and associated actions are defined a priori for ecological indicators, rather than reacting to unexpected ecosystem changes in the future. PMID:26040608
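
    The aggregation step, a weighted additive model over participants' consequence estimates, can be shown directly (the objective weights, alternatives and scores below are placeholders, not the workshop's elicited values):

        import numpy as np

        # Objective weights (ecological, social, economic), summing to 1.
        weights = np.array([0.5, 0.3, 0.2])

        # consequence[i, j]: normalised score of management alternative i on
        # objective j under one ecological scenario (0 = worst, 1 = best).
        consequence = np.array([
            [0.20, 0.90, 0.95],   # alternative 1 (e.g. no intervention)
            [0.55, 0.70, 0.70],   # alternative 2
            [0.75, 0.55, 0.45],   # alternative 3
            [0.90, 0.30, 0.15],   # alternative 4 (most restrictive)
        ])

        decision_scores = consequence @ weights
        print("decision scores:", np.round(decision_scores, 3))
        print("preferred alternative under this scenario:",
              int(np.argmax(decision_scores)) + 1)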

  11. Modeling threshold detection and search for point and extended sources

    NASA Astrophysics Data System (ADS)

    Friedman, Melvin

    2016-05-01

    This paper deals with three separate topics. 1) The Berek extended object threshold detection model is described, calibrated against a portion of Blackwell's 1946 naked eye threshold detection data for extended objects against an unstructured background, and then the remainder of Blackwell's data is used to verify and validate the model. A range equation is derived from Berek's model which allows threshold detection range to be predicted for extended to point objects against an un-cluttered background as a function of target size and adapting luminance levels. The range equation is then used to model threshold detection of stationary reflective and self-luminous targets against an uncluttered background. 2) There is uncertainty whether Travnikova's search data for point source detection against an un-cluttered background are described by Rayleigh or exponential distributions. A model which explains the Rayleigh distribution for barely perceptible objects and the exponential distribution for brighter objects is given. 3) A technique is presented which allows a specific observer's target acquisition capability to be characterized. Then a model is presented which describes how individual target acquisition probability grows when a specific observer or combination of specific observers search for targets. Applications for the three topics are discussed.

  12. Inflection, canards and excitability threshold in neuronal models.

    PubMed

    Desroches, M; Krupa, M; Rodrigues, S

    2013-10-01

    A technique is presented, based on the differential geometry of planar curves, to evaluate the excitability threshold of neuronal models. The aim is to determine regions of the phase plane where solutions to the model equations have zero local curvature, thereby defining a zero-curvature (inflection) set that discerns between sub-threshold and spiking electrical activity. This transition can arise through a Hopf bifurcation, via the so-called canard explosion that happens in an exponentially small parameter variation, and this is typical for a large class of planar neuronal models (FitzHugh-Nagumo, reduced Hodgkin-Huxley), namely, type II neurons (resonators). This transition can also correspond to the crossing of the stable manifold of a saddle equilibrium, in the case of type I neurons (integrators). We compute inflection sets and study how well they approximate the excitability threshold of these neuron models, that is, both in the canard and in the non-canard regime, using tools from invariant manifold theory and singularity theory. With the latter, we investigate the topological changes that inflection sets undergo upon parameter variation. Finally, we show that the concept of inflection set gives a good approximation of the threshold in both the so-called resonator and integrator neuronal cases. PMID:22945512
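
    For a planar system (x', y') = (f, g), the inflection set is where the signed curvature of solution curves vanishes, i.e. where f (g_x f + g_y g) - g (f_x f + f_y g) = 0; a sketch of locating it on a grid for a FitzHugh-Nagumo-type field (parameter values are illustrative) is:

        import numpy as np

        # FitzHugh-Nagumo-type vector field (illustrative parameters).
        a, b, eps, I = 0.7, 0.8, 0.08, 0.35

        def f(v, w):   # dv/dt
            return v - v**3 / 3.0 - w + I

        def g(v, w):   # dw/dt
            return eps * (v + a - b * w)

        # Partial derivatives of the field.
        def f_v(v, w): return 1.0 - v**2
        def f_w(v, w): return -1.0 + 0.0 * v
        def g_v(v, w): return eps + 0.0 * v
        def g_w(v, w): return -eps * b + 0.0 * v

        v, w = np.meshgrid(np.linspace(-2.5, 2.5, 400), np.linspace(-1.0, 2.0, 400))
        F, G = f(v, w), g(v, w)

        # Numerator of the signed curvature of trajectories through each point.
        curv = F * (g_v(v, w) * F + g_w(v, w) * G) - G * (f_v(v, w) * F + f_w(v, w) * G)

        # The inflection set is the zero level set of `curv`; it separates regions
        # of sub-threshold and spiking behaviour in the sense described above.
        print(f"fraction of sampled phase plane with positive curvature: "
              f"{np.mean(curv > 0):.2f}")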

  13. Describing interactions in dystocia scores with a threshold model.

    PubMed

    Quaas, R L; Zhao, Y; Pollak, E J

    1988-02-01

    Field data on calving difficulty scores provided by the American Simmental Association were subjected to two methods of analysis: ordinary least-squares analysis and maximum likelihood with an assumed threshold model. In each analysis, the model included the interaction of sex of calf X age of dam. This interaction was readily apparent in the data (observed scale): within the youngest dams 58% of the heifer calves and 37% of the bull calves were born unassisted vs 96% and 92%, respectively, in the oldest dams. The objective was to determine if this interaction would be greatly reduced or would disappear on the underlying scale of a threshold model. The least-squares estimate of the sex difference was greatest within the youngest age-of-dam group (18 to 24 mo) and steadily declined with increasing age of dam, approaching zero for dams 6 yr and older. In contrast, the estimates of the sex difference from the threshold analysis were remarkably similar across ages of dam. It was concluded that observed interactions in calving ease data could be adequately described by a threshold model in which the effects of age of dam and sex of calf act additively on the underlying variable.

  14. Modeling the Interactions Between Multiple Crack Closure Mechanisms at Threshold

    NASA Technical Reports Server (NTRS)

    Newman, John A.; Riddell, William T.; Piascik, Robert S.

    2003-01-01

    A fatigue crack closure model is developed that includes interactions between the three closure mechanisms most likely to occur at threshold; plasticity, roughness, and oxide. This model, herein referred to as the CROP model (for Closure, Roughness, Oxide, and Plasticity), also includes the effects of out-of plane cracking and multi-axial loading. These features make the CROP closure model uniquely suited for, but not limited to, threshold applications. Rough cracks are idealized here as two-dimensional sawtooths, whose geometry induces mixed-mode crack- tip stresses. Continuum mechanics and crack-tip dislocation concepts are combined to relate crack face displacements to crack-tip loads. Geometric criteria are used to determine closure loads from crack-face displacements. Finite element results, used to verify model predictions, provide critical information about the locations where crack closure occurs.

  15. Effect of threshold disorder on the quorum percolation model

    NASA Astrophysics Data System (ADS)

    Monceau, Pascal; Renault, Renaud; Métens, Stéphane; Bottani, Samuel

    2016-07-01

    We study the modifications induced in the behavior of the quorum percolation model on neural networks with Gaussian in-degree by taking into account an uncorrelated Gaussian threshold variability. We derive a mean-field approach and show its relevance by carrying out explicit Monte Carlo simulations. It turns out that such a disorder shifts the position of the percolation transition, impacts the size of the giant cluster, and can even destroy the transition. Moreover, we highlight the occurrence of disorder independent fixed points above the quorum critical value. The mean-field approach enables us to interpret these effects in terms of activation probability. A finite-size analysis enables us to show that the order parameter is weakly self-averaging with an exponent independent of the threshold disorder. Last, we show that the effects of the thresholds and connectivity disorders cannot be easily discriminated from the measured averaged physical quantities.

  16. Effect of threshold disorder on the quorum percolation model.

    PubMed

    Monceau, Pascal; Renault, Renaud; Métens, Stéphane; Bottani, Samuel

    2016-07-01

    We study the modifications induced in the behavior of the quorum percolation model on neural networks with Gaussian in-degree by taking into account an uncorrelated Gaussian threshold variability. We derive a mean-field approach and show its relevance by carrying out explicit Monte Carlo simulations. It turns out that such a disorder shifts the position of the percolation transition, impacts the size of the giant cluster, and can even destroy the transition. Moreover, we highlight the occurrence of disorder independent fixed points above the quorum critical value. The mean-field approach enables us to interpret these effects in terms of activation probability. A finite-size analysis enables us to show that the order parameter is weakly self-averaging with an exponent independent of the threshold disorder. Last, we show that the effects of the thresholds and connectivity disorders cannot be easily discriminated from the measured averaged physical quantities. PMID:27575157
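
    A compact Monte Carlo version of the model described above (network size, in-degree statistics and threshold disorder are illustrative; the paper's mean-field treatment is not reproduced):

        import numpy as np

        rng = np.random.default_rng(7)

        N = 2000
        mean_k, sd_k = 30, 6       # Gaussian in-degree (illustrative)
        mean_th, sd_th = 15, 3     # Gaussian threshold disorder (illustrative)

        in_deg = np.clip(np.rint(rng.normal(mean_k, sd_k, N)).astype(int), 1, N - 1)
        in_nbrs = [rng.choice(N, size=k, replace=False) for k in in_deg]
        theta = np.clip(np.rint(rng.normal(mean_th, sd_th, N)).astype(int), 1, None)

        def final_active_fraction(f_init):
            """Iterate the quorum rule from an initially active fraction f_init."""
            active = rng.random(N) < f_init
            while True:
                inputs = np.array([active[nb].sum() for nb in in_nbrs])
                new_active = active | (inputs >= theta)
                if new_active.sum() == active.sum():
                    return new_active.mean()
                active = new_active

        for f in (0.2, 0.4, 0.5, 0.6, 0.8):
            print(f"initial active fraction {f:.1f} -> final {final_active_fraction(f):.2f}")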

  17. Empirical assessment of a threshold model for sylvatic plague.

    PubMed

    Davis, S; Leirs, H; Viljugrein, H; Stenseth, N Chr; De Bruyn, L; Klassovskiy, N; Ageyev, V; Begon, M

    2007-08-22

    Plague surveillance programmes established in Kazakhstan, Central Asia, during the previous century, have generated large plague archives that have been used to parameterize an abundance threshold model for sylvatic plague in great gerbil (Rhombomys opimus) populations. Here, we assess the model using additional data from the same archives. Throughout the focus, population levels above the threshold were a necessary condition for an epizootic to occur. However, there were large numbers of occasions when an epizootic was not observed even though great gerbils were, and had been, abundant. We examine six hypotheses that could explain the resulting false positive predictions, namely (i) including end-of-outbreak data erroneously lowers the estimated threshold, (ii) too few gerbils were tested, (iii) plague becomes locally extinct, (iv) the abundance of fleas was too low, (v) the climate was unfavourable, and (vi) a high proportion of gerbils were resistant. Of these, separate thresholds, fleas and climate received some support but accounted for few false positives and can be disregarded as serious omissions from the model. Small sample size and local extinction received strong support and can account for most of the false positives. Host resistance received no support here but should be subject to more direct experimental testing.

  18. Predictors of the nicotine reinforcement threshold, compensation, and elasticity of demand in a rodent model of nicotine reduction policy*

    PubMed Central

    Grebenstein, Patricia E.; Burroughs, Danielle; Roiko, Samuel A.; Pentel, Paul R.; LeSage, Mark G.

    2015-01-01

    Background The FDA is considering reducing the nicotine content in tobacco products as a population-based strategy to reduce tobacco addiction. Research is needed to determine the threshold level of nicotine needed to maintain smoking and the extent of compensatory smoking that could occur during nicotine reduction. Sources of variability in these measures across sub-populations also need to be identified so that policies can take into account the risks and benefits of nicotine reduction in vulnerable populations. Methods The present study examined these issues in a rodent nicotine self-administration model of nicotine reduction policy to characterize individual differences in nicotine reinforcement thresholds, degree of compensation, and elasticity of demand during progressive reduction of the unit nicotine dose. The ability of individual differences in baseline nicotine intake and nicotine pharmacokinetics to predict responses to dose reduction was also examined. Results Considerable variability in the reinforcement threshold, compensation, and elasticity of demand was evident. High baseline nicotine intake was not correlated with the reinforcement threshold, but predicted less compensation and less elastic demand. Higher nicotine clearance predicted low reinforcement thresholds, greater compensation, and less elastic demand. Less elastic demand also predicted lower reinforcement thresholds. Conclusions These findings suggest that baseline nicotine intake, nicotine clearance, and the essential value of nicotine (i.e. elasticity of demand) moderate the effects of progressive nicotine reduction in rats and warrant further study in humans. They also suggest that smokers with fast nicotine metabolism may be more vulnerable to the risks of nicotine reduction. PMID:25891231

  19. Discrete threshold versus continuous strength models of perceptual recognition.

    PubMed

    Paap, K R; Chun, E; Vonnahme, P

    1999-12-01

    Two experiments were designed to test discrete-threshold models of letter and word recognition against models that assume that decision criteria are applied to measures of continuous strength. Although our goal is to adjudicate this matter with respect to broad classes of models, some of the specific predictions for discrete-threshold are generated from Grainger and Jacobs' (1994) Dual-Readout Model (DROM) and some of the predictions for continuous strength are generated from a revised version of the Activation-Verification Model (Paap, Newsome, McDonald, & Schvaneveldt, 1982). Experiment 1 uses a two-alternative forced-choice task that is followed by an assessment of confidence and then a whole report if a word is recognized. Factors are manipulated to assess the presence or magnitude of a neighbourhood-frequency effect, a lexical-bias effect, a word-superiority effect, and a pseudoword advantage. Several discrepancies between DROM's predictions and the obtained data are noted. Both types of models were also used to predict the distribution of responses across the levels of confidence for each individual participant. The predictions based on continuous strength were superior. Experiment 2 used a same-different task and confidence ratings to enable the generation of receiver operating characteristics (ROCs). The shapes of the ROCs are more consistent with the continuous strength assumption than with a discrete threshold. PMID:10646200
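
    A small simulation illustrates why the shape of the ROC separates the two model classes: an equal-variance continuous-strength (signal detection) account produces a curved ROC, whereas a discrete high-threshold account produces a straight line. The d′ value, detection probability, and criteria below are arbitrary assumptions, and the sketch is not the authors' analysis.

        import numpy as np

        rng = np.random.default_rng(1)
        N = 100_000
        D_PRIME = 1.0                       # assumed sensitivity (continuous-strength model)
        P_DETECT = 0.4                      # assumed detection probability (high-threshold model)
        CRITERIA = np.linspace(-2, 3, 9)    # confidence criteria on the strength axis

        # Continuous-strength model: equal-variance Gaussian strengths
        noise = rng.normal(0.0, 1.0, N)       # "same" trials
        signal = rng.normal(D_PRIME, 1.0, N)  # "different" trials
        hits = [(signal > c).mean() for c in CRITERIA]
        fas = [(noise > c).mean() for c in CRITERIA]
        print("continuous-strength ROC (FA, hit):", list(zip(np.round(fas, 3), np.round(hits, 3))))

        # Discrete high-threshold model: detect with probability P_DETECT, otherwise
        # guess "different" at rate g; sweeping g traces a linear ROC, H = p + (1 - p) * FA.
        g = np.linspace(0, 1, 9)
        print("high-threshold ROC (FA, hit):   ", list(zip(np.round(g, 3), np.round(P_DETECT + (1 - P_DETECT) * g, 3))))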

  20. The interplay between cooperativity and diversity in model threshold ensembles.

    PubMed

    Cervera, Javier; Manzanares, José A; Mafe, Salvador

    2014-10-01

    The interplay between cooperativity and diversity is crucial for biological ensembles because single molecule experiments show a significant degree of heterogeneity and also for artificial nanostructures because of the high individual variability characteristic of nanoscale units. We study the cross-effects between cooperativity and diversity in model threshold ensembles composed of individually different units that show a cooperative behaviour. The units are modelled as statistical distributions of parameters (the individual threshold potentials here) characterized by central and width distribution values. The simulations show that the interplay between cooperativity and diversity results in ensemble-averaged responses of interest for the understanding of electrical transduction in cell membranes, the experimental characterization of heterogeneous groups of biomolecules and the development of biologically inspired engineering designs with individually different building blocks. PMID:25142516

  1. The interplay between cooperativity and diversity in model threshold ensembles.

    PubMed

    Cervera, Javier; Manzanares, José A; Mafe, Salvador

    2014-10-01

    The interplay between cooperativity and diversity is crucial for biological ensembles because single molecule experiments show a significant degree of heterogeneity and also for artificial nanostructures because of the high individual variability characteristic of nanoscale units. We study the cross-effects between cooperativity and diversity in model threshold ensembles composed of individually different units that show a cooperative behaviour. The units are modelled as statistical distributions of parameters (the individual threshold potentials here) characterized by central and width distribution values. The simulations show that the interplay between cooperativity and diversity results in ensemble-averaged responses of interest for the understanding of electrical transduction in cell membranes, the experimental characterization of heterogeneous groups of biomolecules and the development of biologically inspired engineering designs with individually different building blocks.

  2. The interplay between cooperativity and diversity in model threshold ensembles

    PubMed Central

    Cervera, Javier; Manzanares, José A.; Mafe, Salvador

    2014-01-01

    The interplay between cooperativity and diversity is crucial for biological ensembles because single molecule experiments show a significant degree of heterogeneity and also for artificial nanostructures because of the high individual variability characteristic of nanoscale units. We study the cross-effects between cooperativity and diversity in model threshold ensembles composed of individually different units that show a cooperative behaviour. The units are modelled as statistical distributions of parameters (the individual threshold potentials here) characterized by central and width distribution values. The simulations show that the interplay between cooperativity and diversity results in ensemble-averaged responses of interest for the understanding of electrical transduction in cell membranes, the experimental characterization of heterogeneous groups of biomolecules and the development of biologically inspired engineering designs with individually different building blocks. PMID:25142516

  3. Selection Strategies for Social Influence in the Threshold Model

    NASA Astrophysics Data System (ADS)

    Karampourniotis, Panagiotis; Szymanski, Boleslaw; Korniss, Gyorgy

    The ubiquity of online social networks makes the study of social influence extremely significant for its applications to marketing, politics and security. Maximizing the spread of influence by strategically selecting nodes as initiators of a new opinion or trend is a challenging problem. We study the performance of various strategies for selection of large fractions of initiators on a classical social influence model, the Threshold model (TM). Under the TM, a node adopts a new opinion only when the fraction of its first neighbors possessing that opinion exceeds a pre-assigned threshold. The strategies we study are of two kinds: strategies based solely on the initial network structure (Degree-rank, Dominating Sets, PageRank etc.) and strategies that take into account the change of the states of the nodes during the evolution of the cascade, e.g. the greedy algorithm. We find that the performance of these strategies depends largely on both the network structure properties, e.g. the assortativity, and the distribution of the thresholds assigned to the nodes. We conclude that the optimal strategy needs to combine the network specifics and the model specific parameters to identify the most influential spreaders. Supported in part by ARL NS-CTA, ARO, and ONR.
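
    A toy cascade makes the strategy comparison concrete. The sketch below runs the classical Threshold Model on a small Erdős–Rényi graph with a uniform threshold on every node and compares degree-rank initiator selection against random selection; graph size, edge probability, threshold, and initiator fraction are all assumptions chosen for illustration.

        import random
        from collections import defaultdict

        random.seed(2)

        N, P_EDGE, PHI = 500, 0.02, 0.3      # assumed graph and threshold parameters
        FRACTION_INITIATORS = 0.15

        # Build an undirected Erdos-Renyi graph as an adjacency list
        adj = defaultdict(set)
        for i in range(N):
            for j in range(i + 1, N):
                if random.random() < P_EDGE:
                    adj[i].add(j)
                    adj[j].add(i)

        def cascade(initiators):
            """Threshold Model: a node adopts once the fraction of its adopting
            neighbors exceeds PHI; returns the final number of adopters."""
            adopted = set(initiators)
            changed = True
            while changed:
                changed = False
                for v in range(N):
                    if v in adopted or not adj[v]:
                        continue
                    if sum(1 for u in adj[v] if u in adopted) / len(adj[v]) > PHI:
                        adopted.add(v)
                        changed = True
            return len(adopted)

        k = int(FRACTION_INITIATORS * N)
        by_degree = sorted(range(N), key=lambda v: len(adj[v]), reverse=True)[:k]
        at_random = random.sample(range(N), k)
        print("degree-rank initiators ->", cascade(by_degree), "final adopters")
        print("random initiators      ->", cascade(at_random), "final adopters")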

  4. A structured threshold model for mountain pine beetle outbreak.

    PubMed

    Lewis, Mark A; Nelson, William; Xu, Cailin

    2010-04-01

    A vigor-structured model for mountain pine beetle outbreak dynamics within a forest stand is proposed and analyzed. This model explicitly tracks the changing vigor structure in the stand. All model parameters, other than beetle vigor preference, were determined by fitting model components to empirical data. An abrupt threshold for tree mortality to beetle densities allows for model simplification. Based on initial beetle density, model outcomes vary from decimation of the entire stand in a single year, to inability of the beetles to infect any trees. An intermediate outcome involves an initial infestation which subsequently dies out before the entire stand is killed. A model extension is proposed for dynamics of beetle aggregation. This involves a stochastic formulation.

  5. Semiautomatic bladder segmentation on CBCT using a population-based model for multiple-plan ART of bladder cancer

    NASA Astrophysics Data System (ADS)

    Chai, Xiangfei; van Herk, Marcel; Betgen, Anja; Hulshof, Maarten; Bel, Arjan

    2012-12-01

    The aim of this study is to develop a novel semiautomatic bladder segmentation approach for selecting the appropriate plan from the library of plans for a multiple-plan adaptive radiotherapy (ART) procedure. A population-based statistical bladder model was first built from a training data set (95 bladder contours from 8 patients). This model was then used as constraint to segment the bladder in an independent validation data set (233 CBCT scans from the remaining 22 patients). All 3D bladder contours were converted into parametric surface representations using spherical harmonic expansion. Principal component analysis (PCA) was applied in the spherical harmonic-based shape parameter space to calculate the major variation of bladder shapes. The number of dominating PCA modes was chosen such that 95% of the total shape variation of the training data set was described. The automatic segmentation started from the bladder contour of the planning CT of each patient, which was modified by changing the weight of each PCA mode. As a result, the segmentation contour was deformed consistently with the training set to best fit the bladder boundary in the localization CBCT image. A cost function was defined to measure the goodness of fit of the segmentation on the localization CBCT image. The segmentation was obtained by minimizing this cost function using a simplex optimizer. After automatic segmentation, a fast manual correction method was provided to correct those bladders (parts) that were poorly segmented. Volume- and distance-based metrics and the accuracy of plan selection from multiple plans were evaluated to quantify the performance of the automatic and semiautomatic segmentation methods. For the training data set, only seven PCA modes were needed to represent 95% of the bladder shape variation. The mean CI overlap and residual error (SD) of automatic bladder segmentation over all of the validation data were 70.5% and 0.39 cm, respectively. The agreement of plan
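
    The statistical shape-model stage can be sketched compactly: run PCA on vectors of spherical-harmonic coefficients, keep just enough modes to explain 95% of the variance, and express any candidate segmentation as the mean shape plus a weighted sum of the retained modes. The data below are a random low-rank stand-in for the 95 training contours, and every name and number is a placeholder rather than the study's actual pipeline.

        import numpy as np

        rng = np.random.default_rng(7)

        # Stand-in training set: 95 shapes, each a vector of spherical-harmonic coefficients
        n_shapes, n_coeffs, true_rank = 95, 300, 7
        shapes = rng.normal(size=(n_shapes, true_rank)) @ rng.normal(size=(true_rank, n_coeffs))
        shapes += 0.05 * rng.normal(size=(n_shapes, n_coeffs))      # small residual noise

        # PCA via SVD of the mean-centred coefficient matrix
        mean_shape = shapes.mean(axis=0)
        U, s, Vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
        explained = s**2 / np.sum(s**2)
        n_modes = int(np.searchsorted(np.cumsum(explained), 0.95)) + 1
        print("modes needed for 95% of the shape variation:", n_modes)

        # A candidate segmentation is parameterised by weights on the retained modes;
        # in the paper these weights are optimised against a cost function on the CBCT image.
        weights = np.zeros(n_modes)
        candidate = mean_shape + weights @ Vt[:n_modes]
        print("length of a candidate shape vector:", candidate.shape[0])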

  6. In vitro model that approximates retinal damage threshold trends.

    PubMed

    Denton, Michael L; Foltz, Michael S; Schuster, Kurt J; Noojin, Gary D; Estlack, Larry E; Thomas, Robert J

    2008-01-01

    Without effective in vitro damage models, advances in our understanding of the physics and biology of laser-tissue interaction would be hampered due to cost and ethical limitations placed on the use of nonhuman primates. We extend our characterization of laser-induced cell death in an existing in vitro retinal model to include damage thresholds at 514 and 413 nm. The new data, when combined with data previously reported for 532 and 458 nm exposures, provide a sufficiently broad range of wavelengths and exposure durations (0.1 to 100 s) to make comparisons with minimum visible lesion (in vivo) data in the literature. Based on similarities between in vivo and in vitro action spectra and temporal action profiles, the cell culture model is found to respond to laser irradiation in a fundamentally similar fashion as the retina of the rhesus animal model. We further show that this response depends on the amount of intracellular melanin pigmentation.

  7. Diagnosis of Parkinson’s disease on the basis of clinical–genetic classification: a population-based modelling study

    PubMed Central

    Nalls, Mike A.; McLean, Cory Y.; Rick, Jacqueline; Eberly, Shirley; Hutten, Samantha J.; Gwinn, Katrina; Sutherland, Margaret; Martinez, Maria; Heutink, Peter; Williams, Nigel; Hardy, John; Gasser, Thomas; Brice, Alexis; Price, T. Ryan; Nicolas, Aude; Keller, Margaux F.; Molony, Cliona; Gibbs, J. Raphael; Chen-Plotkin, Alice; Suh, Eunran; Letson, Christopher; Fiandaca, Massimo S.; Mapstone, Mark; Federoff, Howard J.; Noyce, Alastair J; Morris, Huw; Van Deerlin, Vivianna M.; Weintraub, Daniel; Zabetian, Cyrus; Hernandez, Dena G.; Lesage, Suzanne; Mullins, Meghan; Conley, Emily Drabant; Northover, Carrie; Frasier, Mark; Marek, Ken; Day-Williams, Aaron G.; Stone, David J.; Ioannidis, John P. A.; Singleton, Andrew B.

    2015-01-01

    Background Accurate diagnosis and early detection of complex disease has the potential to be of enormous benefit to clinical trialists, patients, and researchers alike. We sought to create a non-invasive, low-cost, and accurate classification model for diagnosing Parkinson’s disease risk to serve as a basis for future disease prediction studies in prospective longitudinal cohorts. Methods We developed a simple disease classifying model within 367 patients with Parkinson’s disease and phenotypically typical imaging data and 165 controls without neurological disease of the Parkinson’s Progression Marker Initiative (PPMI) study. Olfactory function, genetic risk, family history of PD, age and gender were algorithmically selected as significant contributors to our classifying model. This model was developed using the PPMI study then tested in 825 patients with Parkinson’s disease and 261 controls from five independent studies with varying recruitment strategies and designs including the Parkinson’s Disease Biomarkers Program (PDBP), Parkinson’s Associated Risk Study (PARS), 23andMe, Longitudinal and Biomarker Study in PD (LABS-PD), and Morris K. Udall Parkinson’s Disease Research Center of Excellence (Penn-Udall). Findings Our initial model correctly distinguished patients with Parkinson’s disease from controls at an area under the curve (AUC) of 0.923 (95% CI = 0.900 – 0.946) with high sensitivity (0.834, 95% CI = 0.711 – 0.883) and specificity (0.903, 95% CI = 0.824 – 0.946) in PPMI at its optimal AUC threshold (0.655). The model is also well-calibrated with all Hosmer-Lemeshow simulations suggesting that when parsed into random subgroups, the actual data mirrors that of the larger expected data, demonstrating that our model is robust and fits well. Likewise external validation shows excellent classification of PD with AUCs of 0.894 in PDBP, 0.998 in PARS, 0.955 in 23andMe, 0.929 in LABS-PD, and 0.939 in Penn-Udall. Additionally, when our model

  8. Non-smooth plant disease models with economic thresholds.

    PubMed

    Zhao, Tingting; Xiao, Yanni; Smith, Robert J

    2013-01-01

    In order to control plant diseases and eventually maintain the number of infected plants below an economic threshold, a specific management strategy called the threshold policy is proposed, resulting in Filippov systems. These are a class of piecewise smooth systems of differential equations with a discontinuous right-hand side. The aim of this work is to investigate the global dynamic behavior including sliding dynamics of one Filippov plant disease model with cultural control strategy. We examine a Lotka-Volterra Filippov plant disease model with proportional planting rate, which is globally studied in terms of five types of equilibria. For one type of equilibrium, the global structure is discussed by the iterative equations for initial numbers of plants. For the other four types of equilibria, the bounded global attractor of each type is obtained by constructing appropriate Lyapunov functions. The ideas of constructing Lyapunov functions for Filippov systems, the methods of analyzing such systems and the main results presented here provide scientific support for completing control regimens on plant diseases in integrated disease management.
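
    A minimal sketch of such a threshold ("on-off") policy is shown below: a toy susceptible-infected plant-disease model in which extra roguing of infected plants is switched on only when I exceeds the economic threshold ET, giving a Filippov right-hand side. The sketch uses a constant replanting rate rather than the paper's proportional planting rate, and all parameter values are assumptions; with these values the trajectory ends up sliding along I ≈ ET, illustrating the pseudo-equilibrium behaviour such systems can exhibit.

        # Toy Filippov SI plant-disease model with an economic-threshold control policy
        PLANT, BETA, MU = 10.0, 0.01, 0.1     # replanting rate, transmission, natural removal
        U, ET = 0.3, 20.0                     # extra roguing rate, economic threshold
        DT, T_END = 0.01, 300.0

        def rhs(S, I):
            control = U if I > ET else 0.0    # discontinuous switch at I = ET
            dS = PLANT - BETA * S * I - MU * S
            dI = BETA * S * I - MU * I - control * I
            return dS, dI

        S, I, t = 50.0, 5.0, 0.0
        while t < T_END:
            dS, dI = rhs(S, I)
            S, I = S + DT * dS, I + DT * dI   # forward Euler is enough for a sketch
            t += DT

        print(f"state at t = {T_END:.0f}: S = {S:.1f}, I = {I:.1f} (economic threshold ET = {ET})")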

  9. Model to Estimate Threshold Mechanical Stability of Lower Lateral Cartilage

    PubMed Central

    Kim, James Hakjune; Hamamoto, Ashley; Kiyohara, Nicole; Wong, Brian J. F.

    2015-01-01

    IMPORTANCE In rhinoplasty, techniques used to alter the shape of the nasal tip often compromise the structural stability of the cartilage framework in the nose. Determining the minimum threshold level of cartilage stiffness required to maintain long-term structural stability is a critical aspect in performing these surgical maneuvers. OBJECTIVE To quantify the minimum threshold mechanical stability (elastic modulus) of lower lateral cartilage (LLC) according to expert opinion. METHODS Five anatomically correct LLC phantoms were made from urethane via a 3-dimensional computer modeling and injection molding process. All 5 had identical geometry but varied in stiffness along the intermediate crural region (0.63–30.6 MPa). DESIGN, SETTING, AND PARTICIPANTS A focus group of experienced rhinoplasty surgeons (n = 33) was surveyed at a regional professional meeting on October 25, 2013. Each survey participant was presented the 5 phantoms in a random order and asked to arrange the phantoms in order of increasing stiffness based on their sense of touch. Then, they were asked to select a single phantom out of the set that they believed to have the minimum acceptable mechanical stability for LLC to maintain proper form and function. MAIN OUTCOMES AND MEASURES A binary logistic regression was performed to calculate the probability of mechanical acceptability as a function of the elastic modulus of the LLC based on survey data. A Hosmer-Lemeshow test was performed to measure the goodness of fit between the logistic regression and survey data. The minimum threshold mechanical stability for LLC was taken at a 50% acceptability rating. RESULTS Phantom 4 was selected most frequently by the participants as having the minimum acceptable stiffness for the LLC intermediate crura. The minimum threshold mechanical stability for LLC was determined to be 3.65 MPa. The Hosmer-Lemeshow test revealed good fit between the logistic regression and survey data (χ²₃ = 0.92, P = .82). CONCLUSIONS AND
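
    The threshold calculation can be reproduced in outline with a plain logistic regression: regress acceptability on elastic modulus and solve for the modulus at which the predicted probability reaches 50%. The acceptability counts below are hypothetical stand-ins, not the published survey responses.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        moduli = np.array([0.63, 2.0, 3.6, 9.8, 30.6])   # assumed stiffness of the 5 phantoms (MPa)
        acceptable = np.array([1, 4, 15, 30, 33])         # hypothetical "acceptable" votes
        total = np.full(5, 33)                            # 33 surveyed surgeons

        X = np.repeat(moduli, total).reshape(-1, 1)
        y = np.concatenate([np.r_[np.ones(a), np.zeros(t - a)] for a, t in zip(acceptable, total)])

        fit = LogisticRegression(C=1e6).fit(X, y)         # large C ~ essentially unpenalised
        b0, b1 = fit.intercept_[0], fit.coef_[0, 0]
        print(f"50% acceptability threshold ~ {-b0 / b1:.2f} MPa")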

  10. Terrestrial Microgravity Model and Threshold Gravity Simulation using Magnetic Levitation

    NASA Technical Reports Server (NTRS)

    Ramachandran, N.

    2005-01-01

    What is the threshold gravity (minimum gravity level) required for the nominal functioning of the human system? What dosage is required? Do human cell lines behave differently in microgravity in response to an external stimulus? The critical need for such a gravity simulator is emphasized by recent experiments on human epithelial cells and lymphocytes on the Space Shuttle clearly showing that cell growth and function are markedly different from those observed terrestrially. Those differences are also dramatic between cells grown in space and those in Rotating Wall Vessels (RWV), or NASA bioreactor often used to simulate microgravity, indicating that although morphological growth patterns (three dimensional growth) can be successfully simulated using RWVs, cell function performance is not reproduced - a critical difference. If cell function is dramatically affected by gravity off-loading, then cell response to stimuli such as radiation, stress, etc. can be very different from terrestrial cell lines. Yet, we have no good gravity simulator for use in study of these phenomena. This represents a profound shortcoming for countermeasures research. We postulate that we can use magnetic levitation of cells and tissue, through the use of strong magnetic fields and field gradients, as a terrestrial microgravity model to study human cells. Specific objectives of the research are: 1. To develop a tried, tested and benchmarked terrestrial microgravity model for cell culture studies; 2. Gravity threshold determination; 3. Dosage (magnitude and duration) of g-level required for nominal functioning of cells; 4. Comparisons of magnetic levitation model to other models such as RWV, hind limb suspension, etc. and 5. Cellular response to reduced gravity levels of Moon and Mars. The paper will discuss experiments and modeling work to date in support of this project.

  11. A random graph model of density thresholds in swarming cells.

    PubMed

    Jena, Siddhartha G

    2016-03-01

    Swarming behaviour is a type of bacterial motility that has been found to be dependent on reaching a local density threshold of cells. With this in mind, the process through which cell-to-cell interactions develop and how an assembly of cells reaches collective motility becomes increasingly important to understand. Additionally, populations of cells and organisms have been modelled through graphs to draw insightful conclusions about population dynamics on a spatial level. In the present study, we make use of analogous random graph structures to model the formation of large chain subgraphs, representing interactions between multiple cells, as a random graph Markov process. Using numerical simulations and analytical results on how quickly paths of certain lengths are reached in a random graph process, metrics for intercellular interaction dynamics at the swarm layer that may be experimentally evaluated are proposed. PMID:26893102

  12. Stylized facts from a threshold-based heterogeneous agent model

    NASA Astrophysics Data System (ADS)

    Cross, R.; Grinfeld, M.; Lamba, H.; Seaman, T.

    2007-05-01

    A class of heterogeneous agent models is investigated where investors switch trading position whenever their motivation to do so exceeds some critical threshold. These motivations can be psychological in nature or reflect behaviour suggested by the efficient market hypothesis (EMH). By introducing different propensities into a baseline model that displays EMH behaviour, one can attempt to isolate their effects upon the market dynamics. The simulation results indicate that the introduction of a herding propensity results in excess kurtosis and power-law decay consistent with those observed in actual return distributions, but not in significant long-term volatility correlations. Possible alternatives for introducing such long-term volatility correlations are then identified and discussed.

  13. Terrestrial Microgravity Model and Threshold Gravity Simulation Using Magnetic Levitation

    NASA Technical Reports Server (NTRS)

    Ramachandran, N.

    2005-01-01

    What is the threshold gravity (minimum gravity level) required for the nominal functioning of the human system? What dosage is required? Do human cell lines behave differently in microgravity in response to an external stimulus? The critical need for such a gravity simulator is emphasized by recent experiments on human epithelial cells and lymphocytes on the Space Shuttle clearly showing that cell growth and function are markedly different from those observed terrestrially. Those differences are also dramatic between cells grown in space and those in Rotating Wall Vessels (RWV), or NASA bioreactor often used to simulate microgravity, indicating that although morphological growth patterns (three dimensional growth) can be successfully simulated using RWVs, cell function performance is not reproduced - a critical difference. If cell function is dramatically affected by gravity off-loading, then cell response to stimuli such as radiation, stress, etc. can be very different from terrestrial cell lines. Yet, we have no good gravity simulator for use in study of these phenomena. This represents a profound shortcoming for countermeasures research. We postulate that we can use magnetic levitation of cells and tissue, through the use of strong magnetic fields and field gradients, as a terrestrial microgravity model to study human cells. Specific objectives of the research are: 1. To develop a tried, tested and benchmarked terrestrial microgravity model for cell culture studies; 2. Gravity threshold determination; 3. Dosage (magnitude and duration) of g-level required for nominal functioning of cells; 4. Comparisons of magnetic levitation model to other models such as RWV, hind limb suspension, etc. and 5. Cellular response to reduced gravity levels of Moon and Mars.

  14. A threshold model of content knowledge transfer for socioscientific argumentation

    NASA Astrophysics Data System (ADS)

    Sadler, Troy D.; Fowler, Samantha R.

    2006-11-01

    This study explores how individuals make use of scientific content knowledge for socioscientific argumentation. More specifically, this mixed-methods study investigates how learners apply genetics content knowledge as they justify claims relative to genetic engineering. Interviews are conducted with 45 participants, representing three distinct groups: high school students with variable genetics knowledge, college nonscience majors with little genetics knowledge, and college science majors with advanced genetics knowledge. During the interviews, participants advance positions concerning three scenarios dealing with gene therapy and cloning. Arguments are assessed in terms of the number of justifications offered as well as justification quality, based on a five-point rubric. Multivariate analysis of variance results indicate that college science majors outperformed the other groups in terms of justification quality and frequency. Argumentation does not differ among nonscience majors or high school students. Follow-up qualitative analyses of interview responses suggest that all three groups tend to focus on similar, sociomoral themes as they negotiate socially complex, genetic engineering issues, but that the science majors frequently reference specific science content knowledge in the justification of their claims. Results support the Threshold Model of Content Knowledge Transfer, which proposes two knowledge thresholds around which argumentation quality can reasonably be expected to increase. Research and educational implications of these findings are discussed.

  15. Smooth-Threshold Multivariate Genetic Prediction with Unbiased Model Selection.

    PubMed

    Ueki, Masao; Tamiya, Gen

    2016-04-01

    We develop a new genetic prediction method, smooth-threshold multivariate genetic prediction, using single nucleotide polymorphism (SNP) data in genome-wide association studies (GWASs). Our method consists of two stages. At the first stage, unlike the usual discontinuous SNP screening as used in the gene score method, our method continuously screens SNPs based on the output from standard univariate analysis for marginal association of each SNP. At the second stage, the predictive model is built by a generalized ridge regression simultaneously using the screened SNPs with SNP weight determined by the strength of marginal association. Continuous SNP screening by the smooth thresholding not only makes prediction stable but also leads to a closed form expression of generalized degrees of freedom (GDF). The GDF leads to Stein's unbiased risk estimation (SURE), which enables data-dependent choice of the optimal SNP screening cutoff without using cross-validation. Our method is very rapid because the computationally expensive genome-wide scan is required only once, in contrast to the penalized regression methods including lasso and elastic net. Simulation studies that mimic real GWAS data with quantitative and binary traits demonstrate that the proposed method outperforms the gene score method and genomic best linear unbiased prediction (GBLUP), and also shows performance comparable to, and sometimes better than, the lasso and elastic net, which are known to have good predictive ability but heavy computational cost. Application to whole-genome sequencing (WGS) data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) shows that the proposed method has higher predictive power than the gene score and GBLUP methods.
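
    The two-stage idea (marginal screening followed by ridge regression on association-weighted SNPs) can be sketched as below. The smooth-threshold weight function used here is a generic illustration rather than the paper's exact estimator, the SURE-based cutoff selection is omitted, and the genotype data are simulated assumptions.

        import numpy as np

        rng = np.random.default_rng(3)

        # Toy GWAS data: n samples, p SNPs, 10 causal SNPs (all assumptions)
        n, p = 400, 2000
        X = rng.binomial(2, 0.3, size=(n, p)).astype(float)
        beta_true = np.zeros(p)
        beta_true[:10] = 0.4
        y = X @ beta_true + rng.normal(0, 1, n)

        Xc, yc = X - X.mean(0), y - y.mean()

        # Stage 1: a single genome-wide scan of marginal association z-scores
        z = (Xc.T @ yc) / (Xc.std(0) * np.sqrt(n) * yc.std() + 1e-12)

        # Stage 2: smooth-threshold weights, then ridge regression on the weighted design
        lam, gamma, ridge = 2.0, 2.0, 10.0
        w = np.clip(1.0 - (lam / (np.abs(z) + 1e-12)) ** gamma, 0.0, None)
        keep = w > 0
        Xw = Xc[:, keep] * w[keep]
        beta_hat = np.linalg.solve(Xw.T @ Xw + ridge * np.eye(keep.sum()), Xw.T @ yc)

        pred = Xw @ beta_hat
        print(f"SNPs retained: {keep.sum()} / {p}")
        print(f"in-sample correlation(prediction, phenotype): {np.corrcoef(pred, yc)[0, 1]:.2f}")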

  16. No-Impact Threshold Values for NRAP's Reduced Order Models

    SciTech Connect

    Last, George V.; Murray, Christopher J.; Brown, Christopher F.; Jordan, Preston D.; Sharma, Maneesh

    2013-02-01

    The purpose of this study was to develop methodologies for establishing baseline datasets and statistical protocols for determining statistically significant changes between background concentrations and predicted concentrations that would be used to represent a contamination plume in the Gen II models being developed by NRAP’s Groundwater Protection team. The initial effort examined selected portions of two aquifer systems: the urban shallow-unconfined aquifer system of the Edwards-Trinity Aquifer System (being used to develop the ROM for carbonate-rock aquifers), and a portion of the High Plains Aquifer (an unconsolidated and semi-consolidated sand and gravel aquifer, being used to develop the ROM for sandstone aquifers). Threshold values were determined for Cd, Pb, As, pH, and TDS that could be used to identify contamination due to predicted impacts from carbon sequestration storage reservoirs, based on recommendations found in the EPA’s "Unified Guidance for Statistical Analysis of Groundwater Monitoring Data at RCRA Facilities" (US Environmental Protection Agency 2009). Results from this effort can be used to inform a "no change" scenario with respect to groundwater impacts, rather than the use of an MCL that could be significantly higher than existing concentrations in the aquifer.

  17. Modeling of Auditory Neuron Response Thresholds with Cochlear Implants

    PubMed Central

    Venail, Frederic; Mura, Thibault; Akkari, Mohamed; Mathiolon, Caroline; Menjot de Champfleur, Sophie; Piron, Jean Pierre; Sicard, Marielle; Sterkers-Artieres, Françoise; Mondain, Michel; Uziel, Alain

    2015-01-01

    The quality of the prosthetic-neural interface is a critical point for cochlear implant efficiency. It depends not only on technical and anatomical factors such as electrode position into the cochlea (depth and scalar placement), electrode impedance, and distance between the electrode and the stimulated auditory neurons, but also on the number of functional auditory neurons. The efficiency of electrical stimulation can be assessed by the measurement of e-CAP in cochlear implant users. In the present study, we modeled the activation of auditory neurons in cochlear implant recipients (nucleus device). The electrical response, measured using auto-NRT (neural responses telemetry) algorithm, has been analyzed using multivariate regression with cubic splines in order to take into account the variations of insertion depth of electrodes amongst subjects as well as the other technical and anatomical factors listed above. NRT thresholds depend on the electrode squared impedance (β = −0.11 ± 0.02, P < 0.01), the scalar placement of the electrodes (β = −8.50 ± 1.97, P < 0.01), and the depth of insertion calculated as the characteristic frequency of auditory neurons (CNF). Distribution of NRT residues according to CNF could provide a proxy of auditory neurons functioning in implanted cochleas. PMID:26236725

  18. Effects of mixing in threshold models of social behavior

    NASA Astrophysics Data System (ADS)

    Akhmetzhanov, Andrei R.; Worden, Lee; Dushoff, Jonathan

    2013-07-01

    We consider the dynamics of an extension of the influential Granovetter model of social behavior, where individuals are affected by their personal preferences and observation of the neighbors’ behavior. Individuals are arranged in a network (usually the square lattice), and each has a state and a fixed threshold for behavior changes. We simulate the system asynchronously by picking a random individual and we either update its state or exchange it with another randomly chosen individual (mixing). We describe the dynamics analytically in the fast-mixing limit by using the mean-field approximation and investigate it mainly numerically in the case of finite mixing. We show that the dynamics converge to a manifold in state space, which determines the possible equilibria, and show how to estimate the projection of this manifold by using simulated trajectories, emitted from different initial points. We show that the effects of considering the network can be decomposed into finite-neighborhood effects, and finite-mixing-rate effects, which have qualitatively similar effects. Both of these effects increase the tendency of the system to move from a less-desired equilibrium to the “ground state.” Our findings can be used to probe shifts in behavioral norms and have implications for the role of information flow in determining when social norms that have become unpopular in particular communities (such as foot binding or female genital cutting) persist or vanish.
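
    In the fast-mixing limit the dynamics collapse onto the classical Granovetter recursion r(t+1) = F(r(t)), where F is the cumulative distribution of the individual thresholds. The sketch below iterates this map for an assumed Gaussian threshold distribution and shows different initial adopter fractions being attracted to the low- or high-adoption equilibrium.

        import numpy as np
        from scipy.stats import norm

        MU, SIGMA = 0.4, 0.15                           # assumed Gaussian threshold distribution
        F = lambda r: norm.cdf(r, loc=MU, scale=SIGMA)  # threshold CDF

        def equilibrium(r0, n_steps=500):
            r = r0
            for _ in range(n_steps):
                r = F(r)                                # mean-field update r_{t+1} = F(r_t)
            return r

        for r0 in (0.05, 0.20, 0.35, 0.60):
            print(f"initial adopters {r0:.2f} -> equilibrium adoption {equilibrium(r0):.3f}")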

  19. The application of cure models in the presence of competing risks: a tool for improved risk communication in population-based cancer patient survival.

    PubMed

    Eloranta, Sandra; Lambert, Paul C; Andersson, Therese M-L; Björkholm, Magnus; Dickman, Paul W

    2014-09-01

    Quantifying cancer patient survival from the perspective of cure is clinically relevant. However, most cure models estimate cure assuming no competing causes of death. We use a relative survival framework to demonstrate how flexible parametric cure models can be used in combination with competing-risks theory to incorporate noncancer deaths. Under a model that incorporates statistical cure, we present the probabilities that cancer patients (1) have died from their cancer, (2) have died from other causes, (3) will eventually die from their cancer, or (4) will eventually die from other causes, all as a function of time since diagnosis. We further demonstrate how conditional probabilities can be used to update the prognosis among survivors (eg, at 1 or 5 years after diagnosis) by summarizing the proportion of patients who will not die from their cancer. The proposed method is applied to Swedish population-based data for persons diagnosed with melanoma, colon cancer, or acute myeloid leukemia between 1973 and 2007.

  20. The Random-Threshold Generalized Unfolding Model and Its Application of Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Liu, Chen-Wei; Wu, Shiu-Lien

    2013-01-01

    The random-threshold generalized unfolding model (RTGUM) was developed by treating the thresholds in the generalized unfolding model as random effects rather than fixed effects to account for the subjective nature of the selection of categories in Likert items. The parameters of the new model can be estimated with the JAGS (Just Another Gibbs…

  1. Fiber bundle model with highly disordered breaking thresholds.

    PubMed

    Roy, Chandreyee; Kundu, Sumanta; Manna, S S

    2015-03-01

    We present a study of the fiber bundle model using equal load-sharing dynamics where the breaking thresholds of the fibers are drawn randomly from a power-law distribution of the form p(b) ∼ b^(-1) in the range 10^(-β) to 10^(β). Tuning the value of β continuously over a wide range, the critical behavior of the fiber bundle has been studied both analytically as well as numerically. Our results are: (i) the critical load σ_c(β,N) for the bundle of size N approaches its asymptotic value σ_c(β) as σ_c(β,N) = σ_c(β) + A N^(-1/ν(β)), where σ_c(β) has been obtained analytically as σ_c(β) = 10^(β)/(2βe ln10) for β ≥ β_u = 1/(2 ln10), and for β < β_u the weakest fiber failure leads to the catastrophic breakdown of the entire fiber bundle, similar to brittle materials, leading to σ_c(β) = 10^(-β); (ii) the fraction of broken fibers right before the complete breakdown of the bundle has the form 1 − 1/(2β ln10); (iii) the distribution D(Δ) of the avalanches of size Δ follows a power law D(Δ) ∼ Δ^(-ξ) with ξ = 5/2 for Δ ≫ Δ_c(β) and ξ = 3/2 for Δ ≪ Δ_c(β), where the crossover avalanche size is Δ_c(β) = 2/(1 − e10^(-2β))². PMID:25871050
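
    Result (i) can be checked numerically with the standard equal-load-sharing construction: sort the sampled thresholds in increasing order and take the maximum of b_(k)·(N − k + 1)/N over k. The sketch below does this for log-uniform thresholds, which is equivalent to p(b) ∼ 1/b on [10^(−β), 10^(β)], and compares against the quoted asymptotic expression for β ≥ β_u ≈ 0.217; the sample size is arbitrary.

        import numpy as np

        rng = np.random.default_rng(4)

        def critical_load(beta, n_fibers=200_000):
            """Equal-load-sharing critical stress for thresholds drawn from
            p(b) ~ 1/b on [10**-beta, 10**beta] (log-uniform sampling)."""
            b = np.sort(10.0 ** rng.uniform(-beta, beta, n_fibers))
            k = np.arange(1, n_fibers + 1)
            # Just before the k-th failure the surviving fibers carry stress b_(k)*(N-k+1)/N
            return np.max(b * (n_fibers - k + 1) / n_fibers)

        for beta in (0.25, 0.5, 1.0, 2.0):               # all above beta_u = 1/(2 ln 10)
            analytic = 10.0 ** beta / (2 * beta * np.e * np.log(10))
            print(f"beta = {beta}: simulated sigma_c = {critical_load(beta):.4f}, "
                  f"analytic = {analytic:.4f}")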

  2. Bayesian random threshold estimation in a Cox proportional hazards cure model.

    PubMed

    Zhao, Lili; Feng, Dai; Bellile, Emily L; Taylor, Jeremy M G

    2014-02-20

    In this paper, we develop a Bayesian approach to estimate a Cox proportional hazards model that allows a threshold in the regression coefficient, when some fraction of subjects are not susceptible to the event of interest. A data augmentation scheme with latent binary cure indicators is adopted to simplify the Markov chain Monte Carlo implementation. Given the binary cure indicators, the Cox cure model reduces to a standard Cox model and a logistic regression model. Furthermore, the threshold detection problem reverts to a threshold problem in a regular Cox model. The baseline cumulative hazard for the Cox model is formulated non-parametrically using counting processes with a gamma process prior. Simulation studies demonstrate that the method provides accurate point and interval estimates. Application to a data set of oropharynx cancer patients suggests a significant threshold in age at diagnosis such that the effect of gender on disease-specific survival changes after the threshold.

  3. The hormetic dose-response model is more common than the threshold model in toxicology.

    PubMed

    Calabrese, Edward J; Baldwin, Linda A

    2003-02-01

    The threshold dose-response model is widely viewed as the most dominant model in toxicology. The present study was designed to test the validity of the threshold model by assessing the responses of doses below the toxicological NOAEL (no observed adverse effect level) in relationship to the control response (i.e., unexposed group). Nearly 1,800 doses below the NOAEL, from 664 dose-response relationships derived from a previously published database that satisfied a priori entry criteria, were evaluated. While the threshold model predicts a 1:1 ratio of responses "greater than" to "less than" the control response (i.e., a random distribution), a 2.5:1 ratio (i.e., 1171:464) was observed, reflecting 31% more responses above the control value than expected (p < 0.0001). The mean response (calculated as % control response) of doses below the NOAEL was 115.0% +/- 1.5 standard error of the mean (SEM). These findings challenge the long-standing belief in the primacy of the threshold model in toxicology (and other areas of biology involving dose-response relationships) and provide strong support for the hormetic-like biphasic dose-response model characterized by a low-dose stimulation and a high-dose inhibition. These findings may affect numerous aspects of toxicological and biological/biomedical research related to dose-response relationships, including study design, risk assessment, as well as chemotherapeutic strategies.
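
    The reported 2.5:1 split can be checked against the threshold model's 1:1 expectation with a simple sign-test-style calculation on the counts quoted above; the binomial test below is only an illustration and not necessarily the test used by the authors.

        from scipy.stats import binomtest

        above, below = 1171, 464          # below-NOAEL responses above / below the control value
        result = binomtest(above, above + below, p=0.5, alternative="greater")
        print(f"ratio above:below = {above / below:.2f} : 1")
        print(f"one-sided binomial p-value against a 1:1 split: {result.pvalue:.3g}")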

  4. A model measuring therapeutic inertia and the associated factors among diabetes patients: A nationwide population-based study in Taiwan.

    PubMed

    Huang, Li-Ying; Shau, Wen-Yi; Yeh, Hseng-Long; Chen, Tsung-Tai; Hsieh, Jun Yi; Su, Syi; Lai, Mei-Shu

    2015-01-01

    This article presents an analysis conducted on the patterns related to therapeutic inertia with the aim of uncovering how variables at the patient level and the healthcare provider level influence the intensification of therapy when it is clinically indicated. A cohort study was conducted on 899,135 HbA1c results from 168,876 adult diabetes patients with poorly controlled HbA1c levels. HbA1c results were used to identify variations in the prescription of hypoglycemic drugs. Logistic regression and hierarchical linear models (HLMs) were used to determine how differences among healthcare providers and patient characteristics influence therapeutic inertia. We estimated that 38.5% of the patients in this study were subject to therapeutic inertia. The odds ratio of cardiologists choosing to intensify therapy was 0.708 times that of endocrinologists. Furthermore, patients in medical centers were shown to be 1.077 times more likely to be prescribed intensified treatment than patients in primary clinics. The HLMs presented results similar to those of the logistic model. Overall, we determined that 88.92% of the variation in the application of intensified treatment was at the within-physician level. Reducing therapeutic inertia will likely require educational initiatives aimed at ensuring adherence to clinical practice guidelines in the care of diabetes patients.

  5. Budget Impact Analysis of Switching to Digital Mammography in a Population-Based Breast Cancer Screening Program: A Discrete Event Simulation Model

    PubMed Central

    Comas, Mercè; Arrospide, Arantzazu; Mar, Javier; Sala, Maria; Vilaprinyó, Ester; Hernández, Cristina; Cots, Francesc; Martínez, Juan; Castells, Xavier

    2014-01-01

    Objective To assess the budgetary impact of switching from screen-film mammography to full-field digital mammography in a population-based breast cancer screening program. Methods A discrete-event simulation model was built to reproduce the breast cancer screening process (biennial mammographic screening of women aged 50 to 69 years) combined with the natural history of breast cancer. The simulation started with 100,000 women and, during a 20-year simulation horizon, new women were dynamically entered according to the aging of the Spanish population. Data on screening were obtained from Spanish breast cancer screening programs. Data on the natural history of breast cancer were based on US data adapted to our population. A budget impact analysis comparing digital with screen-film screening mammography was performed in a sample of 2,000 simulation runs. A sensitivity analysis was performed for crucial screening-related parameters. Distinct scenarios for recall and detection rates were compared. Results Statistically significant savings were found for overall costs, treatment costs and the costs of additional tests in the long term. The overall cost saving was 1,115,857€ (95%CI from 932,147 to 1,299,567) in the 10th year and 2,866,124€ (95%CI from 2,492,610 to 3,239,638) in the 20th year, representing 4.5% and 8.1% of the overall cost associated with screen-film mammography. The sensitivity analysis showed net savings in the long term. Conclusions Switching to digital mammography in a population-based breast cancer screening program saves long-term budget expense, in addition to providing technical advantages. Our results were consistent across distinct scenarios representing the different results obtained in European breast cancer screening programs. PMID:24832200

  6. A Comprehensive Multistate Model Analyzing Associations of Various Risk Factors With the Course of Breast Cancer in a Population-Based Cohort of Breast Cancer Cases.

    PubMed

    Eulenburg, Christine; Schroeder, Jennifer; Obi, Nadia; Heinz, Judith; Seibold, Petra; Rudolph, Anja; Chang-Claude, Jenny; Flesch-Janys, Dieter

    2016-02-15

    We employed a semi-Markov multistate model for the simultaneous analysis of various endpoints describing the course of breast cancer. Results were compared with those from standard analyses using a Cox proportional hazards model. We included 3,012 patients with invasive breast cancer newly diagnosed between 2001 and 2005 who were recruited in Germany for a population-based study, the Mamma Carcinoma Risk Factor Investigation (MARIE Study), and prospectively followed up until the end of 2009. Locoregional recurrence and distant metastasis were included as intermediate states, and deaths from breast cancer, secondary cancer, and other causes were included as competing absorbing states. Tumor characteristics were significantly associated with all breast cancer-related endpoints. Nodal involvement was significantly related to local recurrence but more strongly related to distant metastases. Smoking was significantly associated with mortality from second cancers and other causes, whereas menopausal hormone use was significantly associated with reduced distant metastasis and death from causes other than cancer. The presence of cardiovascular disease at diagnosis was solely associated with mortality from other causes. Compared with separate Cox models, multistate models allow for dissection of prognostic factors and intermediate events in the analysis of cause-specific mortality and can yield new insights into disease progression and associated pathways.

  7. Modelling the regulatory system for diabetes mellitus with a threshold window

    NASA Astrophysics Data System (ADS)

    Yang, Jin; Tang, Sanyi; Cheke, Robert A.

    2015-05-01

    Piecewise (or non-smooth) glucose-insulin models with threshold windows for type 1 and type 2 diabetes mellitus are proposed and analyzed with a view to improving understanding of the glucose-insulin regulatory system. For glucose-insulin models with a single threshold, the existence and stability of regular, virtual, pseudo-equilibria and tangent points are addressed. Then the relations between regular equilibria and a pseudo-equilibrium are studied. Furthermore, the sufficient and necessary conditions for the global stability of regular equilibria and the pseudo-equilibrium are provided by using qualitative analysis techniques of non-smooth Filippov dynamic systems. Sliding bifurcations related to boundary node bifurcations were investigated with theoretical and numerical techniques, and insulin clinical therapies are discussed. For glucose-insulin models with a threshold window, the effects of glucose thresholds or the widths of threshold windows on the durations of insulin therapy and glucose infusion were addressed. The duration of the effects of an insulin injection is sensitive to the variation of thresholds. Our results indicate that blood glucose level can be maintained within a normal range using piecewise glucose-insulin models with a single threshold or a threshold window. Moreover, our findings suggest that it is critical to individualise insulin therapy for each patient separately, based on initial blood glucose levels.

  8. A Threshold Model of Social Support, Adjustment, and Distress after Breast Cancer Treatment

    ERIC Educational Resources Information Center

    Mallinckrodt, Brent; Armer, Jane M.; Heppner, P. Paul

    2012-01-01

    This study examined a threshold model that proposes that social support exhibits a curvilinear association with adjustment and distress, such that support in excess of a critical threshold level has decreasing incremental benefits. Women diagnosed with a first occurrence of breast cancer (N = 154) completed survey measures of perceived support…

  9. Role of propagation thresholds in sentiment-based model of opinion evolution with information diffusion

    NASA Astrophysics Data System (ADS)

    Si, Xia-Meng; Wang, Wen-Dong; Ma, Yan

    2016-06-01

    The degree of sentiment is the key factor for internet users in determining their propagating behaviors, i.e. whether to participate in a discussion and whether to withdraw from it. To this end, we introduce two sentiment-based propagation thresholds (an infected threshold and a refractory threshold) and propose an interacting model based on Bayesian updating rules. Our model describes the phenomena that few internet users change their decisions and that some users have already dropped out of a discussion about a topic while others are only just becoming aware of it. Numerical simulations show that a large infected threshold restrains information diffusion but favors the lessening of extremism, while a large refractory threshold facilitates decision interaction but promotes extremism. Making netizens calm down and propagate information sanely can restrain the prevalence of extremism about rumors.

  10. A Population-Based Model to Consider the Effect of Seasonal Variation on Serum 25(OH)D and Vitamin D Status

    PubMed Central

    Vuistiner, Philippe; Rousson, Valentin; Henry, Hugues; Lescuyer, Pierre; Boulat, Olivier; Gaspoz, Jean-Michel; Mooser, Vincent; Vollenweider, Peter; Waeber, Gerard; Cornuz, Jacques; Paccaud, Fred; Bochud, Murielle; Guessous, Idris

    2015-01-01

    Background. We elaborated a model that predicts the centiles of the 25(OH)D distribution taking into account seasonal variation. Methods. Data from two Swiss population-based studies were used to generate (CoLaus) and validate (Bus Santé) the model. Serum 25(OH)D was measured by ultra high pressure LC-MS/MS and immunoassay. Linear regression models on square-root transformed 25(OH)D values were used to predict centiles of the 25(OH)D distribution. Distribution functions of the observations from the replication set predicted with the model were inspected to assess replication. Results. Overall, 4,912 and 2,537 Caucasians were included in original and replication sets, respectively. Mean (SD) 25(OH)D, age, BMI, and % of men were 47.5 (22.1) nmol/L, 49.8 (8.5) years, 25.6 (4.1) kg/m2, and 49.3% in the original study. The best model included gender, BMI, and sin-cos functions of measurement day. Sex- and BMI-specific 25(OH)D centile curves as a function of measurement date were generated. The model estimates any centile of the 25(OH)D distribution for given values of sex, BMI, and date and the quantile corresponding to a 25(OH)D measurement. Conclusions. We generated and validated centile curves of 25(OH)D in the general adult Caucasian population. These curves can help rank vitamin D centile independently of when 25(OH)D is measured. PMID:26421279
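
    The construction can be sketched as a linear model on the square-root scale with sin and cos terms for the measurement day; any centile is then the fitted mean plus the corresponding normal quantile of the residual spread, back-transformed by squaring. The data and coefficients below are simulated assumptions, not the CoLaus/Bus Santé estimates.

        import numpy as np

        rng = np.random.default_rng(5)

        # Simulated stand-in data: sqrt(25(OH)D) depends on sex, BMI and season
        n = 5000
        male = rng.integers(0, 2, n)
        bmi = rng.normal(25.6, 4.1, n)
        day = rng.uniform(0, 365.25, n)
        a = 2 * np.pi * day / 365.25
        sqrt_d = (6.5 + 0.3 * male - 0.05 * (bmi - 25.6)
                  + 0.8 * np.sin(a) + 0.4 * np.cos(a) + rng.normal(0, 1.0, n))

        X = np.column_stack([np.ones(n), male, bmi, np.sin(a), np.cos(a)])
        coef, *_ = np.linalg.lstsq(X, sqrt_d, rcond=None)
        resid_sd = np.std(sqrt_d - X @ coef)

        def centile(z, sex, bmi_value, days):
            """Back-transformed centile curve; z is the normal quantile (1.645 for the 95th)."""
            ang = 2 * np.pi * days / 365.25
            Xp = np.column_stack([np.ones_like(days), np.full_like(days, sex),
                                  np.full_like(days, bmi_value), np.sin(ang), np.cos(ang)])
            return (Xp @ coef + z * resid_sd) ** 2

        days = np.array([15.0, 105.0, 195.0, 285.0])     # mid-January, April, July, October
        for z, label in [(-1.645, " 5th"), (0.0, "50th"), (1.645, "95th")]:
            print(label, "centile (nmol/L):", np.round(centile(z, 1, 25.0, days), 1))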

  11. Patterns of threshold evolution in polyphenic insects under different developmental models.

    PubMed

    Tomkins, Joseph L; Moczek, Armin P

    2009-02-01

    Two hypotheses address the evolution of polyphenic traits in insects. Under the developmental reprogramming model, individuals exceeding a threshold follow a different developmental pathway from individuals below the threshold. This decoupling is thought to free selection to independently hone alternative morphologies, increasing phenotypic plasticity and morphological diversity. Under the alternative model, extreme positive allometry explains the existence of alternative phenotypes and divergent phenotypes are developmentally coupled by a continuous reaction norm, such that selection on either morph acts on both. We test the hypothesis that continuous reaction norm polyphenisms evolve through changes in the allometric parameters of even the smallest males with minimal trait expression, whereas threshold polyphenisms evolve independently of the allometric parameters of individuals below the threshold. We compare two polyphenic species: the dung beetle Onthophagus taurus, whose allometry has been modeled both as a threshold polyphenism and a continuous reaction norm, and the earwig Forficula auricularia, whose allometry is best modeled with a discontinuous threshold. We find that across populations of both species, variation in forceps or horn allometry in minor males is correlated with the population's threshold. These findings suggest that regardless of developmental mode, alternative morphs do not evolve independently of one another.

  12. A phenomenological model on the kink mode threshold varying with the inclination of sheath boundary

    SciTech Connect

    Sun, X.; Intrator, T. P.; Sears, J.; Weber, T.; Liu, M.

    2013-11-15

    In nature and many laboratory plasmas, a magnetic flux tube threaded by current or a flux rope has a footpoint at a boundary. The current-driven kink mode is one of the fundamental ideal magnetohydrodynamic instabilities in plasmas. It has an instability threshold that has been found to strongly depend on boundary conditions (BCs). We provide a theoretical model to explain the transition of this threshold dependence between non-line-tied (NLT) and line-tied (LT) boundary conditions. We evaluate model parameters using experimentally measured plasma data, explicitly verify several kink eigenfunctions, and validate the model predictions for BCs that span the range between NLT and LT conditions. Based on this model, one could estimate the kink threshold given knowledge of the displacement of a flux rope end, or conversely estimate flux rope end motion based on knowledge of its kink stability threshold.

  13. Two-threshold model for scaling laws of noninteracting snow avalanches

    USGS Publications Warehouse

    Faillettaz, J.; Louchet, F.; Grasso, J.-R.

    2004-01-01

    A two-threshold model was proposed for scaling laws of noninteracting snow avalanches. It was found that the sizes of the largest avalanches just preceding the failure of the lattice system were power-law distributed. The proposed model reproduced the range of power-law exponents observed for land, rock, or snow avalanches by tuning the maximum value of the ratio of the two failure thresholds. A two-threshold 2D cellular automaton was introduced to study the scaling for gravity-driven systems.

  14. The threshold of a stochastic delayed SIR epidemic model with vaccination

    NASA Astrophysics Data System (ADS)

    Liu, Qun; Jiang, Daqing

    2016-11-01

    In this paper, we study the threshold dynamics of a stochastic delayed SIR epidemic model with vaccination. We obtain sufficient conditions for extinction and persistence in the mean of the epidemic. The threshold between persistence in the mean and extinction of the stochastic system is also obtained. Compared with the corresponding deterministic model, the threshold affected by the white noise is smaller than the basic reproduction number R̄0 of the deterministic system. Results show that time delay has important effects on the persistence and extinction of the epidemic.
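
    A crude Euler-Maruyama sketch shows how persistence or extinction can be probed by simulation when white noise enters the incidence term of an SIR model with vaccination. The delay term of the paper's model is omitted here for brevity, and every parameter value, as well as the R0 expression for this simplified model, is an assumption rather than the paper's specification.

        import numpy as np

        rng = np.random.default_rng(6)

        LAM, MU, BETA, GAMMA, NU = 1.0, 0.02, 0.05, 0.1, 0.05   # recruitment, death, transmission, recovery, vaccination
        SIGMA = 0.1                                             # white-noise intensity on transmission
        DT, T_END = 0.01, 500.0

        def simulate():
            S, I, R = 3.0, 6.0, 0.0
            for _ in range(int(T_END / DT)):
                dW = rng.normal(0.0, np.sqrt(DT))
                new_inf = BETA * S * I * DT + SIGMA * S * I * dW   # stochastic incidence
                S += (LAM - MU * S - NU * S) * DT - new_inf
                I += new_inf - (MU + GAMMA) * I * DT
                R += (GAMMA * I + NU * S - MU * R) * DT
                S, I = max(S, 0.0), max(I, 0.0)                    # keep the sketch non-negative
            return I

        # Deterministic reproduction number of this simplified model (an assumption):
        # R0 = beta * S* / (mu + gamma) with S* = Lambda / (mu + nu)
        R0 = BETA * (LAM / (MU + NU)) / (MU + GAMMA)
        finals = [simulate() for _ in range(20)]
        print(f"R0 = {R0:.2f}; mean infectives at t = {T_END:.0f} over 20 runs: {np.mean(finals):.2f}")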

  15. The threshold of a stochastic delayed SIR epidemic model with temporary immunity

    NASA Astrophysics Data System (ADS)

    Liu, Qun; Chen, Qingmei; Jiang, Daqing

    2016-05-01

    This paper is concerned with the asymptotic properties of a stochastic delayed SIR epidemic model with temporary immunity. Sufficient conditions for extinction and persistence in the mean of the epidemic are established. The threshold between persistence in the mean and extinction of the epidemic is obtained. Compared with the corresponding deterministic model, the threshold affected by the white noise is smaller than the basic reproduction number R0 of the deterministic system.

  16. Modeling spatially-varying landscape change points in species occurrence thresholds

    USGS Publications Warehouse

    Wagner, Tyler; Midway, Stephen R.

    2014-01-01

    Predicting species distributions at scales of regions to continents is often necessary, as large-scale phenomena influence the distributions of spatially structured populations. Land use and land cover are important large-scale drivers of species distributions, and landscapes are known to create species occurrence thresholds, where small changes in a landscape characteristic result in abrupt changes in occurrence. The value of the landscape characteristic at which this change occurs is referred to as a change point. We present a hierarchical Bayesian threshold model (HBTM) that allows for estimating spatially varying parameters, including change points. Our model also allows for modeling estimated parameters in an effort to understand large-scale drivers of variability in land use and land cover on species occurrence thresholds. We use range-wide detection/nondetection data for the eastern brook trout (Salvelinus fontinalis), a stream-dwelling salmonid, to illustrate our HBTM for estimating and modeling spatially varying threshold parameters in species occurrence. We parameterized the model for investigating thresholds in landscape predictor variables that are measured as proportions, and which are therefore restricted to values between 0 and 1. Our HBTM estimated spatially varying thresholds in brook trout occurrence for both the proportions of agricultural and urban land use. There was relatively little spatial variation in change point estimates, although there was spatial variability in the overall shape of the threshold response and associated uncertainty. In addition, regional mean stream water temperature was correlated with the change point parameters for the proportion of urban land use, with the change point value increasing with increasing mean stream water temperature. We present a framework for quantifying macrosystem variability in spatially varying threshold model parameters in relation to important large-scale drivers such as land use and land cover.

  17. Natural History of Dependency in the Elderly: A 24-Year Population-Based Study Using a Longitudinal Item Response Theory Model.

    PubMed

    Edjolo, Arlette; Proust-Lima, Cécile; Delva, Fleur; Dartigues, Jean-François; Pérès, Karine

    2016-02-15

    We aimed to describe the hierarchical structure of Instrumental Activities of Daily Living (IADL) and basic Activities of Daily Living (ADL) and trajectories of dependency before death in an elderly population using item response theory methodology. Data were obtained from a population-based French cohort study, the Personnes Agées QUID (PAQUID) Study, of persons aged ≥65 years at baseline in 1988 who were recruited from 75 randomly selected areas in Gironde and Dordogne. We evaluated IADL and ADL data collected at home every 2-3 years over a 24-year period (1988-2012) for 3,238 deceased participants (43.9% men). We used a longitudinal item response theory model to investigate the item sequence of 11 IADL and ADL combined into a single scale and functional trajectories adjusted for education, sex, and age at death. The findings confirmed the earliest losses in IADL (shopping, transporting, finances) at the partial limitation level, and then an overlapping of concomitant IADL and ADL, with bathing and dressing being the earliest ADL losses, and finally total losses for toileting, continence, eating, and transferring. Functional trajectories were sex-specific, with a benefit of high education that persisted until death in men but was only transient in women. An in-depth understanding of this sequence provides an early warning of functional decline for better adaptation of medical and social care in the elderly. PMID:26825927

  19. The relationship between the Five-Factor Model personality traits and peptic ulcer disease in a large population-based adult sample.

    PubMed

    Realo, Anu; Teras, Andero; Kööts-Ausmees, Liisi; Esko, Tõnu; Metspalu, Andres; Allik, Jüri

    2015-12-01

    The current study examined the relationship between the Five-Factor Model personality traits and physician-confirmed peptic ulcer disease (PUD) diagnosis in a large population-based adult sample, controlling for the relevant behavioral and sociodemographic factors. Personality traits were assessed by participants themselves and by knowledgeable informants using the NEO Personality Inventory-3 (NEO PI-3). When controlling for age, sex, education, and cigarette smoking, only one of the five NEO PI-3 domain scales - higher Neuroticism - and two facet scales - lower A1: Trust and higher C1: Competence - made a small, yet significant contribution (p < 0.01) to predicting PUD in logistic regression analyses. In the light of these relatively modest associations, our findings imply that it is certain behavior (such as smoking) and sociodemographic variables (such as age, gender, and education) rather than personality traits that are associated with the diagnosis of PUD at a particular point in time. Further prospective studies with a longitudinal design and multiple assessments would be needed to fully understand if the FFM personality traits serve as risk factors for the development of PUD.

  20. The Rasch Rating Model and the Disordered Threshold Controversy

    ERIC Educational Resources Information Center

    Adams, Raymond J.; Wu, Margaret L.; Wilson, Mark

    2012-01-01

    The Rasch rating (or partial credit) model is a widely applied item response model that is used to model ordinal observed variables that are assumed to collectively reflect a common latent variable. In the application of the model there is considerable controversy surrounding the assessment of fit. This controversy is most notable when the set of…

  1. The Threshold Bias Model: A Mathematical Model for the Nomothetic Approach of Suicide

    PubMed Central

    Folly, Walter Sydney Dutra

    2011-01-01

    Background: Comparative and predictive analyses of suicide data from different countries are difficult to perform due to varying approaches and the lack of comparative parameters. Methodology/Principal Findings: A simple model (the Threshold Bias Model) was tested for comparative and predictive analyses of suicide rates by age. The model comprises a six-parameter distribution that was applied to the USA suicide rates by age for the years 2001 and 2002. Subsequently, linear extrapolations of the parameter values obtained for these years were performed in order to estimate the values corresponding to the year 2003. The calculated distributions agreed reasonably well with the aggregate data. The model was also used to determine the age above which suicide rates become statistically observable in USA, Brazil and Sri Lanka. Conclusions/Significance: The Threshold Bias Model has considerable potential applications in demographic studies of suicide. Moreover, since the model can be used to predict the evolution of suicide rates based on information extracted from past data, it will be of great interest to suicidologists and other researchers in the field of mental health. PMID:21909431

  2. Immunization and epidemic threshold of an SIS model in complex networks

    NASA Astrophysics Data System (ADS)

    Wu, Qingchu; Fu, Xinchu

    2016-02-01

    We propose an improved mean-field model to investigate immunization strategies of an SIS model in complex networks. Unlike the traditional mean-field approach, the improved model utilizes the degree information from both before and after immunization. The epidemic threshold of degree-correlated networks can be obtained by linear stability analysis. For degree-uncorrelated networks, the model is reduced to the SIS epidemic model in networks after removing the immunized nodes. Compared to the previous results of random and targeted immunization schemes on degree-uncorrelated networks, we find that the infectious disease has a lower epidemic threshold.

  3. Epidemic threshold of node-weighted susceptible-infected-susceptible models on networks

    NASA Astrophysics Data System (ADS)

    Wu, Qingchu; Zhang, Haifeng

    2016-08-01

    In this paper, we investigate the epidemic spreading on random and regular networks through a pairwise-type model with a general transmission rate to evaluate the influence of the node-weight distribution. By using block matrix theory, an epidemic threshold index is formulated to predict the epidemic outbreak. An upper bound of the epidemic threshold is obtained by analyzing the monotonicity of the spectral radius for nonnegative matrices. Theoretical results suggest that the epidemic threshold is dependent on both matrices H^(1) and H^(2), with the first matrix related to the mean-field model and the second reflecting the heterogeneous transmission rates. In particular, for a linear transmission rate, this study shows the negative correlation between the heterogeneity of the weight distribution and the epidemic threshold, which differs from existing results for edge-weighted networks.

  4. Analytical estimate of stochasticity thresholds in Fermi-Pasta-Ulam and φ4 models.

    PubMed

    Ponno, A; Galgani, L; Guerra, F

    2000-06-01

    We consider an infinitely extended Fermi-Pasta-Ulam model. We show that the slowly modulating amplitude of a narrow wave packet asymptotically satisfies the nonlinear Schrödinger equation (NLS) on the real axis. Using well known results from inverse scattering theory, we then show that there exists a threshold of the energy of the central normal mode of the packet, with the following properties. Below threshold the NLS equation presents a quasilinear regime with no solitons in the solution of the equation, and the wave packet width remains narrow. Above threshold generation of solitons is possible instead and the packet of normal modes can spread out. Analogous results are obtained for the φ4 model. We also give an analytical estimate for such thresholds. Finally, we make a comparison with the numerical results known to us and show that they are in remarkable agreement with our estimates. PMID:11088405

  6. Irregular seismic data reconstruction based on exponential threshold model of POCS method

    NASA Astrophysics Data System (ADS)

    Gao, Jian-Jun; Chen, Xiao-Hong; Li, Jing-Ye; Liu, Guo-Chang; Ma, Jian

    2010-09-01

    Irregular seismic data causes problems with multi-trace processing algorithms and degrades processing quality. We introduce the Projection onto Convex Sets (POCS) based image restoration method into the seismic data reconstruction field to interpolate irregularly missing traces. For entire dead traces, we transfer the POCS iteration reconstruction process from the time domain to the frequency domain to save computational cost because forward and reverse Fourier time transforms are not needed. In each iteration, the selection threshold parameter is important for reconstruction efficiency. In this paper, we designed two types of threshold models to reconstruct irregularly missing seismic data. The experimental results show that an exponential threshold can greatly reduce iterations and improve reconstruction efficiency compared to a linear threshold for the same reconstruction result. We also analyze the anti-noise and anti-alias ability of the POCS reconstruction method. Finally, theoretical model tests and real data examples indicate that the proposed method is efficient and applicable.
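
    A hedged sketch of the iterative reconstruction idea described above, i.e. POCS with a decaying threshold applied in the Fourier domain; the 2D FFT, the two decay schedules and the toy plane-wave data are assumptions for illustration, not the authors' implementation:

      import numpy as np

      def pocs_interpolate(data, mask, n_iter=50, min_frac=1e-3, schedule="exponential"):
          """Fill missing traces in `data` (2D array, time x trace) by POCS iterations.
          `mask` is 1 for recorded traces and 0 for dead/missing traces."""
          x = data * mask
          tau_max = np.abs(np.fft.fft2(x)).max()
          for k in range(1, n_iter + 1):
              if schedule == "exponential":
                  tau = tau_max * min_frac ** (k / n_iter)               # exponentially decaying threshold
              else:
                  tau = tau_max * (1.0 - (1.0 - min_frac) * k / n_iter)  # linearly decaying threshold
              spec = np.fft.fft2(x)
              spec[np.abs(spec) < tau] = 0.0                             # keep only the strongest Fourier components
              x = data * mask + np.real(np.fft.ifft2(spec)) * (1 - mask) # re-insert the observed traces
          return x

      # toy usage: a dipping plane wave with roughly 30% of the traces killed
      nt, nx = 128, 64
      t, offs = np.meshgrid(np.arange(nt), np.arange(nx), indexing="ij")
      clean = np.sin(2 * np.pi * (t - 0.5 * offs) / 32.0)
      mask = (np.random.default_rng(0).random(nx) > 0.3).astype(float)[None, :]
      recon = pocs_interpolate(clean, mask)
      print("misfit on the missing traces:", np.linalg.norm((recon - clean) * (1 - mask)))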

  7. Using Fixed Thresholds with Grouped Data in Structural Equation Modeling

    ERIC Educational Resources Information Center

    Koran, Jennifer; Hancock, Gregory R.

    2010-01-01

    Valuable methods have been developed for incorporating ordinal variables into structural equation models using a latent response variable formulation. However, some model parameters, such as the means and variances of latent factors, can be quite difficult to interpret because the latent response variables have an arbitrary metric. This limitation…

  8. Investigating the genetic architecture of conditional strategies using the environmental threshold model.

    PubMed

    Buzatto, Bruno A; Buoro, Mathieu; Hazel, Wade N; Tomkins, Joseph L

    2015-12-22

    The threshold expression of dichotomous phenotypes that are environmentally cued or induced comprises the vast majority of phenotypic dimorphisms in colour, morphology, behaviour and life history. Modelled as conditional strategies under the framework of evolutionary game theory, the quantitative genetic basis of these traits is a challenge to estimate. The challenge exists firstly because the phenotypic expression of the trait is dichotomous and secondly because the apparent environmental cue is separate from the biological signal pathway that induces the switch between phenotypes. It is the cryptic variation underlying the translation of cue to phenotype that we address here. With a 'half-sib common environment' and a 'family-level split environment' experiment, we examine the environmental and genetic influences that underlie male dimorphism in the earwig Forficula auricularia. From the conceptual framework of the latent environmental threshold (LET) model, we use pedigree information to dissect the genetic architecture of the threshold expression of forceps length. We investigate for the first time the strength of the correlation between observable and cryptic 'proximate' cues. Furthermore, in support of the environmental threshold model, we found no evidence for a genetic correlation between cue and the threshold between phenotypes. Our results show strong correlations between observable and proximate cues and less genetic variation for thresholds than previous studies have suggested. We discuss the importance of generating better estimates of the genetic variation for thresholds when investigating the genetic architecture and heritability of threshold traits. By investigating genetic architecture by means of the LET model, our study supports several key evolutionary ideas related to conditional strategies and improves our understanding of environmentally cued decisions. PMID:26674955

  9. A two-step framework for over-threshold modelling of environmental extremes

    NASA Astrophysics Data System (ADS)

    Bernardara, P.; Mazas, F.; Kergadallan, X.; Hamm, L.

    2014-03-01

    The evaluation of the probability of occurrence of extreme natural events is important for the protection of urban areas, industrial facilities and others. Traditionally, the extreme value theory (EVT) offers a valid theoretical framework on this topic. In an over-threshold modelling (OTM) approach, Pickands' theorem (Pickands, 1975) states that, for a sample composed of independent and identically distributed (i.i.d.) values, the distribution of the data exceeding a given threshold converges to a generalized Pareto distribution (GPD). Following this theoretical result, the analysis of realizations of environmental variables exceeding a threshold has spread widely in the literature. However, applying this theorem to an auto-correlated time series logically involves two successive and complementary steps: the first one is required to build a sample of i.i.d. values from the available information, as required by the EVT; the second to set the threshold for the optimal convergence toward the GPD. In the past, the same threshold was often employed both for sampling observations and for meeting the hypothesis of extreme value convergence. This confusion can lead to an erroneous understanding of methodologies and tools available in the literature. This paper aims at clarifying the conceptual framework involved in threshold selection, reviewing the available methods for the application of both steps and illustrating them with a double threshold approach.
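
    A minimal sketch of the two-step workflow described above: first build an approximately i.i.d. sample of peaks by declustering, then choose a (generally higher) EVT threshold and fit the GPD to the excesses. The declustering rule, the quantile choices and the synthetic record are assumptions:

      import numpy as np
      from scipy.stats import genpareto

      rng = np.random.default_rng(0)
      series = rng.gamma(shape=2.0, scale=1.0, size=20000)   # stand-in for an auto-correlated environmental record

      # Step 1: physical/statistical sampling -- keep one peak per exceedance cluster (declustering)
      sampling_threshold = np.quantile(series, 0.95)
      above = series > sampling_threshold
      peaks, current = [], []
      for v, a in zip(series, above):
          if a:
              current.append(v)
          elif current:
              peaks.append(max(current))                     # one maximum per cluster
              current = []
      if current:
          peaks.append(max(current))
      peaks = np.array(peaks)

      # Step 2: EVT threshold for convergence to the GPD, fitted on the excesses above it
      evt_threshold = np.quantile(peaks, 0.5)
      excesses = peaks[peaks > evt_threshold] - evt_threshold
      shape, loc, scale = genpareto.fit(excesses, floc=0.0)
      print(f"{len(excesses)} excesses, GPD shape = {shape:.3f}, scale = {scale:.3f}")
      # return levels would then follow from the fitted GPD quantiles and the cluster exceedance rate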

  10. Framework for determining airport daily departure and arrival delay thresholds: statistical modelling approach.

    PubMed

    Wesonga, Ronald; Nabugoomu, Fabian

    2016-01-01

    The study derives a framework for assessing airport efficiency through evaluating optimal arrival and departure delay thresholds. Assumptions of airport efficiency measurements, though based upon minimum numeric values such as 15 min of turnaround time, cannot be extrapolated to determine proportions of delay-days of an airport. This study explored the concept of delay threshold to determine the proportion of delay-days as an expansion of the theory of delay and our previous work. A data-driven approach using statistical modelling was applied to a limited set of determinants of daily delay at an airport. For the purpose of testing the efficacy of the threshold levels, operational data for Entebbe International Airport were used as a case study. Findings show differences in the proportions of delay at departure (μ = 0.499; 95 % CI = 0.023) and arrival (μ = 0.363; 95 % CI = 0.022). A multivariate logistic model confirmed an optimal daily departure and arrival delay threshold of 60 % for the airport given the four probable thresholds {50, 60, 70, 80}. The decision for the threshold value was based on the number of significant determinants, the goodness of fit statistics based on the Wald test and the area under the receiver operating curves. These findings propose a modelling framework to generate information relevant to Air Traffic Management for planning and measuring airport operational efficiency. PMID:27441145

  11. Modeling the Threshold Wind Speed for Saltation Initiation over Heterogeneous Sand Beds

    NASA Astrophysics Data System (ADS)

    Turney, F. A.; Martin, R. L.; Kok, J. F.

    2015-12-01

    Initiation of aeolian sediment transport is key to understanding the formation of dunes, emission of dust into the atmosphere, and landscape erosion. Previous models of the threshold wind speed required for saltation initiation have assumed that the particle bed is monodisperse and homogeneous in arrangement, thereby ignoring what is in reality a distribution of particle lifting thresholds, influenced by variability in soil particle sizes and bed geometry. To help overcome this problem, we present a numerical model that determines the distribution of threshold wind speeds required for particle lifting for a given soil size distribution. The model results are evaluated against high frequency wind speed and saltation data from a recent field campaign in Oceano Dunes in Southern California. The results give us insight into the range of lifting thresholds present during incipient sediment transport and the simplifications that are often made to characterize the process. In addition, this study provides a framework for moving beyond the 'fluid threshold' paradigm, which is known to be inaccurate, especially for near-threshold conditions.

  12. Medication Adherence Patterns after Hospitalization for Coronary Heart Disease. A Population-Based Study Using Electronic Records and Group-Based Trajectory Models

    PubMed Central

    Librero, Julián; Sanfélix-Gimeno, Gabriel; Peiró, Salvador

    2016-01-01

    Objective: To identify adherence patterns over time and their predictors for evidence-based medications used after hospitalization for coronary heart disease (CHD). Patients and Methods: We built a population-based retrospective cohort of all patients discharged after hospitalization for CHD from public hospitals in the Valencia region (Spain) during 2008 (n = 7462). From this initial cohort, we created 4 subcohorts with at least one prescription (filled or not) from each therapeutic group (antiplatelet, beta-blockers, ACEI/ARB, statins) within the first 3 months after discharge. Monthly adherence was defined as having ≥24 days covered out of 30, leading to a repeated binary outcome measure. We assessed membership in adherence trajectory groups using group-based trajectory models. We also analyzed predictors of the different adherence patterns using multinomial logistic regression. Results: We identified a maximum of 5 different adherence patterns: 1) Nearly-always adherent patients; 2) An early gap in adherence with a later recovery; 3) Brief gaps in medication use or occasional users; 4) A slow decline in adherence; and 5) A fast decline. These patterns represented variable proportions of patients, the descending trajectories being more frequent for the beta-blocker and ACEI/ARB cohorts (16% and 17%, respectively) than the antiplatelet and statin cohorts (10% and 8%, respectively). Predictors of poor or intermediate adherence patterns were having a main diagnosis of unstable angina or other forms of CHD vs. AMI in the index hospitalization, being born outside Spain, requiring copayment or being older. Conclusion: Distinct adherence patterns over time and their predictors were identified. This may be a useful approach for targeting improvement interventions in patients with poor adherence patterns. PMID:27551748
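
    A sketch of how the repeated binary adherence outcome described above could be built from dispensing records before it is fed to a group-based trajectory model; the column names and toy records are hypothetical, not the study's data schema:

      import pandas as pd

      # hypothetical dispensing records: one row per filled prescription
      fills = pd.DataFrame({
          "patient_id": [1, 1, 1, 2],
          "fill_date":  pd.to_datetime(["2008-02-03", "2008-03-01", "2008-05-10", "2008-02-15"]),
          "days_supply": [30, 30, 30, 30],
      })

      def monthly_adherence(fills, start, n_months=12, min_days=24):
          """Return a patient x month table of 0/1 adherence: 1 if at least `min_days`
          of the 30-day window are covered by dispensed supply."""
          out = {}
          for pid, grp in fills.groupby("patient_id"):
              covered = set()
              for _, row in grp.iterrows():
                  for d in pd.date_range(row.fill_date, periods=int(row.days_supply)):
                      covered.add(d.normalize())
              flags = []
              for m in range(n_months):
                  window = pd.date_range(start + pd.DateOffset(months=m), periods=30)
                  flags.append(int(sum(d in covered for d in window) >= min_days))
              out[pid] = flags
          return pd.DataFrame(out).T

      print(monthly_adherence(fills, pd.Timestamp("2008-02-01")))
      # each row (one patient) is the repeated binary outcome fed to the trajectory model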

  13. Estimating nerve excitation thresholds to cutaneous electrical stimulation by finite element modeling combined with a stochastic branching nerve fiber model.

    PubMed

    Mørch, Carsten Dahl; Hennings, Kristian; Andersen, Ole Kæseler

    2011-04-01

    Electrical stimulation of cutaneous tissue through surface electrodes is an often-used method for evoking experimental pain. However, at painful intensities both non-nociceptive Aβ-fibers and nociceptive Aδ- and C-fibers may be activated by the electrical stimulation. This study proposes a finite element (FE) model of the extracellular potential and a stochastic branching fiber model of afferent fiber excitation thresholds. The FE model described four horizontal layers (stratum corneum, epidermis, dermis, and hypodermis) and was used to estimate the excitation thresholds of Aβ-fibers terminating in the dermis and Aδ-fibers terminating in the epidermis. The perception thresholds of 11 electrodes with diameters ranging from 0.2 to 20 mm were modeled and assessed on the volar forearm of healthy human volunteers by an adaptive two-alternative forced choice algorithm. The model showed that the magnitude of the current density was highest for smaller electrodes and decreased through the skin. The excitation thresholds of the Aδ-fibers were lower than the excitation thresholds of Aβ-fibers when current was applied through small, but not large electrodes. The experimentally assessed perception threshold followed the lowest excitation threshold of the modeled fibers. The model confirms that preferential excitation of Aδ-fibers may be achieved by small electrode stimulation due to higher current density in the dermoepidermal junction. PMID:21207174

  14. Does Imaging Technology Cause Cancer? Debunking the Linear No-Threshold Model of Radiation Carcinogenesis.

    PubMed

    Siegel, Jeffry A; Welsh, James S

    2016-04-01

    In the past several years, there has been a great deal of attention from the popular media focusing on the alleged carcinogenicity of low-dose radiation exposures received by patients undergoing medical imaging studies such as X-rays, computed tomography scans, and nuclear medicine scintigraphy. The media has based its reporting on the plethora of articles published in the scientific literature that claim that there is "no safe dose" of ionizing radiation, while essentially ignoring all the literature demonstrating the opposite point of view. But this reported "scientific" literature in turn bases its estimates of cancer induction on the linear no-threshold hypothesis of radiation carcinogenesis. The use of the linear no-threshold model has yielded hundreds of articles, all of which predict a definite carcinogenic effect of any dose of radiation, regardless of how small. Therefore, hospitals and professional societies have begun campaigns and policies aiming to reduce the use of certain medical imaging studies based on perceived risk:benefit ratio assumptions. However, as they are essentially all based on the linear no-threshold model of radiation carcinogenesis, the risk:benefit ratio models used to calculate the hazards of radiological imaging studies may be grossly inaccurate if the linear no-threshold hypothesis is wrong. Here, we review the myriad inadequacies of the linear no-threshold model and cast doubt on the various studies based on this overly simplistic model.

  15. Threshold conditions for SIS epidemic models on edge-weighted networks

    NASA Astrophysics Data System (ADS)

    Wu, Qingchu; Zhang, Fei

    2016-07-01

    We consider the disease dynamics of a susceptible-infected-susceptible model in weighted random and regular networks. By using the pairwise approximation, we build an edge-based compartment model, from which the condition of epidemic outbreak is obtained. Our results suggest that there exists a remarkable difference between the linear and nonlinear transmission rate. For a linear transmission rate, the epidemic threshold is completely determined by the mean weight, which is different from the susceptible-infected-recovered model framework. While for a nonlinear transmission rate, the epidemic threshold is not only related to the mean weight, but also closely related to the heterogeneity of weight distribution.

  16. A profile-aware resist model with variable threshold

    NASA Astrophysics Data System (ADS)

    Moulis, Sylvain; Farys, Vincent; Belledent, Jérôme; Thérèse, Romain; Lan, Song; Zhao, Qian; Feng, Mu; Depre, Laurent; Dover, Russell

    2012-11-01

    The pursuit of ever smaller transistors has pushed technological innovations in the field of lithography. In order to continue following the path of Moore's law, several solutions have been proposed: EUV, e-beam and double patterning lithography. As EUV and e-beam lithography are still not ready for mass production for 20 nm and 14 nm nodes, double patterning lithography plays an important role for these nodes. In this work, we focus on a Self-Aligned Double-Patterning process (SADP) which consists of depositing a spacer material on each side of a mandrel exposed during a first lithography step, dividing the pitch in two after transfer into the substrate, and then cutting the unwanted patterns through a second lithography exposure. In the specific case where spacers are deposited directly on the flanks of the resist, it is crucial to control the resist profile, as a poor profile could induce final CD errors or even spacer collapse. One possibility to prevent these defects from occurring is to predict the profile of the resist at the OPC verification stage. For that, we need an empirical resist model that is able to predict such behaviour. This work is a study of a profile-aware resist model that is calibrated using both atomic force microscopy (AFM) and scanning electron microscopy (SEM) data, both taken using a focus and exposure matrix (FEM).

  17. Methods for Assessing Item, Step, and Threshold Invariance in Polytomous Items Following the Partial Credit Model

    ERIC Educational Resources Information Center

    Penfield, Randall D.; Myers, Nicholas D.; Wolfe, Edward W.

    2008-01-01

    Measurement invariance in the partial credit model (PCM) can be conceptualized in several different but compatible ways. In this article the authors distinguish between three forms of measurement invariance in the PCM: step invariance, item invariance, and threshold invariance. Approaches for modeling these three forms of invariance are proposed,…

  18. Study on the threshold of a stochastic SIR epidemic model and its extensions

    NASA Astrophysics Data System (ADS)

    Zhao, Dianli

    2016-09-01

    This paper provides a simple but effective method for estimating the threshold of a class of stochastic epidemic models by use of the nonnegative semimartingale convergence theorem. Firstly, the threshold R0^SIR is obtained for the stochastic SIR model with a saturated incidence rate; whether its value is below or above 1 completely determines whether the disease goes extinct or prevails, for any intensity of the white noise. Besides, when R0^SIR > 1, the system is proved to be convergent in time mean. Then, the thresholds of the stochastic SIVS models with or without a saturated incidence rate are also established by the same method. Compared with the previously known literature, the related results are improved and the method is simpler.
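
    An illustrative Euler-Maruyama sketch of a stochastic SIR model with saturated incidence, showing the extinction-versus-persistence behaviour that such thresholds govern; the parameter values and the placement of the noise term are assumptions, not the paper's exact system:

      import numpy as np

      def simulate_sir(beta=0.4, gamma=0.1, mu=0.02, Lambda=0.02, a=0.1, sigma=0.2,
                       T=400.0, dt=0.01, S0=0.9, I0=0.1, seed=1):
          """Euler-Maruyama integration of
             dS = (Lambda - mu*S - beta*S*I/(1+a*I)) dt - sigma*S*I/(1+a*I) dB
             dI = (beta*S*I/(1+a*I) - (mu+gamma)*I) dt + sigma*S*I/(1+a*I) dB."""
          rng = np.random.default_rng(seed)
          n = int(T / dt)
          S, I = S0, I0
          path = np.empty(n)
          for k in range(n):
              inc = beta * S * I / (1.0 + a * I)
              noise = sigma * S * I / (1.0 + a * I) * np.sqrt(dt) * rng.standard_normal()
              S += (Lambda - mu * S - inc) * dt - noise
              I += (inc - (mu + gamma) * I) * dt + noise
              S, I = max(S, 0.0), max(I, 0.0)   # keep the toy simulation nonnegative
              path[k] = I
          return path

      for sigma in (0.05, 0.8):
          I_path = simulate_sir(sigma=sigma)
          print(f"sigma = {sigma}: time-average of I over the last half = {I_path[len(I_path)//2:].mean():.4f}")
      # stronger noise lowers the effective threshold value, so extinction can occur even when the deterministic R0 > 1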

  19. Modeling Soil Quality Thresholds to Ecosystem Recovery at Fort Benning, Georgia, USA

    SciTech Connect

    Garten Jr., C.T.

    2004-03-08

    The objective of this research was to use a simple model of soil C and N dynamics to predict nutrient thresholds to ecosystem recovery on degraded soils at Fort Benning, Georgia, in the southeastern USA. The model calculates aboveground and belowground biomass, soil C inputs and dynamics, soil N stocks and availability, and plant N requirements. A threshold is crossed when predicted soil N supplies fall short of predicted N required to sustain biomass accrual at a specified recovery rate. Four factors were important to development of thresholds to recovery: (1) initial amounts of aboveground biomass, (2) initial soil C stocks (i.e., soil quality), (3) relative recovery rates of biomass, and (4) soil sand content. Thresholds to ecosystem recovery predicted by the model should not be interpreted independent of a specified recovery rate. Initial soil C stocks influenced the predicted patterns of recovery by both old field and forest ecosystems. Forests and old fields on soils with varying sand content had different predicted thresholds to recovery. Soil C stocks at barren sites on Fort Benning generally lie below predicted thresholds to 100% recovery of desired future ecosystem conditions defined on the basis of aboveground biomass (18000 versus 360 g m^-2 for forests and old fields, respectively). Calculations with the model indicated that reestablishment of vegetation on barren sites to a level below the desired future condition is possible at recovery rates used in the model, but the time to 100% recovery of desired future conditions, without crossing a nutrient threshold, is prolonged by a reduced rate of forest growth. Predicted thresholds to ecosystem recovery were less on soils with more than 70% sand content. The lower thresholds for old field and forest recovery on more sandy soils are apparently due to higher relative rates of net soil N mineralization in more sandy soils. Calculations with the model indicate that a combination of desired future

  20. Complex Dynamic Thresholds and Generation of the Action Potentials in the Neural-Activity Model

    NASA Astrophysics Data System (ADS)

    Kirillov, S. Yu.; Nekorkin, V. I.

    2016-05-01

    This work is devoted to studying the processes of activation of the neurons whose excitation thresholds are not constant and vary in time (the so-called dynamic thresholds). The neuron dynamics is described by the FitzHugh-Nagumo model with nonlinear behavior of the recovery variable. The neuron response to the external pulsed activating action in the presence of a slowly varying synaptic current is studied within the framework of this model. The structure of the dynamic threshold is studied and its properties depending on the external-action parameters are established. It is found that the formation of the "folds" in the separatrix threshold manifold in the model phase space is a typical feature of the complex dynamic threshold. High neuron sensitivity to the action of the comparatively weak slow control signals is established. This explains the capability of the neurons to perform flexible tuning of their selective properties for detecting various external signals in sufficiently short times (of the order of duration of several spikes).
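
    A sketch of the threshold behaviour discussed above, using the standard FitzHugh-Nagumo equations driven by a brief pulse on top of a slow background current; the generic textbook form and parameters are assumptions and do not reproduce the modified recovery dynamics of the cited model:

      import numpy as np

      def fhn_response(pulse_amp, I_slow=0.0, T=100.0, dt=0.01,
                       a=0.7, b=0.8, eps=0.08, t_pulse=20.0, w_pulse=1.0):
          """Integrate the FitzHugh-Nagumo equations
             dv/dt = v - v**3/3 - w + I(t),  dw/dt = eps*(v + a - b*w)
          with a rectangular pulse of height `pulse_amp` added to a constant slow current."""
          n = int(T / dt)
          v, w = -1.2, -0.6                      # near the resting state for I = 0
          vmax = -np.inf
          for k in range(n):
              t = k * dt
              I = I_slow + (pulse_amp if t_pulse <= t < t_pulse + w_pulse else 0.0)
              dv = v - v**3 / 3.0 - w + I
              dw = eps * (v + a - b * w)
              v, w = v + dv * dt, w + dw * dt
              vmax = max(vmax, v)
          return vmax

      # sweep the pulse amplitude: the jump in peak voltage marks the firing threshold
      for amp in np.linspace(0.0, 1.0, 11):
          print(f"pulse amplitude {amp:.1f} -> peak v = {fhn_response(amp):.2f}")
      # repeating the sweep with a different I_slow shifts the threshold, i.e. the threshold is dynamic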

  1. Uncertainty in urban stormwater quality modelling: the effect of acceptability threshold in the GLUE methodology.

    PubMed

    Freni, Gabriele; Mannina, Giorgio; Viviani, Gaspare

    2008-04-01

    Uncertainty analysis in integrated urban drainage modelling is of growing importance in the field of water quality. However, only a few studies deal with uncertainty quantification in urban drainage modelling; furthermore, the few existing studies mainly focus on quantitative sewer flow modelling rather than uncertainty in water quality aspects. In this context, the generalised likelihood uncertainty estimation (GLUE) methodology was applied for the evaluation of the uncertainty of an integrated urban drainage model and some of its subjective hypotheses have been explored. More specifically, the influence of the subjective choice of the acceptability threshold has been examined in order to gain insight into its effect on the model results. The model has been applied to the Savena case study (Bologna, Italy) where water quality and quantity data were available. The model results show a strong influence of the acceptability threshold selection and confirm the importance of the modeller's experience in the application of GLUE uncertainty analysis.
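
    A minimal GLUE-style sketch on a toy recession model, showing how the choice of the acceptability threshold changes the number of retained 'behavioural' parameter sets and the width of the resulting uncertainty bounds; the toy model, the Nash-Sutcliffe likelihood measure and all numbers are assumptions:

      import numpy as np

      rng = np.random.default_rng(42)

      # toy drainage model: exponential recession observed with noise
      t = np.linspace(0.0, 10.0, 50)
      def model(k, q0):
          return q0 * np.exp(-k * t)
      obs = model(0.3, 5.0) + rng.normal(0.0, 0.2, size=t.size)

      def nse(sim, obs):
          """Nash-Sutcliffe efficiency, used here as the GLUE likelihood measure."""
          return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

      # Monte Carlo sampling of the parameter space
      ks = rng.uniform(0.05, 1.0, 5000)
      q0s = rng.uniform(1.0, 10.0, 5000)
      sims = np.array([model(k, q0) for k, q0 in zip(ks, q0s)])
      scores = np.array([nse(s, obs) for s in sims])

      # effect of the acceptability threshold on the retained sets and the 90% bounds
      for threshold in (0.0, 0.5, 0.8):
          behavioural = scores > threshold
          if not behavioural.any():
              continue
          lo = np.quantile(sims[behavioural], 0.05, axis=0)   # unweighted bounds; GLUE usually likelihood-weights them
          hi = np.quantile(sims[behavioural], 0.95, axis=0)
          print(f"threshold {threshold:.1f}: {behavioural.sum()} behavioural sets, "
                f"mean 90% band width = {(hi - lo).mean():.3f}")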

  2. Using a combined population-based and kinetic modelling approach to assess timescales and durations of magma migration activities prior to the 1669 flank eruption of Mt. Etna

    NASA Astrophysics Data System (ADS)

    Kahl, M.; Morgan, D. J.; Viccaro, M.; Dingwell, D. B.

    2015-12-01

    The March-July eruption of Mt. Etna in 1669 is ranked as one of the most destructive and voluminous eruptions of Etna volcano in historical times. To assess threats from future eruptions, a better understanding of how and over what timescales magma moved underground prior to and during the 1669 eruption is required. We present a combined population based and kinetic modelling approach [1-2] applied to 185 olivine crystals that erupted during the 1669 eruption. By means of this approach we provide, for the first time, a dynamic picture of magma mixing and magma migration activity prior to and during the 1669 flank eruption of Etna volcano. Following the work of [3] we have studied 10 basaltic lava samples (five SET1 and five SET2 samples) that were erupted from different fissures that opened between 950 and 700 m a.s.l. Following previous work [1-2] we were able to classify different populations of olivine based on their overall core and rim compositional record and the prevalent zoning type (i.e. normal vs. reverse). The core plateau compositions of the SET1 and SET2 olivines range from Fo70 up to Fo83 with a single peak at Fo75-76. The rims differ significantly and can be distinguished into two different groups. Olivine rims from the SET1 samples are generally more evolved and range from Fo50 to Fo64 with a maximum at Fo55-57. SET2 olivine rims vary between Fo65-75 with a peak at Fo69. SET1 and SET2 olivines display normal zonation with cores at Fo75-76 and diverging rim records (Fo55-57 and Fo65-75). The diverging core and rim compositions recorded in the SET1 and SET2 olivines can be attributed to magma evolution possibly in three different magmatic environments (MEs): M1 (=Fo75-76), M2 (=Fo69) and M3 (=Fo55-57) with magma transfer and mixing amongst them. The MEs established in this study differ slightly from those identified in previous works [1-2]. We note the relative lack of olivines with Fo-rich core and rim compositions indicating a major mafic magma

  3. The threshold of a stochastic SIVS epidemic model with nonlinear saturated incidence

    NASA Astrophysics Data System (ADS)

    Zhao, Dianli; Zhang, Tiansi; Yuan, Sanling

    2016-02-01

    A stochastic version of the SIS epidemic model with vaccination (SIVS) is studied. When the noise is small, the threshold parameter is identified, which determines the extinction and persistence of the epidemic. Besides, the results show that large noise will suppress the epidemic from prevailing regardless of the saturated incidence. The results are illustrated by computer simulations.

  4. Vibrotactile sensitivity threshold: nonlinear stochastic mechanotransduction model of the Pacinian Corpuscle.

    PubMed

    Biswas, Abhijit; Manivannan, M; Srinivasan, Mandayam A

    2015-01-01

    Based on recent discoveries of stretch- and voltage-activated ion channels in the receptive area of the Pacinian Corpuscle (PC), this paper describes a two-stage mechanotransduction model of its near-threshold Vibrotactile (VT) sensitivity, valid from 10 Hz to a few kHz. The model is based on the nonlinear and stochastic behavior of the ion channels represented as dependent charge sources loaded with membrane impedance. It simulates the neural response of the PC considering the morphological and statistical properties of the receptor potential and action potential with the help of an adaptive relaxation pulse frequency modulator. This model also simulates the plateaus and nonmonotonic saturation of spike rate characteristics. The stochastic simulation based on the addition of mechanical and neural noise shows that the VT Sensitivity Threshold (VTST) at higher frequencies is more noise dependent. Above 800 Hz, even an SNR = 150 improves the neurophysiological VTST by more than 3 dBμ. In that frequency range, an absence of the entrainment threshold and a lower sensitivity index near the absolute threshold make the upper bound of the psychophysical VTST more dependent on the experimental protocol and physical set-up. This model can be extended to simulate the neural response of a group of PCs.

  5. Local Bifurcations and Optimal Theory in a Delayed Predator-Prey Model with Threshold Prey Harvesting

    NASA Astrophysics Data System (ADS)

    Tankam, Israel; Tchinda Mouofo, Plaire; Mendy, Abdoulaye; Lam, Mountaga; Tewa, Jean Jules; Bowong, Samuel

    2015-06-01

    We investigate the effects of time delay and piecewise-linear threshold policy harvesting for a delayed predator-prey model. This is the first time that a Holling type III response function and the present threshold policy harvesting have been combined with time delay. The trajectories of our delayed system are bounded; the stability of each equilibrium is analyzed with and without delay; there are local bifurcations such as saddle-node and Hopf bifurcations; optimal harvesting is also investigated. Numerical simulations are provided in order to illustrate each result.

  6. Threshold Values for Identification of Contamination Predicted by Reduced-Order Models

    DOE PAGESBeta

    Last, George V.; Murray, Christopher J.; Bott, Yi-Ju; Brown, Christopher F.

    2014-12-31

    The U.S. Department of Energy’s (DOE’s) National Risk Assessment Partnership (NRAP) Project is developing reduced-order models to evaluate potential impacts on underground sources of drinking water (USDWs) if CO2 or brine leaks from deep CO2 storage reservoirs. Threshold values, below which there would be no predicted impacts, were determined for portions of two aquifer systems. These threshold values were calculated using an interwell approach for determining background groundwater concentrations that is an adaptation of methods described in the U.S. Environmental Protection Agency’s Unified Guidance for Statistical Analysis of Groundwater Monitoring Data at RCRA Facilities.

  7. Predicting the epidemic threshold of the susceptible-infected-recovered model

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Liu, Quan-Hui; Zhong, Lin-Feng; Tang, Ming; Gao, Hui; Stanley, H. Eugene

    2016-04-01

    Researchers have developed several theoretical methods for predicting epidemic thresholds, including the mean-field like (MFL) method, the quenched mean-field (QMF) method, and the dynamical message passing (DMP) method. When these methods are applied to predict epidemic threshold they often produce differing results and their relative levels of accuracy are still unknown. We systematically analyze these two issues—relationships among differing results and levels of accuracy—by studying the susceptible-infected-recovered (SIR) model on uncorrelated configuration networks and a group of 56 real-world networks. In uncorrelated configuration networks the MFL and DMP methods yield identical predictions that are larger and more accurate than the prediction generated by the QMF method. As for the 56 real-world networks, the epidemic threshold obtained by the DMP method is more likely to reach the accurate epidemic threshold because it incorporates full network topology information and some dynamical correlations. We find that in most of the networks with positive degree-degree correlations, an eigenvector localized on the high k-core nodes, or a high level of clustering, the epidemic threshold predicted by the MFL method, which uses the degree distribution as the only input information, performs better than the other two methods.

  8. Predicting the epidemic threshold of the susceptible-infected-recovered model

    PubMed Central

    Wang, Wei; Liu, Quan-Hui; Zhong, Lin-Feng; Tang, Ming; Gao, Hui; Stanley, H. Eugene

    2016-01-01

    Researchers have developed several theoretical methods for predicting epidemic thresholds, including the mean-field like (MFL) method, the quenched mean-field (QMF) method, and the dynamical message passing (DMP) method. When these methods are applied to predict epidemic threshold they often produce differing results and their relative levels of accuracy are still unknown. We systematically analyze these two issues—relationships among differing results and levels of accuracy—by studying the susceptible-infected-recovered (SIR) model on uncorrelated configuration networks and a group of 56 real-world networks. In uncorrelated configuration networks the MFL and DMP methods yield identical predictions that are larger and more accurate than the prediction generated by the QMF method. As for the 56 real-world networks, the epidemic threshold obtained by the DMP method is more likely to reach the accurate epidemic threshold because it incorporates full network topology information and some dynamical correlations. We find that in most of the networks with positive degree-degree correlations, an eigenvector localized on the high k-core nodes, or a high level of clustering, the epidemic threshold predicted by the MFL method, which uses the degree distribution as the only input information, performs better than the other two methods. PMID:27091705
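
    A sketch of the three threshold predictors compared above, evaluated on a synthetic scale-free network: the MFL estimate from the degree moments, the QMF estimate from the adjacency spectral radius, and the DMP estimate from the non-backtracking spectral radius (computed here through the Ihara-Bass linearization); the network and its parameters are assumptions:

      import numpy as np
      import networkx as nx

      # synthetic heterogeneous network standing in for an uncorrelated configuration network
      G = nx.barabasi_albert_graph(n=500, m=3, seed=7)
      A = nx.to_numpy_array(G)
      degrees = A.sum(axis=1)

      # MFL (degree-based mean-field) threshold: <k> / (<k^2> - <k>)
      k1, k2 = degrees.mean(), (degrees ** 2).mean()
      lambda_mfl = k1 / (k2 - k1)

      # QMF threshold: inverse of the adjacency spectral radius
      lambda_qmf = 1.0 / np.max(np.linalg.eigvalsh(A))

      # DMP threshold: inverse of the non-backtracking spectral radius,
      # obtained from the 2n x 2n linearization [[A, I - D], [I, 0]]
      n = A.shape[0]
      D = np.diag(degrees)
      M = np.block([[A, np.eye(n) - D], [np.eye(n), np.zeros((n, n))]])
      lambda_dmp = 1.0 / np.max(np.linalg.eigvals(M).real)

      print(f"MFL threshold {lambda_mfl:.4f}, QMF {lambda_qmf:.4f}, DMP {lambda_dmp:.4f}")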

  9. Analytical Model of Subthreshold Current and Threshold Voltage for Fully Depleted Double-Gated Junctionless Transistor

    NASA Astrophysics Data System (ADS)

    Lin, Zer-Ming; Lin, Horng-Chih; Liu, Keng-Ming; Huang, Tiao-Yuan

    2012-02-01

    In this study, we derive an analytical model of an electric potential of a double-gated (DG) fully depleted (FD) junctionless (J-less) transistor by solving the two-dimensional Poisson's equation. On the basis of this two-dimensional electric potential model, subthreshold current and swing can be calculated. Threshold voltage roll-off can also be estimated with analytical forms derived using the above model. The calculated results of electric potential, subthreshold current and threshold voltage roll-off are all in good agreement with the results of technology computer aided design (TCAD) simulation. The model proposed in this paper may help in the development of a compact model for simulation program with integrated circuit emphasis (SPICE) simulation and in providing deeper insights into the characteristics of short-channel J-less transistors.

  10. Modeling jointly low, moderate, and heavy rainfall intensities without a threshold selection

    NASA Astrophysics Data System (ADS)

    Naveau, Philippe; Huser, Raphael; Ribereau, Pierre; Hannart, Alexis

    2016-04-01

    In statistics, extreme events are often defined as excesses above a given large threshold. This definition allows hydrologists and flood planners to apply Extreme-Value Theory (EVT) to their time series of interest. Even in the stationary univariate context, this approach has at least two main drawbacks. First, working with excesses implies that a lot of observations (those below the chosen threshold) are completely disregarded. The range of precipitation is artificially chopped into two pieces, namely large intensities and the rest, which necessarily imposes different statistical models for each piece. Second, this strategy raises a nontrivial and very practical difficulty: how to choose the optimal threshold which correctly discriminates between low and heavy rainfall intensities. To address these issues, we propose a statistical model in which EVT results apply not only to heavy, but also to low precipitation amounts (zeros excluded). Our model is in compliance with EVT on both ends of the spectrum and allows a smooth transition between the two tails, while keeping a low number of parameters. In terms of inference, we have implemented and tested two classical methods of estimation: likelihood maximization and probability weighted moments. Last but not least, there is no need to choose a threshold to define low and high excesses. The performance and flexibility of this approach are illustrated on simulated data and hourly precipitation recorded in Lyon, France.
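
    A sketch of one simple member of the family of models with the property described above, in which the GPD cumulative distribution function is raised to a power so that the lower tail is also controlled while the GPD upper tail is preserved; this particular parametrisation and all values are assumptions rather than the authors' final model:

      import numpy as np
      from scipy.stats import genpareto

      def egpd_cdf(x, kappa, sigma, xi):
          """CDF of a simple 'extended GPD': the GPD CDF raised to the power kappa.
          kappa shapes the lower tail (light rain), xi keeps the GPD upper tail (heavy rain)."""
          return genpareto.cdf(x / sigma, c=xi) ** kappa

      def egpd_rvs(size, kappa, sigma, xi, rng):
          """Inverse-CDF sampling: x = sigma * H^{-1}(u^{1/kappa})."""
          u = rng.uniform(size=size)
          return sigma * genpareto.ppf(u ** (1.0 / kappa), c=xi)

      rng = np.random.default_rng(3)
      sample = egpd_rvs(10000, kappa=0.8, sigma=2.0, xi=0.2, rng=rng)
      print("median and 99th percentile:", np.quantile(sample, [0.5, 0.99]).round(2))
      print("P(X <= 1):", round(float(egpd_cdf(1.0, kappa=0.8, sigma=2.0, xi=0.2)), 3))
      # the whole positive range is modelled by one distribution, so no threshold has to be picked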

  11. Data-driven prediction of thresholded time series of rainfall and self-organized criticality models

    NASA Astrophysics Data System (ADS)

    Deluca, Anna; Moloney, Nicholas R.; Corral, Álvaro

    2015-05-01

    We study the occurrence of events, subject to threshold, in a representative self-organized criticality (SOC) sandpile model and in high-resolution rainfall data. The predictability in both systems is analyzed by means of a decision variable sensitive to event clustering, and the quality of the predictions is evaluated by the receiver operating characteristic (ROC) method. In the case of the SOC sandpile model, the scaling of quiet-time distributions with increasing threshold leads to increased predictability of extreme events. A scaling theory allows us to understand all the details of the prediction procedure and to extrapolate the shape of the ROC curves for the most extreme events. For rainfall data, the quiet-time distributions do not scale for high thresholds, which means that the corresponding ROC curves cannot be straightforwardly related to those for lower thresholds. In this way, ROC curves are useful for highlighting differences in predictability of extreme events between toy models and real-world phenomena.
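
    A sketch of the evaluation step described above: predict "an event at the next step" from a clustering-sensitive decision variable (here simply the time since the last event) and score the alarms with an ROC curve; the toy bursty event series and the decision variable are assumptions:

      import numpy as np

      rng = np.random.default_rng(5)

      # toy clustered binary event series: events occur in bursts
      p_burst, p_calm, T = 0.4, 0.01, 20000
      events = np.zeros(T, dtype=int)
      state = 0
      for t in range(T):
          events[t] = rng.random() < (p_burst if state else p_calm)
          state = 1 if events[t] else (state if rng.random() > 0.1 else 0)

      # decision variable: (negative) time since the last event -- clustering makes it informative
      since = np.zeros(T)
      last = -np.inf
      for t in range(T):
          since[t] = t - last
          if events[t]:
              last = t
      decision = -since[:-1]            # high value = an event happened recently
      target = events[1:]               # did an event occur at the next step?

      # ROC curve by sweeping the alarm threshold on the decision variable
      thresholds = np.unique(decision)
      tpr = [(decision >= th)[target == 1].mean() for th in thresholds]
      fpr = [(decision >= th)[target == 0].mean() for th in thresholds]
      auc = abs(np.trapz(tpr, fpr))
      print(f"AUC = {auc:.3f}  (0.5 would mean no predictability)")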

  12. Spatial and temporal signatures of fragility and threshold proximity in modelled semi-arid vegetation

    PubMed Central

    Bailey, R. M.

    2011-01-01

    Understanding the behaviour of complex environmental systems, particularly as critical thresholds are approached, is vitally important in many contexts. Among these are the moisture-limited vegetation systems in semi-arid (SA) regions of the World, which support approximately 36 per cent of the human population, maintain considerable biodiversity and which are susceptible to rapid stress-induced collapse. Change in spatially self-organized vegetation patterning has previously been proposed as a means of identifying approaching thresholds in these systems. In this paper, a newly developed cellular automata model is used to explore spatial patterning and also the temporal dynamics of SA vegetation cover. Results show, for the first time, to my knowledge, in a cellular automata model, that ‘critical slowdown’ (a pronounced reduction in post-perturbation recovery rates) provides clear signals of system fragility as major thresholds are approached. A consequence of slowing recovery rates is the appearance of quasi-stable population states and increased potential for perturbation-induced multi-staged population collapse. The model also predicts a non-patterned cover where environmental stress levels are high, or where more moderate stress levels are accompanied by frequent perturbations. In the context of changing climatic and environmental pressures, these results provide observable indicators of fragility and threshold proximity in SA vegetation systems that have direct relevance to management policies. PMID:20943693

  13. Evolution of self-organized division of labor in a response threshold model.

    PubMed

    Duarte, Ana; Pen, Ido; Keller, Laurent; Weissing, Franz J

    2012-06-01

    Division of labor in social insects is a determinant of their ecological success. Recent models emphasize that division of labor is an emergent property of the interactions among nestmates obeying simple behavioral rules. However, the role of evolution in shaping these rules has been largely neglected. Here, we investigate a model that integrates the perspectives of self-organization and evolution. Our point of departure is the response threshold model, where we allow thresholds to evolve. We ask whether the thresholds will evolve to a state where division of labor emerges in a form that fits the needs of the colony. We find that division of labor can indeed evolve through the evolutionary branching of thresholds, leading to workers that differ in their tendency to take on a given task. However, the conditions under which division of labor evolves depend on the strength of selection on the two fitness components considered: the amount of work performed and the worker distribution over tasks. When selection is strongest on the amount of work performed, division of labor evolves if switching tasks is costly. When selection is strongest on worker distribution, division of labor is less likely to evolve. Furthermore, we show that a biased distribution (like 3:1) of workers over tasks is not easily achievable by a threshold mechanism, even under strong selection. Contrary to expectation, multiple matings of colony foundresses impede the evolution of specialization. Overall, our model sheds light on the importance of considering the interaction between specific mechanisms and ecological requirements to better understand the evolutionary scenarios that lead to division of labor in complex systems.
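
    A sketch of the fixed response threshold mechanism that is the point of departure of the study above, with thresholds drawn at random rather than evolved; the response function s^2/(s^2 + theta^2), the stimulus dynamics and all parameters follow the common textbook formulation and are assumptions:

      import numpy as np

      rng = np.random.default_rng(11)

      n_workers, n_tasks, steps = 100, 2, 500
      alpha, delta = 3.0, 1.0                  # work efficiency and per-step demand increase
      thresholds = rng.lognormal(mean=0.0, sigma=0.5, size=(n_workers, n_tasks))
      stimuli = np.ones(n_tasks)
      work_counts = np.zeros((n_workers, n_tasks))

      for _ in range(steps):
          # probability of engaging in task j: s_j^2 / (s_j^2 + theta_ij^2)
          p = stimuli**2 / (stimuli**2 + thresholds**2)
          engaged = rng.random((n_workers, n_tasks)) < p
          # a worker performs at most one task per step: keep the first engaged task
          first = np.argmax(engaged, axis=1)
          doing = np.zeros_like(engaged)
          doing[np.arange(n_workers), first] = engaged[np.arange(n_workers), first]
          work_counts += doing
          # stimuli grow with demand and are reduced by the work performed
          stimuli = np.maximum(stimuli + delta - alpha * doing.sum(axis=0) / n_workers, 0.0)

      # specialisation: fraction of each worker's activity spent on task 0
      activity = work_counts.sum(axis=1)
      frac = np.divide(work_counts[:, 0], activity, out=np.full(n_workers, 0.5), where=activity > 0)
      print("workers mostly on task 0:", (frac > 0.8).sum(), "| mostly on task 1:", (frac < 0.2).sum())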

  14. Near-threshold boson pair production in the model of smeared-mass unstable particles

    SciTech Connect

    Kuksa, V. I.; Pasechnik, R. S.

    2010-09-15

    Near-threshold production of boson pairs is considered within the framework of the model of unstable particles with smeared mass. We describe the principal aspects of the model and consider the strategy of calculations including the radiative corrections. The results of calculations are in good agreement with LEP II data and Monte-Carlo simulations. Suggested approach significantly simplifies calculations with respect to the standard perturbative one.

  15. A preliminary threshold model of parasitism in the Cockle Cerastoderma edule using delayed exchange of stability

    NASA Astrophysics Data System (ADS)

    O'Grady, E. A.; Culloty, S. C.; Kelly, T. C.; O'Callaghan, M. J. A.; Rachinskii, D.

    2015-02-01

    Thresholds occur, and play an important role, in the dynamics of many biological communities. In this paper, we model a persistence type threshold which has been shown experimentally to exist in hyperparasitised flukes in the cockle, a shellfish. Our model consists of a periodically driven slow-fast host-parasite system of equations for a slow flukes population (host) and a fast Unikaryon hyperparasite population (parasite). The model exhibits two branches of the critical curve crossing in a transcritical bifurcation scenario. We discuss two thresholds due to immediate and delayed exchange of stability effects; and we derive algebraic relationships for parameters of the periodic solution in the limit of the infinite ratio of the time scales. Flukes, which are the host species in our model, parasitise cockles and in turn are hyperparasitised by the microsporidian Unikaryon legeri; the life cycle of flukes includes several life stages and a number of different hosts. That is, the flukes-hyperparasite system in a cockle is, naturally, part of a larger estuarine ecosystem of interacting species involving parasites, shellfish and birds which prey on shellfish. A population dynamics model which accounts for one system of such multi-species interactions and includes the fluke-hyperparasite model in a cockle as a subsystem is presented. We provide evidence that the threshold effect we observed in the flukes-hyperparasite subsystem remains apparent in the multi-species system. Assuming that flukes damage cockles, and taking into account that the hyperparasite is detrimental to flukes, it is natural to suggest that the hyperparasitism may support the abundance of cockles and, thereby, the persistence of the estuarine ecosystem, including shellfish and birds. We confirm the possibility of the existence of this scenario in our model, at least partially, by removing the hyperparasite and demonstrating that this may result in a substantial drop in cockle numbers. The result

  16. Threshold voltage model of junctionless cylindrical surrounding gate MOSFETs including fringing field effects

    NASA Astrophysics Data System (ADS)

    Gupta, Santosh Kumar

    2015-12-01

    A 2D analytical model of the body center potential (BCP) in short-channel junctionless Cylindrical Surrounding Gate (JLCSG) MOSFETs is developed using evanescent mode analysis (EMA). This model also incorporates the gate-bias-dependent inner and outer fringing capacitances due to the gate-source/drain fringing fields. The developed model provides results in good agreement with simulated results for variations of different physical parameters of the JLCSG MOSFET, viz. gate length, channel radius, doping concentration, and oxide thickness. Using the BCP, an analytical model for the threshold voltage has been derived and validated against results obtained from a 3D device simulator.

  17. Percolation thresholds for discrete-continuous models with nonuniform probabilities of bond formation.

    PubMed

    Szczygieł, Bartłomiej; Dudyński, Marek; Kwiatkowski, Kamil; Lewenstein, Maciej; Lapeyre, Gerald John; Wehr, Jan

    2016-02-01

    We introduce a class of discrete-continuous percolation models and an efficient Monte Carlo algorithm for computing their properties. The class is general enough to include well-known discrete and continuous models as special cases. We focus on a particular example of such a model, a nanotube model of disintegration of activated carbon. We calculate its exact critical threshold in two dimensions and obtain a Monte Carlo estimate in three dimensions. Furthermore, we use this example to analyze and characterize the efficiency of our algorithm, by computing critical exponents and properties, finding that it compares favorably to well-known algorithms for simpler systems.

  18. Recognition ROCS Are Curvilinear--Or Are They? On Premature Arguments against the Two-High-Threshold Model of Recognition

    ERIC Educational Resources Information Center

    Broder, Arndt; Schutz, Julia

    2009-01-01

    Recent reviews of recognition receiver operating characteristics (ROCs) claim that their curvilinear shape rules out threshold models of recognition. However, the shape of ROCs based on confidence ratings is not diagnostic to refute threshold models, whereas ROCs based on experimental bias manipulations are. Also, fitting predicted frequencies to…

  19. Computational model of collective nest selection by ants with heterogeneous acceptance thresholds

    PubMed Central

    Masuda, Naoki; O'shea-Wheller, Thomas A.; Doran, Carolina; Franks, Nigel R.

    2015-01-01

    Collective decision-making is a characteristic of societies ranging from ants to humans. The ant Temnothorax albipennis is known to use quorum sensing to collectively decide on a new home; emigration to a new nest site occurs when the number of ants favouring the new site becomes quorate. There are several possible mechanisms by which ant colonies can select the best nest site among alternatives based on a quorum mechanism. In this study, we use computational models to examine the implications of heterogeneous acceptance thresholds across individual ants in collective nest choice behaviour. We take a minimalist approach to develop a differential equation model and a corresponding non-spatial agent-based model. We show, consistent with existing empirical evidence, that heterogeneity in acceptance thresholds is a viable mechanism for efficient nest choice behaviour. In particular, we show that the proposed models show speed–accuracy trade-offs and speed–cohesion trade-offs when we vary the number of scouts or the quorum threshold. PMID:26543578

  20. Computational model of collective nest selection by ants with heterogeneous acceptance thresholds.

    PubMed

    Masuda, Naoki; O'shea-Wheller, Thomas A; Doran, Carolina; Franks, Nigel R

    2015-06-01

    Collective decision-making is a characteristic of societies ranging from ants to humans. The ant Temnothorax albipennis is known to use quorum sensing to collectively decide on a new home; emigration to a new nest site occurs when the number of ants favouring the new site becomes quorate. There are several possible mechanisms by which ant colonies can select the best nest site among alternatives based on a quorum mechanism. In this study, we use computational models to examine the implications of heterogeneous acceptance thresholds across individual ants in collective nest choice behaviour. We take a minimalist approach to develop a differential equation model and a corresponding non-spatial agent-based model. We show, consistent with existing empirical evidence, that heterogeneity in acceptance thresholds is a viable mechanism for efficient nest choice behaviour. In particular, we show that the proposed models show speed-accuracy trade-offs and speed-cohesion trade-offs when we vary the number of scouts or the quorum threshold. PMID:26543578
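
    A minimal, non-spatial agent-based sketch in the spirit of the abstract is given below: scouts carry individually drawn acceptance thresholds, assess two candidate sites of different quality, and the colony emigrates once one site reaches a quorum of committed scouts. All parameter names and values (number of scouts, quorum size, site qualities, threshold spread) are illustrative assumptions, not the authors' settings.

    ```python
    # Minimal non-spatial agent-based sketch of quorum-based nest choice with
    # heterogeneous acceptance thresholds.  Parameters and update rules are
    # illustrative; they are not taken from the paper above.
    import random

    def simulate(n_scouts=100, quorum=15, qualities=(0.4, 0.8),
                 threshold_spread=0.3, seed=0, max_steps=10_000):
        rng = random.Random(seed)
        # each scout accepts a site only if its quality exceeds the scout's own threshold
        thresholds = [rng.uniform(0.5 - threshold_spread, 0.5 + threshold_spread)
                      for _ in range(n_scouts)]
        committed = [None] * n_scouts            # which site each scout currently favours
        for step in range(1, max_steps + 1):
            i = rng.randrange(n_scouts)          # one scout assesses a random site
            site = rng.randrange(len(qualities))
            if qualities[site] > thresholds[i]:
                committed[i] = site
            counts = [committed.count(s) for s in range(len(qualities))]
            for s, c in enumerate(counts):
                if c >= quorum:                  # quorum reached -> colony emigrates
                    return s, step
        return None, max_steps

    choice, steps = simulate()
    print(f"colony chose site {choice} after {steps} assessments")
    ```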

  1. Some Observations about the Nearest-Neighbor Model of the Error Threshold

    SciTech Connect

    Gerrish, Philip J.

    2009-09-09

    I explore some aspects of the 'error threshold' - a critical mutation rate above which a population is nonviable. The phase transition that occurs as mutation rate crosses this threshold has been shown to be mathematically equivalent to the loss of ferromagnetism that occurs as temperature exceeds the Curie point. I will describe some refinements and new results based on the simplest of these mutation models, will discuss the commonly unperceived robustness of this simple model, and I will show some preliminary results comparing qualitative predictions with simulations of finite populations adapting at high mutation rates. I will talk about how these qualitative predictions are relevant to biomedical science and will discuss how my colleagues and I are looking for phase-transition signatures in real populations of Escherichia coli that go extinct as a result of excessive mutation.
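
    For orientation, the textbook single-peak (Eigen) error-threshold condition can be written down directly: the master sequence persists only while its error-free replication probability (1 - μ)^L, multiplied by its selective advantage σ, exceeds 1, giving μ_c ≈ ln(σ)/L for long genomes. This is the classical result, not the nearest-neighbor variant discussed in the talk; the sketch below simply evaluates it.

    ```python
    # Textbook single-peak quasispecies error threshold: the fittest (master)
    # sequence persists only while Q * sigma > 1, with Q = (1 - mu)^L the
    # error-free replication probability and sigma the selective advantage.
    # Classical Eigen condition, not the nearest-neighbor model of the talk.
    import math

    def critical_mutation_rate(L, sigma):
        """Exact threshold from (1 - mu)^L * sigma = 1, plus the usual
        small-mu approximation mu_c ~ ln(sigma) / L."""
        exact = 1.0 - sigma ** (-1.0 / L)
        approx = math.log(sigma) / L
        return exact, approx

    for L in (10, 100, 1000):
        exact, approx = critical_mutation_rate(L, sigma=10.0)
        print(f"L={L:5d}  mu_c exact={exact:.5f}  ~ln(sigma)/L={approx:.5f}")
    ```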

  2. Two-threshold model for scaling laws of noninteracting snow avalanches.

    PubMed

    Faillettaz, Jerome; Louchet, Francois; Grasso, Jean-Robert

    2004-11-12

    The sizes of snow slab failure that trigger snow avalanches are power-law distributed. Such a power-law probability distribution function has also been proposed to characterize different landslide types. In order to understand this scaling for gravity-driven systems, we introduce a two-threshold 2D cellular automaton, in which failure occurs irreversibly. Taking snow slab avalanches as a model system, we find that the sizes of the largest avalanches just preceding the lattice system breakdown are power-law distributed. By tuning the maximum value of the ratio of the two failure thresholds our model reproduces the range of power-law exponents observed for land, rock, or snow avalanches. We suggest this control parameter represents the material cohesion anisotropy. PMID:15600971

  3. Integrating physiological threshold experiments with climate modeling to project mangrove species' range expansion.

    PubMed

    Cavanaugh, Kyle C; Parker, John D; Cook-Patton, Susan C; Feller, Ilka C; Williams, A Park; Kellner, James R

    2015-05-01

    Predictions of climate-related shifts in species ranges have largely been based on correlative models. Due to limitations of these models, there is a need for more integration of experimental approaches when studying impacts of climate change on species distributions. Here, we used controlled experiments to identify physiological thresholds that control poleward range limits of three species of mangroves found in North America. We found that all three species exhibited a threshold response to extreme cold, but freeze tolerance thresholds varied among species. From these experiments, we developed a climate metric, freeze degree days (FDD), which incorporates both the intensity and the frequency of freezes. When included in distribution models, FDD accurately predicted mangrove presence/absence. Using 28 years of satellite imagery, we linked FDD to observed changes in mangrove abundance in Florida, further exemplifying the importance of extreme cold. We then used downscaled climate projections of FDD to project that these range limits will move northward by 2.2-3.2 km yr(-1) over the next 50 years.

  4. Integrating physiological threshold experiments with climate modeling to project mangrove species' range expansion.

    PubMed

    Cavanaugh, Kyle C; Parker, John D; Cook-Patton, Susan C; Feller, Ilka C; Williams, A Park; Kellner, James R

    2015-05-01

    Predictions of climate-related shifts in species ranges have largely been based on correlative models. Due to limitations of these models, there is a need for more integration of experimental approaches when studying impacts of climate change on species distributions. Here, we used controlled experiments to identify physiological thresholds that control poleward range limits of three species of mangroves found in North America. We found that all three species exhibited a threshold response to extreme cold, but freeze tolerance thresholds varied among species. From these experiments, we developed a climate metric, freeze degree days (FDD), which incorporates both the intensity and the frequency of freezes. When included in distribution models, FDD accurately predicted mangrove presence/absence. Using 28 years of satellite imagery, we linked FDD to observed changes in mangrove abundance in Florida, further exemplifying the importance of extreme cold. We then used downscaled climate projections of FDD to project that these range limits will move northward by 2.2-3.2 km yr(-1) over the next 50 years. PMID:25558057
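
    The abstract does not give the exact definition of freeze degree days, so the sketch below assumes a simple form that captures both the intensity and the frequency of freezes: degrees below the freezing point summed over all days of a minimum-temperature record. A presence/absence rule could then threshold this FDD value, mirroring how the study links FDD to mangrove range limits.

    ```python
    # Sketch of a freeze-degree-days (FDD)-style metric: accumulate degrees
    # below freezing over all days in a record, so that both the intensity and
    # the frequency of freeze events contribute.  The exact definition used in
    # the paper is not given in the abstract; this form is an assumption.
    def freeze_degree_days(daily_min_temps_c, freeze_point=0.0):
        return sum(max(freeze_point - t, 0.0) for t in daily_min_temps_c)

    # toy winter records (deg C): one mild site, one with a few hard freezes
    mild_site = [4.0, 2.5, 1.0, 0.5, 3.0, 5.0]
    cold_site = [1.0, -2.0, -5.5, -1.0, 0.5, -3.0]
    print("mild site FDD:", freeze_degree_days(mild_site))   # 0.0
    print("cold site FDD:", freeze_degree_days(cold_site))   # 11.5
    ```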

  5. Generalizing a complex model for gully threshold identification in the Mediterranean environment

    NASA Astrophysics Data System (ADS)

    Torri, D.; Borselli, L.; Iaquinta, P.; Iovine, G.; Poesen, J.; Terranova, O.

    2012-04-01

    Among the physical processes leading to land degradation, soil erosion by water is the most important and gully erosion may contribute, at places, to 70% of the total soil loss. Nevertheless, gully erosion has often been neglected in water soil erosion modeling, whilst more prominence has been given to rill and interrill erosion. Both to facilitate the processing by agricultural machinery and to take advantage of all the arable land, gullies are commonly removed at each crop cycle, with significant soil losses due to the repeated excavation of the channel by successive rainstorms. When the erosive forces of overland flow exceed the strength of the soil particles to detachment and displacement, water erosion occurs and usually a channel is formed. As runoff is proportional to the local catchment area, a relationship between local slope, S, and contributing area, A, is supposed to exist. A "geomorphologic threshold" scheme is therefore suitable to interpret the physical process of gully initiation: accordingly, a gully is formed when a hydraulic threshold for incision exceeds the resistance of the soil particles to detachment and transport. Similarly, it appears reasonable that a gully ends when there is a reduction of slope, or the concentrated flow meets more resistant soil-vegetation complexes. This study aims to predict the location of the beginning of gullies in the Mediterranean environment, based on an evaluation of S and A by means of a mathematical model. For the identification of the areas prone to gully erosion, the model employs two empirical thresholds relevant to the head (Thead) and to the end (Tend) of the gullies (of the type SA^b > Thead for the gully head and SA^b < Tend for the gully end). These thresholds represent the resistance of the environment to gully erosion, depending on: stoniness, vegetation cover, propensity to tunneling erosion due to soil dispersibility in water, and the intrinsic characteristics of the eroded material and of the erosivity of the rainfall event. Such

  6. Genetic evaluation of calf and heifer survival in Iranian Holstein cattle using linear and threshold models.

    PubMed

    Forutan, M; Ansari Mahyari, S; Sargolzaei, M

    2015-02-01

    Calf and heifer survival are important traits in dairy cattle affecting profitability. This study was carried out to estimate genetic parameters of survival traits in female calves at different age periods, until nearly the first calving. Records of 49,583 female calves born between 1998 and 2009 were considered in five age periods: days 1-30, 31-180, 181-365, 366-760 and the full period (days 1-760). Genetic components were estimated based on linear and threshold sire models and linear animal models. The models included both fixed effects (month of birth, dam's parity number, calving ease and twin/single) and random effects (herd-year, genetic effect of sire or animal and residual). Rates of death were 2.21, 3.37, 1.97, 4.14 and 12.4% for the above periods, respectively. Heritability estimates were very low, ranging from 0.48 to 3.04%, 0.62 to 3.51% and 0.50 to 4.24% for the linear sire model, animal model and threshold sire model, respectively. Rank correlations between random effects of sires obtained with linear and threshold sire models and with linear animal and sire models were 0.82-0.95 and 0.61-0.83, respectively. The estimated genetic correlations between the five different periods were moderate and only significant for 31-180 and 181-365 (r(g) = 0.59), 31-180 and 366-760 (r(g) = 0.52), and 181-365 and 366-760 (r(g) = 0.42). The low genetic correlations in the current study suggest that survival at different periods may be affected by the same genes with different expression or by different genes. Even though the additive genetic variations of survival traits were small, it might be possible to improve these traits by traditional or genomic selection.
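
    The comparison of linear (observed 0/1 scale) and threshold (liability scale) models above can be connected through the standard Dempster-Lerner transformation, h²_liability = h²_observed · p(1 - p)/z², where p is the incidence and z the standard normal density at the corresponding threshold. The sketch below applies this textbook formula to illustrative numbers; it is not a re-analysis of the paper's data.

    ```python
    # Dempster-Lerner conversion between heritability on the observed 0/1
    # scale (linear model) and on the underlying liability scale (threshold
    # model):  h2_liability = h2_observed * p * (1 - p) / z**2,
    # where p is the incidence and z the standard normal density at the
    # threshold.  Standard quantitative-genetics formula; the numbers below
    # are illustrative, not taken from the paper above.
    import math

    def normal_pdf(x):
        return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

    def normal_quantile(p, lo=-10.0, hi=10.0):
        """Inverse CDF by bisection (enough precision for this sketch)."""
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < p:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    def liability_h2(h2_observed, incidence):
        threshold = normal_quantile(1.0 - incidence)    # upper-tail incidence
        z = normal_pdf(threshold)
        return h2_observed * incidence * (1.0 - incidence) / z ** 2

    # e.g. 12.4% mortality over the full rearing period, observed-scale h2 of 2%
    print(f"liability-scale h2 ~ {liability_h2(0.02, 0.124):.3f}")
    ```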

  7. Threshold Models for Genome-Enabled Prediction of Ordinal Categorical Traits in Plant Breeding

    PubMed Central

    Montesinos-López, Osval A.; Montesinos-López, Abelardo; Pérez-Rodríguez, Paulino; de los Campos, Gustavo; Eskridge, Kent; Crossa, José

    2014-01-01

    Categorical scores for disease susceptibility or resistance often are recorded in plant breeding. The aim of this study was to introduce genomic models for analyzing ordinal characters and to assess the predictive ability of genomic predictions for ordered categorical phenotypes using a threshold model counterpart of the Genomic Best Linear Unbiased Predictor (i.e., TGBLUP). The threshold model was used to relate a hypothetical underlying scale to the outward categorical response. We present an empirical application where a total of nine models, five without interaction and four with genomic × environment interaction (G×E) and genomic additive × additive × environment interaction (G×G×E), were used. We assessed the proposed models using data consisting of 278 maize lines genotyped with 46,347 single-nucleotide polymorphisms and evaluated for disease resistance [with ordinal scores from 1 (no disease) to 5 (complete infection)] in three environments (Colombia, Zimbabwe, and Mexico). Models with G×E captured a sizeable proportion of the total variability, which indicates the importance of introducing interaction to improve prediction accuracy. Relative to models based on main effects only, the models that included G×E achieved 9–14% gains in prediction accuracy; adding additive × additive interactions did not increase prediction accuracy consistently across locations. PMID:25538102

  8. An Active Contour Model Based on Adaptive Threshold for Extraction of Cerebral Vascular Structures.

    PubMed

    Wang, Jiaxin; Zhao, Shifeng; Liu, Zifeng; Tian, Yun; Duan, Fuqing; Pan, Yutong

    2016-01-01

    Cerebral vessel segmentation is essential and helpful for the clinical diagnosis and the related research. However, automatic segmentation of brain vessels remains challenging because of the variable vessel shape and the high complexity of vessel geometry. This study proposes a new active contour model (ACM) implemented by the level-set method for segmenting vessels from TOF-MRA data. The energy function of the new model, combining both region intensity and boundary information, is composed of two region terms, one boundary term and one penalty term. The global threshold representing the lower gray boundary of the target object by maximum intensity projection (MIP) is defined in the first-region term, and it is used to guide the segmentation of the thick vessels. In the second term, a dynamic intensity threshold is employed to extract the tiny vessels. The boundary term is used to drive the contours to evolve towards the boundaries with high gradients. The penalty term is used to avoid reinitialization of the level-set function. Experimental results on 10 clinical brain data sets demonstrate that our method is not only able to achieve a better Dice Similarity Coefficient than the global threshold based method and localized hybrid level-set method but also able to extract whole cerebral vessel trees, including the thin vessels. PMID:27597878

  9. An Active Contour Model Based on Adaptive Threshold for Extraction of Cerebral Vascular Structures

    PubMed Central

    Wang, Jiaxin; Zhao, Shifeng; Liu, Zifeng; Duan, Fuqing; Pan, Yutong

    2016-01-01

    Cerebral vessel segmentation is essential and helpful for the clinical diagnosis and the related research. However, automatic segmentation of brain vessels remains challenging because of the variable vessel shape and the high complexity of vessel geometry. This study proposes a new active contour model (ACM) implemented by the level-set method for segmenting vessels from TOF-MRA data. The energy function of the new model, combining both region intensity and boundary information, is composed of two region terms, one boundary term and one penalty term. The global threshold representing the lower gray boundary of the target object by maximum intensity projection (MIP) is defined in the first-region term, and it is used to guide the segmentation of the thick vessels. In the second term, a dynamic intensity threshold is employed to extract the tiny vessels. The boundary term is used to drive the contours to evolve towards the boundaries with high gradients. The penalty term is used to avoid reinitialization of the level-set function. Experimental results on 10 clinical brain data sets demonstrate that our method is not only able to achieve a better Dice Similarity Coefficient than the global threshold based method and localized hybrid level-set method but also able to extract whole cerebral vessel trees, including the thin vessels.

  10. An Active Contour Model Based on Adaptive Threshold for Extraction of Cerebral Vascular Structures

    PubMed Central

    Wang, Jiaxin; Zhao, Shifeng; Liu, Zifeng; Duan, Fuqing; Pan, Yutong

    2016-01-01

    Cerebral vessel segmentation is essential and helpful for the clinical diagnosis and the related research. However, automatic segmentation of brain vessels remains challenging because of the variable vessel shape and the high complexity of vessel geometry. This study proposes a new active contour model (ACM) implemented by the level-set method for segmenting vessels from TOF-MRA data. The energy function of the new model, combining both region intensity and boundary information, is composed of two region terms, one boundary term and one penalty term. The global threshold representing the lower gray boundary of the target object by maximum intensity projection (MIP) is defined in the first-region term, and it is used to guide the segmentation of the thick vessels. In the second term, a dynamic intensity threshold is employed to extract the tiny vessels. The boundary term is used to drive the contours to evolve towards the boundaries with high gradients. The penalty term is used to avoid reinitialization of the level-set function. Experimental results on 10 clinical brain data sets demonstrate that our method is not only able to achieve a better Dice Similarity Coefficient than the global threshold based method and localized hybrid level-set method but also able to extract whole cerebral vessel trees, including the thin vessels. PMID:27597878

  11. Detection and Modeling of High-Dimensional Thresholds for Fault Detection and Diagnosis

    NASA Technical Reports Server (NTRS)

    He, Yuning

    2015-01-01

    Many Fault Detection and Diagnosis (FDD) systems use discrete models for detection and reasoning. To obtain categorical values like "oil pressure too high", analog sensor values need to be discretized using a suitable threshold. Time series of analog and discrete sensor readings are processed and discretized as they come in. This task is usually performed by the "wrapper code" of the FDD system, together with signal preprocessing and filtering. In practice, selecting the right threshold is very difficult, because it heavily influences the quality of diagnosis. If a threshold causes the alarm to trigger even in nominal situations, false alarms will be the consequence. On the other hand, if the threshold setting does not trigger in case of an off-nominal condition, important alarms might be missed, potentially causing hazardous situations. In this paper, we describe in detail the underlying statistical modeling techniques and algorithm, as well as the Bayesian method for selecting the most likely shape and its parameters. Our approach is illustrated by several examples from the aerospace domain.

  12. Reentry Near the Percolation Threshold in a Heterogeneous Discrete Model for Cardiac Tissue

    NASA Astrophysics Data System (ADS)

    Alonso, Sergio; Bär, Markus

    2013-04-01

    Arrhythmias in cardiac tissue are related to irregular electrical wave propagation in the heart. Cardiac tissue is formed by a discrete cell network, which is often heterogeneous. A localized region with a fraction of nonconducting links surrounded by homogeneous conducting tissue can become a source of reentry and ectopic beats. Extensive simulations in a discrete model of cardiac tissue show that a wave crossing a heterogeneous region of cardiac tissue can disintegrate into irregular patterns, provided the fraction of nonconducting links is close to the percolation threshold of the cell network. The dependence of the reentry probability on this fraction, the system size, and the degree of excitability can be inferred from the size distribution of nonconducting clusters near the percolation threshold.

  13. A piecewise model of virus-immune system with two thresholds.

    PubMed

    Tang, Biao; Xiao, Yanni; Wu, Jianhong

    2016-08-01

    The combined antiretroviral therapy with interleukin (IL)-2 treatment may not be enough to preclude exceptionally high growth of HIV virus nor rebuild the HIV-specific CD4 or CD8 T-cell proliferative immune response for management of HIV-infected patients. Whether extra inclusion of immune therapy can induce the HIV-specific immune response and control HIV replication remains challenging. Here a piecewise virus-immune model with two thresholds is proposed to represent the HIV-1 RNA and effector cell-guided therapy strategies. We first analyze the dynamics of the virus-immune system with effector cell-guided immune therapy only and prove that there exists a critical level of the intensity of immune therapy determining whether the HIV-1 RNA viral loads can be controlled below a relatively low level. Our analysis of the global dynamics of the proposed model shows that the pseudo-equilibrium can be globally stable or locally bistable with an order-1 periodic solution or bistable with the virus-free periodic solution under various appropriate conditions. This indicates that HIV viral loads can either be eradicated or stabilize at a previously given level or go to infinity (corresponding to the effector cells oscillating), depending on the threshold levels and the initial HIV virus loads and effector cell counts. Compared with the single-threshold therapy strategy, we find that with the two-threshold therapy strategy either the virus can be eradicated or the controllable region, where HIV viral loads can be maintained below a certain value, can be enlarged.

  14. Analytical unified threshold voltage model of short-channel FinFETs and implementation

    NASA Astrophysics Data System (ADS)

    Fasarakis, N.; Tsormpatzoglou, A.; Tassis, D. H.; Dimitriadis, C. A.; Papathanasiou, K.; Jomaah, J.; Ghibaudo, G.

    2011-10-01

    In this work, an analytical compact model for the threshold voltage Vt of double-gate (DG) and tri-gate (TG) FinFETs is proposed. The DG FinFET Vt model is extended to a TG FinFET Vt model using effective parameters capturing the electrostatic control of the top gate over the short-channel effects. The results of the model are compared with the results of a numerical device simulator for a wide range of the channel length, the fin height and the fin width. The overall results reveal the very good accuracy of the proposed model. The Vt model has been validated by developing a Verilog-A code and comparing the results derived by the Spectre simulator and the Verilog-A code with simulation results.

  15. Electric Field Model of Transcranial Electric Stimulation in Nonhuman Primates: Correspondence to Individual Motor Threshold

    PubMed Central

    Lee, Won Hee; Lisanby, Sarah H.; Laine, Andrew F.

    2015-01-01

    Objective To develop a pipeline for realistic head models of nonhuman primates (NHPs) for simulations of noninvasive brain stimulation, and use these models together with empirical threshold measurements to demonstrate that the models capture individual anatomical variability. Methods Based on structural MRI data, we created models of the electric field (E-field) induced by right unilateral (RUL) electroconvulsive therapy (ECT) in four rhesus macaques. Individual motor threshold (MT) was measured with transcranial electric stimulation (TES) administered through the RUL electrodes in the same subjects. Results The interindividual anatomical differences resulted in 57% variation in median E-field strength in the brain at fixed stimulus current amplitude. Individualization of the stimulus current by MT reduced the E-field variation in the target motor area by 27%. There was significant correlation between the measured MT and the ratio of simulated electrode current and E-field strength (r2 = 0.95, p = 0.026). Exploratory analysis revealed significant correlations of this ratio with anatomical parameters including the superior electrode-to-cortex distance, vertex-to-cortex distance, and brain volume (r2 > 0.96, p < 0.02). The neural activation threshold was estimated to be 0.45 ± 0.07 V/cm for 0.2 ms stimulus pulse width. Conclusion These results suggest that our individual-specific NHP E-field models appropriately capture individual anatomical variability relevant to the dosing of TES/ECT. These findings are exploratory due to the small number of subjects. Significance This work can contribute insight into NHP studies of ECT and other brain stimulation interventions, help link the results to clinical studies, and ultimately lead to more rational brain stimulation dosing paradigms. PMID:25910001

  16. Kuramoto model with uniformly spaced frequencies: Finite-N asymptotics of the locking threshold.

    PubMed

    Ottino-Löffler, Bertrand; Strogatz, Steven H

    2016-06-01

    We study phase locking in the Kuramoto model of coupled oscillators in the special case where the number of oscillators, N, is large but finite, and the oscillators' natural frequencies are evenly spaced on a given interval. In this case, stable phase-locked solutions are known to exist if and only if the frequency interval is narrower than a certain critical width, called the locking threshold. For infinite N, the exact value of the locking threshold was calculated 30 years ago; however, the leading corrections to it for finite N have remained unsolved analytically. Here we derive an asymptotic formula for the locking threshold when N≫1. The leading correction to the infinite-N result scales like either N^{-3/2} or N^{-1}, depending on whether the frequencies are evenly spaced according to a midpoint rule or an end-point rule. These scaling laws agree with numerical results obtained by Pazó [D. Pazó, Phys. Rev. E 72, 046211 (2005), 10.1103/PhysRevE.72.046211]. Moreover, our analysis yields the exact prefactors in the scaling laws, which also match the numerics. PMID:27415267

  17. Kuramoto model with uniformly spaced frequencies: Finite-N asymptotics of the locking threshold

    NASA Astrophysics Data System (ADS)

    Ottino-Löffler, Bertrand; Strogatz, Steven H.

    2016-06-01

    We study phase locking in the Kuramoto model of coupled oscillators in the special case where the number of oscillators, N, is large but finite, and the oscillators' natural frequencies are evenly spaced on a given interval. In this case, stable phase-locked solutions are known to exist if and only if the frequency interval is narrower than a certain critical width, called the locking threshold. For infinite N, the exact value of the locking threshold was calculated 30 years ago; however, the leading corrections to it for finite N have remained unsolved analytically. Here we derive an asymptotic formula for the locking threshold when N ≫ 1. The leading correction to the infinite-N result scales like either N^{-3/2} or N^{-1}, depending on whether the frequencies are evenly spaced according to a midpoint rule or an end-point rule. These scaling laws agree with numerical results obtained by Pazó [D. Pazó, Phys. Rev. E 72, 046211 (2005), 10.1103/PhysRevE.72.046211]. Moreover, our analysis yields the exact prefactors in the scaling laws, which also match the numerics.
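
    A numerical sketch of the quantity studied above: for N oscillators with natural frequencies evenly spaced (midpoint rule) on [-γ, γ] and coupling K = 1, a phase-locked state must satisfy sin θ_i = ω_i/(K r) with r = mean(cos θ_i); iterating this self-consistency and bisecting on γ yields a finite-N locking threshold, which should approach π/4 ≈ 0.785 as N grows. This is an illustrative experiment, not the authors' asymptotic analysis.

    ```python
    # Locking threshold of the Kuramoto model with N evenly spaced natural
    # frequencies on [-gamma, gamma] and coupling K = 1, found by iterating the
    # phase-locked self-consistency condition and bisecting on gamma.
    import math

    def is_locked(gamma, n, coupling=1.0, iters=2000, tol=1e-12):
        # midpoint-rule spacing of frequencies on [-gamma, gamma]
        omegas = [gamma * (2 * (i + 0.5) / n - 1.0) for i in range(n)]
        r = 1.0
        for _ in range(iters):
            if any(abs(w) > coupling * r for w in omegas):
                return False                      # some oscillator cannot lock
            thetas = [math.asin(w / (coupling * r)) for w in omegas]
            r_new = sum(math.cos(t) for t in thetas) / n
            if r_new <= 0.0:
                return False
            if abs(r_new - r) < tol:
                return True
            r = r_new
        return True                               # slow convergence: treat as locked (sketch)

    def locking_threshold(n, lo=0.0, hi=1.0):
        for _ in range(60):                       # bisection on the half-width
            mid = 0.5 * (lo + hi)
            if is_locked(mid, n):
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    for n in (10, 100, 1000):
        print(f"N={n:5d}  locking half-width ~ {locking_threshold(n):.5f}  (N->inf: {math.pi/4:.5f})")
    ```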

  18. Impact of slow K(+) currents on spike generation can be described by an adaptive threshold model.

    PubMed

    Kobayashi, Ryota; Kitano, Katsunori

    2016-06-01

    A neuron that is stimulated by rectangular current injections initially responds with a high firing rate, followed by a decrease in the firing rate. This phenomenon is called spike-frequency adaptation and is usually mediated by slow K(+) currents, such as the M-type K(+) current (I_M) or the Ca(2+)-activated K(+) current (I_AHP). It is not clear how the detailed biophysical mechanisms regulate spike generation in a cortical neuron. In this study, we investigated the impact of slow K(+) currents on the spike generation mechanism by reducing a detailed conductance-based neuron model. We showed that the detailed model can be reduced to a multi-timescale adaptive threshold model, and derived the formulae that describe the relationship between slow K(+) current parameters and reduced model parameters. Our analysis of the reduced model suggests that slow K(+) currents have a differential effect on the noise tolerance in neural coding. PMID:27085337
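
    A minimal sketch of a multi-timescale adaptive threshold neuron of the kind the abstract reduces to: a leaky membrane potential is compared against a threshold that jumps after every spike and relaxes back with one fast and one slow time constant, the slow component standing in for slow K+ currents. All parameter values are illustrative, not the reduced-model values derived in the paper.

    ```python
    # Multi-timescale adaptive-threshold neuron: leaky integration of a constant
    # input, spike whenever V crosses the threshold, and a threshold that jumps
    # after each spike and decays with fast and slow time constants.
    import math

    def simulate(i_ext=1.4, t_max=500.0, dt=0.1,
                 tau_m=10.0, theta0=1.0,
                 jumps=(0.4, 0.2), taus=(10.0, 200.0)):
        v = 0.0
        theta_parts = [0.0, 0.0]        # fast and slow threshold components
        spikes = []
        for k in range(int(t_max / dt)):
            t = k * dt
            v += dt * (-v + i_ext) / tau_m                      # leaky integration
            theta_parts = [h * math.exp(-dt / tau) for h, tau in zip(theta_parts, taus)]
            if v >= theta0 + sum(theta_parts):                  # threshold crossing
                spikes.append(t)
                theta_parts = [h + a for h, a in zip(theta_parts, jumps)]
                # no voltage reset: the threshold adapts instead of the potential
        return spikes

    spikes = simulate()
    isis = [b - a for a, b in zip(spikes, spikes[1:])]
    print(f"{len(spikes)} spikes; first ISI {isis[0]:.1f} ms, last ISI {isis[-1]:.1f} ms (adaptation)")
    ```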

  19. A Probabilistic Model for Estimating the Depth and Threshold Temperature of C-fiber Nociceptors

    PubMed Central

    Dezhdar, Tara; Moshourab, Rabih A.; Fründ, Ingo; Lewin, Gary R.; Schmuker, Michael

    2015-01-01

    The subjective experience of thermal pain follows the detection and encoding of noxious stimuli by primary afferent neurons called nociceptors. However, nociceptor morphology has been hard to access and the mechanisms of signal transduction remain unresolved. In order to understand how heat transducers in nociceptors are activated in vivo, it is important to estimate the temperatures that directly activate the skin-embedded nociceptor membrane. Hence, the nociceptor’s temperature threshold must be estimated, which in turn will depend on the depth at which transduction happens in the skin. Since the temperature at the receptor cannot be accessed experimentally, such an estimation can currently only be achieved through modeling. However, the current state-of-the-art model to estimate temperature at the receptor suffers from the fact that it cannot account for the natural stochastic variability of neuronal responses. We improve this model using a probabilistic approach which accounts for uncertainties and potential noise in the system. Using a data set of 24 C-fibers recorded in vitro, we show that, even without detailed knowledge of the bio-thermal properties of the system, the probabilistic model that we propose here is capable of providing estimates of threshold and depth in cases where the classical method fails. PMID:26638830

  20. A Probabilistic Model for Estimating the Depth and Threshold Temperature of C-fiber Nociceptors

    NASA Astrophysics Data System (ADS)

    Dezhdar, Tara; Moshourab, Rabih A.; Fründ, Ingo; Lewin, Gary R.; Schmuker, Michael

    2015-12-01

    The subjective experience of thermal pain follows the detection and encoding of noxious stimuli by primary afferent neurons called nociceptors. However, nociceptor morphology has been hard to access and the mechanisms of signal transduction remain unresolved. In order to understand how heat transducers in nociceptors are activated in vivo, it is important to estimate the temperatures that directly activate the skin-embedded nociceptor membrane. Hence, the nociceptor’s temperature threshold must be estimated, which in turn will depend on the depth at which transduction happens in the skin. Since the temperature at the receptor cannot be accessed experimentally, such an estimation can currently only be achieved through modeling. However, the current state-of-the-art model to estimate temperature at the receptor suffers from the fact that it cannot account for the natural stochastic variability of neuronal responses. We improve this model using a probabilistic approach which accounts for uncertainties and potential noise in the system. Using a data set of 24 C-fibers recorded in vitro, we show that, even without detailed knowledge of the bio-thermal properties of the system, the probabilistic model that we propose here is capable of providing estimates of threshold and depth in cases where the classical method fails.

  1. Stochastic Threshold Microdose Model for Cell Killing by Insoluble Metallic Nanomaterial Particles

    PubMed Central

    Scott, Bobby R.

    2010-01-01

    This paper introduces a novel microdosimetric model for metallic nanomaterial-particles (MENAP)-induced cytotoxicity. The focus is on the engineered insoluble MENAP which represent a significant breakthrough in the design and development of new products for consumers, industry, and medicine. Increased production is rapidly occurring and may cause currently unrecognized health effects (e.g., nervous system dysfunction, heart disease, cancer); thus, dose-response models for MENAP-induced biological effects are needed to facilitate health risk assessment. The stochastic threshold microdose (STM) model presented introduces novel stochastic microdose metrics for use in constructing dose-response relationships for the frequency of specific cellular (e.g., cell killing, mutations, neoplastic transformation) or subcellular (e.g., mitochondria dysfunction) effects. A key metric is the exposure-time-dependent, specific burden (MENAP count) for a given critical target (e.g., mitochondria, nucleus). Exceeding a stochastic threshold specific burden triggers cell death. For critical targets in the cytoplasm, the autophagic mode of death is triggered. For the nuclear target, the apoptotic mode of death is triggered. Overall cell survival is evaluated for the indicated competing modes of death when both apply. The STM model can be applied to cytotoxicity data using Bayesian methods implemented via Markov chain Monte Carlo. PMID:21191483

  2. Effects of temporal correlations on cascades: Threshold models on temporal networks

    NASA Astrophysics Data System (ADS)

    Backlund, Ville-Pekka; Saramäki, Jari; Pan, Raj Kumar

    2014-06-01

    A person's decision to adopt an idea or product is often driven by the decisions of peers, mediated through a network of social ties. A common way of modeling adoption dynamics is to use threshold models, where a node may become an adopter given a high enough rate of contacts with adopted neighbors. We study the dynamics of threshold models that take both the network topology and the timings of contacts into account, using empirical contact sequences as substrates. The models are designed such that adoption is driven by the number of contacts with different adopted neighbors within a chosen time. We find that while some networks support cascades leading to network-level adoption, some do not: the propagation of adoption depends on several factors from the frequency of contacts to burstiness and timing correlations of contact sequences. More specifically, burstiness is seen to suppress cascade sizes when compared to randomized contact timings, while timing correlations between contacts on adjacent links facilitate cascades.
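
    The adoption rule described above can be sketched directly: a susceptible node adopts once it has been contacted by at least φ distinct adopted neighbours within a trailing time window Δ. The contact-list format, parameters and toy event sequence below are assumptions for illustration.

    ```python
    # Threshold adoption on a temporal network: a node adopts once it has
    # received contacts from at least `phi` distinct adopted neighbours within
    # the trailing time window `delta`.
    from collections import defaultdict

    def run_cascade(contacts, seeds, phi=2, delta=5.0):
        """contacts: list of (t, u, v) undirected contact events."""
        adopted = dict.fromkeys(seeds, 0.0)              # node -> adoption time
        recent = defaultdict(dict)                       # node -> {adopted neighbour: last contact time}
        for t, u, v in sorted(contacts):
            for a, b in ((u, v), (v, u)):                # a may influence b
                if a in adopted and b not in adopted:
                    recent[b][a] = t
                    # count distinct adopted neighbours seen within the window
                    active = [n for n, tc in recent[b].items() if t - tc <= delta]
                    if len(active) >= phi:
                        adopted[b] = t
        return adopted

    contacts = [(1, "s1", "x"), (2, "s2", "x"), (3, "x", "y"),
                (9, "s1", "y"), (20, "s2", "y")]
    print(run_cascade(contacts, seeds=["s1", "s2"]))
    # x adopts at t=2 (two adopted neighbours within the window);
    # y never adopts: its contacts with adopted nodes are too far apart in time.
    ```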

  3. Computationally Efficient Implementation of a Novel Algorithm for the General Unified Threshold Model of Survival (GUTS)

    PubMed Central

    Albert, Carlo; Vogel, Sören

    2016-01-01

    The General Unified Threshold model of Survival (GUTS) provides a consistent mathematical framework for survival analysis. However, the calibration of GUTS models is computationally challenging. We present a novel algorithm and its fast implementation in our R package, GUTS, that help to overcome these challenges. We show a step-by-step application example consisting of model calibration and uncertainty estimation as well as making probabilistic predictions and validating the model with new data. Using self-defined wrapper functions, we show how to produce informative text printouts and plots without effort, for the inexperienced as well as the advanced user. The complete ready-to-run script is available as supplemental material. We expect that our software facilitates novel re-analysis of existing survival data as well as asking new research questions in a wide range of sciences. In particular the ability to quickly quantify stressor thresholds in conjunction with dynamic compensating processes, and their uncertainty, is an improvement that complements current survival analysis methods. PMID:27340823

  4. Evaluating intercepts from demographic models to understand resource limitation and resource thresholds

    USGS Publications Warehouse

    Reynolds-Hogland, M. J.; Hogland, J.S.; Mitchell, M.S.

    2008-01-01

    Understanding resource limitation is critical to effective management and conservation of wild populations; however, resource limitation is difficult to quantify, partly because it is a dynamic process. Specifically, a resource that is limiting at one time may become non-limiting at another time, depending upon changes in its availability and changes in the availability of other resources. Methods for understanding resource limitation, therefore, must consider the dynamic effects of resources on demography. We present approaches for interpreting results of demographic modeling beyond analyzing model rankings, model weights, slope estimates, and model averaging. We demonstrate how interpretation of y-intercepts, odds ratios, and rates of change can yield insights into resource limitation as a dynamic process, assuming logistic regression is used to link estimates of resources with estimates of demography. In addition, we show how x-intercepts can be evaluated with respect to odds ratios to understand resource thresholds. © 2007 Elsevier B.V. All rights reserved.
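
    A worked sketch of the quantities discussed above, assuming a logistic regression logit(p) = b0 + b1·x links a resource level x to a demographic probability p: the odds ratio per unit resource is exp(b1), the y-intercept gives p at zero resource, and the x-intercept -b0/b1 is the resource level at which the odds equal 1 (p = 0.5), one natural reading of a resource threshold. The coefficients below are hypothetical.

    ```python
    # Worked sketch from hypothetical logistic-regression coefficients linking
    # a resource level x to a demographic probability p: logit(p) = b0 + b1*x.
    import math

    b0, b1 = -3.0, 0.8                   # hypothetical fitted intercept and slope

    odds_ratio = math.exp(b1)                         # change in odds per unit resource
    y_intercept_prob = 1.0 / (1.0 + math.exp(-b0))    # p when the resource is absent (x = 0)
    x_intercept = -b0 / b1                            # resource level where odds = 1, i.e. p = 0.5

    print(f"odds ratio per unit resource : {odds_ratio:.2f}")
    print(f"p at zero resource           : {y_intercept_prob:.3f}")
    print(f"candidate resource threshold : {x_intercept:.2f} resource units")
    ```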

  5. Exact gravitational threshold correction in the Ferrara-Harvey-Strominger-Vafa model

    NASA Astrophysics Data System (ADS)

    Harvey, Jeffrey A.; Moore, Gregory

    1998-02-01

    We consider the automorphic forms which govern the gravitational threshold correction F1 in models of heterotic-type-IIA duality with N=2 supersymmetry in four dimensions. In particular we derive the full nonperturbative formula for F1 for the dual pair originally considered by Ferrara, Harvey, Strominger, and Vafa. The answer involves an interesting automorphic product constructed by Borcherds which is associated with the "fake monster Lie superalgebra." As an application of this result we rederive a result of Jorgenson and Todorov on determinants of ∂¯ operators on K3 surfaces.

  6. Modeling of surface thermodynamics and damage thresholds in the IR and THz regime

    NASA Astrophysics Data System (ADS)

    Clark, C. D., III; Thomas, Robert J.; Maseberg, Paul D. S.; Buffington, Gavin D.; Irvin, Lance J.; Stolarski, Jacob; Rockwell, Benjamin A.

    2007-02-01

    The Air Force Research Lab has developed a configurable, two-dimensional, thermal model to predict laser-tissue interactions, and to aid in predictive studies for safe exposure limits. The model employs a finite-difference, time-dependent method to solve the two-dimensional cylindrical heat equation (radial and axial) in a biological system construct. Tissues are represented as multi-layer structures, with optical and thermal properties defined for each layer and homogeneous throughout the layer. Multiple methods for computing the source term for the heat equation have been implemented, including simple linear absorption definitions and full beam propagation through finite-difference methods. The model predicts the occurrence of thermal damage sustained by the tissue, and can also determine damage thresholds for total optical power delivered to the tissue. Currently, the surface boundary conditions incorporate energy loss through free convection, surface radiation, and evaporative cooling. Implementing these boundary conditions is critical for correctly calculating the surface temperature of the tissue, and, therefore, damage thresholds. We present an analysis of the interplay between surface boundary conditions, ambient conditions, and blood perfusion within tissues.

  7. Threshold conditions for integrated pest management models with pesticides that have residual effects.

    PubMed

    Tang, Sanyi; Liang, Juhua; Tan, Yuanshun; Cheke, Robert A

    2013-01-01

    Impulsive differential equations (hybrid dynamical systems) can provide a natural description of pulse-like actions such as when a pesticide kills a pest instantly. However, pesticides may have long-term residual effects, with some remaining active against pests for several weeks, months or years. Therefore, a more realistic method for modelling chemical control in such cases is to use continuous or piecewise-continuous periodic functions which affect growth rates. How to evaluate the effects of the duration of the pesticide residual effectiveness on successful pest control is key to the implementation of integrated pest management (IPM) in practice. To address these questions in detail, we have modelled IPM including residual effects of pesticides in terms of fixed pulse-type actions. The stability threshold conditions for pest eradication are given. Moreover, effects of the killing efficiency rate and the decay rate of the pesticide on the pest and on its natural enemies, the duration of residual effectiveness, the number of pesticide applications and the number of natural enemy releases on the threshold conditions are investigated with regard to the extent of depression or resurgence resulting from pulses of pesticide applications and predator releases. Latin Hypercube Sampling/Partial Rank Correlation uncertainty and sensitivity analysis techniques are employed to investigate the key control parameters which are most significantly related to threshold values. The findings combined with Volterra's principle confirm that when the pesticide has a strong effect on the natural enemies, repeated use of the same pesticide can result in target pest resurgence. The results also indicate that there exists an optimal number of pesticide applications which can suppress the pest most effectively, and this may help in the design of an optimal control strategy.

  8. Threshold conditions for integrated pest management models with pesticides that have residual effects.

    PubMed

    Tang, Sanyi; Liang, Juhua; Tan, Yuanshun; Cheke, Robert A

    2013-01-01

    Impulsive differential equations (hybrid dynamical systems) can provide a natural description of pulse-like actions such as when a pesticide kills a pest instantly. However, pesticides may have long-term residual effects, with some remaining active against pests for several weeks, months or years. Therefore, a more realistic method for modelling chemical control in such cases is to use continuous or piecewise-continuous periodic functions which affect growth rates. How to evaluate the effects of the duration of the pesticide residual effectiveness on successful pest control is key to the implementation of integrated pest management (IPM) in practice. To address these questions in detail, we have modelled IPM including residual effects of pesticides in terms of fixed pulse-type actions. The stability threshold conditions for pest eradication are given. Moreover, effects of the killing efficiency rate and the decay rate of the pesticide on the pest and on its natural enemies, the duration of residual effectiveness, the number of pesticide applications and the number of natural enemy releases on the threshold conditions are investigated with regard to the extent of depression or resurgence resulting from pulses of pesticide applications and predator releases. Latin Hypercube Sampling/Partial Rank Correlation uncertainty and sensitivity analysis techniques are employed to investigate the key control parameters which are most significantly related to threshold values. The findings combined with Volterra's principle confirm that when the pesticide has a strong effect on the natural enemies, repeated use of the same pesticide can result in target pest resurgence. The results also indicate that there exists an optimal number of pesticide applications which can suppress the pest most effectively, and this may help in the design of an optimal control strategy. PMID:22205243
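
    A minimal sketch of the modelling idea: pest and natural-enemy populations in which each pesticide application starts an exponentially decaying extra mortality acting on both species (more strongly on the pest), alongside periodic natural-enemy releases. The functional forms and numbers are illustrative only, not the impulsive model analysed in the paper.

    ```python
    # Pest control with residual pesticide effects: each spray adds a decaying
    # mortality term m(t) = m0 * exp(-k * (t - t_spray)) acting on pest and
    # (more weakly) on natural enemies; enemies are also released periodically.
    import math

    def simulate(t_max=200.0, dt=0.01, spray_period=30.0, release_period=30.0,
                 kill_pest=0.8, kill_enemy=0.3, decay=0.15, release_size=0.5):
        pest, enemy = 5.0, 1.0
        sprays = []                                    # times of pesticide applications
        for k in range(1, int(t_max / dt) + 1):
            t = k * dt
            # residual toxicity: sum of exponentially decaying pulses
            residual = sum(math.exp(-decay * (t - ts)) for ts in sprays)
            d_pest = pest * (1.0 * (1.0 - pest / 20.0) - 0.4 * enemy - kill_pest * residual)
            d_enemy = enemy * (0.08 * pest - 0.3 - kill_enemy * residual)
            pest = max(pest + dt * d_pest, 0.0)
            enemy = max(enemy + dt * d_enemy, 0.0)
            if int(t / spray_period) > int((t - dt) / spray_period):
                sprays.append(t)                       # new pesticide application
            if int(t / release_period) > int((t - dt) / release_period):
                enemy += release_size                  # augmentative natural-enemy release
        return pest, enemy

    pest, enemy = simulate()
    print(f"pest density after 200 days: {pest:.3f}   natural enemies: {enemy:.3f}")
    ```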

  9. Experimental confirmation of the polygyny threshold model for red-winged blackbirds.

    PubMed

    Pribil, S; Searcy, W A

    2001-08-01

    The polygyny threshold model assumes that polygynous mating is costly to females and proposes that females pay the cost of polygyny only when compensated by obtaining a superior territory or male. We present, to the authors' knowledge, the first experimental field test to demonstrate that females trade mating status against territory quality as proposed by this hypothesis. Previous work has shown that female red-winged blackbirds (Agelaius phoeniceus) in Ontario prefer settling with unmated males and that this preference is adaptive because polygynous mating status lowers female reproductive success. Other evidence suggests that nesting over water increases the reproductive success of female red-winged blackbirds. Here we describe an experiment in which females were given choices between two adjacent territories, one owned by an unmated male without any over-water nesting sites and the other by an already-mated male with over-water sites. Females overwhelmingly preferred the already-mated males, demonstrating that superior territory quality can reverse preferences based on mating status and supporting the polygyny threshold model as the explanation for polygyny in this population.

  10. Landslide triggering rainfall thresholds estimation using hydrological modelling of catchments in the Ialomita Subcarpathians, Romania

    NASA Astrophysics Data System (ADS)

    Chitu, Zenaida; Busuioc, Aristita; Burcea, Sorin; Sandric, Ionut

    2016-04-01

    This work focuses on the hydro-meteorological analysis for landslide triggering rainfall thresholds estimation in the Ialomita Subcarpathians. This specific area is a complex geological and geomorphic unit in Romania, affected by landslides that produce numerous damages to the infrastructure every few years (1997, 1998, 2005, 2006, 2010, 2012 and 2014). Semi-distributed ModClark hydrological model implemented in HEC HMS software that integrates radar rainfall data, was used to investigate hydrological conditions within the catchment responsible for the occurrence of landslides during the main rainfall events. Statistical analysis of the main hydro-meteorological variables during the landslide events that occurred between 2005 and 2014 was carried out in order to identify preliminary rainfall thresholds for landslides in the Ialomita Subcarpathians. Moreover, according to the environmental catchment characteristics, different hydrological behaviors could be identified based on the spatially distributed rainfall estimates from weather radar data. Two hydrological regimes in the catchments were distinguished: one dominated by direct flow that explains the landslides that occurred due to slope undercutting and one characterized by high soil water storage during prolonged rainfall and therefore where subsurface runoff is significant. The hydrological precipitation-discharge modelling of the catchment in the Ialomita Subcarpathians, in which landslides occurred, helped understanding the landslide triggering and as such can be of added value for landslide research.

  11. Temperature thresholds and degree-day model for Marmara gulosa (Lepidoptera: Gracillariidae).

    PubMed

    O'Neal, M J; Headrick, D H; Montez, Gregory H; Grafton-Cardwell, E E

    2011-08-01

    The developmental thresholds for Marmara gulosa Guillén & Davis (Lepidoptera: Gracillariidae) were investigated in the laboratory by using 17, 21, 25, 29, and 33 degrees C. The lowest mortality occurred in cohorts exposed to 25 and 29 degrees C. Other temperatures caused >10% mortality primarily in egg and first and second instar sap-feeding larvae. Linear regression analysis approximated the lower developmental threshold at 12.2 degrees C. High mortality and slow developmental rate at 33 degrees C indicate the upper developmental threshold is near this temperature. The degree-day (DD) model indicated that a generation requires an accumulation of 322 DD for development from egg to adult emergence. Average daily temperatures in the San Joaquin Valley could produce up to seven generations of M. gulosa per year. Field studies documented two, five, and three overlapping generations of M. gulosa in walnuts (Juglans regia L.; Juglandaceae), pummelos (Citrus maxima (Burm.) Merr.; Rutaceae), and oranges (Citrus sinensis (L.) Osbeck; Rutaceae), for a total of seven observed peelminer generations. Degree-day units between generations averaged 375 DD for larvae infesting walnut twigs; however, availability of green wood probably affected timing of infestations. Degree-day units between larval generations averaged 322 for pummelos and 309 for oranges, confirming the laboratory estimation. First infestation of citrus occurred in June in pummelo fruit and August in orange fruit when fruit neared 60 mm in diameter. Fruit size and degree-day units could be used as management tools to more precisely time insecticide treatments to target the egg stage and prevent rind damage to citrus. Degree-day units also could be used to more precisely time natural enemy releases to target larval instars that are preferred for oviposition.
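
    The degree-day bookkeeping behind the numbers above can be sketched as follows: accumulate daily degree-days above the 12.2 °C lower threshold (capping temperatures near the ~33 °C upper threshold) and count 322 DD per generation. The simple daily-averaging method used below is one of several standard options and is an assumption, not necessarily the method of the paper.

    ```python
    # Degree-day bookkeeping consistent with the abstract's estimates: daily
    # degree-days above the 12.2 C lower threshold (capped near the 33 C upper
    # threshold), with 322 DD per generation.  Simple averaging method assumed.
    LOWER, UPPER, DD_PER_GENERATION = 12.2, 33.0, 322.0

    def daily_degree_days(t_min, t_max):
        t_mean = (min(t_max, UPPER) + max(t_min, LOWER)) / 2.0
        return max(t_mean - LOWER, 0.0)

    def generations(daily_min_max_temps):
        total = sum(daily_degree_days(tmin, tmax) for tmin, tmax in daily_min_max_temps)
        return total, total / DD_PER_GENERATION

    # toy season: 180 warm days with lows of 16 C and highs of 34 C
    season = [(16.0, 34.0)] * 180
    dd, gens = generations(season)
    print(f"{dd:.0f} accumulated DD  ->  ~{gens:.1f} generations")
    ```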

  12. Near-threshold conditions justify critical gradient model for Alfvénic mode driven relaxation of fast ions

    NASA Astrophysics Data System (ADS)

    Gorelenkov, Nikolai; Ghantous, Katy; Heidbrink, William; van Zeeland, Michael

    2013-10-01

    Future burning plasma performance will be limited by the constraints of confining energetic super-Alfvénic fusion products, which can drive several low frequency Alfvénic instabilities. Expected multiple resonances help to justify the model developed recently, called the critical gradient or 1.5D reduced quasilinear diffusion model. Similar conditions are expected in burning plasmas with TAE instabilities in a non-virulent nonlinear regime. The 1.5D model makes use of TAE/RSAE linear theory. One critical element of the presented model is that it requires averaging over a time comparable to the fast-ion slowing-down time. Another element is that the fast ion diffusion near the resonance does not flatten the distribution function, whose gradient is maintained by collisional scattering. Further validations of this model justify its use in the case of relatively high collisionality. With the parametric plasma dependencies embedded in the model and with the quantitative normalization to NOVA-K growth rates, the 1.5D model application to DIII-D experiments is well positioned for validation. Good agreement is summarized here for absolute values of the deduced neutron rate and for the time behavior of fast ion losses near the AE activity thresholds. The 1.5D model is applicable to ITER and other burning plasmas. Supported in part by the U.S. Department of Energy under the contract DE-AC02-09CH11466.

  13. Catastrophic shifts and lethal thresholds in a propagating front model of unstable tumor progression.

    PubMed

    Amor, Daniel R; Solé, Ricard V

    2014-08-01

    Unstable dynamics characterizes the evolution of most solid tumors. Because of an increased failure of maintaining genome integrity, a cumulative increase in the levels of gene mutation and loss is observed. Previous work suggests that instability thresholds to cancer progression exist, defining phase transition phenomena separating tumor-winning scenarios from tumor extinction or coexistence phases. Here we present an integral equation approach to the quasispecies dynamics of unstable cancer. The model exhibits two main phases, characterized by either the success or failure of cancer tissue. Moreover, the model predicts that tumor failure can be due to either a reduced selective advantage over healthy cells or excessive instability. We also derive an approximate, analytical solution that predicts the front speed of aggressive tumor populations on the instability space.

  14. A comparison of signal detection theory to the objective threshold/strategic model of unconscious perception.

    PubMed

    Haase, Steven J; Fisk, Gary D

    2011-08-01

    A key problem in unconscious perception research is ruling out the possibility that weak conscious awareness of stimuli might explain the results. In the present study, signal detection theory was compared with the objective threshold/strategic model as explanations of results for detection and identification sensitivity in a commonly used unconscious perception task. In the task, 64 undergraduate participants detected and identified one of four briefly displayed, visually masked letters. Identification was significantly above baseline (i.e., proportion correct > .25) at the highest detection confidence rating. This result is most consistent with signal detection theory's continuum of sensory states and serves as a possible index of conscious perception. However, there was limited support for the other model in the form of a predicted "looker's inhibition" effect, which produced identification performance that was significantly below baseline. One additional result, an interaction between the target stimulus and type of mask, raised concerns for the generality of unconscious perception effects.

  15. NN-->NNπ reaction near threshold in a covariant one-boson-exchange model

    NASA Astrophysics Data System (ADS)

    Shyam, R.; Mosel, U.

    1998-04-01

    We calculate the cross sections for the p(p,nπ+)p and p(p,pπ0)p reactions for proton beam energies near threshold in a covariant one-boson-exchange model, which incorporates the exchange of π, ρ, σ and ω mesons and treats both the nucleon and the delta isobar as intermediate states. The final state interaction effects are included within Watson's theory. Within this model the ω and σ meson exchange terms contribute significantly at these energies, which, along with other meson exchanges, make it possible to reproduce the available experimental data for the total as well as differential cross sections for both reactions. The cross sections at beam energies ≤300 MeV are found to be almost free from the contributions of the Δ isobar excitation.

  16. Probabilistic transport models for plasma transport in the presence of critical thresholds: Beyond the diffusive paradigm

    NASA Astrophysics Data System (ADS)

    Sánchez, R.; van Milligen, B. Ph.; Carreras, B. A.

    2005-05-01

    It is argued that the modeling of plasma transport in tokamaks may benefit greatly from extending the usual local paradigm to accommodate scale-free transport mechanisms. This can be done by combining Lévy distributions and a nonlinear threshold condition within the continuous time random walk concept. The advantages of this nonlocal, nonlinear extension are illustrated by constructing a simple particle density transport model that, as a result of these ideas, spontaneously exhibits much of nondiffusive phenomenology routinely observed in tokamaks. The fluid limit of the system shows that the kind of equations that are appropriate to capture these dynamics are based on fractional differential operators. In them, effective diffusivities and pinch velocities are found that are dynamically set by the system in response to the specific characteristics of the fueling source and external perturbations. This fact suggests some dramatic consequences for the extrapolation of these transport properties to larger size systems.

  17. A population-based model of the nonlinear dynamics of the thalamocortical feedback network displays intrinsic oscillations in the spindling (7-14 Hz) range.

    PubMed

    Yousif, Nada A B; Denham, Michael

    2005-12-01

    The thalamocortical network is modelled using the Wilson-Cowan equations for neuronal population activity. We show that this population model with biologically derived parameters possesses intrinsic nonlinear oscillatory dynamics, and that the frequency of oscillation lies within the spindle range. Spindle oscillations are an early sleep oscillation characterized by high-frequency bursts of action potentials followed by a period of quiescence, at a frequency of 7-14 Hz. Spindles are generally regarded as being generated by intrathalamic circuitry, as decorticated thalamic slices and the isolated thalamic reticular nucleus exhibit spindles. However, the role of cortical feedback has been shown to regulate and synchronize the oscillation. Previous modelling studies have mainly used conductance-based models and hence the mechanism relied upon the inclusion of ionic currents, particularly the T-type calcium current. Here we demonstrate that spindle-frequency oscillatory activity can also arise from the nonlinear dynamics of the thalamocortical circuit, and we use bifurcation analysis to examine the robustness of this oscillation in terms of the functional range of the parameters used in the model. The results suggest that the thalamocortical circuit has intrinsic nonlinear population dynamics which are capable of providing robust support for oscillatory activity within the frequency range of spindle oscillations.
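
    For reference, a single excitatory-inhibitory Wilson-Cowan pair can be sketched directly; the parameter set below is the classic limit-cycle example from Wilson and Cowan (1972), not the thalamocortical parameters of the paper, and time is in units of the membrane time constant.

    ```python
    # Single excitatory-inhibitory Wilson-Cowan pair with the classic 1972
    # limit-cycle parameters (illustrative, not the paper's thalamocortical
    # parameter set).  Forward-Euler integration of the population equations.
    import math

    def sigmoid(x, a, theta):
        # logistic response shifted so that S(0) = 0, as in the original model
        return 1.0 / (1.0 + math.exp(-a * (x - theta))) - 1.0 / (1.0 + math.exp(a * theta))

    def simulate(t_max=100.0, dt=0.01, P=1.25, Q=0.0):
        c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0
        aE, thE, aI, thI = 1.3, 4.0, 2.0, 3.7
        E, I = 0.1, 0.05
        trace = []
        for _ in range(int(t_max / dt)):
            dE = -E + (1.0 - E) * sigmoid(c1 * E - c2 * I + P, aE, thE)
            dI = -I + (1.0 - I) * sigmoid(c3 * E - c4 * I + Q, aI, thI)
            E, I = E + dt * dE, I + dt * dI
            trace.append(E)
        return trace

    trace = simulate()
    tail = trace[len(trace) // 2:]          # discard the transient
    print(f"E activity swings between {min(tail):.3f} and {max(tail):.3f} after the transient")
    ```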

  18. Binary threshold networks as a natural null model for biological networks

    NASA Astrophysics Data System (ADS)

    Rybarsch, Matthias; Bornholdt, Stefan

    2012-08-01

    Spin models of neural networks and genetic networks are considered elegant as they are accessible to statistical mechanics tools for spin glasses and magnetic systems. However, the conventional choice of variables in spin systems may cause problems in some models when parameter choices are unrealistic from a biological perspective. Obviously, this may limit the role of a model as a template model for biological systems. Perhaps less obviously, also ensembles of random networks are affected and may exhibit different critical properties. We consider here a prototypical network model that is biologically plausible in its local mechanisms. We study a discrete dynamical network with two characteristic properties: Nodes with binary states 0 and 1, and a modified threshold function with Θ0(0)=0. We explore the critical properties of random networks of such nodes and find a critical connectivity Kc=2.0 with activity vanishing at the critical point. Finally, we observe that the present model allows a more natural implementation of recent models of budding yeast and fission yeast cell-cycle control networks.

  19. Binary threshold networks as a natural null model for biological networks.

    PubMed

    Rybarsch, Matthias; Bornholdt, Stefan

    2012-08-01

    Spin models of neural networks and genetic networks are considered elegant as they are accessible to statistical mechanics tools for spin glasses and magnetic systems. However, the conventional choice of variables in spin systems may cause problems in some models when parameter choices are unrealistic from a biological perspective. Obviously, this may limit the role of a model as a template model for biological systems. Perhaps less obviously, also ensembles of random networks are affected and may exhibit different critical properties. We consider here a prototypical network model that is biologically plausible in its local mechanisms. We study a discrete dynamical network with two characteristic properties: Nodes with binary states 0 and 1, and a modified threshold function with Θ_0(0) = 0. We explore the critical properties of random networks of such nodes and find a critical connectivity K_c = 2.0 with activity vanishing at the critical point. Finally, we observe that the present model allows a more natural implementation of recent models of budding yeast and fission yeast cell-cycle control networks. PMID:23005832

  20. Can we clinically recognize a vascular depression? The role of personality in an expanded threshold model.

    PubMed

    Turk, Bela R; Gschwandtner, Michael E; Mauerhofer, Michaela; Löffler-Stastka, Henriette

    2015-05-01

    The vascular depression (VD) hypothesis postulates that cerebrovascular disease may "predispose, precipitate, or perpetuate" a depressive syndrome in elderly patients. Clinical presentation of VD has been shown to differ from major depression in quantitative disability; however, as little research has addressed qualitative phenomenological differences in the personality aspects of the symptom profile, clinical diagnosis remains a challenge. We attempted to identify differences in clinical presentation between depression patients (n = 50) with (n = 25) and without (n = 25) vascular disease using questionnaires to assess depression, affect regulation, object relations, aggressiveness, alexithymia, personality functioning, personality traits, and countertransference. We were able to show that patients with vascular dysfunction and depression exhibit significantly higher aggressive and auto-aggressive tendencies due to a lower tolerance threshold. These data indicate, first, that VD is a separate clinical entity and, second, that personality itself may be a component of the disease process. We propose an expanded threshold disease model incorporating personality functioning and mood changes. Such findings might also aid the development of a screening program, by serving as differential criteria, thereby improving the diagnostic procedure. PMID:25950684

  1. A model for calculating the threshold for shock initiation of pyrotechnics and explosives

    SciTech Connect

    Maiden, D.E.

    1987-03-01

    A model is proposed for predicting the shock pressure P and pulse width τ required to ignite porous reactive mixtures. Essentially, the shock wave collapses the voids, forming high-temperature hot spots that ignite the mixture. The pore temperature is determined by numerical solution of the equations of motion, viscoplastic heating, and heat conduction. The pore radius is determined as a function of the pore size, viscosity, yield stress, and pressure. Temperature-dependent material properties and melting are considered. Ignition occurs when the surface temperature of the pore reaches the critical hot-spot temperature for thermal runaway. Data from flyer-plate impact experiments were analyzed and the pressure pulse at the ignition threshold was determined for 2Al/Fe₂O₃ (thermite) and the high explosives TATB, PBX 9404, and PETN. Mercury intrusion porosimetry was performed on the samples and the pore size distribution determined. Theoretical and numerical predictions of the ignition threshold are compared with experiment. Results show that P²τ appears to be an initiation property of the material.

  2. The minimal SUSY B - L model: simultaneous Wilson lines and string thresholds

    NASA Astrophysics Data System (ADS)

    Deen, Rehan; Ovrut, Burt A.; Purves, Austin

    2016-07-01

    In previous work, we presented a statistical scan over the soft supersymmetry breaking parameters of the minimal SUSY B - L model. For specificity of calculation, unification of the gauge parameters was enforced by allowing the two Z_3 × Z_3 Wilson lines to have mass scales separated by approximately an order of magnitude. This introduced an additional "left-right" sector below the unification scale. In this paper, for three important reasons, we modify our previous analysis by demanding that the mass scales of the two Wilson lines be simultaneous and equal to an "average unification" mass ⟨M_U⟩. The present analysis 1) is more "natural" than the previous calculations, which were only valid in a very specific region of the Calabi-Yau moduli space; 2) is conceptually simpler in that the left-right sector has been removed; and 3) attributes the lack of gauge unification to threshold effects, particularly heavy string thresholds, which we calculate statistically in detail. As in our previous work, the theory is renormalization group evolved from ⟨M_U⟩ to the electroweak scale, being subjected, sequentially, to the requirement of radiative B - L and electroweak symmetry breaking, the present experimental lower bounds on the B - L vector boson and sparticle masses, as well as the lightest neutral Higgs mass of ~125 GeV. The subspace of soft supersymmetry breaking masses that satisfies all such constraints is presented and shown to be substantial.

  3. Thresholds in Atmosphere-Soil Moisture Interactions: Results from Climate Model Studies

    NASA Technical Reports Server (NTRS)

    Oglesby, Robert J.; Marshall, Susan; Erickson, David J., III; Roads, John O.; Robertson, Franklin R.; Arnold, James E. (Technical Monitor)

    2001-01-01

    The potential predictability of the effects of warm season soil moisture anomalies over the central U.S. has been investigated using a series of GCM (Global Climate Model) experiments with the NCAR (National Center for Atmospheric Research) CCM3 (Community Climate Model version 3)/LSM (Land Surface Model). Three different types of experiments have been made, all starting in either March (representing precursor conditions) or June (conditions at the onset of the warm season): (1) 'anomaly' runs with large, exaggerated initial soil moisture reductions, aimed at evaluating the physical mechanisms by which soil moisture can affect the atmosphere; (2) 'predictability' runs aimed at evaluating whether typical soil moisture initial anomalies (indicative of year-to-year variability) can have a significant effect, and if so, for how long; (3) 'threshold' runs aimed at evaluating if a soil moisture anomaly must be of a specific size (i.e., a threshold crossed) before a significant impact on the atmosphere is seen. The 'anomaly' runs show a large, long-lasting response in soil moisture and also quantities such as surface temperature, sea level pressure, and precipitation; effects persist for at least a year. The 'predictability' runs, on the other hand, show very little impact of the initial soil moisture anomalies on the subsequent evolution of soil moisture and other atmospheric parameters; internal variability is most important, with the initial state of the atmosphere (representing remote effects such as SST anomalies) playing a more minor role. The 'threshold' runs, devised to help resolve the dichotomy in 'anomaly' and 'predictability' results, suggest that, at least in CCM3/LSM, the vertical profile of soil moisture is the most important factor, and that deep soil zone anomalies exert a more powerful, long-lasting effect than do anomalies in the near surface soil zone. We therefore suggest that soil moisture feedbacks may be more important in explaining prolonged

  4. Numerical modeling of rainfall thresholds for shallow landsliding in the Seattle, Washington, area

    USGS Publications Warehouse

    Godt, Jonathan W.; McKenna, Jonathan P.

    2008-01-01

    The temporal forecasting of landslide hazard has typically relied on empirical relations between rainfall characteristics and landslide occurrence to identify conditions that may cause shallow landslides. Here, we describe an alternate, deterministic approach to define rainfall thresholds for landslide occurrence in the Seattle, Washington, area. This approach combines an infinite-slope stability model with a variably saturated flow model to determine the rainfall intensity and duration that lead to shallow failure of hillside colluvium. We examine the influence of variation in particle-size distribution on the unsaturated hydraulic properties of the colluvium by performing capillary-rise tests on glacial outwash sand and three experimental soils with increasing amounts of fine-grained material. Observations of pore-water response to rainfall collected as part of a program to monitor the near-surface hydrology of steep coastal bluffs along Puget Sound were used to test the numerical model results and in an inverse modeling procedure to determine the in situ hydraulic properties. Modeling results are given in terms of a destabilizing rainfall intensity and duration, and comparisons with empirical observations of landslide occurrence and triggering rainfall indicate that the modeling approach may be useful for forecasting landslide occurrence.
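
    The deterministic core of such an approach can be illustrated with the standard infinite-slope factor-of-safety expression including a pore-pressure term; in the sketch below the soil parameters are placeholders and the pressure head is prescribed by hand, whereas in the actual modeling it would be supplied by the variably saturated flow model.

```python
import numpy as np

def factor_of_safety(slope_deg, depth, cohesion, phi_deg, gamma_soil, psi,
                     gamma_water=9.81):
    """Infinite-slope factor of safety with a pore-pressure term.

    slope_deg   : slope angle (degrees)
    depth       : depth of the potential failure surface (m)
    cohesion    : soil cohesion (kPa)
    phi_deg     : friction angle (degrees)
    gamma_soil  : unit weight of soil (kN/m^3)
    psi         : pressure head at the failure depth (m); negative if unsaturated
    gamma_water : unit weight of water (kN/m^3)
    """
    beta, phi = np.radians(slope_deg), np.radians(phi_deg)
    frictional = np.tan(phi) / np.tan(beta)
    cohesive = (cohesion - psi * gamma_water * np.tan(phi)) / (
        gamma_soil * depth * np.sin(beta) * np.cos(beta))
    return frictional + cohesive

# Placeholder colluvium-like parameters; FS < 1 indicates predicted failure.
for psi in [-0.5, 0.0, 0.5, 1.0]:   # pressure head rising as rainfall infiltrates
    fs = factor_of_safety(slope_deg=40, depth=1.5, cohesion=4.0, phi_deg=33,
                          gamma_soil=19.0, psi=psi)
    print(f"pressure head {psi:+.1f} m  ->  FS = {fs:.2f}")
```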

  5. Factors relating to poor survival rates of aged cervical cancer patients: a population-based study with the relative survival model in Osaka, Japan.

    PubMed

    Ioka, Akiko; Ito, Yuri; Tsukuma, Hideaki

    2009-01-01

    Poor survival of older cervical cancer patients has been reported; however, related factors, such as the extent of disease and the competing risk from aging, have not been well evaluated. We applied the relative survival model developed by Dickman et al. to address this issue. Study subjects were cervical cancer patients retrieved from the Osaka Cancer Registry. They were limited to the 10,048 reported cases diagnosed from 1975 to 1999, based on the quality of data collection on vital status. Age at diagnosis was categorized into <30, 30-54, 55-64, and ≥65 years. The impact of prognostic factors on 5-year survival was evaluated with the relative survival model, incorporating patients' expected survival in multivariate analysis. The age-specific relative excess risk (RER) of death was significantly higher for older groups as compared with women aged 30-54 years (RER, 1.58 at 55-64 and 2.51 at ≥65 years). The RER was decreased by 64.8% among the 55-64 year olds as an effect of cancer stage at diagnosis, and by 43.4% among those 65 years old and over. After adding adjustment for treatment modalities, the RER was no longer significantly higher among 55-64 year olds; however, it was still higher among those aged 65 years and over. Advanced stage at diagnosis was the main determinant of poor survival among the aged cervical cancer patients, although other factors such as limitations on the combination of treatment were also suggested to have an influence in those aged 65 years and over.
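
    As a hedged sketch of the quantity underlying a relative-survival analysis, the code below computes a cumulative relative survival ratio (observed survival divided by the survival expected from general-population life tables) for a synthetic cohort. The Dickman-style regression model additionally relates the excess hazard to covariates such as age group, stage, and treatment; that step is not reproduced here, and all numbers are made up.

```python
import numpy as np

# Synthetic example: annual (conditional) observed survival of a patient cohort
# and the survival expected for an age- and sex-matched general population.
# All values are invented for illustration.
observed_interval = np.array([0.90, 0.92, 0.94, 0.95, 0.96])
expected_interval = np.array([0.985, 0.984, 0.982, 0.980, 0.978])

cum_observed = np.cumprod(observed_interval)
cum_expected = np.cumprod(expected_interval)
relative_survival = cum_observed / cum_expected

for year, rs in enumerate(relative_survival, start=1):
    print(f"year {year}: cumulative relative survival = {rs:.3f}")
# The 5-year value is the fraction of the survival expected in the absence of
# the cancer that the cohort actually retained.
```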

  6. Electrodynamic model of the field effect transistor application for THz/subTHz radiation detection: Subthreshold and above threshold operation

    SciTech Connect

    Dobrovolsky, V.

    2014-10-21

    This work develops an electrodynamic model of field-effect transistor (FET) operation for THz/sub-THz radiation detection. It is based on the solution of the Maxwell equations in the gate dielectric, an expression for the channel current that takes into account both the drift and diffusion components, and the equation of current continuity. For the regimes below and above threshold at strong inversion, the response voltage, responsivity, wave impedance, and ohmic losses in the gate and channel have been found, and the electrical noise equivalent power (ENEP) has been estimated. The responsivity below threshold is orders of magnitude higher, and the ENEP orders of magnitude lower, than the corresponding values above threshold. Below threshold, the electromagnetic field in the gate oxide is identical to the field of plane waves in free space. For strong inversion, by contrast, the charging of the gate capacitance through the channel resistance determines the electric field in the oxide.

  7. History, development, and future of the progressively lowered stress threshold: a conceptual model for dementia care.

    PubMed

    Smith, Marianne; Gerdner, Linda A; Hall, Geri R; Buckwalter, Kathleen C

    2004-10-01

    Behavioral symptoms associated with dementia are a major concern for the person who experiences them and for caregivers who supervise, support, and assist them. The knowledge and skill of formal and informal caregivers affects the quality of care they can provide and their ability to cope with the challenges of caregiving. Nurses are in an excellent position to provide training to empower caregivers with the knowledge and skills necessary to reduce and better manage behaviors. This article reviews advances in geriatric nursing theory, practice, and research based on the Progressively Lowered Stress Threshold (PLST) model that are designed to promote more adaptive and functional behavior in older adults with advancing dementia. For more than 17 years, the model has been used to train caregivers in homes, adult day programs, nursing homes, and acute care hospitals and has served as the theoretical basis for in-home and institutional studies. Care planning principles and key elements of interventions that flow from the model are set forth, and outcomes from numerous research projects using the PLST model are presented. PMID:15450057

  8. Modeling habitat split: landscape and life history traits determine amphibian extinction thresholds.

    PubMed

    Fonseca, Carlos Roberto; Coutinho, Renato M; Azevedo, Franciane; Berbert, Juliana M; Corso, Gilberto; Kraenkel, Roberto A

    2013-01-01

    Habitat split is a major force behind the worldwide decline of amphibian populations, causing community change in richness and species composition. In fragmented landscapes, natural remnants, the terrestrial habitat of the adults, are frequently separated from streams, the aquatic habitat of the larvae. An important question is how this landscape configuration affects population levels and if it can drive species to extinction locally. Here, we put forward the first theoretical model on habitat split, which is particularly concerned with how split distance - the distance between the two required habitats - affects population size and persistence in isolated fragments. Our diffusive model shows that habitat split alone is able to generate extinction thresholds. Fragments occurring between the aquatic habitat and a given critical split distance are expected to hold viable populations, while fragments located farther away are expected to be unoccupied. Species with higher reproductive success and higher diffusion rates of post-metamorphic young are expected to have farther critical split distances. Furthermore, the model indicates that negative effects of habitat split are poorly compensated by positive effects of fragment size. The habitat split model improves our understanding of spatially structured populations and has relevant implications for landscape design for conservation. It puts on a firm theoretical basis the relation between habitat split and the decline of amphibian populations.

  9. Marker-based monitoring of seated spinal posture using a calibrated single-variable threshold model.

    PubMed

    Walsh, Pauline; Dunne, Lucy E; Caulfield, Brian; Smyth, Barry

    2006-01-01

    This work, as part of a larger project developing wearable posture monitors for the work environment, seeks to monitor and model seated posture during computer use. A non-wearable marker-based optoelectronic motion capture system was used to monitor seated posture for ten healthy subjects during a calibration exercise and a typing task. Machine learning techniques were used to select overall spinal sagittal flexion as the best indicator of posture from a set of marker and vector variables. Overall flexion data from the calibration exercise were used to define a threshold model designed to classify posture for each subject, which was then applied to the typing task data. Results of the model were analysed visually by qualified physiotherapists with experience in ergonomics and posture analysis to confirm the accuracy of the calibration. The calibration formula was found to be accurate for 100% of subjects. This process will be used as a comparative measure in the evaluation of several wearable posture sensors, and to inform the design of the wearable system. PMID:17946301
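
    A minimal version of a calibrated single-variable threshold classifier, derive a per-subject flexion threshold from a calibration recording and then label each frame of a task recording, might look like the sketch below. The midpoint threshold rule and the synthetic data are assumptions for illustration, not the authors' calibration formula.

```python
import numpy as np

rng = np.random.default_rng(2)

def calibrate_threshold(upright_flexion, slumped_flexion):
    """Per-subject threshold on overall sagittal flexion (degrees): here simply
    the midpoint between the mean 'good' and mean 'poor' calibration postures."""
    return 0.5 * (np.mean(upright_flexion) + np.mean(slumped_flexion))

def classify(flexion_series, threshold):
    """Label each frame: True = flexed beyond the threshold ('poor' posture)."""
    return flexion_series > threshold

# Synthetic calibration data for one subject (degrees of spinal flexion)
upright = rng.normal(10, 2, 200)    # sitting tall
slumped = rng.normal(35, 3, 200)    # deliberately slumped
thr = calibrate_threshold(upright, slumped)

# Synthetic typing-task data drifting toward a slumped posture over time
task = np.concatenate([rng.normal(12, 3, 300), rng.normal(30, 4, 300)])
labels = classify(task, thr)
print(f"threshold = {thr:.1f} deg; {labels.mean():.0%} of frames classified as slumped")
```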

  10. Centrifuge model study of thresholds for rainfall-induced landslides in sandy slopes

    NASA Astrophysics Data System (ADS)

    Matziaris, V.; Marshall, A. M.; Heron, C. M.; Yu, H.-S.

    2015-09-01

    Rainfall-induced landslides are very common natural disasters which cause damage to properties and infrastructure and may result in the loss of human life. These phenomena often take place in unsaturated soil slopes and are triggered by the saturation of the soil profile due to rain infiltration which leads to the decrease of effective stresses and loss of shear strength. The aim of this study is to determine rainfall thresholds for the initiation of landslides under different initial conditions. Model tests of rainfall-induced landslides were conducted on the Nottingham Centre for Geomechanics geotechnical centrifuge. Initially unsaturated plane-strain slope models made with fine silica sand were prepared at varying densities at 1g and accommodated within a centrifuge container with rainfall simulator. During the centrifuge flight at 60g, rainfall events of varying intensity and duration, as well as variation of groundwater conditions, were applied to the slope models with the aim of initiating slope failure. This paper presents a discussion on the impact of soil state properties, rainfall characteristics, and groundwater conditions on slope behaviour and the initiation of slope instability.

  11. Global and local threshold in a metapopulational SEIR model with quarantine

    NASA Astrophysics Data System (ADS)

    Gomes, Marcelo F. C.; Rossi, Luca; Pastore Y Piontti, Ana; Vespignani, Alessandro

    2013-03-01

    Diseases that can be transmitted before the onset of symptoms pose a challenging threat to healthcare, since it is hard to track spreaders and implement quarantine measures. More precisely, one of the main concerns regarding the pandemic spreading of diseases is the prediction, and eventually the control, of local outbreaks that will trigger a global invasion of a particular disease. We present a metapopulation disease spreading model with transmission from both symptomatic and asymptomatic agents and analyze the role of quarantine measures and mobility processes between subpopulations. We show that, depending on the disease parameters, it is possible to separate the local and global thresholds in parameter space and to study the system behavior as a function of the fraction of asymptomatic transmissions. This means that there is a range of parameter values for which, although local control of the outbreak is not achieved, it is still possible to control the global spread of the disease. We validate the analytic picture in a data-driven model that integrates commuting, air traffic flow, and detailed information about population size and structure worldwide.
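
    To make the single-population ingredients concrete, the sketch below integrates a generic SEIR-type compartmental model in which a fraction of cases are asymptomatic (and escape quarantine) while symptomatic cases are isolated at some rate; the compartments, rates, and parameter values are assumptions, and the metapopulation and mobility components of the study are not represented.

```python
def attack_rate(p_asym, beta=0.5, sigma=1/3, gamma=1/5, q=0.3,
                N=1e5, I0=10, days=365, dt=0.1):
    """Final attack rate of a generic SEIR-type model in which asymptomatic
    cases transmit freely while symptomatic cases are quarantined at rate q."""
    S, E, Ia, Is, Q, R = N - I0, 0.0, I0, 0.0, 0.0, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * S * (Ia + Is) / N
        dS  = -new_inf
        dE  = new_inf - sigma * E
        dIa = p_asym * sigma * E - gamma * Ia
        dIs = (1 - p_asym) * sigma * E - (gamma + q) * Is
        dQ  = q * Is - gamma * Q
        dR  = gamma * (Ia + Is + Q)
        S, E, Ia, Is, Q, R = (S + dt * dS, E + dt * dE, Ia + dt * dIa,
                              Is + dt * dIs, Q + dt * dQ, R + dt * dR)
    return R / N

# Raising the asymptomatic fraction can push the outbreak across the
# epidemic threshold even though symptomatic cases are quarantined.
for p in [0.0, 0.25, 0.5, 0.75]:
    print(f"asymptomatic fraction {p:.2f} -> attack rate {attack_rate(p):.2f}")
```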

  12. A model with heterogeneous thresholds for subjective traits: fat cover and conformation score in the Pirenaica beef cattle.

    PubMed

    Varona, L; Moreno, C; Altarriba, J

    2009-04-01

    Current selection schemes for livestock improvement use a wide variety of phenotypic traits. Some of them, such as sensory, type, or carcass traits, obtain their records from subjective grading performed by trained technicians. Data from this subjective evaluation usually involve classification under a categorical and arbitrary predefined scale, whose output may lead to strong departures from the Gaussian distribution. In addition, the scale of grading may differ between technicians. To study this phenomenon, we have analyzed subjective conformation (CON) and fat cover (FAT) scores in the Pirenaica beef cattle breed from data provided by 12 different slaughterhouses. Three statistical models were used: 1) a Gaussian linear model; 2) an ordered category threshold model; and 3) a slaughterhouse-specific ordered category threshold model. These models were analyzed through a Bayesian analysis via a Gibbs sampler with a data augmentation step. Posterior mean estimates of heritability ranged from 0.23 to 0.26 for CON, and from 0.13 to 0.16 for FAT. Statistical models were compared by the deviance information criterion, and the slaughterhouse-specific ordered category threshold model was selected as the most plausible. This result was confirmed by the fact that the threshold estimates differed noticeably between slaughterhouses. Finally, the proposed model for genetic evaluation increased the expected selection response by up to 7.6% for CON and 11.2% for FAT.

  13. Adaptive Thresholds

    SciTech Connect

    Bremer, P. -T.

    2014-08-26

    ADAPT is a topological analysis code that allows the computation of local thresholds, in particular relevance-based thresholds, for features defined in scalar fields. The initial target application is vortex detection, but the software is more generally applicable to all threshold-based feature definitions.

  14. Time-course and dose-response relationships of imperatorin in the mouse maximal electroshock seizure threshold model.

    PubMed

    Luszczki, Jarogniew J; Glowniak, Kazimierz; Czuczwar, Stanislaw J

    2007-09-01

    This study was designed to evaluate the anticonvulsant effects of imperatorin (a furanocoumarin isolated from fruits of Angelica archangelica) in the mouse maximal electroshock seizure threshold model. The threshold for electroconvulsions in mice was determined at several times: 15, 30, 60 and 120 min after i.p. administration of imperatorin at increasing doses of 10, 20, 30, 40, 50 and 100 mg/kg. The evaluation of the time-course relationship for imperatorin in the maximal electroshock seizure threshold test revealed that the agent produced its maximum antielectroshock action 30 min after its i.p. administration. In this case, imperatorin at doses of 50 and 100 mg/kg significantly raised the threshold for electroconvulsions in mice by 38 and 68% (P<0.05 and P<0.001), respectively. The antiseizure effects produced by imperatorin at 15, 60 and 120 min after its systemic (i.p.) administration were less pronounced than those observed for imperatorin injected 30 min before the maximal electroshock seizure threshold test. Based on this study, one can conclude that imperatorin produces an anticonvulsant effect in the maximal electroshock seizure threshold test in a dose-dependent manner. PMID:17602770

  15. A study of jet fuel sooting tendency using the threshold sooting index (TSI) model

    SciTech Connect

    Yang, Yi; Boehman, Andre L.; Santoro, Robert J.

    2007-04-15

    Fuel composition can have a significant effect on soot formation during gas turbine combustion. Consequently, this paper contains a comprehensive review of the relationship between fuel hydrocarbon composition and soot formation in gas turbine combustors. Two levels of correlation are identified. First, lumped fuel composition parameters such as hydrogen content and smoke point, which are conventionally used to represent fuel sooting tendency, are correlated with soot formation in practical combustors. Second, detailed fuel hydrocarbon composition is correlated with these lumped parameters. The two-level correlation makes it possible to predict soot formation in practical combustors from basic fuel composition data. Threshold sooting index (TSI), which correlates linearly with the ratio of fuel molecular weight and smoke point in a diffusion flame, is proposed as a new lumped parameter for sooting tendency correlation. It is found that the TSI model correlates excellently with hydrocarbon compositions over a wide range of fuel samples. Also, in predicting soot formation in actual combustors, the TSI model produces the best results overall in comparison with other previously reported correlating parameters, including hydrogen content, smoke point, and composite predictors containing more than one parameter. (author)
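
    The lumped parameter itself is straightforward to evaluate: in its usual form TSI = a*(MW/SP) + b, where MW is the fuel molecular weight, SP the smoke point, and a, b apparatus-dependent calibration constants. The constants and fuel values in the sketch below are placeholders for illustration.

```python
def threshold_sooting_index(molecular_weight, smoke_point_mm, a=3.0, b=-1.0):
    """TSI = a * (MW / SP) + b, with apparatus-specific calibration constants
    a and b (placeholder values here); SP is the smoke point in mm."""
    return a * (molecular_weight / smoke_point_mm) + b

# Hypothetical fuel components (MW in g/mol, smoke point in mm)
fuels = {"component A": (170.0, 25.0), "component B": (120.0, 40.0)}
for name, (mw, sp) in fuels.items():
    print(f"{name}: TSI = {threshold_sooting_index(mw, sp):.1f}")
```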

  16. Overcoming pain thresholds with multilevel models-an example using quantitative sensory testing (QST) data.

    PubMed

    Hirschfeld, Gerrit; Blankenburg, Markus R; Süß, Moritz; Zernikow, Boris

    2015-01-01

    The assessment of somatosensory function is a cornerstone of research and clinical practice in neurology. Recent initiatives have developed novel protocols for quantitative sensory testing (QST). Application of these methods led to intriguing findings, such as the presence of lower pain thresholds in healthy children compared to healthy adolescents. In this article, we (re-) introduce the basic concepts of signal detection theory (SDT) as a method to investigate such differences in somatosensory function in detail. SDT describes participants' responses according to two parameters, sensitivity and response-bias. Sensitivity refers to individuals' ability to discriminate between painful and non-painful stimulations. Response-bias refers to individuals' criterion for giving a "painful" response. We describe how multilevel models can be used to estimate these parameters and to overcome central critiques of these methods. To provide an example, we apply these methods to data from the mechanical pain sensitivity test of the QST protocol. The results show that adolescents are more sensitive to mechanical pain and contradict the idea that younger children simply use more lenient criteria to report pain. Overall, we hope that the wider use of multilevel modeling to describe somatosensory functioning may advance neurology research and practice. PMID:26557435
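
    As a reminder of the two SDT parameters mentioned here, the sketch below computes equal-variance Gaussian sensitivity (d') and response criterion (c) from hit and false-alarm counts; the multilevel approach described in the article would replace such per-subject point estimates with partially pooled estimates, which is not shown, and the counts are invented.

```python
from scipy.stats import norm

def sdt_parameters(hits, misses, false_alarms, correct_rejections):
    """Equal-variance Gaussian SDT: sensitivity d' and criterion c from counts.
    Adding 0.5 to each cell avoids infinite z-scores when a rate is 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_hit - z_fa, -0.5 * (z_hit + z_fa)

# Hypothetical counts of "painful" / "not painful" responses to painful vs.
# non-painful stimulations for one participant.
d_prime, criterion = sdt_parameters(hits=42, misses=8,
                                    false_alarms=12, correct_rejections=38)
print(f"d' = {d_prime:.2f}, criterion c = {criterion:.2f}")
```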

  17. T Lymphocyte Activation Threshold and Membrane Reorganization Perturbations in Unique Culture Model

    NASA Technical Reports Server (NTRS)

    Adams, C. L.; Sams, C. F.

    2000-01-01

    Quantitative activation thresholds and cellular membrane reorganization are mechanisms by which resting T cells modulate their response to activating stimuli. Here we demonstrate perturbations of these cellular processes in a unique culture system that non-invasively inhibits T lymphocyte activation. During clinorotation, the T cell activation threshold is increased 5-fold. This increased threshold involves a mechanism independent of TCR triggering. Recruitment of lipid rafts to the activation site is impaired during clinorotation but does occur with increased stimulation. This study describes a situation in which an individual cell senses a change in its physical environment and alters its cell biological behavior.

  18. Coherence thresholds in models of language change and evolution: The effects of noise, dynamics, and network of interactions

    NASA Astrophysics Data System (ADS)

    Tavares, J. M.; Telo da Gama, M. M.; Nunes, A.

    2008-04-01

    A simple model of language evolution proposed by Komarova, Niyogi, and Nowak is characterized by a payoff in communicative function and by an error in learning that measure the accuracy in language acquisition. The time scale for language change is generational, and the model’s equations in the mean-field approximation are a particular case of the replicator-mutator equations of evolutionary dynamics. In well-mixed populations, this model exhibits a critical coherence threshold; i.e., a minimal accuracy in the learning process is required to maintain linguistic coherence. In this work, we analyze in detail the effects of different fitness-based dynamics driving linguistic coherence and of the network of interactions on the nature of the coherence threshold by performing numerical simulations and theoretical analyses of three different models of language change in finite populations with two types of structure: fully connected networks and regular random graphs. We find that although the threshold of the original replicator-mutator evolutionary model is robust with respect to the structure of the network of contacts, the coherence threshold of related fitness-driven models may be strongly affected by this feature.
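
    The mean-field equations referred to are the replicator-mutator equations, dx_i/dt = sum_j x_j f_j Q_ji - phi x_i with mean fitness phi = sum_j x_j f_j. The sketch below integrates them for a symmetric payoff matrix and a uniform learning-error matrix to show how the frequency of the dominant grammar changes with the learning accuracy q; the number of grammars, the payoff value, and the error model are illustrative assumptions rather than the settings of the paper.

```python
import numpy as np

def coherence(q, n=10, a=0.5, T=2000, dt=0.05):
    """Integrate the replicator-mutator equations for n grammars with payoff
    F_ii = 1, F_ij = a (i != j) and learning accuracy q; return the final
    frequency of the most common grammar (the linguistic 'coherence')."""
    F = np.full((n, n), a); np.fill_diagonal(F, 1.0)
    Q = np.full((n, n), (1.0 - q) / (n - 1)); np.fill_diagonal(Q, q)
    x = np.full(n, 1.0 / n) + 1e-3 * np.arange(n)   # small initial bias
    x /= x.sum()
    for _ in range(int(T / dt)):
        f = F @ x                      # fitness of each grammar
        phi = f @ x                    # mean fitness
        dx = (x * f) @ Q - phi * x     # replicator-mutator dynamics
        x = np.clip(x + dt * dx, 0.0, None)
        x /= x.sum()
    return x.max()

for q in [0.70, 0.80, 0.90, 0.95, 0.99]:
    print(f"learning accuracy q = {q:.2f} -> coherence = {coherence(q):.2f}")
```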

  19. Improving Landslide Susceptibility Modeling Using an Empirical Threshold Scheme for Excluding Landslide Deposition

    NASA Astrophysics Data System (ADS)

    Tsai, F.; Lai, J. S.; Chiang, S. H.

    2015-12-01

    Landslides are frequently triggered by typhoons and earthquakes in Taiwan, causing serious economic losses and human casualties. Remotely sensed images and geo-spatial data consisting of land-cover and environmental information have been widely used for producing landslide inventories and causative factors for slope stability analysis. Landslide susceptibility, on the other hand, can represent the spatial likelihood of landslide occurrence and is an important basis for landslide risk assessment. As multi-temporal satellite images become popular and affordable, they are commonly used to generate landslide inventories for subsequent analysis. However, it is usually difficult to distinguish different landslide sub-regions (scarp, debris flow, deposition etc.) directly from remote sensing imagery. Consequently, the extracted landslide extents using image-based visual interpretation and automatic detections may contain many depositions that may reduce the fidelity of the landslide susceptibility model. This study developed an empirical thresholding scheme based on terrain characteristics for eliminating depositions from detected landslide areas to improve landslide susceptibility modeling. In this study, a Bayesian network classifier is utilized to build a landslide susceptibility model and to predict subsequent rainfall-induced shallow landslides in the Shimen reservoir watershed located in northern Taiwan. Eleven causative factors are considered, including terrain slope, aspect, curvature, elevation, geology, land-use, NDVI, soil, distance to fault, river and road. Landslide areas detected using satellite images acquired before and after eight typhoons between 2004 and 2008 are collected as the main inventory for training and verification. In the analysis, previous landslide events are used as training data to predict the samples of the next event. The results are then compared with recorded landslide areas in the inventory to evaluate the accuracy. Experimental results

  20. A population-based Habitable Zone perspective

    NASA Astrophysics Data System (ADS)

    Zsom, Andras

    2015-08-01

    What can we tell about exoplanet habitability if currently only the stellar properties, planet radius, and the incoming stellar flux are known? The Habitable Zone (HZ) is the region around stars where planets can harbor liquid water on their surfaces. The HZ is traditionally conceived as a sharp region around the star because it is calculated for one planet with specific properties, e.g., Earth-like or desert planets, or rocky planets with H2 atmospheres. Such a planet-specific approach is limiting because the planets' atmospheric and geophysical properties, which influence the surface climate and the presence of liquid water, are currently unknown but expected to be diverse. A statistical HZ description is outlined which does not select one specific planet type. Instead, the atmospheric and surface properties of exoplanets are treated as random variables and a continuous range of planet scenarios are considered. Various probability density functions are assigned to each observationally unconstrained random variable, and a combination of Monte Carlo sampling and climate modeling is used to generate synthetic exoplanet populations with known surface climates. Then, the properties of the liquid-water-bearing subpopulation are analyzed. Given our current observational knowledge of small exoplanets, the HZ takes the form of a weakly constrained but smooth probability function. The model shows that the HZ has an inner edge: it is unlikely that planets receiving two to three times more stellar radiation than Earth can harbor liquid water. But a clear outer edge is not seen: a planet that receives a fraction of Earth's stellar radiation (1-10%) can be habitable, if the greenhouse effect of the atmosphere is strong enough. The main benefit of the population-based approach is that it will be refined over time as new data on exoplanets and their atmospheres become available.
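
    The population-based recipe, draw unconstrained planet properties from assumed priors, estimate a surface climate for each draw, and record the fraction of the synthetic population compatible with surface liquid water, can be caricatured as below. The toy greenhouse-warming formula and the prior ranges are stand-ins for the Monte Carlo plus climate-model machinery of the actual study.

```python
import numpy as np

rng = np.random.default_rng(5)

def habitable_fraction(stellar_flux, n_draws=20000):
    """Fraction of randomly drawn planet scenarios with a surface temperature
    between 273 and 373 K at the given stellar flux (in Earth insolations)."""
    albedo = rng.uniform(0.05, 0.7, n_draws)                    # assumed prior
    greenhouse_dT = rng.lognormal(np.log(30.0), 0.8, n_draws)   # K, assumed prior
    t_eq = 278.6 * stellar_flux ** 0.25 * (1.0 - albedo) ** 0.25
    t_surface = t_eq + greenhouse_dT
    return np.mean((t_surface > 273.0) & (t_surface < 373.0))

# In this toy version the inner (high-flux) edge cuts off sharply, while the
# outer edge fades gradually because a strong greenhouse can keep a weakly
# irradiated planet warm.
for flux in [4.0, 2.0, 1.0, 0.5, 0.1, 0.03]:
    print(f"flux = {flux:5.2f} S_earth -> habitable fraction ~ {habitable_fraction(flux):.2f}")
```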

  1. Error threshold in topological quantum-computing models with color codes

    NASA Astrophysics Data System (ADS)

    Katzgraber, Helmut; Bombin, Hector; Martin-Delgado, Miguel A.

    2009-03-01

    Dealing with errors in quantum computing systems is possibly one of the hardest tasks when attempting to realize physical devices. By encoding the qubits in topological properties of a system, an inherent protection of the quantum states can be achieved. Traditional topologically-protected approaches are based on the braiding of quasiparticles. Recently, a braid-less implementation using brane-net condensates in 3-colexes has been proposed. In 2D it allows the transversal implementation of the whole Clifford group of quantum gates. In this work, we compute the error threshold for this topologically-protected quantum computing system in 2D, by means of mapping its error correction process onto a random 3-body Ising model on a triangular lattice. Errors manifest themselves as random perturbation of the plaquette interaction terms thus introducing frustration. Our results from Monte Carlo simulations suggest that these topological color codes are similarly robust to perturbations as the toric codes. Furthermore, they provide more computational capabilities and the possibility of having more qubits encoded in the quantum memory.

  2. [Automatic detection of exudates in retinal images based on threshold moving average models].

    PubMed

    Wisaeng, K; Hiransakolwong, N; Pothiruk, E

    2015-01-01

    Since exudate diagnostic procedures require the attention of an expert ophthalmologist as well as regular monitoring of the disease, the workload of expert ophthalmologists will eventually exceed current screening capabilities. Retinal imaging technology, already part of screening practice, offers a potential solution. In this paper, a fast and robust automatic detection of exudates based on moving-average histogram models of the fuzzy image was applied, and the better histogram was then derived. After segmentation of the exudate candidates, the true exudates were pruned using a Sobel edge detector and the automatic Otsu thresholding algorithm, which resulted in the accurate location of the exudates in digital retinal images. To compare the performance of exudate detection methods, we constructed a large database of digital retinal images. The method was trained on a set of 200 retinal images and tested on a completely independent set of 1220 retinal images. Results show that the exudate detection method achieves the best overall sensitivity, specificity, and accuracy of 90.42%, 94.60%, and 93.69%, respectively. PMID:26016034
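
    A bare-bones stand-in for the final pruning stage described here, candidate bright regions confirmed against a Sobel edge map with Otsu's global threshold, could be written with scikit-image as below; the synthetic image and the decision rule are placeholders and do not reproduce the authors' fuzzy moving-average histogram pipeline.

```python
import numpy as np
from skimage.filters import sobel, threshold_otsu

rng = np.random.default_rng(3)

# Synthetic grayscale "fundus" image: dim background plus a few bright blobs
# standing in for exudate candidates.
img = rng.normal(0.2, 0.03, (256, 256))
yy, xx = np.mgrid[0:256, 0:256]
for cy, cx in [(60, 80), (150, 200), (200, 60)]:
    img += 0.6 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 6.0 ** 2))
img = np.clip(img, 0.0, 1.0)

# Otsu's global threshold picks out bright candidate pixels; the Sobel
# magnitude highlights their sharp borders.
bright = img > threshold_otsu(img)
edges = sobel(img)
strong_edges = edges > threshold_otsu(edges)

# Crude pruning: keep bright pixels that have a strong edge at or next to them.
near_edge = strong_edges | np.roll(strong_edges, 1, 0) | np.roll(strong_edges, 1, 1)
candidates = bright & near_edge
print(f"candidate exudate pixels kept: {int(candidates.sum())}")
```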

  3. Analytic Model for Description of Above-Threshold Ionization by an Intense, Short Laser Pulse

    NASA Astrophysics Data System (ADS)

    Starace, Anthony F.; Frolov, M. V.; Knyazeva, D. V.; Manakov, N. L.; Geng, J.-W.; Peng, L.-Y.

    2015-05-01

    We present an analytic model for above-threshold ionization (ATI) of an atom by an intense, linearly-polarized short laser pulse. Our quantum analysis provides closed-form formulas for the differential probability of ATI, with amplitudes given by a coherent sum of partial amplitudes describing ionization by neighboring optical cycles near the peak of the intensity envelope of a short laser pulse. These analytic results explain key features of short-pulse ATI spectra, such as the left-right asymmetry in the ionized electron angular distribution, the multi-plateau structures, and both large-scale and fine-scale oscillation patterns resulting from quantum interferences of electron trajectories. The ATI spectrum in the middle part of the ATI plateau is shown to be sensitive to the spatial symmetry of the initial bound state of the active electron owing to contributions from multiple-return electron trajectories. An extension of our analytic formulas to real atoms provides results that are in good agreement with results of numerical solutions of the time-dependent Schrödinger equation for He and Ar atoms. Research supported in part by NSF Grant No. PHY-1208059, by RFBR Grant No. 13-02-00420, by Ministry of Ed. & Sci. of the Russian Fed. Proj. No. 1019, by NNSFC Grant Nos. 11322437, 11174016, and 11121091, and by the Dynasty Fdn. (MVF & DVK).

  4. Modeling direction discrimination thresholds for yaw rotations around an earth-vertical axis for arbitrary motion profiles.

    PubMed

    Soyka, Florian; Giordano, Paolo Robuffo; Barnett-Cowan, Michael; Bülthoff, Heinrich H

    2012-07-01

    Understanding the dynamics of vestibular perception is important, for example, for improving the realism of motion simulation and virtual reality environments or for diagnosing patients suffering from vestibular problems. Previous research has found a dependence of direction discrimination thresholds for rotational motions on the period length (inverse frequency) of a transient (single cycle) sinusoidal acceleration stimulus. However, self-motion is seldom purely sinusoidal, and up to now, no models have been proposed that take into account non-sinusoidal stimuli for rotational motions. In this work, the influence of both the period length and the specific time course of an inertial stimulus is investigated. Thresholds for three acceleration profile shapes (triangular, sinusoidal, and trapezoidal) were measured for three period lengths (0.3, 1.4, and 6.7 s) in ten participants. A two-alternative forced-choice discrimination task was used where participants had to judge if a yaw rotation around an earth-vertical axis was leftward or rightward. The peak velocity of the stimulus was varied, and the threshold was defined as the stimulus yielding 75 % correct answers. In accordance with previous research, thresholds decreased with shortening period length (from ~2 deg/s for 6.7 s to ~0.8 deg/s for 0.3 s). The peak velocity was the determining factor for discrimination: Different profiles with the same period length have similar velocity thresholds. These measurements were used to fit a novel model based on a description of the firing rate of semi-circular canal neurons. In accordance with previous research, the estimates of the model parameters suggest that velocity storage does not influence perceptual thresholds.
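
    The 75%-correct threshold referred to here is normally read off a fitted psychometric function. A hedged sketch of that step, fitting a cumulative Gaussian to two-alternative direction judgments and taking its midpoint as the threshold, is given below with invented data.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(v, mu, sigma):
    """Proportion of correct direction judgments at peak velocity v, rising
    from 0.5 (guessing) to 1.0; mu is the 75%-correct point by construction."""
    return 0.5 + 0.5 * norm.cdf(v, loc=mu, scale=abs(sigma))

# Invented data: peak velocities tested (deg/s) and correct responses out of 40
velocities = np.array([0.2, 0.4, 0.8, 1.6, 3.2])
p_correct = np.array([22, 25, 31, 37, 40]) / 40.0

(mu, sigma), _ = curve_fit(psychometric, velocities, p_correct, p0=[1.0, 0.5])
print(f"75%-correct velocity threshold ~ {mu:.2f} deg/s (spread {abs(sigma):.2f})")
```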

  5. Transfer model of lead in soil-carrot (Daucus carota L.) system and food safety thresholds in soil.

    PubMed

    Ding, Changfeng; Li, Xiaogang; Zhang, Taolin; Wang, Xingxiang

    2015-09-01

    Reliable empirical models describing lead (Pb) transfer in soil-plant systems are needed to improve soil environmental quality standards. A greenhouse experiment was conducted to develop soil-plant transfer models to predict Pb concentrations in carrot (Daucus carota L.). Soil thresholds for food safety were then derived inversely using the prediction model in view of the maximum allowable limit for Pb in food. The 2 most important soil properties that influenced carrot Pb uptake factor (ratio of Pb concentration in carrot to that in soil) were soil pH and cation exchange capacity (CEC), as revealed by path analysis. Stepwise multiple linear regression models were based on soil properties and the pseudo total (aqua regia) or extractable (0.01 M CaCl2 and 0.005 M diethylenetriamine pentaacetic acid) soil Pb concentrations. Carrot Pb contents were best explained by the pseudo total soil Pb concentrations in combination with soil pH and CEC, with the percentage of variation explained being up to 93%. The derived soil thresholds based on added Pb (total soil Pb with the geogenic background part subtracted) have the advantage of better applicability to soils with high natural background Pb levels. Validation of the thresholds against data from field trials and literature studies indicated that the proposed thresholds are reasonable and reliable.
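
    The two steps described, regressing crop Pb on soil Pb and soil properties and then inverting the regression at the food-safety limit to obtain a soil threshold, can be sketched as follows. The log-log functional form, the coefficients, and the 0.1 mg/kg food limit are placeholders, not the fitted values or limits used in the study.

```python
import numpy as np

# Placeholder coefficients for
#   ln(Pb_carrot) = b0 + b1*ln(Pb_soil) + b2*pH + b3*ln(CEC)
b0, b1, b2, b3 = -4.0, 0.85, -0.30, -0.25

def predicted_carrot_pb(pb_soil, ph, cec):
    """Predicted carrot Pb (mg/kg) from soil Pb (mg/kg), pH and CEC (cmol/kg)."""
    return np.exp(b0 + b1 * np.log(pb_soil) + b2 * ph + b3 * np.log(cec))

def soil_threshold(ph, cec, food_limit=0.1):
    """Invert the regression: the soil Pb at which the predicted carrot Pb
    equals the food-safety limit (placeholder limit in mg/kg fresh weight)."""
    ln_pb_soil = (np.log(food_limit) - b0 - b2 * ph - b3 * np.log(cec)) / b1
    return np.exp(ln_pb_soil)

for ph, cec in [(5.5, 10.0), (6.5, 15.0), (7.5, 25.0)]:
    print(f"pH {ph}, CEC {cec} cmol/kg -> soil Pb threshold ~ "
          f"{soil_threshold(ph, cec):.0f} mg/kg")
```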

  6. Use of a threshold animal model to estimate calving ease and stillbirth (co)variance components for US Holsteins

    Technology Transfer Automated Retrieval System (TEKTRAN)

    (Co)variance components for calving ease and stillbirth in US Holsteins were estimated using a single-trait threshold animal model and two different sets of data edits. Six sets of approximately 250,000 records each were created by randomly selecting herd codes without replacement from the data used...

  7. A Violation of the Conditional Independence Assumption in the Two-High-Threshold Model of Recognition Memory

    ERIC Educational Resources Information Center

    Chen, Tina; Starns, Jeffrey J.; Rotello, Caren M.

    2015-01-01

    The 2-high-threshold (2HT) model of recognition memory assumes that test items result in distinct internal states: they are either detected or not, and the probability of responding at a particular confidence level that an item is "old" or "new" depends on the state-response mapping parameters. The mapping parameters are…

  8. Modelling single shot damage thresholds of multilayer optics for high-intensity short-wavelength radiation sources.

    PubMed

    Loch, R A; Sobierajski, R; Louis, E; Bosgra, J; Bijkerk, F

    2012-12-17

    The single shot damage thresholds of multilayer optics for high-intensity short-wavelength radiation sources are theoretically investigated, using a model developed on the basis of experimental data obtained at the FLASH and LCLS free electron lasers. We compare the radiation hardness of commonly used multilayer optics and propose new material combinations selected for a high damage threshold. Our study demonstrates that the damage thresholds of multilayer optics can vary over a large range of incidence fluences and can be as high as several hundreds of mJ/cm². This strongly suggests that multilayer mirrors are serious candidates for damage resistant optics. Especially, multilayer optics based on Li₂O spacers are very promising for use in current and future short-wavelength radiation sources.

  9. Evaluation of the pentylenetetrazole seizure threshold test in epileptic mice as surrogate model for drug testing against pharmacoresistant seizures.

    PubMed

    Töllner, Kathrin; Twele, Friederike; Löscher, Wolfgang

    2016-04-01

    Resistance to antiepileptic drugs (AEDs) is a major problem in epilepsy therapy, so that development of more effective AEDs is an unmet clinical need. Several rat and mouse models of epilepsy with spontaneous difficult-to-treat seizures exist, but because testing of antiseizure drug efficacy is extremely laborious in such models, they are only rarely used in the development of novel AEDs. Recently, the use of acute seizure tests in epileptic rats or mice has been proposed as a novel strategy for evaluating novel AEDs for increased antiseizure efficacy. In the present study, we compared the effects of five AEDs (valproate, phenobarbital, diazepam, lamotrigine, levetiracetam) on the pentylenetetrazole (PTZ) seizure threshold in mice that were made epileptic by pilocarpine. Experiments were started 6 weeks after a pilocarpine-induced status epilepticus. At this time, control seizure threshold was significantly lower in epileptic than in nonepileptic animals. Unexpectedly, only one AED (valproate) was less effective at increasing the seizure threshold in epileptic vs. nonepileptic mice, and this difference was restricted to doses of 200 and 300 mg/kg, whereas the difference disappeared at 400 mg/kg. All other AEDs exerted similar seizure threshold increases in epileptic and nonepileptic mice. Thus, induction of acute seizures with PTZ in mice pretreated with pilocarpine does not provide an effective and valuable surrogate method to screen drugs for antiseizure efficacy in a model of difficult-to-treat chronic epilepsy as previously suggested from experiments with this approach in rats. PMID:26930359

  10. Genetic parameters for calving rate and calf survival from linear, threshold, and logistic models in a multibreed beef cattle population.

    PubMed

    Guerra, J L L; Franke, D E; Blouin, D C

    2006-12-01

    Generalized mixed linear, threshold, and logistic sire models and Markov chain Monte Carlo simulation procedures were used to estimate genetic parameters for calving rate and calf survival in a multibreed beef cattle population. Data were obtained from a 5-generation rotational crossbreeding study involving Angus, Brahman, Charolais, and Hereford (1969 to 1995). Gelbvieh and Simmental bulls sired terminal-cross calves from a sample of generation 5 cows. A total of 1,458 cows sired by 158 bulls had a mean calving rate of 78% based on 4,808 calving records. Ninety-one percent of 5,015 calves sired by 260 bulls survived to weaning. Mean heritability estimates and standard deviations for daughter calving rate from posterior distributions were 0.063 ± 0.024, 0.150 ± 0.049, and 0.130 ± 0.047 for linear, threshold, and logistic models, respectively. For calf survival, mean heritability estimates and standard deviations from posterior distributions were 0.049 ± 0.022, 0.160 ± 0.058, and 0.190 ± 0.078 from linear, threshold, and logistic models, respectively. When transformed to an underlying normal scale, linear sire mixed-model heritability estimates were similar to the threshold and logistic sire mixed-model estimates. Posterior density distributions of estimated heritabilities from all models were normal. Spearman rank correlations between sire EPD across statistical models were greater than 0.97 for daughter calving rate and for calf survival. Sire EPD had similar ranges across statistical models for daughter calving rate and for calf survival. PMID:17093211
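
    The transformation to an underlying normal scale mentioned here is commonly done with the Dempster-Lerner formula, h2_liability = h2_observed * p(1 - p) / z^2, where p is the incidence and z the standard normal density at the corresponding threshold. The sketch below applies it to the reported linear-model calving-rate estimate and the 78% mean calving rate, purely as an illustrative check.

```python
from scipy.stats import norm

def liability_heritability(h2_observed, incidence):
    """Dempster-Lerner transformation from the observed 0/1 scale to the
    underlying (liability) scale: h2_liab = h2_obs * p(1 - p) / z**2, where z
    is the standard normal density at the threshold implied by incidence p."""
    p = incidence
    threshold = norm.ppf(1.0 - p)   # liability-scale threshold
    z = norm.pdf(threshold)         # density at the threshold
    return h2_observed * p * (1.0 - p) / z ** 2

# Reported linear-model estimate for daughter calving rate (h2 = 0.063) at the
# reported 78% mean calving rate:
print(f"liability-scale h2 ~ {liability_heritability(0.063, 0.78):.2f}")
```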

  11. `Getting stuck' in analogue electronics: threshold concepts as an explanatory model

    NASA Astrophysics Data System (ADS)

    Harlow, A.; Scott, J.; Peter, M.; Cowie, B.

    2011-10-01

    Could the challenge of mastering threshold concepts be a potential factor that influences a student's decision to continue in electronics engineering? This was the question that led to a collaborative research project between educational researchers and the Faculty of Engineering in a New Zealand university. This paper deals exclusively with the qualitative data from this project, which was designed to investigate the high attrition rate of students taking introductory electronics in a New Zealand university. The affordances of the various teaching opportunities and the barriers that students perceived are examined in the light of recent international research in the area of threshold concepts and transformational learning. Suggestions are made to help students move forward in their thinking, without compromising the need for maintaining the element of intellectual uncertainty that is crucial for tertiary teaching. The issue of the timing of assessments as a measure of conceptual development or the crossing of thresholds is raised.

  12. Deciphering and modeling interconnections in ecohydrology: The role of scale, thresholds and stochastic storage processes

    NASA Astrophysics Data System (ADS)

    Bartlett, M. S.; McDonnell, J. J.; Porporato, A. M.

    2013-12-01

    Several components of ecohydrological systems are characterized by an interplay of stochastic inputs, finite capacity storage, and nonlinear, threshold-like losses, resulting in a complex partitioning of the rainfall input between the different basin scales. With the goal of more accurate predictions of rainfall partitioning and threshold effects in ecohydrology, we examine ecohydrological processes at the various scales, including canopy interception, soil storage with runoff/percolation, hillslope filling-spilling mechanisms, and the related groundwater recharge and baseflow contribution to streamflow. We apply a probabilistic approach to a hierarchical arrangement of cascading reservoirs that are representative of the components of the basin system. The analytical results of this framework help single out the key parameters controlling the partitioning of rainfall within the storage compartments of river basins. This theoretical framework is a useful learning tool for exploring the physical meaning of known thresholds in ecohydrology.

  13. Representation of Vegetation and Other Nonerodible Elements in Aeolian Shear Stress Partitioning Models for Predicting Transport Threshold

    NASA Technical Reports Server (NTRS)

    King, James; Nickling, William G.; Gillies, John A.

    2005-01-01

    The presence of nonerodible elements is well understood to be a reducing factor for soil erosion by wind, but the limits of its protection of the surface and erosion threshold prediction are complicated by the varying geometry, spatial organization, and density of the elements. The predictive capabilities of the most recent models for estimating wind driven particle fluxes are reduced because of the poor representation of the effectiveness of vegetation to reduce wind erosion. Two approaches have been taken to account for roughness effects on sediment transport thresholds. Marticorena and Bergametti (1995) in their dust emission model parameterize the effect of roughness on threshold with the assumption that there is a relationship between roughness density and the aerodynamic roughness length of a surface. Raupach et al. (1993) offer a different approach based on physical modeling of wake development behind individual roughness elements and the partition of the surface stress and the total stress over a roughened surface. A comparison between the models shows the partitioning approach to be a good framework to explain the effect of roughness on entrainment of sediment by wind. Both models provided very good agreement for wind tunnel experiments using solid objects on a nonerodible surface. However, the Marticorena and Bergametti (1995) approach displays a scaling dependency when the difference between the roughness length of the surface and the overall roughness length is too great, while the Raupach et al. (1993) model's predictions perform better owing to the incorporation of the roughness geometry and the alterations to the flow they can cause.
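
    In the Raupach et al. (1993) framework, roughness enters threshold prediction through the ratio of the threshold friction velocity of the bare surface to that of the roughened surface, commonly written R_t = [(1 - m*sigma*lambda)*(1 + m*beta*lambda)]^(-1/2), with lambda the roughness density, beta the element-to-surface drag coefficient ratio, sigma the basal-to-frontal area ratio, and m an empirical parameter. The sketch below evaluates this expression with placeholder parameter values.

```python
def threshold_ratio(lam, beta=90.0, sigma=1.0, m=0.5):
    """Raupach-style threshold friction velocity ratio
    R_t = [(1 - m*sigma*lam) * (1 + m*beta*lam)]**-0.5
    lam   : roughness density (frontal area of elements per unit ground area)
    beta  : ratio of element to surface drag coefficients (placeholder value)
    sigma : basal-to-frontal area ratio of the elements
    m     : empirical parameter for stress non-uniformity
    """
    return ((1.0 - m * sigma * lam) * (1.0 + m * beta * lam)) ** -0.5

u_star_t_bare = 0.25   # threshold friction velocity of the bare surface (m/s), assumed
for lam in [0.0, 0.01, 0.05, 0.1]:
    rt = threshold_ratio(lam)
    print(f"lambda = {lam:.2f}: R_t = {rt:.2f}, "
          f"rough-surface threshold ~ {u_star_t_bare / rt:.2f} m/s")
```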

  14. Discrimination thresholds of normal and anomalous trichromats: Model of senescent changes in ocular media density on the Cambridge Colour Test.

    PubMed

    Shinomori, Keizo; Panorgias, Athanasios; Werner, John S

    2016-03-01

    Age-related changes in chromatic discrimination along dichromatic confusion lines were measured with the Cambridge Colour Test (CCT). One hundred and sixty-two individuals (16 to 88 years old) with normal Rayleigh matches were the major focus of this paper. An additional 32 anomalous trichromats classified by their Rayleigh matches were also tested. All subjects were screened to rule out abnormalities of the anterior and posterior segments. Thresholds on all three chromatic vectors measured with the CCT showed age-related increases. Protan and deutan vector thresholds increased linearly with age while the tritan vector threshold was described with a bilinear model. Analysis and modeling demonstrated that the nominal vectors of the CCT are shifted by senescent changes in ocular media density, and a method for correcting the CCT vectors is demonstrated. A correction for these shifts indicates that classification among individuals of different ages is unaffected. New vector thresholds for elderly observers and for all age groups are suggested based on calculated tolerance limits.

  15. Predator harvesting in stage dependent predation models: insights from a threshold management policy.

    PubMed

    Costa, Michel Iskin da Silveira

    2008-11-01

    Stage dependent predation may give rise to the hydra effect--the increase of predator density at equilibrium as its mortality rate is raised. Management strategies that adjust predator harvest rates or quotas based on responses of populations to past changes in capture rates may eventually lead to a catastrophic collapse of predator species. A proposed threshold management policy avoids the hydra effect and its subsequent danger of predator extinction. Suggestions to extend the application of threshold policies in areas such as intermediate disturbance hypothesis, density-trait mediated interactions and non-optimal anti-predatory behavior are put forward.

  16. Bayesian Threshold Estimation

    ERIC Educational Resources Information Center

    Gustafson, S. C.; Costello, C. S.; Like, E. C.; Pierce, S. J.; Shenoy, K. N.

    2009-01-01

    Bayesian estimation of a threshold time (hereafter simply threshold) for the receipt of impulse signals is accomplished given the following: 1) data, consisting of the number of impulses received in a time interval from zero to one and the time of the largest time impulse; 2) a model, consisting of a uniform probability density of impulse time…

  17. PT -breaking threshold in spatially asymmetric Aubry-André and Harper models: Hidden symmetry and topological states

    NASA Astrophysics Data System (ADS)

    Harter, Andrew K.; Lee, Tony E.; Joglekar, Yogesh N.

    2016-06-01

    Aubry-André-Harper lattice models, characterized by a reflection-asymmetric sinusoidally varying nearest-neighbor tunneling profile, are well known for their topological properties. We consider the fate of such models in the presence of balanced gain and loss potentials ±i γ located at reflection-symmetric sites. We predict that these models have a finite PT -breaking threshold only for specific locations of the gain-loss potential and uncover a hidden symmetry that is instrumental to the finite threshold strength. We also show that the topological edge states remain robust in the PT -symmetry-broken phase. Our predictions substantially broaden the possible experimental realizations of a PT -symmetric system.

  18. Analytical model of threshold voltage degradation due to localized charges in gate material engineered Schottky barrier cylindrical GAA MOSFETs

    NASA Astrophysics Data System (ADS)

    Kumar, Manoj; Haldar, Subhasis; Gupta, Mridula; Gupta, R. S.

    2016-10-01

    The threshold voltage degradation due to the hot carrier induced localized charges (LC) is a major reliability concern for nanoscale Schottky barrier (SB) cylindrical gate all around (GAA) metal-oxide-semiconductor field-effect transistors (MOSFETs). The degradation physics of gate material engineered (GME)-SB-GAA MOSFETs due to LC is still unexplored. An explicit threshold voltage degradation model for GME-SB-GAA-MOSFETs with the incorporation of localized charges (N_it) is developed. To accurately model the threshold voltage, the minimum channel carrier density has been taken into account. The model captures how positive and negative LC affect the device subthreshold performance. One-dimensional (1D) Poisson's and 2D Laplace equations have been solved for two different regions (fresh and damaged) with two different gate metal work-functions. LCs are considered at the drain side with the low gate metal work-function, as N_it is more vulnerable towards the drain. For the reduction of carrier mobility degradation, a lightly doped channel has been considered. The proposed model also includes the effect of barrier height lowering at the metal-semiconductor interface. The developed model results have been verified using numerical simulation data obtained by the ATLAS-3D device simulator and excellent agreement is observed between analytical and simulation results.

  19. Discontinuous non-equilibrium phase transition in a threshold Schloegl model for autocatalysis: Generic two-phase coexistence and metastability

    SciTech Connect

    Wang, Chi-Jen; Liu, Da-Jiang; Evans, James W.

    2015-04-28

    Threshold versions of Schloegl’s model on a lattice, which involve autocatalytic creation and spontaneous annihilation of particles, can provide a simple prototype for discontinuous non-equilibrium phase transitions. These models are equivalent to so-called threshold contact processes. A discontinuous transition between populated and vacuum states can occur selecting a threshold of N ≥ 2 for the minimum number, N, of neighboring particles enabling autocatalytic creation at an empty site. Fundamental open questions remain given the lack of a thermodynamic framework for analysis. For a square lattice with N = 2, we show that phase coexistence occurs not at a unique value but for a finite range of particle annihilation rate (the natural control parameter). This generic two-phase coexistence also persists when perturbing the model to allow spontaneous particle creation. Such behavior contrasts both the Gibbs phase rule for thermodynamic systems and also previous analysis for this model. We find metastability near the transition corresponding to a non-zero effective line tension, also contrasting previously suggested critical behavior. Mean-field type analysis, extended to treat spatially heterogeneous states, further elucidates model behavior.
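
    As a hedged illustration of the threshold (N ≥ 2) contact process described above, the sketch below runs a simplified random-sequential-update Monte Carlo on a periodic square lattice; the update probabilities and lattice size are arbitrary choices and do not reproduce the paper's quantitative phase boundary, only the qualitative populated-versus-vacuum behavior.

```python
# Minimal sketch of a threshold (N >= 2) contact process on a periodic square
# lattice: occupied sites annihilate with probability p_ann; empty sites with at
# least two occupied neighbors are filled with probability 1 - p_ann. Rates and
# update scheme are simplified for illustration; coverage drops toward the
# vacuum state once the annihilation probability is large enough.
import numpy as np

rng = np.random.default_rng(0)

def sweep(lattice, p_ann):
    L = lattice.shape[0]
    for _ in range(L * L):                       # one Monte Carlo sweep
        i, j = rng.integers(L, size=2)
        if lattice[i, j] == 1:
            if rng.random() < p_ann:             # spontaneous annihilation
                lattice[i, j] = 0
        else:
            occ = (lattice[(i + 1) % L, j] + lattice[(i - 1) % L, j]
                   + lattice[i, (j + 1) % L] + lattice[i, (j - 1) % L])
            if occ >= 2 and rng.random() < 1.0 - p_ann:   # threshold autocatalysis
                lattice[i, j] = 1

L = 32
for p_ann in (0.05, 0.2, 0.5):
    lat = np.ones((L, L), dtype=int)             # start from the populated state
    for _ in range(300):
        sweep(lat, p_ann)
    print("p_ann =", p_ann, "-> coverage:", round(lat.mean(), 3))
```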

  2. "Getting Stuck" in Analogue Electronics: Threshold Concepts as an Explanatory Model

    ERIC Educational Resources Information Center

    Harlow, A.; Scott, J.; Peter, M.; Cowie, B.

    2011-01-01

    Could the challenge of mastering threshold concepts be a potential factor that influences a student's decision to continue in electronics engineering? This was the question that led to a collaborative research project between educational researchers and the Faculty of Engineering in a New Zealand university. This paper deals exclusively with the…

  3. A Multinomial Model for Identifying Significant Pure-Tone Threshold Shifts

    ERIC Educational Resources Information Center

    Schlauch, Robert S.; Carney, Edward

    2007-01-01

    Purpose: Significant threshold differences on retest for pure-tone audiometry are often evaluated by application of ad hoc rules, such as a shift in a pure-tone average or in 2 adjacent frequencies that exceeds a predefined amount. Rules that are so derived do not consider the probability of observing a particular audiogram. Methods: A general…

  4. Single-Event Upset (SEU) model verification and threshold determination using heavy ions in a bipolar static RAM

    NASA Technical Reports Server (NTRS)

    Zoutendyk, J. A.; Smith, L. S.; Soli, G. A.; Thieberger, P.; Wegner, H. E.

    1985-01-01

    Single-Event Upset (SEU) response of a bipolar low-power Schottky-diode-clamped TTL static RAM has been observed using Br ions in the 100-240 MeV energy range and O ions in the 20-100 MeV range. These data complete the experimental verification of circuit-simulation SEU modeling for this device. The threshold for onset of SEU has been observed by the variation of energy, ion species and angle of incidence. The results obtained from the computer circuit-simulation modeling and experimental model verification demonstrate a viable methodology for modeling SEU in bipolar integrated circuits.

  5. A Threshold Shear Force for Calcium Influx in an Astrocyte Model of Traumatic Brain Injury

    PubMed Central

    Maneshi, Mohammad Mehdi; Sachs, Frederick

    2015-01-01

    Abstract Traumatic brain injury (TBI) refers to brain damage resulting from external mechanical force, such as a blast or crash. Our current understanding of TBI is derived mainly from in vivo studies that show measurable biological effects on neurons sampled after TBI. Little is known about the early responses of brain cells during stimuli and which features of the stimulus are most critical to cell injury. We generated defined shear stress in a microfluidic chamber using a fast pressure servo and examined the intracellular Ca2+ levels in cultured adult astrocytes. Shear stress increased intracellular Ca2+ depending on the magnitude, duration, and rise time of the stimulus. Square pulses with a fast rise time (∼2 ms) caused transient increases in intracellular Ca2+, but when the rise time was extended to 20 ms, the response was much less. The threshold for a response is a matrix of multiple parameters. Cells can integrate the effect of shear force from repeated challenges: A pulse train of 10 narrow pulses (11.5 dyn/cm2 and 10 ms wide) resulted in a 4-fold increase in Ca2+ relative to a single pulse of the same amplitude 100 ms wide. The Ca2+ increase was eliminated in Ca2+-free media, but was observed after depleting the intracellular Ca2+ stores with thapsigargin suggesting the need for a Ca2+ influx. The Ca2+ influx was inhibited by extracellular Gd3+, a nonspecific inhibitor of mechanosensitive ion channels, but it was not affected by the more specific inhibitor, GsMTx4. The voltage-gated channel blockers, nifedipine, diltiazem, and verapamil, were also ineffective. The data show that the mechanically induced Ca2+ influx commonly associated with neuron models for TBI is also present in astrocytes, and there is a viscoelastic/plastic coupling of shear stress to the Ca2+ influx. The site of Ca2+ influx has yet to be determined. PMID:25442327

  6. A new analytical threshold voltage model for symmetrical double-gate MOSFETs with high-k gate dielectrics

    NASA Astrophysics Data System (ADS)

    Chiang, T. K.; Chen, M. L.

    2007-03-01

    Based on the full two-dimensional (2D) Poisson solution in both the silicon film and the insulator layer, a compact analytical threshold voltage model, which accounts for the fringing field effect in short-channel symmetrical double-gate (SDG) MOSFETs, has been developed. Exploiting the new model, an analysis combining FIBL-enhanced short-channel effects and high-k gate dielectrics assesses their overall impact on SDG MOSFET scaling. It is found that, for the same equivalent oxide thickness, a gate insulator with a high-k dielectric constant, which has a larger characteristic length, allows less design space than SiO2 to sustain the same FIBL-induced threshold voltage degradation.

  7. Modeling of high composition AlGaN channel high electron mobility transistors with large threshold voltage

    SciTech Connect

    Bajaj, Sanyam; Hung, Ting-Hsiang; Akyol, Fatih; Nath, Digbijoy; Rajan, Siddharth

    2014-12-29

    We report on the potential of high electron mobility transistors (HEMTs) consisting of high composition AlGaN channel and barrier layers for power switching applications. Detailed two-dimensional (2D) simulations show that threshold voltages in excess of 3 V can be achieved through the use of AlGaN channel layers. We also calculate the 2D electron gas mobility in AlGaN channel HEMTs and evaluate their power figures of merit as a function of device operating temperature and Al mole fraction in the channel. Our models show that power switching transistors with AlGaN channels would have comparable on-resistance to GaN-channel based transistors for the same operation voltage. The modeling in this paper shows the potential of high composition AlGaN as a channel material for future high threshold enhancement mode transistors.

  9. Precise determination of the critical percolation threshold for the three-dimensional ``Swiss cheese'' model using a growth algorithm

    NASA Astrophysics Data System (ADS)

    Lorenz, Christian D.; Ziff, Robert M.

    2001-02-01

    Precise values for the critical threshold for the three-dimensional "Swiss cheese" continuum percolation model have been calculated using extensive Monte Carlo simulations. These simulations used a growth algorithm and memory blocking scheme similar to what we used previously in three-dimensional lattice percolation. The simulations yield a value for the critical number density n_c = 0.652960 ± 0.000005, which confirms recent work but extends the precision by two significant figures.

  10. Cross-matching: a modified cross-correlation underlying threshold energy model and match-based depth perception.

    PubMed

    Doi, Takahiro; Fujita, Ichiro

    2014-01-01

    Three-dimensional visual perception requires correct matching of images projected to the left and right eyes. The matching process is faced with an ambiguity: part of one eye's image can be matched to multiple parts of the other eye's image. This stereo correspondence problem is complicated for random-dot stereograms (RDSs), because dots with an identical appearance produce numerous potential matches. Despite such complexity, human subjects can perceive a coherent depth structure. A coherent solution to the correspondence problem does not exist for anticorrelated RDSs (aRDSs), in which luminance contrast is reversed in one eye. Neurons in the visual cortex reduce disparity selectivity for aRDSs progressively along the visual processing hierarchy. A disparity-energy model followed by threshold nonlinearity (threshold energy model) can account for this reduction, providing a possible mechanism for the neural matching process. However, the essential computation underlying the threshold energy model is not clear. Here, we propose that a nonlinear modification of cross-correlation, which we term "cross-matching," represents the essence of the threshold energy model. We placed half-wave rectification within the cross-correlation of the left-eye and right-eye images. The disparity tuning derived from cross-matching was attenuated for aRDSs. We simulated a psychometric curve as a function of graded anticorrelation (graded mixture of aRDS and normal RDS); this simulated curve reproduced the match-based psychometric function observed in human near/far discrimination. The dot density was 25% for both simulation and observation. We predicted that as the dot density increased, the performance for aRDSs should decrease below chance (i.e., reversed depth), and the level of anticorrelation that nullifies depth perception should also decrease. We suggest that cross-matching serves as a simple computation underlying the match-based disparity signals in stereoscopic depth perception.
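
    The following sketch is one illustrative reading of the "cross-matching" computation (not the authors' full model): it compares plain cross-correlation with a version in which the pointwise left-right product is half-wave rectified before summation, using synthetic one-dimensional random-dot patterns.

```python
# Illustrative reading of "cross-matching": ordinary cross-correlation sums the
# pointwise product of left- and right-eye patches at each candidate disparity,
# while cross-matching half-wave rectifies that product before summing, so only
# same-sign (matched) pixel pairs contribute. For an anticorrelated stereogram
# the correlation peak inverts, whereas the cross-matching response is merely
# attenuated. Patch sizes and disparities are arbitrary.
import numpy as np

rng = np.random.default_rng(1)

def disparity_tuning(left, right, max_d, rectify):
    values = []
    for d in range(-max_d, max_d + 1):
        product = left * np.roll(right, -d)        # compare left(x) with right(x + d)
        if rectify:
            product = np.maximum(product, 0.0)     # half-wave rectification
        values.append(product.sum())
    return np.array(values)

n, true_d, max_d = 4096, 5, 10
left = rng.standard_normal(n)
right = np.roll(left, true_d)                      # correlated random-dot pair, disparity 5
anti = -right                                      # anticorrelated pair

for name, img in (("correlated", right), ("anticorrelated", anti)):
    cc = disparity_tuning(left, img, max_d, rectify=False)
    cm = disparity_tuning(left, img, max_d, rectify=True)
    at_true = max_d + true_d                       # index of the true disparity
    print(f"{name:15s} correlation={cc[at_true]:9.1f}  cross-matching={cm[at_true]:9.1f}")
```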

  11. Combining physiological threshold knowledge to species distribution models is key to improving forecasts of the future niche for macroalgae.

    PubMed

    Martínez, Brezo; Arenas, Francisco; Trilla, Alba; Viejo, Rosa M; Carreño, Francisco

    2015-04-01

    Species distribution models (SDM) are a useful tool for predicting species range shifts in response to global warming. However, they do not explore the mechanisms underlying biological processes, making it difficult to predict shifts outside the environmental gradient where the model was trained. In this study, we combine correlative SDMs and knowledge on physiological limits to provide more robust predictions. The thermal thresholds obtained in growth and survival experiments were used as proxies of the fundamental niches of two foundational marine macrophytes. The geographic projections of these species' distributions obtained using these thresholds and existing SDMs were similar in areas where the species are either absent-rare or frequent and where their potential and realized niches match, reaching consensus predictions. The cold-temperate foundational seaweed Himanthalia elongata was predicted to become extinct at its southern limit in northern Spain in response to global warming, whereas the occupancy of southern-lusitanic Bifurcaria bifurcata was expected to increase. Combined approaches such as this one may also highlight geographic areas where models disagree potentially due to biotic factors. Physiological thresholds alone tended to over-predict species prevalence, as they cannot identify absences in climatic conditions within the species' range of physiological tolerance or at the optima. Although SDMs tended to have higher sensitivity than threshold models, they may include regressions that do not reflect causal mechanisms, constraining their predictive power. We present a simple example of how combining correlative and mechanistic knowledge provides a rapid way to gain insight into a species' niche resulting in consistent predictions and highlighting potential sources of uncertainty in forecasted responses to climate change. PMID:24917488
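
    A minimal sketch of the combined approach, under invented numbers: a toy logistic SDM gives a probability of occurrence from sea-surface temperature predictors, an experimentally motivated thermal limit defines the fundamental niche, and presence is predicted only where the two agree. The coefficients and the 22 °C limit are placeholders, not values from the study.

```python
# Hedged sketch of combining a correlative SDM with a physiological threshold:
# the SDM gives a probability of occurrence, the thermal limit filters out cells
# outside the fundamental niche, and a consensus map keeps presence only where
# both agree. Coefficients and the 22 degC survival limit are invented.
import numpy as np

def sdm_probability(sst_summer, sst_winter):
    # Toy correlative model (hypothetical coefficients, not fitted to data).
    z = 4.0 - 0.35 * sst_summer + 0.10 * sst_winter
    return 1.0 / (1.0 + np.exp(-z))

def physiological_presence(sst_summer, lethal_max=22.0):
    # Fundamental-niche filter from a growth/survival experiment threshold.
    return sst_summer < lethal_max

sst_summer = np.array([14.0, 17.0, 20.0, 23.0, 26.0])   # hypothetical grid cells
sst_winter = np.array([9.0, 11.0, 12.0, 14.0, 16.0])

p_sdm = sdm_probability(sst_summer, sst_winter)
realized = p_sdm > 0.5
fundamental = physiological_presence(sst_summer)
consensus = realized & fundamental

for row in zip(sst_summer, p_sdm.round(2), realized, fundamental, consensus):
    print(row)
```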

  12. Revisiting the Economic Injury Level and Economic Threshold Model for Potato Leafhopper (Hemiptera: Cicadellidae) in Alfalfa.

    PubMed

    Chasen, Elissa M; Undersander, Dan J; Cullen, Eileen M

    2015-08-01

    The economic injury level for potato leafhopper, Empoasca fabae (Harris), in alfalfa (Medicago sativa L.) was developed over 30 yr ago. In response to increasing market value of alfalfa, farmers and consultants are interested in reducing the economic threshold for potato leafhopper in alfalfa. To address this question, caged field trials were established on two consecutive potato leafhopper susceptible crops in 2013. Field cages were infested with a range of potato leafhopper densities to create a linear regression of alfalfa yield response. The slopes, or yield loss per insect, for the linear regressions of both trials were used to calculate an economic injury level for a range of current alfalfa market values and control costs. This yield-loss relationship is the first quantification that could be used to help assess whether the economic threshold should be lowered, given the increased market value of alfalfa.
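
    A hedged sketch of the economic injury level calculation implied above: with b the yield loss per leafhopper taken from the regression slope, V the market value of alfalfa, and C the control cost, the break-even density is EIL = C / (V · b), optionally scaled by a control-efficacy factor. The numbers below are placeholders, not the study's values.

```python
# Hedged sketch of the economic injury level (EIL) implied by the record above:
# the density at which control cost equals the avoided yield loss. All numbers
# are hypothetical placeholders.
def economic_injury_level(control_cost, market_value, loss_per_insect,
                          control_efficacy=1.0):
    return control_cost / (market_value * loss_per_insect * control_efficacy)

loss_per_insect = 0.012                     # hypothetical slope: tons/ha per insect
for market_value in (100.0, 200.0, 300.0):  # $/ton
    for control_cost in (20.0, 40.0):       # $/ha
        eil = economic_injury_level(control_cost, market_value, loss_per_insect)
        print(f"V=${market_value}/t, C=${control_cost}/ha -> EIL = {eil:.1f} insects")
```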

  13. Holes in the Bathtub: Water Table Dependent Services and Threshold Behavior in an Economic Model of Groundwater Extraction

    NASA Astrophysics Data System (ADS)

    Kirk-lawlor, N. E.; Edwards, E. C.

    2012-12-01

    In many groundwater systems, the height of the water table must be above certain thresholds for some types of surface flow to exist. Examples of flows that depend on water table elevation include groundwater baseflow to river systems, groundwater flow to wetland systems, and flow to springs. Meeting many of the goals of sustainable water resource management requires maintaining these flows at certain rates. Water resource management decisions invariably involve weighing tradeoffs between different possible usage regimes and the economic consequences of potential management choices are an important factor in these tradeoffs. Policies based on sustainability may have a social cost from forgoing present income. This loss of income may be worth bearing, but should be well understood and carefully considered. Traditionally, the economic theory of groundwater exploitation has relied on the assumption of a single-cell or "bathtub" aquifer model, which offers a simple means to examine complex interactions between water user and hydrologic system behavior. However, such a model assumes a closed system and does not allow for the simulation of groundwater outflows that depend on water table elevation (e.g. baseflow, springs, wetlands), even though those outflows have value. We modify the traditional single-cell aquifer model by allowing for outflows when the water table is above certain threshold elevations. These thresholds behave similarly to holes in a bathtub, where the outflow is a positive function of the height of the water table above the threshold and the outflow is lost when the water table drops below the threshold. We find important economic consequences to this representation of the groundwater system. The economic value of services provided by threshold-dependent outflows (including non-market value), such as ecosystem services, can be incorporated. The value of services provided by these flows may warrant maintaining the water table at higher levels than would
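
    The "holes in the bathtub" idea lends itself to a very small sketch: a single-cell aquifer whose storage changes with recharge, pumping, and an extra outflow that exists only while the water table sits above a threshold elevation. Parameter values are arbitrary and purely illustrative.

```python
# Hedged sketch of a modified single-cell ("bathtub") aquifer: storage changes
# with recharge, pumping, and a threshold-dependent outflow (spring/baseflow)
# that exists only while the water table h exceeds the threshold elevation h_T.
# All parameter values are arbitrary.
def simulate(h0, h_T, recharge, pumping, k_out=0.08, S=0.2, years=50):
    """Water-table height h (m) in a unit-area cell with specific yield S."""
    h = h0
    trajectory = []
    for _ in range(years):
        outflow = k_out * max(h - h_T, 0.0)    # the "hole in the bathtub"
        h = max(h + (recharge - pumping - outflow) / S, 0.0)
        trajectory.append(h)
    return trajectory

low_pumping = simulate(h0=30.0, h_T=25.0, recharge=0.5, pumping=0.3)
high_pumping = simulate(h0=30.0, h_T=25.0, recharge=0.5, pumping=0.8)
print("final head, low pumping: ", round(low_pumping[-1], 1), "m")
print("final head, high pumping:", round(high_pumping[-1], 1), "m")
print("threshold-dependent flow lost under high pumping?", high_pumping[-1] < 25.0)
```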

  14. Application of physiologically-based toxicokinetic modelling in oral-to-dermal extrapolation of threshold doses of cosmetic ingredients.

    PubMed

    Gajewska, M; Worth, A; Urani, C; Briesen, H; Schramm, K-W

    2014-06-16

    The application of physiologically based toxicokinetic (PBTK) modelling in route-to-route (RtR) extrapolation of three cosmetic ingredients: coumarin, hydroquinone and caffeine is shown in this study. In particular, the oral no-observed-adverse-effect-level (NOAEL) doses of these chemicals are extrapolated to their corresponding dermal values by comparing the internal concentrations resulting from oral and dermal exposure scenarios. The PBTK model structure has been constructed to give a good simulation performance of biochemical processes within the human body. The model parameters are calibrated based on oral and dermal experimental data for the Caucasian population available in the literature. Particular attention is given to modelling the absorption stage (skin and gastrointestinal tract) in the form of several sub-compartments. This gives better model prediction results when compared to those of a PBTK model with a simpler structure of the absorption barrier. In addition, the role of quantitative structure-property relationships (QSPRs) in predicting skin penetration is evaluated for the three substances with a view to incorporating QSPR-predicted penetration parameters in the PBTK model when experimental values are lacking. Finally, PBTK modelling is used, first to extrapolate oral NOAEL doses derived from rat studies to humans, and then to simulate internal systemic/liver concentrations - Area Under Curve (AUC) and peak concentration - resulting from specified dermal and oral exposure conditions. Based on these simulations, AUC-based dermal thresholds for the three case study compounds are derived and compared with the experimentally obtained oral threshold (NOAEL) values.

  15. Knickpoint Generation and Persistence Following Base-Level Fall: An Examination of Erosional Thresholds in Sediment Flux Dependent Erosion Models

    NASA Astrophysics Data System (ADS)

    Crosby, B. T.; Whipple, K. X.; Gasparini, N. M.; Wobus, C. W.

    2005-12-01

    Non-lithologic knickpoints, or discrete convexities in longitudinal river profiles, are commonly considered to be the mobile, upstream extent of a transient incisional signal. Downstream of the knickpoint, the landscape is responding to a recent change in base level, uplift rate or climatic condition, while upstream of the knickpoint, the landscape retains its relict form, relatively ignorant of the transient signal. Though this model of knickpoint mobility and the capacity of knickpoints to communicate incisional signals throughout basins works well with standard formulations of the stream power erosion model, recently developed sediment-flux-dependent erosion models contain explicit thresholds that limit the upstream extent of knickpoint-mediated fluvial adjustment. Sediment-flux-dependent erosion models fail to communicate incisional signals at small drainage areas as sediment and water discharges are insufficient to effectively erode the bed. As well, if knickpoint slopes increase beyond a threshold value, sediment impacts against the bed become too infrequent and too oblique to continue knickpoint propagation by fluvial mechanisms. This threshold in fluvial erosion could lead to the stagnation of incisional signals and the generation of hanging valleys. This theoretical expectation aligns with our observation that in numerous actively incising landscapes around the world, relict low-drainage-area basins are often found elevated high above and disconnected from the mainstem by extremely over-steepened channel reaches often composed of one or more near-vertical steps. In order to better understand how river networks respond during transient pulses of incision, we employ a numerical landscape evolution model (CHILD) to test the sensitivity of three different sediment-flux-dependent erosion models to different base-level fall scenarios. This technique allows us to observe the propagation of the signal throughout a fluvial network composed of tributaries of variable

  16. Spatiotemporal and Spatial Threshold Models for Relating UV Exposures and Skin Cancer in the Central United States.

    PubMed

    Hatfield, Laura A; Hoffbeck, Richard W; Alexander, Bruce H; Carlin, Bradley P

    2009-06-15

    The exact mechanisms relating exposure to ultraviolet (UV) radiation and elevated risk of skin cancer remain the subject of debate. For example, there is disagreement on whether the main risk factor is duration of the exposure, its intensity, or some combination of both. There is also uncertainty regarding the form of the dose-response curve, with many authors believing only exposures exceeding a given (but unknown) threshold are important. In this paper we explore methods to estimate such thresholds using hierarchical spatial logistic models based on a sample of a cohort of x-ray technologists for whom we have self-reports of time spent in the sun and numbers of blistering sunburns in childhood. A preliminary goal is to explore the temporal pattern of UV exposure and its gradient. Changes here would imply that identical exposure self-reports from different calendar years may correspond to differing cancer risks. PMID:20161236

  18. Deviation from threshold model in ultrafast laser ablation of graphene at sub-micron scale

    SciTech Connect

    Gil-Villalba, A.; Xie, C.; Salut, R.; Furfaro, L.; Giust, R.; Jacquot, M.; Lacourt, P. A.; Dudley, J. M.; Courvoisier, F.

    2015-08-10

    We investigate a method to measure ultrafast laser ablation threshold with respect to spot size. We use structured complex beams to generate a pattern of craters in CVD graphene with a single laser pulse. A direct comparison between beam profile and SEM characterization allows us to determine the dependence of ablation probability on spot-size, for crater diameters ranging between 700 nm and 2.5 μm. We report a drastic decrease of ablation probability when the crater diameter is below 1 μm which we interpret in terms of free-carrier diffusion.

  19. Conditions and threshold for magma transfer in the layered upper crust: Insights from experimental models

    NASA Astrophysics Data System (ADS)

    Ritter, Malte C.; Acocella, Valerio; Ruch, Joel; Philipp, Sonja L.

    2013-12-01

    Magma transfer, i.e., dike propagation, is partly controlled by Young's modulus (elasticity) contrasts (ratio of upper layer to lower layer modulus) in the host rock. Here we try to better constrain the elasticity contrasts controlling the propagation velocity of dikes and their arrest. We simulate dike propagation in layered elastic media with different elasticity contrasts. Salted gelatin and water represent host rock and magma, respectively. For common density ratios between magma and host rock (~1.1), velocity variations are observed, and a critical threshold in the elasticity contrast between layers is found at a Young's modulus ratio of 2.1 ± 0.6. Naturally occurring elasticity contrasts can be much higher than this experimental threshold, suggesting that dike arrest due to heterogeneous elastic host rock properties is more frequent than expected. Examples of recently deflected or stalled dikes inside volcanoes and the common presence of high-velocity bodies below volcanoes suggest that better defining elasticity contrasts below volcanoes helps in forecasting eruptions.

  20. Cavitation thresholds of contrast agents in an in vitro human clot model exposed to 120-kHz ultrasound

    PubMed Central

    Gruber, Matthew J.; Bader, Kenneth B.; Holland, Christy K.

    2014-01-01

    Ultrasound contrast agents (UCAs) can be employed to nucleate cavitation to achieve desired bioeffects, such as thrombolysis, in therapeutic ultrasound applications. Effective methods of enhancing thrombolysis with ultrasound have been examined at low frequencies (<1 MHz) and low amplitudes (<0.5 MPa). The objective of this study was to determine cavitation thresholds for two UCAs exposed to 120-kHz ultrasound. A commercial ultrasound contrast agent (Definity®) and echogenic liposomes were investigated to determine the acoustic pressure threshold for ultraharmonic (UH) and broadband (BB) generation using an in vitro flow model perfused with human plasma. Cavitation emissions were detected using two passive receivers over a narrow frequency bandwidth (540–900 kHz) and a broad frequency bandwidth (0.54–1.74 MHz). UH and BB cavitation thresholds occurred at the same acoustic pressure (0.3 ± 0.1 MPa, peak to peak) and were found to depend on the sensitivity of the cavitation detector but not on the nucleating contrast agent or ultrasound duty cycle. PMID:25234874

  1. Double Photoionization Near Threshold

    NASA Technical Reports Server (NTRS)

    Wehlitz, Ralf

    2007-01-01

    The threshold region of the double-photoionization cross section is of particular interest because both ejected electrons move slowly in the Coulomb field of the residual ion. Near threshold both electrons have time to interact with each other and with the residual ion. Also, different theoretical models compete to describe the double-photoionization cross section in the threshold region. We have investigated that cross section for lithium and beryllium and have analyzed our data with respect to the latest results in the Coulomb-dipole theory. We find that our data support the idea of a Coulomb-dipole interaction.

  2. Ground-water vulnerability to nitrate contamination at multiple thresholds in the mid-Atlantic region using spatial probability models

    USGS Publications Warehouse

    Greene, Earl A.; LaMotte, Andrew E.; Cullinan, Kerri-Ann

    2005-01-01

    The U.S. Geological Survey, in cooperation with the U.S. Environmental Protection Agency's Regional Vulnerability Assessment Program, has developed a set of statistical tools to support regional-scale, ground-water quality and vulnerability assessments. The Regional Vulnerability Assessment Program's goals are to develop and demonstrate approaches to comprehensive, regional-scale assessments that effectively inform managers and decision-makers as to the magnitude, extent, distribution, and uncertainty of current and anticipated environmental risks. The U.S. Geological Survey is developing and exploring the use of statistical probability models to characterize the relation between ground-water quality and geographic factors in the Mid-Atlantic Region. Available water-quality data obtained from U.S. Geological Survey National Water-Quality Assessment Program studies conducted in the Mid-Atlantic Region were used in association with geographic data (land cover, geology, soils, and others) to develop logistic-regression equations that use explanatory variables to predict the presence of a selected water-quality parameter exceeding a specified management concentration threshold. The resulting logistic-regression equations were transformed to determine the probability, P(X), of a water-quality parameter exceeding a specified management threshold. Additional statistical procedures modified by the U.S. Geological Survey were used to compare the observed values to model-predicted values at each sample point. In addition, procedures to evaluate the confidence of the model predictions and estimate the uncertainty of the probability value were developed and applied. The resulting logistic-regression models were applied to the Mid-Atlantic Region to predict the spatial probability of nitrate concentrations exceeding specified management thresholds. These thresholds are usually set or established by regulators or managers at National or local levels. At management thresholds of
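
    A minimal sketch of the kind of logistic-regression exceedance model described above, with invented coefficients (the real USGS equations and predictors are not reproduced here): the linear predictor is passed through the logistic function to give P(X), the probability that nitrate exceeds a management threshold in a given area.

```python
# Hedged sketch of a logistic-regression exceedance probability. The predictor
# names and coefficients below are hypothetical placeholders, not the USGS model.
import math

def exceedance_probability(pct_agriculture, pct_well_drained_soil, carbonate_geology):
    # P(X) = exp(z) / (1 + exp(z)) with a hypothetical linear predictor z.
    z = (-4.0
         + 0.05 * pct_agriculture
         + 0.03 * pct_well_drained_soil
         + 0.8 * carbonate_geology)
    return math.exp(z) / (1.0 + math.exp(z))

print(exceedance_probability(pct_agriculture=70, pct_well_drained_soil=60, carbonate_geology=1))
print(exceedance_probability(pct_agriculture=10, pct_well_drained_soil=20, carbonate_geology=0))
```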

  3. [Tremendous Human, Social, and Economic Losses Caused by Obstinate Application of the Failed Linear No-threshold Model].

    PubMed

    Sutou, Shizuyo

    2015-01-01

    The linear no-threshold model (LNT) was recommended in 1956, with abandonment of the traditional threshold dose-response for genetic risk assessment. Adoption of LNT by the International Commission on Radiological Protection (ICRP) became the standard for radiation regulation worldwide. The ICRP recommends a dose limit of 1 mSv/year for the public, which is too low and which terrorizes innocent people. Indeed, LNT arose mainly from the lifespan survivor study (LSS) of atomic bomb survivors. The LSS, which asserts linear dose-response and no threshold, is challenged mainly on three points. 1) Radiation doses were underestimated by half because of disregard for major residual radiation, resulting in cancer risk overestimation. 2) The dose and dose-rate effectiveness factor (DDREF) of 2 is used, but the actual DDREF is estimated as 16, resulting in cancer risk overestimation by several times. 3) Adaptive response (hormesis) is observed in leukemia and solid cancer cases, consistently contradicting the linearity of LNT. Drastic reduction of cancer risk moves the dose-response curve close to the control line, allowing the setting of a threshold. Living organisms have been evolving for 3.8 billion years under radiation exposure, naturally acquiring various defense mechanisms such as DNA repair mechanisms, apoptosis, and immune response. The failure of LNT lies in the neglect of carcinogenesis and these biological mechanisms. Obstinate application of LNT continues to cause tremendous human, social, and economic losses. The 60-year-old LNT must be rejected to establish a new scientific knowledge-based system. PMID:26521869

  4. An animal model for the analysis of cochlear blood flow [corrected] disturbance and hearing threshold in vivo.

    PubMed

    Canis, Martin; Arpornchayanon, Warangkana; Messmer, Catalina; Suckfuell, Markus; Olzowy, Bernhard; Strieth, Sebastian

    2010-02-01

    Impairment of cochlear blood flow (CBF) is considered to be important in inner ear pathology. However, direct measurement of CBF is difficult and has not been investigated in combination with hearing function. Six guinea pigs were used to show feasibility of an animal model for the analysis of cochlear microcirculation by intravital microscopy in combination with investigation of the hearing threshold by brainstem response audiometry (ABR). By the application of sodium nitroprusside (SNP), CBF was increased over 30 min. Reproducibility of measurements was shown by retest measurements. Mean baseline velocity of CBF was 109 ± 19 µm/s. Vessel diameters had a mean value of 9.4 ± 2.7 µm. Mean hearing threshold was 19 ± 6 dB. In response to SNP, CBF velocity increased significantly to 161 ± 26 µm/s. Mean arterial pressure decreased significantly to 36 ± 11 mmHg. After the end of the application, CBF velocity recovered to a minimum of 123 ± 17 µm/s. Within the retest, CBF velocity significantly increased to a maximum of 160 ± 31 µm/s. Second recovery of CBF velocity was 125 ± 14 µm/s. Within the second retest, CBF increased significantly to 157 ± 25 µm/s. ABR thresholds did not change significantly. The increase in blood flow velocity occurred in spite of substantial hypotension as induced by a vasodilator. This may explain the fact that ABR threshold remained unchanged reflecting a maintained blood supply in this part of the brain. This technique can be used to evaluate effects of treatments aimed at cochlear microcirculation in inner ear pathologies.

  5. Using Hierarchical Cluster Models to Systematically Identify Groups of Jobs With Similar Occupational Questionnaire Response Patterns to Assist Rule-Based Expert Exposure Assessment in Population-Based Studies

    PubMed Central

    Friesen, Melissa C.; Shortreed, Susan M.; Wheeler, David C.; Burstyn, Igor; Vermeulen, Roel; Pronk, Anjoeka; Colt, Joanne S.; Baris, Dalsu; Karagas, Margaret R.; Schwenn, Molly; Johnson, Alison; Armenti, Karla R.; Silverman, Debra T.; Yu, Kai

    2015-01-01

    Objectives: Rule-based expert exposure assessment based on questionnaire response patterns in population-based studies improves the transparency of the decisions. The number of unique response patterns, however, can be nearly equal to the number of jobs. An expert may reduce the number of patterns that need assessment using expert opinion, but each expert may identify different patterns of responses that identify an exposure scenario. Here, hierarchical clustering methods are proposed as a systematic data reduction step to reproducibly identify similar questionnaire response patterns prior to obtaining expert estimates. As a proof-of-concept, we used hierarchical clustering methods to identify groups of jobs (clusters) with similar responses to diesel exhaust-related questions and then evaluated whether the jobs within a cluster had similar (previously assessed) estimates of occupational diesel exhaust exposure. Methods: Using the New England Bladder Cancer Study as a case study, we applied hierarchical cluster models to the diesel-related variables extracted from the occupational history and job- and industry-specific questionnaires (modules). Cluster models were separately developed for two subsets: (i) 5395 jobs with ≥1 variable extracted from the occupational history indicating a potential diesel exposure scenario, but without a module with diesel-related questions; and (ii) 5929 jobs with both occupational history and module responses to diesel-relevant questions. For each subset, we varied the numbers of clusters extracted from the cluster tree developed for each model from 100 to 1000 groups of jobs. Using previously made estimates of the probability (ordinal), intensity (µg m−3 respirable elemental carbon), and frequency (hours per week) of occupational exposure to diesel exhaust, we examined the similarity of the exposure estimates for jobs within the same cluster in two ways. First, the clusters’ homogeneity (defined as >75% with the same estimate
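
    A small sketch of the clustering step under stated assumptions: jobs are represented by binary diesel-related questionnaire responses, pairwise Hamming distances feed an average-linkage hierarchical tree, and the tree is cut at several cluster counts. The toy response matrix is random; the study's actual variables, linkage choices, and cluster counts may differ.

```python
# Hedged sketch of the data-reduction step described above: group jobs with
# similar questionnaire response patterns by hierarchical clustering, then cut
# the tree at a chosen number of clusters before expert review. The toy 0/1
# response matrix is invented.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(42)
n_jobs, n_questions = 200, 12
responses = rng.integers(0, 2, size=(n_jobs, n_questions))   # binary answers

# Average linkage on pairwise Hamming distances between response patterns.
dist = pdist(responses, metric="hamming")
tree = linkage(dist, method="average")

for n_clusters in (10, 25, 50):
    labels = fcluster(tree, t=n_clusters, criterion="maxclust")
    sizes = np.bincount(labels)[1:]
    print(n_clusters, "clusters; largest cluster size:", sizes.max())
```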

  6. SEMICONDUCTOR DEVICES: Two-dimensional threshold voltage analytical model of DMG strained-silicon-on-insulator MOSFETs

    NASA Astrophysics Data System (ADS)

    Jin, Li; Hongxia, Liu; Bin, Li; Lei, Cao; Bo, Yuan

    2010-08-01

    For the first time, a simple and accurate two-dimensional analytical model for the surface potential variation along the channel in fully depleted dual-material gate strained-Si-on-insulator (DMG SSOI) MOSFETs is developed. We investigate the improved short channel effect (SCE), hot carrier effect (HCE), drain-induced barrier-lowering (DIBL) and carrier transport efficiency for the novel structure MOSFET. The analytical model takes into account the effects of different metal gate lengths, work functions, the drain bias and Ge mole fraction in the relaxed SiGe buffer. The surface potential in the channel region exhibits a step potential, which can suppress SCE, HCE and DIBL. Also, strained-Si and SOI structure can improve the carrier transport efficiency, with strained-Si being particularly effective. Further, the threshold voltage model correctly predicts a "rollup" in threshold voltage with decreasing channel length ratios or Ge mole fraction in the relaxed SiGe buffer. The validity of the two-dimensional analytical model is verified using numerical simulations.

  7. Applications of threshold models and the weighted bootstrap for Hungarian precipitation data

    NASA Astrophysics Data System (ADS)

    Varga, László; Rakonczai, Pál; Zempléni, András

    2016-05-01

    This paper presents applications of the peaks-over-threshold methodology for both the univariate and the recently introduced bivariate case, combined with a novel bootstrap approach. We compare the proposed bootstrap methods to the more traditional profile likelihood. We have investigated 63 years of the European Climate Assessment daily precipitation data for five Hungarian grid points, first separately for the summer and winter months, then aiming at the detection of possible changes by investigating 20 years moving windows. We show that significant changes can be observed both in the univariate and the bivariate cases, the most recent period being the most dangerous in several cases, as some return values have increased substantially. We illustrate these effects by bivariate coverage regions.
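
    A hedged peaks-over-threshold sketch in the spirit of the record: exceedances of a synthetic daily precipitation series over a high threshold are fitted with a generalized Pareto distribution, and a plain nonparametric bootstrap (a stand-in for the paper's weighted bootstrap) gives an uncertainty interval for a 50-year return level.

```python
# Illustrative peaks-over-threshold sketch: fit a generalized Pareto distribution
# to threshold excesses and bootstrap a return level. The gamma-distributed
# synthetic series and the plain bootstrap are stand-ins for the paper's data
# and weighted-bootstrap method.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
precip = rng.gamma(shape=0.8, scale=6.0, size=63 * 365)      # synthetic daily series (mm)

u = np.quantile(precip, 0.98)                                # high threshold
excesses = precip[precip > u] - u
rate = excesses.size / precip.size                           # exceedances per day

def return_level(exc, years=50):
    shape, _, scale = stats.genpareto.fit(exc, floc=0.0)
    # Level exceeded on average once in 'years': GPD quantile at
    # 1 - 1 / (expected number of exceedances over that horizon).
    m = years * 365 * rate
    return u + stats.genpareto.ppf(1.0 - 1.0 / m, shape, loc=0.0, scale=scale)

point = return_level(excesses)
boot = [return_level(rng.choice(excesses, excesses.size, replace=True))
        for _ in range(100)]
print("50-year return level:", round(point, 1), "mm")
print("bootstrap 90% interval:", np.percentile(boot, [5, 95]).round(1), "mm")
```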

  8. Modeling of damage generation mechanisms in silicon at energies below the displacement threshold

    SciTech Connect

    Santos, Ivan; Marques, Luis A.; Pelaz, Lourdes

    2006-11-01

    We have used molecular dynamics simulation techniques to study the generation of damage in Si within the low-energy deposition regime. We have demonstrated that energy transfers below the displacement threshold can produce a significant amount of damage, usually neglected in traditional radiation damage calculations. The formation of amorphous pockets agrees with the thermal spike concept of local melting. However, we have found that the order-disorder transition is not instantaneous, but it requires some time to reach the appropriate kinetic-potential energy redistribution for melting. The competition between the rate of this energy redistribution and the energy diffusion to the surrounding atoms determines the amount of damage generated by a given deposited energy. Our findings explain the diverse damage morphology produced by ions of different masses.

  9. Fire in a Changing Climate: Stochastic versus Threshold-constrained Ignitions in a Dynamic Global Vegetation Model

    NASA Astrophysics Data System (ADS)

    Sheehan, T.; Bachelet, D. M.; Ferschweiler, K.

    2015-12-01

    The MC2 dynamic global vegetation model fire module simulates fire occurrence, area burned, and fire impacts including mortality, biomass burned, and nitrogen volatilization. Fire occurrence is based on fuel load levels and vegetation-specific thresholds for three calculated fire weather indices: fine fuel moisture code (FFMC) for the moisture content of fine fuels; build-up index (BUI) for the total amount of fuel available for combustion; and energy release component (ERC) for the total energy available to fire. Ignitions are assumed (i.e. the probability of an ignition source is 1). The model is run with gridded inputs and the fraction of each grid cell burned is limited by a vegetation-specific fire return period (FRP) and the number of years since the last fire occurred in the grid cell. One consequence of assumed ignitions and the FRP constraint is that similar fire behavior can take place over large areas with identical vegetation type. In regions where thresholds are often exceeded, fires occur frequently (annually in some instances) with a very low fraction of a cell burned. In areas where fire is infrequent, a single hot, dry climate event can result in intense fire over a large region. Both cases can potentially result in large areas with uniform vegetation type and age. To better reflect realistic fire occurrence, we have developed a stochastic fire occurrence model that: a) uses a map of relative ignition probability and a multiplier to alter overall ignition occurrence; b) adjusts the original fixed fire thresholds with ignition success probabilities based on fire weather indices; and c) calculates spread by using a probability based on slope and wind direction. A Monte Carlo method is used with all three algorithms to determine occurrence. The new stochastic ignition approach yields more variety in fire intensity, a smaller annual total of cells burned, and patchier vegetation.
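
    As a hedged illustration of the stochastic ignition idea (not MC2's calibrated scheme), the sketch below draws ignition attempts from a relative ignition-probability map and lets ignition success rise smoothly with the energy release component instead of switching at a hard threshold; the curves and constants are invented.

```python
# Hedged sketch of stochastic, probability-based ignition: each cell draws an
# ignition attempt from a relative ignition-probability map, and success rises
# smoothly with the energy release component (ERC) rather than switching at a
# hard threshold. All probability curves and constants are illustrative.
import numpy as np

rng = np.random.default_rng(3)

def ignition_success_probability(erc, erc_mid=60.0, steepness=0.15):
    # Smooth replacement for a hard ERC threshold.
    return 1.0 / (1.0 + np.exp(-steepness * (erc - erc_mid)))

def annual_fires(relative_ignition, erc, ignition_multiplier=0.02):
    attempts = rng.random(relative_ignition.shape) < relative_ignition * ignition_multiplier
    success = rng.random(erc.shape) < ignition_success_probability(erc)
    return attempts & success

n_cells = 10_000
relative_ignition = rng.random(n_cells)          # e.g. lightning/road-proximity climatology
erc = rng.normal(55.0, 15.0, n_cells)            # fire-weather index per cell

burned = annual_fires(relative_ignition, erc)
print("cells burned this year:", int(burned.sum()), "of", n_cells)
```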

  10. A continuum model with a percolation threshold and tunneling-assisted interfacial conductivity for carbon nanotube-based nanocomposites

    SciTech Connect

    Wang, Yang; Weng, George J.; Meguid, Shaker A.; Hamouda, Abdel Magid

    2014-05-21

    A continuum model that possesses several desirable features of the electrical conduction process in carbon-nanotube (CNT) based nanocomposites is developed. Three basic elements are included: (i) percolation threshold, (ii) interface effects, and (iii) tunneling-assisted interfacial conductivity. We approach the first one through the selection of an effective medium theory. We approach the second one by the introduction of a diminishing layer of interface with an interfacial conductivity to build a 'thinly coated' CNT. The third one is introduced through the observation that interface conductivity can be enhanced by electron tunneling which in turn can be facilitated with the formation of CNT networks. We treat this last issue in a continuum fashion by taking the network formation as a statistical process that can be represented by Cauchy's probability density function. The outcome is a simple and yet widely useful model that can simultaneously capture all these fundamental characteristics. It is demonstrated that, without considering the interface effect, the predicted conductivity would be too high, and that, without accounting for the additional contribution from the tunneling-assisted interfacial conductivity, the predicted conductivity beyond the percolation threshold would be too low. It is with the consideration of all three elements that the theory can fully account for the experimentally measured data. We further use the developed model to demonstrate that, despite the anisotropy of the intrinsic CNT conductivity, it is its axial component along the CNT direction that dominates the overall conductivity. The theory also shows that, even with a totally insulating matrix, the composite is still capable of delivering non-zero conductivity beyond the percolation threshold.

  11. Threshold driven response of permafrost in Northern Eurasia to climate and environmental change: from conceptual model to quantitative assessment

    NASA Astrophysics Data System (ADS)

    Anisimov, Oleg; Kokorev, Vasiliy; Reneva, Svetlana; Shiklomanov, Nikolai

    2010-05-01

    Numerous efforts have been made to assess the environmental impacts of changing climate in permafrost regions using mathematical models. Despite the significant improvements in representation of individual sub-systems, such as permafrost, vegetation, snow and hydrology, even the most comprehensive models do not replicate the coupled non-linear interactions between them that lead to threshold-driven changes. Observations indicate that ecosystems may change dramatically, rapidly, and often irreversibly, reaching a fundamentally different state once they pass a critical threshold. The key to understanding permafrost threshold phenomena is interaction with other environmental factors that are very likely to change in response to climate warming. One such factor is vegetation. Vegetation control over the thermal state of underlying ground is two-fold. Firstly, canopies have different albedo that affects the radiation balance at the soil surface. Secondly, depending on biome composition vegetation canopy may have different thermal conductivity that governs the heat fluxes between soil and atmosphere. There are clear indications based on ground observations and remote sensing that vegetation has already changed in response to climatic warming, consistent with the results of manipulations at experimental plots that involve artificial warming and CO2 fertilization. Under sustained warming lower vegetation (mosses, lichens) is gradually replaced by shrubs. Mosses have a high thermal insulating effect in summer, which is why their retreat enhances permafrost warming. Taller shrubs accumulate snow that further warms permafrost in winter. Permafrost remains unchanged as long as responding vegetation intercepts and mitigates the climate change signal. Beyond a certain threshold enhanced abundance and growth of taller vegetation leads to abrupt permafrost changes. Changes in hydrology, i.e. soil wetting or drying, may have a similar effect on permafrost. Wetting increases soil

  12. Multi-host model and threshold of intermediate host Oncomelania snail density for eliminating schistosomiasis transmission in China.

    PubMed

    Zhou, Yi-Biao; Chen, Yue; Liang, Song; Song, Xiu-Xia; Chen, Geng-Xin; He, Zhong; Cai, Bin; Yihuo, Wu-Li; He, Zong-Gui; Jiang, Qing-Wu

    2016-01-01

    Schistosomiasis remains a serious public health issue in many tropical countries, with more than 700 million people at risk of infection. In China, a national integrated control strategy, aiming at blocking its transmission, has been carried out throughout endemic areas since 2005. A longitudinal study was conducted to determine the effects of different intervention measures on the transmission dynamics of S. japonicum in three study areas and the data were analyzed using a multi-host model. The multi-host model was also used to estimate the threshold of Oncomelania snail density for interrupting schistosomiasis transmission based on the longitudinal data as well as data from the national surveillance system for schistosomiasis. The data showed a continuous decline in the risk of human infection and the multi-host model fit the data well. The 25th, 50th and 75th percentiles, and the mean of estimated thresholds of Oncomelania snail density below which the schistosomiasis transmission cannot be sustained were 0.006, 0.009, 0.028 and 0.020 snails/0.11 m², respectively. The study results could help develop specific strategies of schistosomiasis control and elimination tailored to the local situation for each endemic area. PMID:27535177

  14. A threshold of mechanical strain intensity for the direct activation of osteoblast function exists in a murine maxilla loading model.

    PubMed

    Suzuki, Natsuki; Aoki, Kazuhiro; Marcián, Petr; Borák, Libor; Wakabayashi, Noriyuki

    2016-10-01

    The response to the mechanical loading of bone tissue has been extensively investigated; however, precisely how much strain intensity is necessary to promote bone formation remains unclear. Combination studies utilizing histomorphometric and numerical analyses were performed using the established murine maxilla loading model to clarify the threshold of mechanical strain needed to accelerate bone formation activity. For 7 days, 191 kPa loading stimulation for 30 min/day was applied to C57BL/6J mice. Two regions of interest, the AWAY region (away from the loading site) and the NEAR region (near the loading site), were determined. The inflammatory score increased in the NEAR region, but not in the AWAY region. A strain intensity map obtained from [Formula: see text] images was superimposed onto the images of the bone formation inhibitor, sclerostin-positive cell localization. The number of sclerostin-positive cells significantly decreased after mechanical loading of more than [Formula: see text] in the AWAY region, but not in the NEAR region. The mineral apposition rate, which shows the bone formation ability of osteoblasts, was accelerated at the site of surface strain intensity, namely around [Formula: see text], but not at the site of lower surface strain intensity, which was around [Formula: see text] in the AWAY region, thus suggesting the existence of a strain intensity threshold for promoting bone formation. Taken together, our data suggest that a threshold of mechanical strain intensity for the direct activation of osteoblast function and the reduction of sclerostin exists in a murine maxilla loading model in the non-inflammatory region.

  15. Threshold corrections to the radiative breaking of electroweak symmetry and neutralino dark matter in supersymmetric seesaw model

    SciTech Connect

    Kang, Sin Kyu; Kato, Akina; Morozumi, Takuya; Yokozaki, Norimi

    2010-01-01

    We study the radiative electroweak symmetry breaking and the relic abundance of neutralino dark matter in the supersymmetric type I seesaw model. In this model, there exist threshold corrections to Higgs bilinear terms coming from heavy singlet sneutrino loops, which make the soft supersymmetry breaking (SSB) mass for the up-type Higgs shift at the seesaw scale, and thus a minimization condition for the Higgs potential is affected. We show that the required fine-tuning between the Higgsino mass parameter μ and the SSB mass for the up-type Higgs may be reduced at the electroweak scale, due to the threshold corrections. We also present how the parameter μ depends on the SSB B-parameter for heavy singlet sneutrinos. Since the property of neutralino dark matter is quite sensitive to the size of μ, we discuss how the relic abundance of neutralino dark matter is affected by the SSB B-parameter. Taking the SSB B-parameter of the order of a few hundred TeV, the required relic abundance of neutralino dark matter can be correctly achieved. In this case, dark matter is a mixture of bino and Higgsino, under the condition that gaugino masses are universal at the grand unification scale.

  16. Modeling the calcium spike as a threshold triggered fixed waveform for synchronous inputs in the fluctuation regime.

    PubMed

    Chua, Yansong; Morrison, Abigail; Helias, Moritz

    2015-01-01

    Modeling the layer 5 pyramidal neuron as a system of three connected isopotential compartments, the soma, proximal, and distal compartment, with calcium spike dynamics in the distal compartment following first order kinetics, we are able to reproduce in vitro experimental results which demonstrate the involvement of calcium spikes in action potential generation. To explore how calcium spikes affect the neuronal output in vivo, we emulate in vivo-like conditions by embedding the neuron model in a regime of low background fluctuations with occasional large synchronous inputs. In such a regime, a full calcium spike is only triggered by the synchronous events in a threshold-like manner and has a stereotypical waveform. Hence, in such a regime, we are able to replace the calcium dynamics with a simpler threshold-triggered current of fixed waveform, which is amenable to analytical treatment. We obtain analytically the mean somatic membrane potential excursion due to a calcium spike being triggered while in the fluctuating regime. Our analytical form, which accounts for the covariance between conductances and the membrane potential, shows better agreement with simulation results than a naive first-order approximation.
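
    A minimal sketch of the simplification described above, with hypothetical parameters: a somatic leaky integrator receives background fluctuations plus occasional synchronous events, and any event whose strength exceeds a threshold injects a stereotyped calcium-spike current of fixed waveform rather than invoking full distal-compartment kinetics.

```python
# Illustrative sketch (hypothetical parameters, not the paper's fitted model):
# a somatic leaky integrator driven by background fluctuations receives
# occasional synchronous inputs; whenever the synchronous drive exceeds a
# threshold, a stereotyped "calcium spike" current of fixed waveform is injected
# instead of simulating the full distal-compartment kinetics.
import numpy as np

rng = np.random.default_rng(0)

dt, T = 0.1, 2000.0                              # ms
steps = int(T / dt)
tau_m, R = 20.0, 1.0                             # membrane time constant (ms), resistance
ca_wave = 0.8 * np.exp(-np.arange(0.0, 50.0, dt) / 15.0)   # fixed Ca-spike waveform (nA)
ca_threshold = 1.2                               # synchronous drive needed to trigger it

i_ca = np.zeros(steps + ca_wave.size)
event_times = rng.choice(steps, size=8, replace=False)
event_strengths = rng.uniform(0.5, 2.0, size=8)
for t_idx, strength in zip(event_times, event_strengths):
    if strength > ca_threshold:                  # threshold-triggered fixed waveform
        i_ca[t_idx:t_idx + ca_wave.size] += ca_wave

v = np.zeros(steps)
noise = 0.2 * rng.standard_normal(steps)         # low-background-fluctuation regime
for t in range(1, steps):
    v[t] = v[t - 1] + dt * (-v[t - 1] + R * (noise[t] + i_ca[t])) / tau_m

print("peak somatic depolarization (arbitrary units):", round(float(v.max()), 2))
```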

  17. Threshold of coexistence and critical behavior of a predator-prey stochastic model in a fractal landscape

    NASA Astrophysics Data System (ADS)

    Argolo, C.; Barros, P.; Tomé, T.; Arashiro, E.; Gleria, Iram; Lyra, M. L.

    2016-08-01

    We investigate a stochastic lattice model describing a predator-prey system in a fractal scale-free landscape, mimicked by the fractal Sierpinski carpet. We determine the threshold of species coexistence, that is, the critical phase boundary related to the transition between an active state, where both species coexist, and an absorbing state, where one of the species is extinct. We show that the predators must live longer in order to persist in a fractal habitat. We further performed a finite-size scaling analysis in the vicinity of the absorbing-state phase transition to compute a set of stationary and dynamical critical exponents. Our results indicate that the transition belongs to the directed percolation universality class exhibited by the usual contact process model on the same fractal landscape.

  18. Collaborations in Population-Based Health Research

    PubMed Central

    Lieu, Tracy A.; Hinrichsen, Virginia L.; Moreira, Andrea; Platt, Richard

    2011-01-01

    The HMO Research Network (HMORN) is a consortium of 16 health care systems with integrated research centers. Approximately 475 people participated in its 17th annual conference, hosted by the Department of Population Medicine, Harvard Pilgrim Health Care Institute and Harvard Medical School. The theme, “Collaborations in Population-Based Health Research,” reflected the network’s emphasis on collaborative studies both among its members and with external investigators. Plenary talks highlighted the initial phase of the HMORN’s work to establish the NIH-HMO Collaboratory, opportunities for public health collaborations, the work of early career investigators, and the state of the network. Platform and poster presentations showcased a broad spectrum of innovative public domain research in areas including disease epidemiology and treatment, health economics, and information technology. Special interest group sessions and ancillary meetings provided venues for informal conversation and structured work among ongoing groups, including networks in cancer, cardiovascular diseases, lung diseases, medical product safety, and mental health. PMID:22090515

  19. Threshold-like complexation of conjugated polymers with small molecule acceptors in solution within the neighbor-effect model.

    PubMed

    Sosorev, Andrey Yu; Parashchuk, Olga D; Zapunidi, Sergey A; Kashtanov, Grigoriy S; Golovnin, Ilya V; Kommanaboyina, Srikanth; Perepichka, Igor F; Paraschuk, Dmitry Yu

    2016-02-14

    In some donor-acceptor blends based on conjugated polymers, a pronounced charge-transfer complex (CTC) forms in the electronic ground state. In contrast to small-molecule donor-acceptor blends, the CTC concentration in polymer:acceptor solution can increase with the acceptor content in a threshold-like way. This threshold-like behavior was earlier attributed to the neighbor effect (NE) in the polymer complexation, i.e., new CTCs are preferentially formed near existing ones; however, the origin of the NE is unknown. To address the factors affecting the NE, we record the optical absorption data for blends of the most studied conjugated polymers, poly(2-methoxy-5-(2-ethylhexyloxy)-1,4-phenylenevinylene) (MEH-PPV) and poly(3-hexylthiophene) (P3HT), with electron acceptors of the fluorene series, 1,8-dinitro-9,10-anthraquinone, and 7,7,8,8-tetracyanoquinodimethane in different solvents, and then analyze the data within the NE model. We have found that the NE depends on the polymer and acceptor molecular skeletons and the solvent, while it does not depend on the acceptor electron affinity and polymer concentration. We conclude that the NE operates within a single macromolecule and stems from planarization of the polymer chain involved in the CTC with an acceptor molecule; as a result, the probability of further complexation with the next acceptor molecules at the adjacent repeat units increases. The steric and electronic microscopic mechanisms of the NE are discussed.

  20. Error thresholds for Abelian quantum double models: Increasing the bit-flip stability of topological quantum memory

    NASA Astrophysics Data System (ADS)

    Andrist, Ruben S.; Wootton, James R.; Katzgraber, Helmut G.

    2015-04-01

    Current approaches for building quantum computing devices focus on two-level quantum systems which nicely mimic the concept of a classical bit, albeit enhanced with additional quantum properties. However, rather than artificially limiting the number of states to two, the use of d-level quantum systems (qudits) could provide advantages for quantum information processing. Among other merits, it has recently been shown that multilevel quantum systems can offer increased stability to external disturbances. In this study we demonstrate that topological quantum memories built from qudits, also known as Abelian quantum double models, exhibit a substantially increased resilience to noise. That is, even when taking into account the multitude of errors possible for multilevel quantum systems, topological quantum error-correction codes employing qudits can sustain a larger error rate than their two-level counterparts. In particular, we find strong numerical evidence that the thresholds of these error-correction codes are given by the hashing bound. Considering the significantly increased error thresholds attained, this might well outweigh the added complexity of engineering and controlling higher-dimensional quantum systems.

  1. Crossing the Threshold Mindfully: Exploring Rites of Passage Models in Adventure Therapy

    ERIC Educational Resources Information Center

    Norris, Julian

    2011-01-01

    Rites of passage models, drawing from ethnographic descriptions of ritualized transition, are widespread in adventure therapy programmes. However, critical literature suggests that: (a) contemporary rites of passage models derive from a selective and sometimes misleading use of ethnographic materials, and (b) the appropriation of initiatory…

  2. Hydrodynamics of sediment threshold

    NASA Astrophysics Data System (ADS)

    Ali, Sk Zeeshan; Dey, Subhasish

    2016-07-01

    A novel hydrodynamic model for the threshold of cohesionless sediment particle motion under a steady unidirectional streamflow is presented. The hydrodynamic forces (drag and lift) acting on a solitary sediment particle resting over a closely packed bed formed by the identical sediment particles are the primary motivating forces. The drag force comprises the form drag and the form-induced drag. The lift force includes the Saffman lift, Magnus lift, centrifugal lift, and turbulent lift. The points of action of the force system are appropriately obtained, for the first time, from the basics of micro-mechanics. The sediment threshold is envisioned as the rolling mode, which is the plausible mode to initiate a particle motion on the bed. The moment balance of the force system on the solitary particle about the pivoting point of rolling yields the governing equation. The conditions of sediment threshold under the hydraulically smooth, transitional, and rough flow regimes are examined. The effects of velocity fluctuations are addressed by applying the statistical theory of turbulence. This study shows that for a hindrance coefficient of 0.3, the threshold curve (threshold Shields parameter versus shear Reynolds number) is in excellent agreement with the experimental data of uniform sediments. However, most of the experimental data are bounded by the upper and lower limiting threshold curves, corresponding to the hindrance coefficients of 0.2 and 0.4, respectively. The threshold curve of this study is compared with those of previous researchers. The present model also agrees satisfactorily with the experimental data of nonuniform sediments.

  3. Universal squash model for optical communications using linear optics and threshold detectors

    SciTech Connect

    Fung, Chi-Hang Fred; Chau, H. F.; Lo, Hoi-Kwong

    2011-08-15

    Transmission of photons through open-air or optical fibers is an important primitive in quantum-information processing. Theoretical descriptions of this process often consider single photons as information carriers and thus fail to accurately describe experimental implementations where any number of photons may enter a detector. It has been a great challenge to bridge this big gap between theory and experiments. One powerful method for achieving this goal is by conceptually squashing the received multiphoton states to single-photon states. However, until now, only a few protocols admit a squash model; furthermore, a recently proven no-go theorem appears to rule out the existence of a universal squash model. Here we show that a necessary condition presumed by all existing squash models is in fact too stringent. By relaxing this condition, we find that, rather surprisingly, a universal squash model actually exists for many protocols, including quantum key distribution, quantum state tomography, Bell's inequality testing, and entanglement verification.

  4. Universal squash model for optical communications using linear optics and threshold detectors

    NASA Astrophysics Data System (ADS)

    Fung, Chi-Hang Fred; Chau, H. F.; Lo, Hoi-Kwong

    2011-08-01

    Transmission of photons through open-air or optical fibers is an important primitive in quantum-information processing. Theoretical descriptions of this process often consider single photons as information carriers and thus fail to accurately describe experimental implementations where any number of photons may enter a detector. It has been a great challenge to bridge this big gap between theory and experiments. One powerful method for achieving this goal is by conceptually squashing the received multiphoton states to single-photon states. However, until now, only a few protocols admit a squash model; furthermore, a recently proven no-go theorem appears to rule out the existence of a universal squash model. Here we show that a necessary condition presumed by all existing squash models is in fact too stringent. By relaxing this condition, we find that, rather surprisingly, a universal squash model actually exists for many protocols, including quantum key distribution, quantum state tomography, Bell's inequality testing, and entanglement verification.

  5. Endometrial cancer and antidepressants: A nationwide population-based study.

    PubMed

    Lin, Chiao-Fan; Chan, Hsiang-Lin; Hsieh, Yi-Hsuan; Liang, Hsin-Yi; Chiu, Wei-Che; Huang, Kuo-You; Lee, Yena; McIntyre, Roger S; Chen, Vincent Chin-Hung

    2016-07-01

    To our knowledge, the association between antidepressant exposure and endometrial cancer has not been previously explored. Herein, we aim to investigate the association between antidepressant prescription, including novel antidepressants, and the risk for endometrial cancer in a population-based study. Data for the analysis were derived from the National Health Insurance Research Database. We identified 8392 cases with a diagnosis of endometrial cancer and 82,432 matched controls. A conditional logistic regression model was used, adjusting for potentially confounding variables (e.g., comorbid psychiatric diseases, comorbid physical diseases, and other medications). Risk for endometrial cancer in the population-based study sample was categorized by, and assessed as a function of, antidepressant prescription and cumulative dosage. We report no association between endometrial cancer incidence and antidepressant prescription, including those prescribed either selective serotonin reuptake inhibitors (adjusted odds ratio [OR] = 0.98; 95% confidence interval [CI], 0.84-1.15) or serotonin norepinephrine reuptake inhibitors (adjusted OR = 1.14; 95% CI, 0.76-1.71). We also did not identify an association between higher cumulative doses of antidepressant prescription and endometrial cancer. There was no association between antidepressant prescription and endometrial cancer. PMID:27442640

  6. Oral Sex and HPV: Population Based Indications.

    PubMed

    Mishra, Anupam; Verma, Veerendra

    2015-03-01

    Human papilloma virus (HPV) is well established in the etiology of uterine cervical cancers, but its role in head and neck cancer is strongly suggested through many epidemiological and laboratory studies. Although HPV-16 induced oropharyngeal cancer is a distinct molecular entity, its role at other sub-sites (oral cavity, larynx, nasopharynx, hypopharynx) is less well established. Oral sex is supposedly the most commonly practiced unnatural sex across the globe and may prove to be a potential transmitting link between cancers of the uterine cervix and the oropharynx in males, particularly in the 10-15% who are non-smokers. In India, which has the second largest population (and a higher population density than China), oral sex is likely to be a common 'recreation-tool' amongst the majority (poor), and with the concurrent, highly prevalent poor cervical/oral hygiene, HPV is likely to synergize with other carcinogens. Hence, accordingly (or coincidentally), in India cervical cancer happens to be the commonest cancer amongst females, while oral/oropharyngeal cancer is the commonest amongst males. Oral sex as a link between these two cancer types can, however, be questioned given the poor level of evidence in the existing literature. The modern world has even commercialized oral sex in the form of flavored condoms. The available world literature is currently of too low a level of evidence to conclude such a relationship, because no specific prospective study has been carried out, and given the wide (and unpredictable) variety of sexual practices, such a relationship can only be speculated. This article briefly reviews the existing literature on various modes and population-based indications for HPV to be implicated in head and neck cancer with reference to oral sexual practice.

  7. Combining regional estimation and historical floods: A multivariate semiparametric peaks-over-threshold model with censored data

    NASA Astrophysics Data System (ADS)

    Sabourin, Anne; Renard, Benjamin

    2015-12-01

    The estimation of extreme flood quantiles is challenging due to the relative scarcity of extreme data compared to typical target return periods. Several approaches have been developed over the years to face this challenge, including regional estimation and the use of historical flood data. This paper investigates the combination of both approaches using a multivariate peaks-over-threshold model that allows estimating altogether the intersite dependence structure and the marginal distributions at each site. The joint distribution of extremes at several sites is constructed using a semiparametric Dirichlet Mixture model. The existence of partially missing and censored observations (historical data) is accounted for within a data augmentation scheme. This model is applied to a case study involving four catchments in Southern France, for which historical data are available since 1604. The comparison of marginal estimates from four versions of the model (with or without regionalizing the shape parameter; using or ignoring historical floods) highlights significant differences in terms of return level estimates. Moreover, the availability of historical data on several nearby catchments allows investigating the asymptotic dependence properties of extreme floods. Catchments display a significant amount of asymptotic dependence, calling for adapted multivariate statistical models.
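
    The multivariate, semiparametric machinery (the Dirichlet mixture dependence structure and the data augmentation for censored historical floods) is beyond a short example, but the marginal building block is a standard peaks-over-threshold fit. The sketch below, using a synthetic daily discharge series and an arbitrary 98th-percentile threshold, fits a generalized Pareto distribution to exceedances and converts it to a return level.

    ```python
    import numpy as np
    from scipy import stats

    # Univariate peaks-over-threshold sketch of the marginal building block: fit a
    # generalized Pareto distribution to exceedances over a high threshold and
    # derive a return level. The intersite dependence (Dirichlet mixture) and the
    # censored historical data handling are not reproduced; the daily discharge
    # series and threshold choice are synthetic assumptions.

    rng = np.random.default_rng(1)
    years_of_record = 40
    daily_flow = rng.gamma(shape=2.0, scale=50.0, size=years_of_record * 365)

    u = np.quantile(daily_flow, 0.98)                    # threshold: 98th percentile
    exceedances = daily_flow[daily_flow > u] - u

    shape, loc, scale = stats.genpareto.fit(exceedances, floc=0.0)   # GPD fit, location fixed at 0

    lambda_u = exceedances.size / years_of_record        # mean exceedances per year
    T = 100.0                                            # target return period (years)
    p = 1.0 - 1.0 / (lambda_u * T)
    return_level = u + stats.genpareto.ppf(p, shape, loc=0.0, scale=scale)
    print(f"u = {u:.1f}, GPD shape = {shape:.2f}, {T:.0f}-year return level = {return_level:.1f}")
    ```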

  8. Dcx reexpression reduces subcortical band heterotopia and seizure threshold in an animal model of neuronal migration disorder.

    PubMed

    Manent, Jean-Bernard; Wang, Yu; Chang, Yoonjeung; Paramasivam, Murugan; LoTurco, Joseph J

    2009-01-01

    Disorders of neuronal migration can lead to malformations of the cerebral neocortex that greatly increase the risk of seizures. It remains untested whether malformations caused by disorders in neuronal migration can be reduced by reactivating cellular migration and whether such repair can decrease seizure risk. Here we show, in a rat model of subcortical band heterotopia (SBH) generated by in utero RNA interference of the Dcx gene, that aberrantly positioned neurons can be stimulated to migrate by reexpressing Dcx after birth. Restarting migration in this way both reduces neocortical malformations and restores neuronal patterning. We further find that the capacity to reduce SBH continues into early postnatal development. Moreover, intervention after birth reduces the convulsant-induced seizure threshold to a level similar to that in malformation-free controls. These results suggest that disorders of neuronal migration may be eventually treatable by reengaging developmental programs both to reduce the size of cortical malformations and to reduce seizure risk.

  9. Threshold dynamics of a time periodic reaction-diffusion epidemic model with latent period

    NASA Astrophysics Data System (ADS)

    Zhang, Liang; Wang, Zhi-Cheng; Zhao, Xiao-Qiang

    2015-05-01

    In this paper, we first propose a time-periodic reaction-diffusion epidemic model which incorporates simple demographic structure and the latent period of infectious disease. Then we introduce the basic reproduction number R0 for this model and prove that the sign of R0 - 1 determines the local stability of the disease-free periodic solution. By using the comparison arguments and persistence theory, we further show that the disease-free periodic solution is globally attractive if R0 < 1, while there is an endemic periodic solution and the disease is uniformly persistent if R0 > 1.

  10. Threshold effects in nonlinear models with an application to the social capital-retirement-health relationship.

    PubMed

    Gannon, Brenda; Harris, David; Harris, Mark

    2014-09-01

    This paper considers the relationship between social capital and health in the years before, at, and after retirement. It adds to the current literature, which investigates this relationship only in either the population as a whole or in two subpopulations, pre-retirement and post-retirement. We now investigate whether there are additional subpopulations in the years to and from retirement. We take an information criterion approach to select the optimal model of subpopulations from a full range of potential models. This approach is similar to that proposed for linear models. Our contribution is to show how it may also be applied to nonlinear models, without the need to estimate subsequent subpopulations conditional on previously fixed subpopulations. Our main finding is that the association of social capital with health diminishes at retirement and decreases further 10 years after retirement. We find a strong, significant positive association of social capital with health, although this turns negative after 20 years, indicating potential unobserved heterogeneity. The types of social capital may differ in later years (e.g., less volunteering), and hence overall social capital may have less of an influence on health in later years.
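
    The information-criterion selection of subpopulations can be illustrated with a deliberately simplified stand-in: candidate breakpoints in "years to/from retirement" define segments, each segment gets its own mean, and BIC picks the model. The paper's nonlinear health outcomes are not reproduced here; the Gaussian outcome, candidate breakpoints, and sample size below are assumptions for illustration only.

    ```python
    import numpy as np

    # Simplified stand-in for the information-criterion search over subpopulations:
    # candidate breakpoints in years-to/from-retirement split the sample, each
    # segment gets its own mean, and BIC selects the preferred model. The paper's
    # nonlinear health outcomes are not reproduced; data here are simulated.

    rng = np.random.default_rng(2)
    years = rng.integers(-15, 21, size=2000)                  # years relative to retirement
    true_mean = np.where(years < 0, 1.0, np.where(years < 10, 0.6, 0.3))
    health = true_mean + 0.5 * rng.standard_normal(years.size)

    def bic_for_breaks(breaks):
        """BIC of a piecewise-constant mean model with the given interior breakpoints."""
        edges = [-np.inf, *breaks, np.inf]
        rss = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            seg = health[(years >= lo) & (years < hi)]
            if seg.size:
                rss += np.sum((seg - seg.mean()) ** 2)
        n, k = years.size, len(edges) - 1                      # k segment means (+ variance)
        return n * np.log(rss / n) + (k + 1) * np.log(n)

    candidates = {"no split": [], "split at retirement": [0], "retirement and +10 years": [0, 10]}
    scores = {name: bic_for_breaks(b) for name, b in candidates.items()}
    print(scores)
    print("selected model:", min(scores, key=scores.get))
    ```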

  11. Performance of the SWEEP model affected by estimates of threshold friction velocity

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Wind Erosion Prediction System (WEPS) is a process-based model and needs to be verified under a broad range of climatic, soil, and management conditions. Occasional failure of the WEPS erosion submodel (Single-event Wind Erosion Evaluation Program or SWEEP) to simulate erosion in the Columbia Pl...

  12. Identifying Atomic Structure as a Threshold Concept: Student Mental Models and Troublesomeness

    ERIC Educational Resources Information Center

    Park, Eun Jung; Light, Gregory

    2009-01-01

    Atomic theory or the nature of matter is a principal concept in science and science education. This has, however, been complicated by the difficulty students have in learning the concept and the subsequent construction of many alternative models. To understand better the conceptual barriers to learning atomic structure, this study explores the…

  13. Critical behaviors of transverse crystal field and bimodal magnetic field mixed spin Ising model with bond dilution or bond percolation threshold

    NASA Astrophysics Data System (ADS)

    Xu, C. Q.; Yan, S. L.

    2016-10-01

    Within the effective field theory, we investigate critical behaviors of transverse crystal field and bimodal magnetic field mixed spin-1/2 and spin-1 Ising model with bond dilution or percolation threshold on a simple cubic lattice. A-type double tricritical points and zigzag reentrant phenomenon can be found at pure bond and large bimodal magnetic field status. The ordered phase is impaired sharply due to bond dilution. The positive transverse crystal field can induce ordered phase at ordinary bond percolation threshold. The bimodal magnetic field can suppress the induced ordered phase and form a series of closed ordered regions. An extraordinary bond percolation threshold is determined, at which the induced ordered phase vanishes completely. The different effects of bimodal magnetic field and bond percolation threshold on induced ordered phase are discussed.

  14. Rainfall-triggered shallow landslides at catchment scale: Threshold mechanics-based modeling for abruptness and localization

    NASA Astrophysics Data System (ADS)

    Ruette, J.; Lehmann, P.; Or, D.

    2013-10-01

    Rainfall-induced shallow landslides may occur abruptly without distinct precursors and could span a wide range of soil mass released during a triggering event. We present a rainfall-induced landslide-triggering model for steep catchments with surfaces represented as an assembly of hydrologically and mechanically interconnected soil columns. The abruptness of failure was captured by defining local strength thresholds for mechanical bonds linking soil and bedrock and adjacent columns, whereby a failure of a single bond may initiate a chain reaction of subsequent failures, culminating in local mass release (a landslide). The catchment-scale hydromechanical landslide-triggering model (CHLT) was applied to results from two event-based landslide inventories triggered by two rainfall events in 2002 and 2005 in two nearby catchments located in the Prealps in Switzerland. Rainfall radar data, surface elevation and vegetation maps, and a soil production model for soil depth distribution were used for hydromechanical modeling of failure patterns for the two rainfall events at spatial and temporal resolutions of 2.5 m and 0.02 h, respectively. The CHLT model enabled systematic evaluation of the effects of soil type, mechanical reinforcement (soil cohesion and lateral root strength), and initial soil water content on landslide characteristics. We compared various landslide metrics and spatial distribution of simulated landslides in subcatchments with observed inventory data. Model parameters were optimized for the short but intense rainfall event in 2002, and the calibrated model was then applied for the 2005 rainfall, yielding reasonable predictions of landslide events and volumes and statistically reproducing localized landslide patterns similar to inventory data. The model provides a means for identifying local hot spots and offers insights into the dynamics of locally resolved landslide hazards in mountainous regions.
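
    The threshold-and-cascade idea behind the triggering model can be caricatured with a toy cellular sketch: each soil column has a strength threshold, a failed column sheds part of its load to its neighbours, and a single failure can propagate into a larger release. This is not the CHLT hydromechanical model; the grid, the strength and load distributions, and the load-transfer rule are all invented for illustration.

    ```python
    import numpy as np
    from collections import deque

    # Toy cascade sketch of threshold-based triggering (not the CHLT model): each
    # soil column has a strength threshold; once the driving load exceeds it the
    # column fails and sheds part of its load to its four neighbours, which may
    # exceed their own thresholds in turn. Grid size, the strength and load
    # distributions, and the transfer rule are invented for illustration.

    rng = np.random.default_rng(3)
    n = 50
    strength = rng.uniform(1.0, 2.0, size=(n, n))     # local strength thresholds
    load = rng.uniform(0.8, 1.2, size=(n, n))         # driving load after a rainfall event
    transfer = 0.6                                    # fraction of a failed column's load passed on

    failed = load > strength                          # columns whose threshold is already exceeded
    queue = deque(zip(*np.where(failed)))
    while queue:                                      # chain reaction of subsequent failures
        i, j = queue.popleft()
        share = transfer * load[i, j] / 4.0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < n and not failed[a, b]:
                load[a, b] += share
                if load[a, b] > strength[a, b]:
                    failed[a, b] = True
                    queue.append((a, b))

    print(f"columns released by the cascade: {int(failed.sum())} of {n * n}")
    ```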

  15. Permafrost landscapes in transition - towards modeling interactions, thresholds and feedbacks related to ice-rich ground

    NASA Astrophysics Data System (ADS)

    Westermann, Sebastian; Langer, Moritz; Lee, Hanna; Berntsen, Terje; Boike, Julia; Krinner, Gerhard; Aalstad, Kristoffer; Schanke Aas, Kjetil; Peter, Maria; Heikenfeld, Max; Etzelmüller, Bernd

    2016-04-01

    Thawing of permafrost is governed by a complex interplay of different processes, of which only conductive heat transfer is taken into account in most model studies. However, heat conduction alone can not account for the dynamical evolution of many permafrost landscapes, e.g. in areas rich in ground ice shaped by thermokarst ponds and lakes. Novel process parameterizations are required to include such phenomena in future projections of permafrost thaw and hereby triggered climatic feedbacks. Recently, we have demonstrated a physically-based parameterization for thaw process in ice-rich ground in the permafrost model CryoGrid 3, which can reproduce the formation of thermokarst ponds and subsidence of the ground following thawing of ice-rich subsurface layers. Long-term simulations for different subsurface stratigraphies in the Lena River Delta, Siberia, demonstrate that the hydrological regime can both accelerate and delay permafrost thawing. If meltwater from thawed ice-rich layers can drain, the ground subsides while at the same time the formation of a talik is delayed. If the meltwater pools at the surface, a pond is formed which enhances heat transfer in the ground and leads to the formation of a talik. The PERMANOR project funded by the Norwegian Research Council until 2019 will extend this work by integrating such small-scale processes in larger-scale Earth System Models (ESMs). For this purpose, the project will explore and develop statistical approaches, in particular tiling, to represent permafrost landscape dynamics on subgrid scale. Ultimately, PERMANOR will conceptualize process understanding from in-situ studies to develop new model algorithms and pursue their implementation in a coupled ESM framework.

  16. Elaborating on Threshold Concepts

    ERIC Educational Resources Information Center

    Rountree, Janet; Robins, Anthony; Rountree, Nathan

    2013-01-01

    We propose an expanded definition of Threshold Concepts (TCs) that requires the successful acquisition and internalisation not only of knowledge, but also its practical elaboration in the domains of applied strategies and mental models. This richer definition allows us to clarify the relationship between TCs and Fundamental Ideas, and to account…

  17. The threshold feeding response of microzooplankton within Pacific high-nitrate low-chlorophyll ecosystem models under steady and variable iron input

    NASA Astrophysics Data System (ADS)

    Leising, Andrew W.; Gentleman, Wendy C.; Frost, Bruce W.

    2003-11-01

    The equatorial Pacific is an HNLC (High-Nitrate Low-Chlorophyll) region. Modeling and in-situ process studies have confirmed the importance of microzooplankton grazing in this ecosystem. Unfortunately, both the parameters and functions representing microzooplankton grazing within current ecosystem models are poorly constrained. We used a simple 4-component food web model to test the assumption that a lower grazing threshold, which is common in many models, is necessary to achieve the HNLC condition. Without the grazing threshold, the model did not reproduce the HNLC condition. However, by raising the half-saturation constant within the microzooplankton functional response with no threshold, it was possible to reproduce the critical dynamics of the HNLC condition under both steady and moderate seasonal variability in nutrient input. It was also possible to reproduce the HNLC system using a sigmoidal functional response for the microzooplankton, with results somewhere between the other two forms of the model, although this version had the highest sensitivity to changes in its parameters. The three models predicted similar phytoplankton biomass and primary productivity under steady nutrient input, but diverge in these metrics as the amplitude of nutrient input variability increases. These three functional responses also imply certain important differences in the microzooplankton community. Whereas the threshold model had the least sensitivity to parameter choice, the high half-saturation constant, no-threshold model may actually be a better approximation when modeling a community of grazers. Ecosystem models that predict carbon production and export in HNLC regions can be very sensitive to assumptions concerning microzooplankton grazing; future studies need to concentrate on the functional responses of microzooplankton before these models can be used for predicting fluxes in times or regions where forcing is beyond that used to constrain the original model.
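
    A minimal way to see how the three grazing formulations discussed here differ is to evaluate them side by side: a response with a lower feeding threshold, a no-threshold Michaelis-Menten response with a larger half-saturation constant, and a sigmoidal (Holling type III) response. The functional forms are standard, but the parameter values below are illustrative rather than the calibrated values of the ecosystem model.

    ```python
    import numpy as np

    # Side-by-side evaluation of the three grazing formulations discussed:
    # (1) Michaelis-Menten with a lower feeding threshold, (2) no threshold but a
    # larger half-saturation constant, and (3) a sigmoidal (Holling type III)
    # response. Parameter values are illustrative, not the calibrated ones.

    g_max = 1.5                                   # maximum grazing rate (1/d)

    def grazing_threshold(P, k=0.5, P0=0.2):
        """Michaelis-Menten grazing on prey above a feeding threshold P0."""
        Pe = np.maximum(P - P0, 0.0)
        return g_max * Pe / (k + Pe)

    def grazing_high_k(P, k=1.2):
        """No threshold, but a higher half-saturation constant."""
        return g_max * P / (k + P)

    def grazing_sigmoidal(P, k=0.7):
        """Holling type III (sigmoidal) functional response."""
        return g_max * P**2 / (k**2 + P**2)

    for P in np.linspace(0.0, 2.0, 5):            # prey (phytoplankton) concentration
        print(f"P={P:.1f}: threshold={grazing_threshold(P):.2f}  "
              f"high-k={grazing_high_k(P):.2f}  sigmoidal={grazing_sigmoidal(P):.2f}")
    ```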

  18. Computational modeling of interventions and protective thresholds to prevent disease transmission in deploying populations.

    PubMed

    Burgess, Colleen; Peace, Angela; Everett, Rebecca; Allegri, Buena; Garman, Patrick

    2014-01-01

    Military personnel are deployed abroad for missions ranging from humanitarian relief efforts to combat actions; delay or interruption in these activities due to disease transmission can cause operational disruptions, significant economic loss, and stressed or exceeded military medical resources. Deployed troops function in environments favorable to the rapid and efficient transmission of many viruses particularly when levels of protection are suboptimal. When immunity among deployed military populations is low, the risk of vaccine-preventable disease outbreaks increases, impacting troop readiness and achievement of mission objectives. However, targeted vaccination and the optimization of preexisting immunity among deployed populations can decrease the threat of outbreaks among deployed troops. Here we describe methods for the computational modeling of disease transmission to explore how preexisting immunity compares with vaccination at the time of deployment as a means of preventing outbreaks and protecting troops and mission objectives during extended military deployment actions. These methods are illustrated with five modeling case studies for separate diseases common in many parts of the world, to show different approaches required in varying epidemiological settings. PMID:25009579

  19. Computational modeling of interventions and protective thresholds to prevent disease transmission in deploying populations.

    PubMed

    Burgess, Colleen; Peace, Angela; Everett, Rebecca; Allegri, Buena; Garman, Patrick

    2014-01-01

    Military personnel are deployed abroad for missions ranging from humanitarian relief efforts to combat actions; delay or interruption in these activities due to disease transmission can cause operational disruptions, significant economic loss, and stressed or exceeded military medical resources. Deployed troops function in environments favorable to the rapid and efficient transmission of many viruses particularly when levels of protection are suboptimal. When immunity among deployed military populations is low, the risk of vaccine-preventable disease outbreaks increases, impacting troop readiness and achievement of mission objectives. However, targeted vaccination and the optimization of preexisting immunity among deployed populations can decrease the threat of outbreaks among deployed troops. Here we describe methods for the computational modeling of disease transmission to explore how preexisting immunity compares with vaccination at the time of deployment as a means of preventing outbreaks and protecting troops and mission objectives during extended military deployment actions. These methods are illustrated with five modeling case studies for separate diseases common in many parts of the world, to show different approaches required in varying epidemiological settings.

  20. Computational Modeling of Interventions and Protective Thresholds to Prevent Disease Transmission in Deploying Populations

    PubMed Central

    2014-01-01

    Military personnel are deployed abroad for missions ranging from humanitarian relief efforts to combat actions; delay or interruption in these activities due to disease transmission can cause operational disruptions, significant economic loss, and stressed or exceeded military medical resources. Deployed troops function in environments favorable to the rapid and efficient transmission of many viruses particularly when levels of protection are suboptimal. When immunity among deployed military populations is low, the risk of vaccine-preventable disease outbreaks increases, impacting troop readiness and achievement of mission objectives. However, targeted vaccination and the optimization of preexisting immunity among deployed populations can decrease the threat of outbreaks among deployed troops. Here we describe methods for the computational modeling of disease transmission to explore how preexisting immunity compares with vaccination at the time of deployment as a means of preventing outbreaks and protecting troops and mission objectives during extended military deployment actions. These methods are illustrated with five modeling case studies for separate diseases common in many parts of the world, to show different approaches required in varying epidemiological settings. PMID:25009579
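
    As a generic illustration of the threshold logic these studies build on (not a reproduction of their case-study models), the sketch below runs a discrete-time SIR model for a deploying unit with a given fraction immune on arrival and compares outbreak sizes against the classical herd-immunity threshold 1 - 1/R0. The population size, R0, and recovery rate are assumed values.

    ```python
    # Generic sketch (not the paper's case-study models): discrete-time SIR for a
    # deploying unit with a fraction of personnel immune on arrival, compared with
    # the classical herd-immunity threshold 1 - 1/R0. Parameters are assumed.

    def outbreak_size(immune_frac, R0=4.0, N=1000, I0=1, gamma=0.2, days=365):
        """Cumulative infections over the deployment for a given immune fraction."""
        beta = R0 * gamma                       # per-day transmission rate
        S = N * (1.0 - immune_frac) - I0
        I, R = float(I0), N * immune_frac
        new_cases = 0.0
        for _ in range(days):                   # one-day time steps
            inf = beta * S * I / N
            rec = gamma * I
            S, I, R = S - inf, I + inf - rec, R + rec
            new_cases += inf
        return new_cases

    threshold = 1.0 - 1.0 / 4.0                 # herd-immunity threshold for R0 = 4
    for frac in (0.3, 0.6, threshold, 0.85):
        print(f"immune at deployment {frac:.2f}: ~{outbreak_size(frac):.0f} cases")
    ```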

  1. The CNP signal is able to silence a supra threshold neuronal model

    PubMed Central

    Camera, Francesca; Paffi, Alessandra; Thomas, Alex W.; Apollonio, Francesca; D'Inzeo, Guglielmo; Prato, Frank S.; Liberti, Micaela

    2015-01-01

    Several experimental results published in the literature showed that weak pulsed magnetic fields affected the response of the central nervous system. However, the specific biological mechanisms that regulate the observed behaviors are still unclear and further scientific investigation is required. In this work we performed simulations on a neuronal network model exposed to a specific pulsed magnetic field signal that seems to be very effective in modulating the brain activity: the Complex Neuroelectromagnetic Pulse (CNP). Results show that CNP can silence the neurons of a feed-forward network for signal intensities that depend on the strength of the bias current, the endogenous noise level and the specific waveforms of the pulses. Therefore, it is conceivable that a neuronal network model responds to the CNP signal with an inhibition of its activity. Further studies on more realistic neuronal networks are needed to clarify if such an inhibitory effect on neuronal tissue may be the basis of the induced analgesia seen in humans and the antinociceptive effects seen in animals when exposed to the CNP. PMID:25972807

  2. Growth and recovery of temporary threshold shift at 3 kHz in bottlenose dolphins: experimental data and mathematical models.

    PubMed

    Finneran, James J; Carder, Donald A; Schlundt, Carolyn E; Dear, Randall L

    2010-05-01

    Measurements of temporary threshold shift (TTS) in marine mammals have become important components in developing safe exposure guidelines for animals exposed to intense human-generated underwater noise; however, existing marine mammal TTS data are somewhat limited in that they have typically induced small amounts of TTS. This paper presents experimental data for the growth and recovery of larger amounts of TTS (up to 23 dB) in two bottlenose dolphins (Tursiops truncatus). Exposures consisted of 3-kHz tones with durations from 4 to 128 s and sound pressure levels from 100 to 200 dB re 1 μPa. The resulting TTS data were combined with existing data from two additional dolphins to develop mathematical models for the growth and recovery of TTS. TTS growth was modeled as the product of functions of exposure duration and sound pressure level. TTS recovery was modeled using a double exponential function of the TTS at 4-min post-exposure and the recovery time. PMID:21117774
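
    The functional forms named in the abstract can be written down directly: growth as the product of a duration term and a sound-pressure-level term, and recovery as a double exponential in the shift at 4 min post-exposure and the recovery time. The sketch below uses placeholder coefficients, not the values fitted to the dolphin data, and is only meant to show the structure of such a model.

    ```python
    import numpy as np

    # Sketch of the stated functional forms: TTS growth as the product of a
    # duration term and a sound-pressure-level term, and recovery as a double
    # exponential in the shift at 4 min post-exposure (TTS4) and recovery time.
    # Coefficients are placeholders, not the values fitted to the dolphin data.

    def tts_growth(duration_s, spl_db, d_scale=30.0, spl_mid=185.0, spl_slope=5.0, max_tts=25.0):
        """Growth surface: duration term x SPL term, in dB of threshold shift."""
        f_duration = 1.0 - np.exp(-duration_s / d_scale)
        f_level = 1.0 / (1.0 + np.exp(-(spl_db - spl_mid) / spl_slope))
        return max_tts * f_duration * f_level

    def tts_recovery(tts4_db, t_min, tau_fast=5.0, tau_slow=60.0, w_fast=0.4):
        """Double-exponential decay of the shift measured 4 min post-exposure."""
        dt = t_min - 4.0
        return tts4_db * (w_fast * np.exp(-dt / tau_fast) + (1.0 - w_fast) * np.exp(-dt / tau_slow))

    tts4 = tts_growth(duration_s=64.0, spl_db=195.0)
    print(f"predicted TTS4 for 64 s at 195 dB re 1 uPa: {tts4:.1f} dB")
    for t in (4.0, 15.0, 60.0, 240.0):
        print(f"  residual shift after {t:>5.0f} min: {tts_recovery(tts4, t):.1f} dB")
    ```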

  3. Finger Vein Segmentation from Infrared Images Based on a Modified Separable Mumford Shah Model and Local Entropy Thresholding

    PubMed Central

    Vlachos, Marios; Dermatas, Evangelos

    2015-01-01

    A novel method for finger vein pattern extraction from infrared images is presented. The method involves four steps: preprocessing, which performs local normalization of the image intensity; image enhancement; image segmentation; and finally postprocessing for image cleaning. In the image enhancement step, an image which is both smooth and similar to the original is sought. The enhanced image is obtained by minimizing the objective function of a modified separable Mumford-Shah model. Since this minimization procedure is computationally intensive for large images, a local application of the Mumford-Shah model in small window neighborhoods is proposed. The finger veins are located in concave nonsmooth regions, so in order to distinguish them from the other tissue parts, all the differences between the smooth neighborhoods, obtained by the local application of the model, and the corresponding windows of the original image are added. After this step, the veins in the enhanced image have been sufficiently emphasized. Thus, after image enhancement, an accurate segmentation can be obtained readily by a local entropy thresholding method. Finally, the resulting binary image may suffer from some misclassifications, so a postprocessing step is performed in order to extract a robust finger vein pattern. PMID:26120357
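
    For the final segmentation step, an entropy-based threshold computed per local window is enough to convey the idea. The sketch below uses Kapur's maximum-entropy criterion inside non-overlapping windows as a simplified stand-in for the local entropy thresholding used by the authors; the window size and the synthetic test image are assumptions.

    ```python
    import numpy as np

    # Simplified stand-in for the segmentation step: an entropy-based threshold
    # (Kapur's maximum-entropy criterion) computed independently in local windows
    # of an 8-bit image. Window size and the synthetic test image are assumptions.

    def kapur_threshold(block, nbins=256):
        """Gray level maximizing the summed entropies of background and foreground."""
        hist, _ = np.histogram(block, bins=nbins, range=(0, 256))
        p = hist / hist.sum()
        P = np.cumsum(p)
        best_t, best_h = 0, -np.inf
        for t in range(1, nbins - 1):
            w0, w1 = P[t], 1.0 - P[t]
            if w0 <= 0.0 or w1 <= 0.0:
                continue
            p0, p1 = p[:t + 1] / w0, p[t + 1:] / w1
            h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
            h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
            if h0 + h1 > best_h:
                best_h, best_t = h0 + h1, t
        return best_t

    def local_entropy_binarize(image, win=32):
        """Binarize window by window; dark (vein-like) pixels end up False."""
        out = np.zeros(image.shape, dtype=bool)
        for i in range(0, image.shape[0], win):
            for j in range(0, image.shape[1], win):
                block = image[i:i + win, j:j + win]
                out[i:i + win, j:j + win] = block > kapur_threshold(block)
        return out

    img = np.full((64, 64), 180, dtype=np.uint8)          # bright tissue background
    img[30:34, :] = 60                                     # dark vein-like stripe
    veins = ~local_entropy_binarize(img)
    print("pixels labeled as vein:", int(veins.sum()))
    ```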

  4. Subdural haemorrhages in infants: population based study

    PubMed Central

    Jayawant, S; Rawlinson, A; Gibbon, F; Price, J; Schulte, J; Sharples, P; Sibert, J R; Kemp, A M

    1998-01-01

    Objectives To identify the incidence, clinical outcome, and associated factors of subdural haemorrhage in children under 2 years of age, and to determine how such cases were investigated and how many were due to child abuse. Design Population based case series. Setting South Wales and south west England. Subjects Children under 2 years of age who had a subdural haemorrhage. We excluded neonates who developed subdural haemorrhage during their stay on a neonatal unit and infants who developed a subdural haemorrhage after infection or neurosurgical intervention. Main outcome measures Incidence and clinical outcome of subdural haemorrhage in infants, the number of cases caused by child abuse, the investigations such children received, and associated risk factors. Results Thirty three children (23 boys and 10 girls) were identified with subdural haemorrhage. The incidence was 12.8/100 000 children/year (95% confidence interval 5.4 to 20.2). Twenty eight cases (85%) were under 1 year of age. The incidence of subdural haemorrhage in children under 1 year of age was 21.0/100 000 children/year and was therefore higher than in the older children. The clinical outcome was poor: nine infants died and 15 had profound disability. Only 22 infants had the basic investigations of a full blood count, coagulation screen, computed tomography or magnetic resonance imaging, skeletal survey or bone scan, and ophthalmological examination. In retrospect, 27 cases (82%) were highly suggestive of abuse. Conclusion Subdural haemorrhage is common in infancy and carries a poor prognosis; three quarters of such infants die or have profound disability. Most cases are due to child abuse, but in a few the cause is unknown. Some children with subdural haemorrhage do not undergo appropriate investigations. We believe the clinical investigation of such children should include a full multidisciplinary social assessment, an ophthalmic examination, a skeletal survey supplemented with a bone scan or a

  5. Drought Risk Modeling for Thermoelectric Power Plants Siting using an Excess Over Threshold Approach

    SciTech Connect

    Bekera, Behailu B; Francis, Royce A; Omitaomu, Olufemi A

    2014-01-01

    Water availability is among the most important elements of thermoelectric power plant site selection and evaluation criteria. With increased variability and changes in hydrologic statistical stationarity, one concern is the increased occurrence of extreme drought events that may be attributable to climatic changes. As hydrological systems are altered, operators of thermoelectric power plants need to ensure a reliable supply of water for cooling and generation requirements. The effects of climate change are expected to influence hydrological systems at multiple scales, possibly leading to reduced efficiency of thermoelectric power plants. In this paper, we model drought characteristics from a thermoelectric system's operational and regulatory perspective. A systematic approach to characterise a stream environment in relation to extreme drought occurrence, duration, and deficit-volume is proposed and demonstrated. This approach can potentially enhance early stage decisions in identifying candidate sites for a thermoelectric power plant application and allow investigation and assessment of varying degrees of drought risk during more advanced stages of the siting process.
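
    The threshold-based drought characterization described here (event occurrence, duration, and deficit-volume below a low-flow threshold) reduces to a run-length analysis of a discharge series. The sketch below applies it to a synthetic seasonal flow record with an arbitrary 10th-percentile threshold; an actual siting study would use observed or projected streamflow.

    ```python
    import numpy as np

    # Threshold-based drought characterization sketch: group days with flow below
    # a low-flow threshold into events and record each event's duration and
    # deficit volume. The synthetic series and threshold choice are illustrative.

    rng = np.random.default_rng(4)
    days = 20 * 365
    seasonal = 100.0 + 40.0 * np.sin(2.0 * np.pi * np.arange(days) / 365.0)
    flow = np.maximum(seasonal + 25.0 * rng.standard_normal(days), 0.0)   # daily discharge (m^3/s)

    q_threshold = np.quantile(flow, 0.10)        # low-flow threshold (10th percentile)
    below = flow < q_threshold

    events = []                                  # (duration in days, deficit volume in m^3)
    duration, deficit = 0, 0.0
    for f, b in zip(flow, below):
        if b:
            duration += 1
            deficit += (q_threshold - f) * 86400.0
        elif duration:
            events.append((duration, deficit))
            duration, deficit = 0, 0.0
    if duration:
        events.append((duration, deficit))

    durations = np.array([d for d, _ in events])
    volumes = np.array([v for _, v in events])
    print(f"{len(events)} drought events; max duration {durations.max()} d; "
          f"max deficit volume {volumes.max():.2e} m^3")
    ```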

  6. Learning foraging thresholds for lizards

    SciTech Connect

    Goldberg, L.A.; Hart, W.E.; Wilson, D.B.

    1996-01-12

    This work gives a proof of convergence for a randomized learning algorithm that describes how anoles (lizards found in the Caribbean) learn a foraging threshold distance. This model assumes that an anole will pursue a prey item if and only if it is within this threshold distance of the anole's perch. The learning algorithm was proposed by the biologist Roughgarden and his colleagues, who experimentally confirmed that it quickly converges to the foraging threshold predicted by optimal foraging theory; our analysis provides an analytic confirmation that the learning algorithm converges to this optimal foraging threshold with high probability.
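
    The abstract does not restate Roughgarden's update rule, so the sketch below only illustrates the general idea with a generic randomized scheme: the anole pursues prey sighted within its current threshold distance and keeps stochastic adjustments to that threshold whenever they improve the realized energy intake rate. The prey encounter rate, pursuit speed, energy per item, and the hill-climbing rule are all assumptions, not the analyzed algorithm.

    ```python
    import numpy as np

    # Generic randomized-learning sketch (the abstract does not restate the
    # analyzed update rule): the anole pursues only prey sighted within its current
    # threshold distance and keeps random adjustments to that threshold whenever
    # they improve the simulated energy intake rate. All parameters are invented.

    rng = np.random.default_rng(5)
    prey_energy, speed, prey_rate, max_dist = 5.0, 1.0, 0.5, 20.0   # energy/item, m/s, sightings/s, m

    def intake_rate(threshold, horizon=5000.0):
        """Long-run energy intake rate (energy/s) for a given pursuit threshold (m)."""
        t, energy = 0.0, 0.0
        while t < horizon:
            t += rng.exponential(1.0 / prey_rate)    # waiting time until the next sighting
            d = rng.uniform(0.0, max_dist)           # distance of the sighted prey
            if d <= threshold:
                t += 2.0 * d / speed                 # round-trip pursuit time
                energy += prey_energy
        return energy / t

    threshold = 1.0                                  # initial threshold guess (m)
    rate = intake_rate(threshold)
    for _ in range(200):                             # randomized hill climbing on the threshold
        candidate = max(0.1, threshold + rng.normal(0.0, 1.0))
        cand_rate = intake_rate(candidate)
        if cand_rate > rate:
            threshold, rate = candidate, cand_rate
    print(f"learned threshold ~{threshold:.1f} m at intake rate {rate:.3f} energy/s")
    ```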

  7. A cross-sectional population-based study on the association of personality traits with anxiety and psychological stress: Joint modeling of mixed outcomes using shared random effects approach

    PubMed Central

    Feizi, Awat; Keshteli, Ammar Hassanzadeh; Nouri, Fatemeh; Roohafza, Hamidreza; Adibi, Peyman

    2014-01-01

    Background: Previous studies have shown some evidence of a relationship between personality traits, particularly neuroticism and extroversion considered separately, and psychological stress and anxiety. In the current study, we clarified the magnitude of the joint interdependence (co-morbidity) of anxiety (continuous) and psychological stress (dichotomous), as dependent variables of mixed type, with five-factor personality traits as independent variables. Materials and Methods: Data from 3180 participants who took part in the cross-sectional population-based “study on the epidemiology of psychological, alimentary health and nutrition” and completed self-administered questionnaires on demographics and lifestyle, gastrointestinal disorders, personality traits, perceived intensity of stress, social support, and psychological outcomes were analyzed using a shared random-effects approach in the free software R. Results: The results indicated that high scores of neuroticism increase the chance of high psychological stress (odds ratio [OR] = 5.1; P < 0.001) and the anxiety score (B = 1.73; P < 0.001) after adjustment for probable confounders. In contrast, those who had higher scores of extraversion and conscientiousness experienced lower levels of anxiety (B = −0.54 and −0.23, respectively, P < 0.001) and psychological stress (OR = 0.36 and 0.65, respectively, P < 0.001). Furthermore, a higher score of agreeableness had a significant negative relationship with anxiety (B = −0.32, P < 0.001). Conclusion: The present study indicated that the scores of neuroticism, extraversion, agreeableness, and conscientiousness strongly predict both anxiety and psychological stress in the Iranian adult population. Given the likely contribution of genetic and environmental factors to the relationships between personality traits and psychological disorders, longitudinal studies focusing on both genetic and environmental factors in the Iranian population are suggested. PMID:25535497

  8. Genetic threshold hypothesis of neocortical spike-and-wave discharges in the rat: An animal model of petit mal epilepsy

    SciTech Connect

    Vadasz, C.; Fleischer, A.; Carpi, D.; Jando, G.

    1995-02-27

    Neocortical high-voltage spike-and-wave discharges (HVS) in the rat are an animal model of petit mal epilepsy. Genetic analysis of the total duration of HVS (s/12 hr) in reciprocal F1 and F2 hybrids of F344 and BN rats indicated that the phenotypic variability of HVS cannot be explained by a simple, monogenic Mendelian model. Biometrical analysis suggested the presence of additive, dominance, and sex-linked-epistatic effects, a buffering maternal influence, and heterosis. High correlation was observed between the average duration (s/episode) and the frequency of occurrence of spike-and-wave episodes (n/12 hr) in the parental and segregating generations, indicating that common genes affect both the duration and the frequency of the spike-and-wave pattern. We propose that both genetic and developmental-environmental factors control an underlying quantitative variable which, above a certain threshold level, precipitates HVS discharges. These findings, together with the recent availability of rat DNA markers for total genome mapping, pave the way to the identification of genes that control the susceptibility of the brain to spike-and-wave discharges. 67 refs., 3 figs., 5 tabs.
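
    The proposed threshold interpretation is the classical liability-threshold idea: a continuous underlying variable with additive genetic and environmental components, with the HVS phenotype expressed only above a threshold. The sketch below simulates this under an assumed heritability and assumed threshold placements; the numbers are illustrative, not estimates from the F344/BN crosses.

    ```python
    import numpy as np

    # Liability-threshold sketch of the proposed mechanism: a continuous underlying
    # variable with additive genetic and environmental components, with the HVS
    # phenotype expressed only above a threshold. Heritability and threshold
    # placement are assumed for illustration, not estimated from the crosses.

    rng = np.random.default_rng(6)
    n, h2 = 100_000, 0.6                                 # animals, assumed heritability of liability
    genetic = rng.normal(0.0, np.sqrt(h2), n)
    environment = rng.normal(0.0, np.sqrt(1.0 - h2), n)
    liability = genetic + environment                    # standardized liability (variance ~1)

    for threshold in (0.5, 1.0, 1.5):                    # shifting the threshold changes prevalence
        prevalence = np.mean(liability > threshold)
        print(f"threshold {threshold:.1f}: HVS prevalence {prevalence:.3f}")
    ```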

  9. Threshold fatigue and information transfer

    PubMed Central

    Lindner, Benjamin; Longtin, André

    2016-01-01

    Neurons in vivo must process sensory information in the presence of significant noise. It is thus plausible to assume that neural systems have developed mechanisms to reduce this noise. Theoretical studies have shown that threshold fatigue (i.e. cumulative increases in the threshold during repetitive firing) could lead to noise reduction in certain frequency bands, and thus improved signal transmission, as well as noise increases and decreased signal transmission at other frequencies: a phenomenon called noise shaping. There is, however, no experimental evidence that threshold fatigue actually occurs and, if so, that it will actually lead to noise shaping. We analyzed action potential threshold variability in intracellular recordings in vivo from pyramidal neurons in weakly electric fish and found experimental evidence for threshold fatigue: an increase in instantaneous firing rate was on average accompanied by an increase in action potential threshold. We show that, with a minor modification, the standard Hodgkin–Huxley model can reproduce this phenomenon. We next compared the performance of models with and without threshold fatigue. Our results show that threshold fatigue leads to a more regular spike train as well as robustness to intrinsic noise via noise shaping. We finally show that the increased/reduced noise levels due to threshold fatigue correspond to decreased/increased information transmission at different frequencies. PMID:17436067
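
    Threshold fatigue is easy to reproduce qualitatively in a leaky integrate-and-fire caricature: each spike increments the firing threshold, which then relaxes back toward its resting value, and one can compare interspike-interval variability with and without the fatigue term. This is not the modified Hodgkin–Huxley model used in the study; all parameter values below are illustrative.

    ```python
    import numpy as np

    # Leaky integrate-and-fire caricature of threshold fatigue: each spike raises
    # the firing threshold, which then relaxes back toward its resting value.
    # This is not the modified Hodgkin-Huxley model of the study; all parameter
    # values are illustrative.

    def isi_cv(fatigue_jump, dt=0.1, T=20_000.0, tau_m=10.0, tau_th=30.0,
               v_rest=0.0, th_rest=1.0, mu=1.5, sigma=0.3, seed=7):
        """Coefficient of variation of interspike intervals for a given fatigue jump."""
        rng = np.random.default_rng(seed)
        v, th, last_spike, isis = v_rest, th_rest, 0.0, []
        for k in range(int(T / dt)):
            v += dt * (-(v - v_rest) + mu) / tau_m + sigma * np.sqrt(dt / tau_m) * rng.standard_normal()
            th += dt * (th_rest - th) / tau_th            # threshold relaxes back to its resting value
            if v >= th:                                   # spike: record ISI, reset, fatigue the threshold
                t_now = k * dt
                isis.append(t_now - last_spike)
                last_spike = t_now
                v = v_rest
                th += fatigue_jump
        isis = np.array(isis[1:])
        return isis.std() / isis.mean()

    for jump in (0.0, 0.1):
        print(f"threshold fatigue jump = {jump}: ISI CV = {isi_cv(jump):.3f}")
    ```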

  10. Cost-effectiveness of tenofovir gel in urban South Africa: model projections of HIV impact and threshold product prices

    PubMed Central

    2014-01-01

    Background There is urgent need for effective HIV prevention methods that women can initiate. The CAPRISA 004 trial showed that a tenofovir-based vaginal microbicide had a significant impact on HIV incidence among women. This study uses the trial findings to estimate the population-level impact of the gel on HIV and HSV-2 transmission, and the price thresholds at which widespread product introduction would be as cost-effective as male circumcision in urban South Africa. Methods The estimated ‘per sex-act’ HIV and HSV-2 efficacies were imputed from CAPRISA 004. A dynamic HIV/STI transmission model, parameterised and fitted to Gauteng (HIV prevalence of 16.9% in 2008), South Africa, was used to estimate the impact of gel use over 15 years. Uptake was assumed to increase linearly to 30% over 10 years, with gel use in 72% of sex-acts. Full economic programme and averted HIV treatment costs were modelled. The cost per DALY averted is estimated, together with the microbicide price that would make its cost-effectiveness equal to that of male circumcision. Results Using plausible assumptions about product introduction, we predict that tenofovir gel use could lead to a 12.5% and 4.9% reduction in HIV and HSV-2 incidence respectively, by year 15. Microbicide introduction is predicted to be highly cost-effective (under $300 per DALY averted), though the dose price would need to be just $0.12 to be equally cost-effective as male circumcision. A single dose or highly effective (83% HIV efficacy per sex-act) regimen would allow for more realistic threshold prices ($0.25 and $0.33 per dose, respectively). Conclusions These findings show that an effective coitally-dependent microbicide could reduce HIV incidence by 12.5% in this setting, if current condom use is maintained. For microbicides to be in the range of the most cost-effective HIV prevention interventions, product costs will need to decrease substantially. PMID:24405719
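
    The threshold-price calculation itself is simple arithmetic once the transmission model has produced DALYs averted, doses used, and averted treatment costs: compute the incremental cost per DALY averted as a function of the dose price and solve for the price at which it equals a benchmark ratio. The numbers below are hypothetical placeholders, not the study's model outputs.

    ```python
    # Threshold-price arithmetic given transmission-model outputs. All numbers are
    # hypothetical placeholders, not the study's results: DALYs averted, doses
    # used, delivery costs, averted treatment costs, and the benchmark ratio would
    # come from the fitted model and costing data.

    dalys_averted = 120_000.0            # DALYs averted over the 15-year horizon
    doses_used = 180_000_000.0           # gel doses dispensed over the same horizon
    delivery_cost = 25_000_000.0         # fixed programme/delivery costs (USD)
    averted_treatment = 40_000_000.0     # averted HIV treatment costs (USD)
    benchmark_cer = 150.0                # benchmark cost per DALY averted (USD), e.g. circumcision

    def cost_per_daly(dose_price):
        """Net programme cost per DALY averted at a given per-dose price (USD)."""
        net_cost = delivery_cost + dose_price * doses_used - averted_treatment
        return net_cost / dalys_averted

    # Dose price at which cost_per_daly(price) equals the benchmark ratio.
    threshold_price = (benchmark_cer * dalys_averted - delivery_cost + averted_treatment) / doses_used

    print(f"cost per DALY averted at $0.50/dose: ${cost_per_daly(0.50):.0f}")
    print(f"dose price matching the benchmark ratio: ${threshold_price:.2f}")
    ```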

  11. Population-Based Study of Baseline Ethanol Consumption and Risk of Incident Essential Tremor

    PubMed Central

    Louis, Elan D.; Benito-León, Julián; Bermejo-Pareja, Félix

    2009-01-01

    Background Recent postmortem studies have demonstrated pathological changes, including Purkinje cell loss, in the cerebellum in essential tremor (ET). Toxic exposures that compromise cerebellar tissue could lower the threshold for developing ET. Ethanol is a well-established cerebellar toxin, resulting in Purkinje cell loss. Objective To test whether higher baseline ethanol consumption is a risk factor for the subsequent development of incident ET. Methods Lifetime ethanol consumption was assessed at baseline (1994-1995) in a prospective, population-based study in central Spain of 3,285 elderly participants, 76 of whom developed incident ET by follow-up (1997-1998). Results In a Cox proportional hazards model adjusting for cigarette pack-years, depressive symptoms and community, the baseline number of drink-years was marginally associated with higher risk of incident ET (relative risk, RR = 1.003, p = 0.059). In an adjusted Cox model, highest baseline drink-year quartile doubled the risk of incident ET (RR = 2.29, p = 0.018) while other quartiles were associated with more modest elevations in risk (RR3rd quartile = 1.82 [p = 0.10], RR2nd quartile = 1.75 [p = 0.10], RR1st quartile = 1.43 [p = 0.34] vs. non-drinkers [RR = 1.00]). With each higher drink-year quartile, risk of incident ET increased an average of 23% (p = 0.01, test for trend). Conclusions Higher levels of chronic ethanol consumption increased the risk of developing ET. Ethanol is often used for symptomatic relief; studies should explore whether higher consumption levels are a continued source of underlying cerebellar neurotoxicity in patients who already manifest this disease. PMID:19359288

  12. Using observations and a distributed hydrologic model to explore runoff thresholds linked with mesquite encroachment in the Sonoran Desert

    NASA Astrophysics Data System (ADS)

    Pierini, Nicole A.; Vivoni, Enrique R.; Robles-Morua, Agustin; Scott, Russell L.; Nearing, Mark A.

    2014-10-01

    Woody plant encroachment is a worldwide phenomenon whose implications for the hydrologic cycle at the catchment scale are not well understood. In this study, we use observations from two small semiarid watersheds in southern Arizona that have been encroached by the velvet mesquite tree and apply a distributed hydrologic model to explore runoff threshold processes experienced during the North American monsoon. The paired watersheds have similar soil and meteorological conditions, but vary considerably in terms of vegetation cover (mesquite, grass, bare soil) and their proportions, with one basin having undergone mesquite removal in 1974. Long-term observations from the watersheds exhibit changes in runoff production over time, such that the watershed with more woody plants currently has less runoff for small rainfall events, more runoff for larger events, and a larger runoff ratio during the study periods (summers 2011 and 2012). To explain this observation, we first test the distributed model, parameterized with high-resolution (1 m) terrain and vegetation distributions, against continuous data from an environmental sensor network, including an eddy covariance tower, soil moisture and temperature profiles in different vegetation types, and runoff observations. We find good agreement between the model and observations for simultaneous water and energy states and fluxes over a range of measurement scales. We then identify that the areal fraction of grass (bare soil) cover determines the runoff response for small (large) rainfall events due to the dominant controls of antecedent wetness (hydraulic conductivity). These model-derived mechanisms explain how woody plants have differential effects on runoff in semiarid basins depending on precipitation event sizes.

  13. A semi-analytic power balance model for low (L) to high (H) mode transition power threshold

    SciTech Connect

    Singh, R.; Jhang, Hogun; Kaw, P. K.; Diamond, P. H.; Nordman, H.; Bourdelle, C.

    2014-06-15

    We present a semi-analytic model for the low (L) to high (H) mode transition power threshold (P_th). Two main assumptions are made in our study. First, high poloidal mode number drift resistive ballooning modes (high-m DRBM) are assumed to be the dominant turbulence driver in a narrow edge region near the last closed flux surface. Second, the pre-transition edge profile and turbulent diffusivity in the narrow edge region pertain to turbulent equipartition. An edge power balance relation is derived by calculating the power flux dissipated through turbulent conduction and convection, and through radiation in the edge region. P_th is obtained by imposing the turbulence quench rule due to sheared E × B rotation. Evaluation of P_th shows good agreement with experimental results in existing machines. The increase of P_th at low density (i.e., the existence of a roll-over density in P_th vs. density) is shown to originate from the density profile having a longer scale length than the temperature profile.

  14. Lowered threshold energy for femtosecond laser induced optical breakdown in a water based eye model by aberration correction with adaptive optics.

    PubMed

    Hansen, Anja; Géneaux, Romain; Günther, Axel; Krüger, Alexander; Ripken, Tammo

    2013-06-01

    In femtosecond laser ophthalmic surgery tissue dissection is achieved by photodisruption based on laser induced optical breakdown. In order to minimize collateral damage to the eye laser surgery systems should be optimized towards the lowest possible energy threshold for photodisruption. However, optical aberrations of the eye and the laser system distort the irradiance distribution from an ideal profile which causes a rise in breakdown threshold energy even if great care is taken to minimize the aberrations of the system during design and alignment. In this study we used a water chamber with an achromatic focusing lens and a scattering sample as eye model and determined breakdown threshold in single pulse plasma transmission loss measurements. Due to aberrations, the precise lower limit for breakdown threshold irradiance in water is still unknown. Here we show that the threshold energy can be substantially reduced when using adaptive optics to improve the irradiance distribution by spatial beam shaping. We found that for initial aberrations with a root-mean-square wave front error of only one third of the wavelength the threshold energy can still be reduced by a factor of three if the aberrations are corrected to the diffraction limit by adaptive optics. The transmitted pulse energy is reduced by 17% at twice the threshold. Furthermore, the gas bubble motions after breakdown for pulse trains at 5 kilohertz repetition rate show a more transverse direction in the corrected case compared to the more spherical distribution without correction. Our results demonstrate how both applied and transmitted pulse energy could be reduced during ophthalmic surgery when correcting for aberrations. As a consequence, the risk of retinal damage by transmitted energy and the extent of collateral damage to the focal volume could be minimized accordingly when using adaptive optics in fs-laser surgery.

  15. Lowered threshold energy for femtosecond laser induced optical breakdown in a water based eye model by aberration correction with adaptive optics.

    PubMed

    Hansen, Anja; Géneaux, Romain; Günther, Axel; Krüger, Alexander; Ripken, Tammo

    2013-06-01

    In femtosecond laser ophthalmic surgery tissue dissection is achieved by photodisruption based on laser induced optical breakdown. In order to minimize collateral damage to the eye laser surgery systems should be optimized towards the lowest possible energy threshold for photodisruption. However, optical aberrations of the eye and the laser system distort the irradiance distribution from an ideal profile which causes a rise in breakdown threshold energy even if great care is taken to minimize the aberrations of the system during design and alignment. In this study we used a water chamber with an achromatic focusing lens and a scattering sample as eye model and determined breakdown threshold in single pulse plasma transmission loss measurements. Due to aberrations, the precise lower limit for breakdown threshold irradiance in water is still unknown. Here we show that the threshold energy can be substantially reduced when using adaptive optics to improve the irradiance distribution by spatial beam shaping. We found that for initial aberrations with a root-mean-square wave front error of only one third of the wavelength the threshold energy can still be reduced by a factor of three if the aberrations are corrected to the diffraction limit by adaptive optics. The transmitted pulse energy is reduced by 17% at twice the threshold. Furthermore, the gas bubble motions after breakdown for pulse trains at 5 kilohertz repetition rate show a more transverse direction in the corrected case compared to the more spherical distribution without correction. Our results demonstrate how both applied and transmitted pulse energy could be reduced during ophthalmic surgery when correcting for aberrations. As a consequence, the risk of retinal damage by transmitted energy and the extent of collateral damage to the focal volume could be minimized accordingly when using adaptive optics in fs-laser surgery. PMID:23761849

  16. Lowered threshold energy for femtosecond laser induced optical breakdown in a water based eye model by aberration correction with adaptive optics

    PubMed Central

    Hansen, Anja; Géneaux, Romain; Günther, Axel; Krüger, Alexander; Ripken, Tammo

    2013-01-01

    In femtosecond laser ophthalmic surgery tissue dissection is achieved by photodisruption based on laser induced optical breakdown. In order to minimize collateral damage to the eye laser surgery systems should be optimized towards the lowest possible energy threshold for photodisruption. However, optical aberrations of the eye and the laser system distort the irradiance distribution from an ideal profile which causes a rise in breakdown threshold energy even if great care is taken to minimize the aberrations of the system during design and alignment. In this study we used a water chamber with an achromatic focusing lens and a scattering sample as eye model and determined breakdown threshold in single pulse plasma transmission loss measurements. Due to aberrations, the precise lower limit for breakdown threshold irradiance in water is still unknown. Here we show that the threshold energy can be substantially reduced when using adaptive optics to improve the irradiance distribution by spatial beam shaping. We found that for initial aberrations with a root-mean-square wave front error of only one third of the wavelength the threshold energy can still be reduced by a factor of three if the aberrations are corrected to the diffraction limit by adaptive optics. The transmitted pulse energy is reduced by 17% at twice the threshold. Furthermore, the gas bubble motions after breakdown for pulse trains at 5 kilohertz repetition rate show a more transverse direction in the corrected case compared to the more spherical distribution without correction. Our results demonstrate how both applied and transmitted pulse energy could be reduced during ophthalmic surgery when correcting for aberrations. As a consequence, the risk of retinal damage by transmitted energy and the extent of collateral damage to the focal volume could be minimized accordingly when using adaptive optics in fs-laser surgery. PMID:23761849

  17. An Analytical Threshold Voltage Model of Fully Depleted (FD) Recessed-Source/Drain (Re-S/D) SOI MOSFETs with Back-Gate Control

    NASA Astrophysics Data System (ADS)

    Saramekala, Gopi Krishna; Tiwari, Pramod Kumar

    2016-10-01

    This paper presents an analytical threshold voltage model for back-gated fully depleted (FD), recessed-source drain silicon-on-insulator metal-oxide-semiconductor field-effect transistors (MOSFETs). Analytical surface potential models have been developed at front and back surfaces of the channel by solving the two-dimensional (2-D) Poisson's equation in the channel region with appropriate boundary conditions assuming a parabolic potential profile in the transverse direction of the channel. The strong inversion criterion is applied to the front surface potential as well as on the back one in order to find two separate threshold voltages for front and back channels of the device, respectively. The device threshold voltage has been assumed to be associated with the surface that offers a lower threshold voltage. The developed model was analyzed extensively for a variety of device geometry parameters like the oxide and silicon channel thicknesses, the thickness of the source/drain extension in the buried oxide, and the applied bias voltages with back-gate control. The proposed model has been validated by comparing the analytical results with numerical simulation data obtained from ATLAS™, a 2-D device simulator from SILVACO.
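
    As a brief, hedged illustration of the strong-inversion criterion invoked here (the paper's full 2-D surface-potential solution is not reproduced), the sketch below computes the Fermi potential and checks whether a given minimum surface potential has reached twice that value; the doping, temperature, and surface-potential numbers are assumed example values.

```python
import math

# Assumed example values (not taken from the paper)
k_B = 1.380649e-23      # J/K
q   = 1.602176634e-19   # C
T   = 300.0             # K
n_i = 1.0e10            # cm^-3, approximate intrinsic carrier density of Si at 300 K
N_A = 1.0e17            # cm^-3, assumed channel doping

phi_t = k_B * T / q                 # thermal voltage, ~0.026 V
phi_F = phi_t * math.log(N_A / n_i) # Fermi potential of the p-type body

def is_strongly_inverted(phi_s_min):
    """Strong-inversion criterion: minimum surface potential reaches 2*phi_F."""
    return phi_s_min >= 2.0 * phi_F

print(f"phi_F = {phi_F:.3f} V, strong-inversion target = {2 * phi_F:.3f} V")
print(is_strongly_inverted(0.85))   # True for this assumed doping (2*phi_F is ~0.83 V)
```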

  18. An Analytical Threshold Voltage Model of Fully Depleted (FD) Recessed-Source/Drain (Re-S/D) SOI MOSFETs with Back-Gate Control

    NASA Astrophysics Data System (ADS)

    Saramekala, Gopi Krishna; Tiwari, Pramod Kumar

    2016-06-01

    This paper presents an analytical threshold voltage model for back-gated fully depleted (FD), recessed-source drain silicon-on-insulator metal-oxide-semiconductor field-effect transistors (MOSFETs). Analytical surface potential models have been developed at front and back surfaces of the channel by solving the two-dimensional (2-D) Poisson's equation in the channel region with appropriate boundary conditions assuming a parabolic potential profile in the transverse direction of the channel. The strong inversion criterion is applied to the front surface potential as well as on the back one in order to find two separate threshold voltages for front and back channels of the device, respectively. The device threshold voltage has been assumed to be associated with the surface that offers a lower threshold voltage. The developed model was analyzed extensively for a variety of device geometry parameters like the oxide and silicon channel thicknesses, the thickness of the source/drain extension in the buried oxide, and the applied bias voltages with back-gate control. The proposed model has been validated by comparing the analytical results with numerical simulation data obtained from ATLAS™, a 2-D device simulator from SILVACO.

  19. A threshold model for opposing actions of acetylcholine on reward behavior: Molecular mechanisms and implications for treatment of substance abuse disorders.

    PubMed

    Grasing, Kenneth

    2016-10-01

    The cholinergic system plays important roles in both learning and addiction. Medications that modify cholinergic tone can have pronounced effects on behaviors reinforced by natural and drug reinforcers. Importantly, enhancing the action of acetylcholine (ACh) in the nucleus accumbens and ventral tegmental area (VTA) dopamine system can either augment or diminish these behaviors. A threshold model is presented that can explain these seemingly contradictory results. Relatively low levels of ACh rise above a lower threshold, facilitating behaviors supported by drugs or natural reinforcers. Further increases in cholinergic tone that rise above a second upper threshold oppose the same behaviors. Accordingly, cholinesterase inhibitors, or agonists for nicotinic or muscarinic receptors, each have the potential to produce biphasic effects on reward behaviors. Pretreatment with either nicotinic or muscarinic antagonists can block drug- or food-reinforced behavior by maintaining cholinergic tone below its lower threshold. Potential threshold mediators include desensitization of nicotinic receptors and biphasic effects of ACh on the firing of medium spiny neurons. Nicotinic receptors with high- and low-affinity appear to play greater roles in reward enhancement and inhibition, respectively. Cholinergic inhibition of natural and drug rewards may serve as a mediator of previously described opponent processes. Future studies should evaluate cholinergic agents across a broader range of doses, and include a variety of reinforced behaviors. PMID:27316344
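
    A minimal sketch of the two-threshold idea described above; the threshold values and the 0-1 tone scale are arbitrary assumptions used only to make the biphasic mapping concrete.

```python
def reward_modulation(ach_tone, lower=0.3, upper=0.7):
    """Toy biphasic mapping from cholinergic tone (0-1 scale, assumed units)
    to the direction of its effect on reinforced behavior."""
    if ach_tone < lower:
        return "no facilitation (below lower threshold)"
    elif ach_tone <= upper:
        return "facilitation of reinforced behavior"
    else:
        return "opposition of reinforced behavior (above upper threshold)"

for tone in (0.1, 0.5, 0.9):
    print(tone, "->", reward_modulation(tone))
```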

  20. A two-dimensional analytical model for channel potential and threshold voltage of short channel dual material gate lightly doped drain MOSFET

    NASA Astrophysics Data System (ADS)

    Shweta, Tripathi

    2014-11-01

    An analytical model for the channel potential and the threshold voltage of the short channel dual-material-gate lightly doped drain (DMG-LDD) metal-oxide-semiconductor field-effect transistor (MOSFET) is presented using the parabolic approximation method. The proposed model takes into account the effects of the LDD region length, the LDD region doping, the lengths of the gate materials and their respective work functions, along with all the major geometrical parameters of the MOSFET. The impact of the LDD region length, the LDD region doping, and the channel length on the channel potential is studied in detail. Furthermore, the threshold voltage of the device is calculated using the minimum middle channel potential, and the result obtained is compared with the DMG MOSFET threshold voltage to show the improvement in the threshold voltage roll-off. It is shown that the DMG-LDD MOSFET structure alleviates short channel effects (SCEs) and drain induced barrier lowering (DIBL) more effectively. The proposed model is verified by comparing the theoretical results with the simulated data obtained by using the commercially available ATLAS™ 2D device simulator.

  1. Two-dimensional models of threshold voltage and subthreshold current for symmetrical double-material double-gate strained Si MOSFETs

    NASA Astrophysics Data System (ADS)

    Yan-hui, Xin; Sheng, Yuan; Ming-tang, Liu; Hong-xia, Liu; He-cai, Yuan

    2016-03-01

    The two-dimensional models for symmetrical double-material double-gate (DM-DG) strained Si (s-Si) metal-oxide semiconductor field effect transistors (MOSFETs) are presented. The surface potential and the surface electric field expressions have been obtained by solving Poisson’s equation. The models of threshold voltage and subthreshold current are obtained based on the surface potential expression. The surface potential and the surface electric field are compared with those of single-material double-gate (SM-DG) MOSFETs. The effects of different device parameters on the threshold voltage and the subthreshold current are demonstrated. The analytical models give deep insight into the device parameters design. The analytical results obtained from the proposed models show good matching with the simulation results using DESSIS. Project supported by the National Natural Science Foundation of China (Grant Nos. 61376099, 11235008, and 61205003).

  2. A Population-based Habitable Zone Perspective

    NASA Astrophysics Data System (ADS)

    Zsom, Andras

    2015-11-01

    What can we tell about exoplanet habitability if currently only the stellar properties, planet radius, and the incoming stellar flux are known? A planet is in the habitable zone (HZ) if it harbors liquid water on its surface. The HZ is traditionally conceived as a sharp region around stars because it is calculated for one planet with specific properties. Such an approach is limiting because the planet’s atmospheric and geophysical properties, which influence the presence of liquid water on the surface, are currently unknown but expected to be diverse. A statistical HZ description is outlined that does not favor one planet type. Instead, the stellar and planet properties are treated as random variables, and a continuous range of planet scenarios is considered. Various probability density functions are assigned to each random variable, and a combination of Monte Carlo sampling and climate modeling is used to generate synthetic exoplanet populations with known surface climates. Then, the properties of the subpopulation bearing liquid water are analyzed. Given our current observational knowledge, the HZ takes the form of a weakly constrained but smooth probability function. The HZ has an inner edge, but a clear outer edge is not seen. Currently only optimistic upper limits can be derived for the potentially observable HZ occurrence rate. Finally, we illustrate through an example how future data on exoplanet atmospheres will help to narrow down the probability that an exoplanet harbors liquid water, and we identify the greatest observational challenge standing in the way of finding a habitable exoplanet.
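
    The sketch below is a schematic stand-in for the population-based approach: stellar flux, albedo, and a toy greenhouse-warming term are drawn from assumed distributions, a simple energy-balance temperature is computed, and the fraction of draws falling in a 273-373 K surface-temperature window is reported. The priors, the climate relation, and the temperature window are illustrative assumptions, not the study's climate model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Assumed priors for the example
S = rng.uniform(0.2, 2.0, n)             # incident stellar flux, in units of Earth's flux
albedo = rng.uniform(0.1, 0.6, n)        # Bond albedo
greenhouse = rng.uniform(0.0, 80.0, n)   # greenhouse warming in K (toy parameter)

S_earth = 1361.0                         # W m^-2
sigma = 5.670e-8                         # Stefan-Boltzmann constant, W m^-2 K^-4

# Toy climate relation: equilibrium temperature plus an additive greenhouse term
T_eq = ((1 - albedo) * S * S_earth / (4 * sigma)) ** 0.25
T_surf = T_eq + greenhouse

habitable = (T_surf > 273.0) & (T_surf < 373.0)
print(f"fraction of sampled planets with 273-373 K surfaces: {habitable.mean():.2f}")
```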

  3. Assessing interaction thresholds for trichloroethylene in combination with tetrachloroethylene and 1,1,1-trichloroethane using gas uptake studies and PBPK modeling.

    PubMed

    Dobrev, I D; Andersen, M E; Yang, R S

    2001-05-01

    The volatile organic solvents trichloroethylene (TCE), tetrachloroethylene (perchloroethylene, PERC), and 1,1,1-trichloroethane (methylchloroform, MC) are widely distributed environmental pollutants and common contaminants of many chemical waste sites. To investigate the mode of pharmacokinetic interactions among TCE, PERC, and MC and to calculate defined "interaction thresholds", gas-uptake experiments were performed using a closed-chamber exposure system. In each experiment, two rats (Fischer 344, male, 8-9 weeks old) were exposed to different initial concentrations of TCE, PERC, and MC, applied singly or as a mixture, and their concentration in the gas phase of the chamber was monitored over a period of 6 h. A physiologically based pharmacokinetic (PBPK) model was developed to test multiple mechanisms of inhibitory interactions, i.e., competitive, non-competitive, or uncompetitive. All mixture exposure data were accurately described by a system of equations in which a PBPK model was provided for each chemical and each was regarded as an inhibitor of the others' metabolism. Sensitivity-analysis techniques were used to investigate the impact of key parameters on model output and optimize experimental design. Model simulations indicated that, among these three chemicals, the inhibition was competitive. The PBPK model was extended to assess occupationally relevant exposures at or below the current threshold-limit values (TLVs). Based on 10% elevation in TCE blood levels as a criterion for significant interaction and assuming TCE exposure is set at TLV of 50 ppm, the calculated interaction thresholds for PERC and MC were 25 and 135 ppm, respectively. TLV exposures to binary TCE/PERC mixture were below the 10% significance level. The interaction threshold for TCE and MC co-exposure would be reached at 50 and 175 ppm, respectively. Such interactive PBPK models should be of value in risk assessment of occupational and environmental exposure to solvent mixtures.
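
    The competitive-inhibition form of metabolism that the model selection pointed to can be written compactly; the sketch below shows that rate expression for one chemical with the other two acting as competitive inhibitors. All parameter values and units are assumed for illustration and are not the fitted rat PBPK parameters.

```python
def competitive_metabolism_rate(C, Vmax, Km, inhibitors):
    """Rate of metabolism of one chemical when the others act as competitive
    inhibitors: v = Vmax*C / (Km*(1 + sum(Ci/Ki)) + C).
    `inhibitors` is a list of (C_i, K_i) pairs in assumed, consistent units."""
    inhibition = sum(Ci / Ki for Ci, Ki in inhibitors)
    return Vmax * C / (Km * (1.0 + inhibition) + C)

# Example: one chemical metabolized alone vs. with two co-exposures (all values assumed)
v_alone = competitive_metabolism_rate(C=1.0, Vmax=10.0, Km=0.5, inhibitors=[])
v_mix   = competitive_metabolism_rate(C=1.0, Vmax=10.0, Km=0.5,
                                      inhibitors=[(0.8, 0.4), (1.5, 1.0)])
print(f"rate alone: {v_alone:.2f}, rate in mixture: {v_mix:.2f}")
```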

  4. Men's health: a population-based study on social inequalities.

    PubMed

    Bastos, Tássia Fraga; Alves, Maria Cecília Goi Porto; Barros, Marilisa Berti de Azevedo; Cesar, Chester Luiz Galvão

    2012-11-01

    This study evaluates social inequalities in health according to level of schooling in the male population. This was a cross-sectional, population-based study with a sample of 449 men ranging from 20 to 59 years of age and living in Campinas, São Paulo State, Brazil. The chi-square test was used to verify associations, and a Poisson regression model was used to estimate crude and adjusted prevalence ratios. Men with less schooling showed higher rates of alcohol consumption and dependence, smoking, sedentary lifestyle during leisure time, and less healthy eating habits, in addition to higher prevalence of bad or very bad self-rated health, at least one chronic disease, hypertension, and other health problems. No differences were detected between the two schooling strata in terms of use of health services, except for dental services. The findings point to social inequality in health-related behaviors and in some health status indicators. However, possible equity was observed in the use of nearly all types of health services.

  5. Bayesian estimation of dose thresholds

    NASA Technical Reports Server (NTRS)

    Groer, P. G.; Carnes, B. A.

    2003-01-01

    An example is described of Bayesian estimation of radiation absorbed dose thresholds (subsequently simply referred to as dose thresholds) using a specific parametric model applied to a data set on mice exposed to 60Co gamma rays and fission neutrons. A Weibull based relative risk model with a dose threshold parameter was used to analyse, as an example, lung cancer mortality and determine the posterior density for the threshold dose after single exposures to 60Co gamma rays or fission neutrons from the JANUS reactor at Argonne National Laboratory. The data consisted of survival, censoring times and cause of death information for male B6CF1 unexposed and exposed mice. The 60Co gamma whole-body doses for the two exposed groups were 0.86 and 1.37 Gy. The neutron whole-body doses were 0.19 and 0.38 Gy. Marginal posterior densities for the dose thresholds for neutron and gamma radiation were calculated with numerical integration and found to have quite different shapes. The density of the threshold for 60Co is unimodal with a mode at about 0.50 Gy. The threshold density for fission neutrons declines monotonically from a maximum value at zero with increasing doses. The posterior densities for all other parameters were similar for the two radiation types.
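
    As a much-simplified, hedged illustration of grid-based posterior estimation for a dose threshold (not the Weibull relative-risk survival model used in the study), the sketch below assumes a flat prior on the threshold, a toy piecewise-linear dose-response, and invented binomial tumor counts.

```python
import numpy as np
from scipy.stats import binom

# Toy data: dose groups (Gy), animals at risk, tumor deaths (all values assumed)
dose   = np.array([0.0, 0.19, 0.38, 0.86, 1.37])
n_risk = np.array([200, 180, 170, 160, 150])
deaths = np.array([20,  19,   25,  40,  55])

def likelihood(tau, beta=0.3, p0=0.10):
    """Binomial likelihood with background risk p0 below the threshold tau and
    p0 + beta*(d - tau) above it (toy dose-response, assumed for illustration)."""
    p = p0 + beta * np.clip(dose - tau, 0.0, None)
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return np.prod(binom.pmf(deaths, n_risk, p))

taus = np.linspace(0.0, 1.5, 301)
post = np.array([likelihood(t) for t in taus])  # flat prior => posterior proportional to likelihood
post /= post.sum()                              # discrete posterior mass over the grid
print("posterior mode of the threshold:", taus[np.argmax(post)], "Gy")
```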

  6. Using Generalized Additive Modeling to Empirically Identify Thresholds within the ITERS in Relation to Toddlers' Cognitive Development

    ERIC Educational Resources Information Center

    Setodji, Claude Messan; Le, Vi-Nhuan; Schaack, Diana

    2013-01-01

    Research linking high-quality child care programs and children's cognitive development has contributed to the growing popularity of child care quality benchmarking efforts such as quality rating and improvement systems (QRIS). Consequently, there has been an increased interest in and a need for approaches to identifying thresholds, or cutpoints,…

  7. Experimental and Finite Element Modeling of Near-Threshold Fatigue Crack Growth for the K-Decreasing Test Method

    NASA Technical Reports Server (NTRS)

    Smith, Stephen W.; Seshadri, Banavara R.; Newman, John A.

    2015-01-01

    The experimental methods to determine near-threshold fatigue crack growth rate data are prescribed in ASTM standard E647. To produce near-threshold data at a constant stress ratio (R), the applied stress-intensity factor (K) is decreased as the crack grows based on a specified K-gradient. Consequently, as the fatigue crack growth rate threshold is approached and the crack tip opening displacement decreases, remote crack wake contact may occur due to the plastically deformed crack wake surfaces and shield the growing crack tip, resulting in a reduced crack tip driving force and non-representative crack growth rate data. If such data are used in the life assessment of a component, the evaluation could yield highly non-conservative predictions. Although this anomalous behavior has been shown to be affected by K-gradient, starting K level, residual stresses, environmentally assisted cracking, specimen geometry, and material type, the specifications within the standard to avoid this effect are limited to a maximum fatigue crack growth rate and a suggestion for the K-gradient value. This paper provides parallel experimental and computational simulations for the K-decreasing method for two materials (an aluminum alloy, AA 2024-T3 and a titanium alloy, Ti 6-2-2-2-2) to aid in establishing a clear understanding of appropriate testing requirements. These simulations investigate the effect of K-gradient, the maximum value of stress-intensity factor applied, and material type. A material-independent term is developed to guide in the selection of appropriate test conditions for most engineering alloys. With the use of such a term, near-threshold fatigue crack growth rate tests can be performed at accelerated rates, near-threshold data can be acquired in days instead of weeks without having to establish testing criteria through trial and error, and these data can be acquired for most engineering materials, even those that are produced in relatively small product forms.
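
    For reference, the applied stress-intensity schedule in a K-decreasing test follows the exponential form recommended in ASTM E647, K = K0·exp[C·(a − a0)], with C the normalized K-gradient (negative for load shedding). The sketch below evaluates that schedule for assumed example values.

```python
import numpy as np

K0 = 10.0      # MPa*sqrt(m), starting stress-intensity factor range (assumed)
a0 = 12.0e-3   # m, starting crack length (assumed)
C  = -80.0     # 1/m, normalized K-gradient C = (1/K) dK/da (assumed; negative = shedding)

a = np.linspace(a0, a0 + 10.0e-3, 6)   # crack lengths reached during the test
K = K0 * np.exp(C * (a - a0))          # applied Delta-K schedule

for ai, Ki in zip(a, K):
    print(f"a = {ai * 1e3:6.2f} mm   Delta-K = {Ki:5.2f} MPa*sqrt(m)")
```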

  8. CARA Risk Assessment Thresholds

    NASA Technical Reports Server (NTRS)

    Hejduk, M. D.

    2016-01-01

    Warning remediation threshold (Red threshold): Pc level at which warnings are issued, and active remediation considered and usually executed. Analysis threshold (Green to Yellow threshold): Pc level at which analysis of event is indicated, including seeking additional information if warranted. Post-remediation threshold: Pc level to which remediation maneuvers are sized in order to achieve event remediation and obviate any need for immediate follow-up maneuvers. Maneuver screening threshold: Pc compliance level for routine maneuver screenings (more demanding than regular Red threshold due to additional maneuver uncertainty).

  9. Evaluation of the most suitable threshold value for modelling snow glacier melt through T- index approach: the case study of Forni Glacier (Italian Alps)

    NASA Astrophysics Data System (ADS)

    Senese, Antonella; Maugeri, Maurizio; Vuillermoz, Elisa; Smiraglia, Claudio; Diolaiuti, Guglielmina

    2014-05-01

    Glacier melt occurs whenever the surface temperature is at the melting point (273.15 K) and the net energy budget is positive. These conditions can be assessed by analyzing meteorological and energy data acquired by a supraglacial Automatic Weather Station (AWS). If such a station is not present at the glacier surface, the assessment of actual melting conditions and the evaluation of the melt amount are difficult, and degree-day (also named T-index) models are applied instead. These approaches require the choice of a correct temperature threshold. In fact, melt does not necessarily occur at daily air temperatures higher than 273.15 K, since it is determined by the energy budget which in turn is only indirectly affected by air temperature. This is the case of the late spring period when ablation processes start at the glacier surface thus progressively reducing snow thickness. In this study, to detect the air temperature threshold most indicative of melt conditions in the April-June period, we analyzed air temperature data recorded from 2006 to 2012 by a supraglacial AWS (at 2631 m a.s.l.) on the ablation tongue of the Forni Glacier (Italy), and by a weather station located near the studied glacier (at Bormio, 1225 m a.s.l.). Moreover we evaluated the glacier energy budget (which gives the actual melt, Senese et al., 2012) and the snow water equivalent values during this time-frame. Then the ablation amount was estimated both from the surface energy balance (MEB from supraglacial AWS data) and from the degree-day method (MT-INDEX, in this latter case applying the mean tropospheric lapse rate to temperature data acquired at Bormio changing the air temperature threshold) and the results were compared. We found that the mean tropospheric lapse rate permits a good and reliable reconstruction of daily glacier air temperature conditions and the major uncertainty in the computation of snow melt from degree-day models is driven by the choice of an appropriate air temperature threshold. Then
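
    A minimal sketch of the T-index (degree-day) computation described above: valley-station temperatures are extrapolated to the AWS elevation with the mean tropospheric lapse rate, and melt is accumulated only on days when the extrapolated temperature exceeds the chosen threshold. The degree-day factor and the daily temperatures are assumed example values, not the Forni Glacier calibration.

```python
# Assumed example values, not the Forni Glacier calibration
LAPSE_RATE = -0.0065                 # K per m (mean tropospheric lapse rate)
Z_STATION, Z_AWS = 1225.0, 2631.0    # m a.s.l. (Bormio station, supraglacial AWS)
DDF = 4.5                            # mm w.e. per K per day (assumed degree-day factor)

def degree_day_melt(daily_t_station_K, t_threshold_K=273.15):
    """Cumulative melt (mm w.e.) from daily station temperatures using a T-index model."""
    melt = 0.0
    for t_station in daily_t_station_K:
        t_glacier = t_station + LAPSE_RATE * (Z_AWS - Z_STATION)  # extrapolate to the AWS
        melt += DDF * max(t_glacier - t_threshold_K, 0.0)
    return melt

spring_days = [281.0, 279.5, 283.2, 284.0, 278.0]   # toy daily mean temperatures at Bormio (K)
print(degree_day_melt(spring_days, t_threshold_K=273.15), "mm w.e.")
print(degree_day_melt(spring_days, t_threshold_K=271.15), "mm w.e.")   # lower threshold, more melt
```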

  10. Use of threshold-specific energy model for the prediction of effects of smoking and radon exposure on the risk of lung cancer.

    PubMed

    Böhm, R; Sedlák, A; Bulko, M; Holý, K

    2014-07-01

    Lung cancer is the leading cause of cancer death in both men and women. Smoking causes 80-90% of cases of lung cancer. In this study, an attempt was made to assess the impact of cigarette smoking on the risk of lung cancer by the so-called threshold-specific energy model. This model makes it possible to analyse the biological effects of radon daughter products on lung tissue, and is based on the assumption that the biological effect (i.e. cell inactivation) will manifest itself after the threshold-specific energy z0 deposited in the sensitive volume of the cell is exceeded. Cigarette smoking causes, among other effects, an increase in the synthesis of the survivin protein that protects cells from apoptosis and thereby reduces their radiosensitivity. Based on these facts, an attempt was made to estimate the shape of the curves describing the increase in the oncological effect of radiation as a function of daily cigarette consumption.

  11. Use of threshold-specific energy model for the prediction of effects of smoking and radon exposure on the risk of lung cancer.

    PubMed

    Böhm, R; Sedlák, A; Bulko, M; Holý, K

    2014-07-01

    Lung cancer is the leading cause of cancer death in both men and women. Smoking causes 80-90% of cases of lung cancer. In this study, an attempt was made to assess the impact of cigarette smoking on the risk of lung cancer by the so-called threshold-specific energy model. This model makes it possible to analyse the biological effects of radon daughter products on lung tissue, and is based on the assumption that the biological effect (i.e. cell inactivation) will manifest itself after the threshold-specific energy z0 deposited in the sensitive volume of the cell is exceeded. Cigarette smoking causes, among other effects, an increase in the synthesis of the survivin protein that protects cells from apoptosis and thereby reduces their radiosensitivity. Based on these facts, an attempt was made to estimate the shape of the curves describing the increase in the oncological effect of radiation as a function of daily cigarette consumption. PMID:24711526

  12. Guiding principles and checklist for population-based quality metrics.

    PubMed

    Krishnan, Mahesh; Brunelli, Steven M; Maddux, Franklin W; Parker, Thomas F; Johnson, Douglas; Nissenson, Allen R; Collins, Allan; Lacson, Eduardo

    2014-06-01

    The Centers for Medicare and Medicaid Services oversees the ESRD Quality Incentive Program to ensure that the highest quality of health care is provided by outpatient dialysis facilities that treat patients with ESRD. To that end, the Centers for Medicare and Medicaid Services uses clinical performance measures to evaluate quality of care under a pay-for-performance or value-based purchasing model. Now more than ever, the ESRD therapeutic area serves as the vanguard of health care delivery. By translating medical evidence into clinical performance measures, the ESRD Prospective Payment System became the first disease-specific sector using the pay-for-performance model. A major challenge for the creation and implementation of clinical performance measures is the adjustments that are necessary to transition from taking care of individual patients to managing the care of patient populations. The National Quality Forum and others have developed effective and appropriate population-based clinical performance measures (quality metrics) that can be aggregated at the physician, hospital, dialysis facility, nursing home, or surgery center level. Clinical performance measures considered for endorsement by the National Quality Forum are evaluated using five key criteria: evidence, performance gap, and priority (impact); reliability; validity; feasibility; and usability and use. We have developed a checklist of special considerations for clinical performance measure development according to these National Quality Forum criteria. Although the checklist is focused on ESRD, it could also have broad application to chronic disease states, where health care delivery organizations seek to enhance quality, safety, and efficiency of their services. Clinical performance measures are likely to become the norm for tracking performance for health care insurers. Thus, it is critical that the methodologies used to develop such metrics serve the payer and the provider and most importantly, reflect

  13. Coloring geographical threshold graphs

    SciTech Connect

    Bradonjic, Milan; Percus, Allon; Muller, Tobias

    2008-01-01

    We propose a coloring algorithm for sparse random graphs generated by the geographical threshold graph (GTG) model, a generalization of random geometric graphs (RGG). In a GTG, nodes are distributed in a Euclidean space, and edges are assigned according to a threshold function involving the distance between nodes as well as randomly chosen node weights. The motivation for analyzing this model is that many real networks (e.g., wireless networks, the Internet, etc.) need to be studied by using a 'richer' stochastic model (which in this case includes both a distance between nodes and weights on the nodes). Here, we analyze the GTG coloring algorithm together with the graph's clique number, showing formally that in spite of the differences in structure between GTG and RGG, the asymptotic behavior of the chromatic number is identical: χ = (1 + o(1)) ln n / ln ln n. Finally, we consider the leading corrections to this expression, again using the coloring algorithm and clique number to provide bounds on the chromatic number. We show that the gap between the lower and upper bound is within C ln n / (ln ln n)², and specify the constant C.
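
    The sketch below builds a small geographical threshold graph under one common formulation of the connection rule (connect i and j when (w_i + w_j)·r_ij^(−α) ≥ θ; the paper's exact threshold function may differ) and colors it greedily with networkx, so the number of colors used can be compared with the ln n / ln ln n scaling.

```python
import math
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
n, alpha, theta = 300, 2.0, 400.0          # assumed parameters for a sparse example
pos = rng.uniform(0.0, 1.0, size=(n, 2))   # node positions in the unit square
w = rng.exponential(1.0, size=n)           # random node weights

G = nx.Graph()
G.add_nodes_from(range(n))
for i in range(n):
    for j in range(i + 1, n):
        r = np.linalg.norm(pos[i] - pos[j])
        # GTG connection rule (one common form): threshold on (w_i + w_j) * r^(-alpha)
        if r > 0 and (w[i] + w[j]) * r ** (-alpha) >= theta:
            G.add_edge(i, j)

coloring = nx.coloring.greedy_color(G, strategy="largest_first")
colors_used = max(coloring.values()) + 1
print("greedy coloring uses", colors_used, "colors")
print("ln n / ln ln n =", math.log(n) / math.log(math.log(n)))
```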

  14. Threshold quantum cryptography

    SciTech Connect

    Tokunaga, Yuuki; Okamoto, Tatsuaki; Imoto, Nobuyuki

    2005-01-01

    We present the concept of threshold collaborative unitary transformation or threshold quantum cryptography, which is a kind of quantum version of threshold cryptography. Threshold quantum cryptography states that classical shared secrets are distributed to several parties and a subset of them, whose number is greater than a threshold, collaborates to compute a quantum cryptographic function, while keeping each share secretly inside each party. The shared secrets are reusable if no cheating is detected. As a concrete example of this concept, we show a distributed protocol (with threshold) of conjugate coding.

  15. Disease management to population-based health: steps in the right direction?

    PubMed

    Sprague, Lisa

    2003-05-16

    This issue brief reviews the evolution of the disease management model and the ways it relates to care coordination and case management approaches. It also looks at examples of population-based disease management programs operating in both the private and public sectors and reviews the evidence of their success. Finally, the paper considers the policy implications of adapting this model to a Medicare fee-for-service population.

  16. Young adults' trajectories of Ecstasy use: a population based study.

    PubMed

    Smirnov, Andrew; Najman, Jake M; Hayatbakhsh, Reza; Plotnikova, Maria; Wells, Helene; Legosz, Margot; Kemp, Robert

    2013-11-01

    Young adults' Ecstasy use trajectories have important implications for individual and population-level consequences of Ecstasy use, but little relevant research has been conducted. This study prospectively examines Ecstasy trajectories in a population-based sample. Data are from the Natural History Study of Drug Use, a retrospective/prospective cohort study conducted in Australia. Population screening identified a probability sample of Ecstasy users aged 19-23 years. Complete data for 30 months of follow-up, comprising 4 time intervals, were available for 297 participants (88.4% of sample). Trajectories were derived using cluster analysis based on recent Ecstasy use at each interval. Trajectory predictors were examined using a generalized ordered logit model and included Ecstasy dependence (World Mental Health Composite International Diagnostic Instrument), psychological distress (Hospital Anxiety Depression Scale), aggression (Young Adult Self Report) and contextual factors (e.g. attendance at electronic/dance music events). Three Ecstasy trajectories were identified (low, intermediate and high use). At its peak, the high-use trajectory involved 1-2 days Ecstasy use per week. Decreasing frequency of use was observed for intermediate and high-use trajectories from 12 months, independently of market factors. Intermediate and high-use trajectory membership was predicted by past Ecstasy consumption (>70 pills) and attendance at electronic/dance music events. High-use trajectory members were unlikely to have used Ecstasy for more than 3 years and tended to report consistently positive subjective effects at baseline. Given the social context and temporal course of Ecstasy use, Ecstasy trajectories might be better understood in terms of instrumental rather than addictive drug use patterns.
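
    As a rough illustration of deriving use trajectories by clustering repeated-measures frequencies (the specific clustering algorithm used in the study is not reproduced here, so k-means serves as a stand-in), the sketch below groups toy four-interval use frequencies into three trajectory clusters.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Toy data: days of use per month at 4 follow-up intervals for 300 invented participants
low  = rng.poisson(0.3, size=(150, 4))
mid  = rng.poisson(1.5, size=(100, 4))
high = np.clip(rng.poisson(6.0, size=(50, 4)) - np.arange(4), 0, None)  # declining heavy use
X = np.vstack([low, mid, high]).astype(float)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
for label in range(3):
    profile = X[km.labels_ == label].mean(axis=0)
    print(f"trajectory {label}: mean use per interval = {np.round(profile, 1)}")
```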

  17. Increasing incidence of cataract surgery: Population-based study

    PubMed Central

    Gollogly, Heidrun E.; Hodge, David O.; St. Sauver, Jennifer L.; Erie, Jay C.

    2015-01-01

    PURPOSE To estimate the incidence of cataract surgery in a defined population and to determine longitudinal cataract surgery patterns. SETTING Mayo Clinic, Rochester, Minnesota, USA. DESIGN Cohort study. METHODS Rochester Epidemiology Project (REP) databases were used to identify all incident cataract surgeries in Olmsted County, Minnesota, between January 1, 2005, and December 31, 2011. Age-specific and sex-specific incidence rates were calculated and adjusted to the 2010 United States white population. Data were merged with previous REP data (1980 to 2004) to assess temporal trends in cataract surgery. Change in the incidence over time was assessed by fitting generalized linear models assuming a Poisson error structure. The probability of second-eye cataract surgery was calculated using the Kaplan-Meier method. RESULTS Included were 8012 cataract surgeries from 2005 through 2011. During this time, incident cataract surgery significantly increased (P < .001), peaking in 2011 with a rate of 1100 per 100 000 (95% confidence interval, 1050–1160). The probability of second-eye surgery 3, 12, and 24 months after first-eye surgery was 60%, 76%, and 86%, respectively, a significant increase compared with the same intervals in the previous 7 years (1998 to 2004) (P < .001). When merged with 1980 to 2004 REP data, incident cataract surgery steadily increased over the past 3 decades (P < .001). CONCLUSION Incident cataract surgery steadily increased over the past 32 years and has not leveled off, as reported in Swedish population-based series. Second-eye surgery was performed sooner and more frequently, with 60% of residents having second-eye surgery within 3 months of first-eye surgery. PMID:23820302
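
    A small self-contained sketch of the Kaplan-Meier product-limit estimate used for the probability of second-eye surgery; the follow-up times and censoring indicators below are invented toy data, not the Olmsted County records.

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit survival estimate.
    times: months from first-eye surgery to second-eye surgery or censoring;
    events: 1 if second-eye surgery was observed, 0 if censored."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    survival, curve = 1.0, []
    for t in np.unique(times):
        d = events[times == t].sum()     # second-eye surgeries at time t
        n = (times >= t).sum()           # patients still at risk just before t
        if d > 0:
            survival *= 1.0 - d / n
        curve.append((t, survival))
    return curve

t = [1, 2, 3, 3, 5, 8, 12, 14, 20, 24]   # toy follow-up times in months
e = [1, 1, 1, 0, 1, 1, 1,  0,  1,  0]    # 1 = second-eye surgery observed, 0 = censored
for month, surv in kaplan_meier(t, e):
    print(f"{month:4.0f} months: cumulative probability of second-eye surgery = {1 - surv:.2f}")
```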

  18. Young adults' trajectories of Ecstasy use: a population based study.

    PubMed

    Smirnov, Andrew; Najman, Jake M; Hayatbakhsh, Reza; Plotnikova, Maria; Wells, Helene; Legosz, Margot; Kemp, Robert

    2013-11-01

    Young adults' Ecstasy use trajectories have important implications for individual and population-level consequences of Ecstasy use, but little relevant research has been conducted. This study prospectively examines Ecstasy trajectories in a population-based sample. Data are from the Natural History Study of Drug Use, a retrospective/prospective cohort study conducted in Australia. Population screening identified a probability sample of Ecstasy users aged 19-23 years. Complete data for 30 months of follow-up, comprising 4 time intervals, were available for 297 participants (88.4% of sample). Trajectories were derived using cluster analysis based on recent Ecstasy use at each interval. Trajectory predictors were examined using a generalized ordered logit model and included Ecstasy dependence (World Mental Health Composite International Diagnostic Instrument), psychological distress (Hospital Anxiety Depression Scale), aggression (Young Adult Self Report) and contextual factors (e.g. attendance at electronic/dance music events). Three Ecstasy trajectories were identified (low, intermediate and high use). At its peak, the high-use trajectory involved 1-2 days Ecstasy use per week. Decreasing frequency of use was observed for intermediate and high-use trajectories from 12 months, independently of market factors. Intermediate and high-use trajectory membership was predicted by past Ecstasy consumption (>70 pills) and attendance at electronic/dance music events. High-use trajectory members were unlikely to have used Ecstasy for more than 3 years and tended to report consistently positive subjective effects at baseline. Given the social context and temporal course of Ecstasy use, Ecstasy trajectories might be better understood in terms of instrumental rather than addictive drug use patterns. PMID:23899430

  19. Genetic analysis of the rates of conception using a longitudinal threshold model with random regression in dairy crossbreeding within a tropical environment.

    PubMed

    Buaban, Sayan; Kuchida, Keigo; Suzuki, Mitsuyoshi; Masuda, Yutaka; Boonkum, Wuttigrai; Duangjinda, Monchai

    2016-08-01

    This study was designed to: (i) estimate genetic parameters and breeding values for conception rates (CR) using the repeatability threshold model (RP-THM) and random regression threshold models (RR-THM); and (ii) compare covariance functions for modeling the additive genetic (AG) and permanent environmental (PE) effects in the RR-THM. The CR was defined as the outcome of an insemination. A data set of 130 592 first-lactation insemination records of 55 789 Thai dairy cows, calving between 1996 and 2011, was used in the analyses. All models included fixed effects of year × month of insemination, breed × day in milk to insemination class and age at calving. The random effects consisted of herd × year interaction, service sire, PE, AG and residual. Variance components were estimated using a Bayesian method via Gibbs sampling. Heritability estimates of CR ranged from 0.032 to 0.067, 0.037 to 0.165 and 0.045 to 0.218 for the RR-THM with second-, third- and fourth-order Legendre polynomials, respectively. The heritability estimated from the RP-THM was 0.056. Model comparisons based on goodness of fit, predictive ability, predicted service results of animals, and the pattern of genetic parameter estimates indicated that the model that best fit the outcome of insemination was the RR-THM with two regression coefficients. PMID:26556694

  20. Big Data for Population-Based Cancer Research

    PubMed Central

    Meyer, Anne-Marie; Olshan, Andrew F.; Green, Laura; Meyer, Adrian; Wheeler, Stephanie B.; Basch, Ethan; Carpenter, William R.

    2016-01-01

    The Integrated Cancer Information and Surveillance System (ICISS) facilitates population-based cancer research by developing extensive information technology systems that can link and manage large data sets. Taking an interdisciplinary “team science” approach, ICISS has developed data, systems, and methods that allow researchers to better leverage the power of big data to improve population health. PMID:25046092

  1. Threshold Concepts in Biochemistry

    ERIC Educational Resources Information Center

    Loertscher, Jennifer

    2011-01-01

    Threshold concepts can be identified for any discipline and provide a framework for linking student learning to curricular design. Threshold concepts represent a transformed understanding of a discipline, without which the learner cannot progress and are therefore pivotal in learning in a discipline. Although threshold concepts have been…

  2. Spectral CT modeling and reconstruction with hybrid detectors in dynamic-threshold-based counting and integrating modes.

    PubMed

    Li, Liang; Chen, Zhiqiang; Cong, Wenxiang; Wang, Ge

    2015-03-01

    Spectral CT with photon counting detectors can significantly improve CT performance by reducing image noise and dose, increasing contrast resolution and material specificity, as well as enabling functional and molecular imaging with existing and emerging probes. However, with the current photon counting detector architecture it is difficult to balance the number of energy bins against the statistical noise in each energy bin. Moreover, hardware support for multiple energy bins demands a complex and expensive circuit. In this paper, we promote a new scheme known as hybrid detectors that combine the dynamic-threshold-based counting and integrating modes. In this scheme, an energy threshold can be dynamically changed during a spectral CT scan, which can be considered as compressive sensing along the spectral dimension. By doing so, the number of energy bins can be retrospectively specified, even in a spatially varying fashion. To establish the feasibility and merits of such hybrid detectors, we develop a tensor-based PRISM algorithm to reconstruct a spectral CT image from dynamic dual-energy data, and perform experiments with simulated and real data, producing very promising results.

  3. Numerical investigation of a coupled moving boundary model of radial flow in low-permeable stress-sensitive reservoir with threshold pressure gradient

    NASA Astrophysics Data System (ADS)

    Wen-Chao, Liu; Yue-Wu, Liu; Cong-Cong, Niu; Guo-Feng, Han; Yi-Zhao, Wan

    2016-02-01

    The threshold pressure gradient and the formation stress-sensitive effect, two prominent physical phenomena in the development of low-permeability reservoirs, are both considered here in building a new coupled moving boundary model of radial flow in a porous medium. Moreover, the wellbore storage and skin effect are both incorporated into the inner boundary conditions in the model. The new coupled moving boundary model is strongly nonlinear. A coordinate-transformation-based fully implicit finite difference method is adopted to obtain its numerical solutions. The coordinate transformation maps the moving flow region of the model onto a fixed region, the unit circle, which is very convenient for computing the model with the finite difference method on fixed spatial grids. The validity of the approach is verified by comparison with a numerical solution obtained by a different numerical method in the existing literature. Finally, the effects of the permeability modulus, threshold pressure gradient, wellbore storage coefficient, and skin factor on the transient wellbore pressure, its derivative, and the formation pressure distribution are analyzed. Project supported by the National Natural Science Foundation of China (Grant No. 51404232), the China Postdoctoral Science Foundation (Grant No. 2014M561074), and the National Science and Technology Major Project, China (Grant No. 2011ZX05038003).

  4. Face verification with balanced thresholds.

    PubMed

    Yan, Shuicheng; Xu, Dong; Tang, Xiaoou

    2007-01-01

    The process of face verification is guided by a pre-learned global threshold, which, however, is often inconsistent with class-specific optimal thresholds. It is, hence, beneficial to pursue a balance of the class-specific thresholds in the model-learning stage. In this paper, we present a new dimensionality reduction algorithm tailored to the verification task that ensures threshold balance. This is achieved by the following aspects. First, feasibility is guaranteed by employing an affine transformation matrix, instead of the conventional projection matrix, for dimensionality reduction, and, hence, we call the proposed algorithm threshold balanced transformation (TBT). Then, the affine transformation matrix, constrained as the product of an orthogonal matrix and a diagonal matrix, is optimized to improve the threshold balance and classification capability in an iterative manner. Unlike most algorithms for face verification which are directly transplanted from face identification literature, TBT is specifically designed for face verification and clarifies the intrinsic distinction between these two tasks. Experiments on three benchmark face databases demonstrate that TBT significantly outperforms the state-of-the-art subspace techniques for face verification.

  5. The contribution of chromosomal abnormalities to congenital heart defects: a population-based study.

    PubMed

    Hartman, Robert J; Rasmussen, Sonja A; Botto, Lorenzo D; Riehle-Colarusso, Tiffany; Martin, Christa L; Cragan, Janet D; Shin, Mikyong; Correa, Adolfo

    2011-12-01

    We aimed to assess the frequency of chromosomal abnormalities among infants with congenital heart defects (CHDs) in an analysis of population-based surveillance data. We reviewed data from the Metropolitan Atlanta Congenital Defects Program, a population-based birth-defects surveillance system, to assess the frequency of chromosomal abnormalities among live-born infants and fetal deaths with CHDs delivered from January 1, 1994, to December 31, 2005. Among 4430 infants with CHDs, 547 (12.3%) had a chromosomal abnormality. CHDs most likely to be associated with a chromosomal abnormality were interrupted aortic arch (type B and not otherwise specified; 69.2%), atrioventricular septal defect (67.2%), and double-outlet right ventricle (33.3%). The most common chromosomal abnormalities observed were trisomy 21 (52.8%), trisomy 18 (12.8%), 22q11.2 deletion (12.2%), and trisomy 13 (5.7%). In conclusion, in our study, approximately 1 in 8 infants with a CHD had a chromosomal abnormality. Clinicians should have a low threshold at which to obtain testing for chromosomal abnormalities in infants with CHDs, especially those with certain types of CHDs. Use of new technologies that have become recently available (e.g., chromosomal microarray) may increase the identified contribution of chromosomal abnormalities even further.

  6. The contribution of chromosomal abnormalities to congenital heart defects: a population-based study.

    PubMed

    Hartman, Robert J; Rasmussen, Sonja A; Botto, Lorenzo D; Riehle-Colarusso, Tiffany; Martin, Christa L; Cragan, Janet D; Shin, Mikyong; Correa, Adolfo

    2011-12-01

    We aimed to assess the frequency of chromosomal abnormalities among infants with congenital heart defects (CHDs) in an analysis of population-based surveillance data. We reviewed data from the Metropolitan Atlanta Congenital Defects Program, a population-based birth-defects surveillance system, to assess the frequency of chromosomal abnormalities among live-born infants and fetal deaths with CHDs delivered from January 1, 1994, to December 31, 2005. Among 4430 infants with CHDs, 547 (12.3%) had a chromosomal abnormality. CHDs most likely to be associated with a chromosomal abnormality were interrupted aortic arch (type B and not otherwise specified; 69.2%), atrioventricular septal defect (67.2%), and double-outlet right ventricle (33.3%). The most common chromosomal abnormalities observed were trisomy 21 (52.8%), trisomy 18 (12.8%), 22q11.2 deletion (12.2%), and trisomy 13 (5.7%). In conclusion, in our study, approximately 1 in 8 infants with a CHD had a chromosomal abnormality. Clinicians should have a low threshold at which to obtain testing for chromosomal abnormalities in infants with CHDs, especially those with certain types of CHDs. Use of new technologies that have become recently available (e.g., chromosomal microarray) may increase the identified contribution of chromosomal abnormalities even further. PMID:21728077

  7. Analytic modeling of potential and threshold voltage for short-channel thin-body fully depleted silicon-on-insulator MOSFETs with a vertical Gaussian doping profile

    NASA Astrophysics Data System (ADS)

    Wei, Sufen; Zhang, Guohe; Shao, Zhibiao; Huang, Huixiang; Geng, Li

    2016-10-01

    We verify that a one-dimensional (1D) Gaussian expression is an appropriate approximation of the vertical doping profile, which is obtained by combining perpendicular ion implantation and rapid thermal annealing (RTA), for short-channel thin-body (20-30 nm) fully depleted (FD) silicon-on-insulator (SOI) MOSFETs. The two-dimensional (2D) potential distribution of the silicon film is derived by adopting the evanescent mode analysis method, in which the potential function is broken into 1D long-channel and 2D short-channel potentials. The threshold voltage model is represented by the minimum front- and back-surface potentials of the silicon film. The application of the threshold voltage model can be extended to a 12 nm channel length. The results obtained using the models match well with the 2D numerical simulation results obtained using the Synopsys Sentaurus Device™. They provide a feasible way of developing new 2D models for nonuniform nanoscale thin-body FD-SOI devices.
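
    A brief sketch of the vertical Gaussian doping approximation referred to above; the peak concentration, projected range, and straggle are assumed example values rather than the paper's implantation and annealing conditions.

```python
import numpy as np

def gaussian_doping(x_nm, N_peak=1e18, Rp_nm=10.0, sigma_nm=6.0):
    """1D vertical Gaussian doping profile N(x) in cm^-3;
    x_nm is depth from the front interface in nm (all values assumed for illustration)."""
    return N_peak * np.exp(-(x_nm - Rp_nm) ** 2 / (2.0 * sigma_nm ** 2))

depth = np.linspace(0.0, 30.0, 7)   # thin-body film of ~30 nm
for x, N in zip(depth, gaussian_doping(depth)):
    print(f"x = {x:5.1f} nm   N_A = {N:.2e} cm^-3")
```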

  8. Convergence between DSM-IV-TR and DSM-5 diagnostic models for personality disorder: evaluation of strategies for establishing diagnostic thresholds.

    PubMed

    Morey, Leslie C; Skodol, Andrew E

    2013-05-01

    The Personality and Personality Disorders Work Group for the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) recommended substantial revisions to the personality disorders (PDs) section of DSM-IV-TR, proposing a hybrid categorical-dimensional model that represented PDs as combinations of core personality dysfunctions and various configurations of maladaptive personality traits. Although the DSM-5 Task Force endorsed the proposal, the Board of Trustees of the American Psychiatric Association (APA) did not, placing the Work Group's model in DSM-5 Section III ("Emerging Measures and Models") with other concepts thought to be in need of additional research. This paper documents the impact of using this alternative model in a national sample of 337 patients as described by clinicians familiar with their cases. In particular, the analyses focus on alternative strategies considered by the Work Group for deriving decision rules, or diagnostic thresholds, with which to assign categorical diagnoses. Results demonstrate that diagnostic rules could be derived that yielded appreciable correspondence between DSM-IV-TR and proposed DSM-5 PD diagnoses, a correspondence greater than that observed in the transition between DSM-III and DSM-III-R PDs. The approach also represents the most comprehensive attempt to date to provide conceptual and empirical justification for diagnostic thresholds utilized within the DSM PDs.

  9. A compact quasi 3D threshold voltage modeling and performance analysis of a novel linearly graded binary metal alloy quadruple gate MOSFET for subdued short channel effects

    NASA Astrophysics Data System (ADS)

    Sarkhel, Saheli; Sarkar, Subir Kumar

    2015-06-01

    In the present era of low power devices, to keep pace with the aggressive scaling demands, the concept of surrounding gate MOS geometry is gradually being popular among the researchers for enhancing the performance of nanoscale MOSFETs due to the inherent benefit of the gate-all-around geometry compared to the conventional planar structures. In this research endeavour, we have, for the first time, incorporated the novel theory of work function engineering of a binary metal alloy gate with continuous horizontal variation of mole fraction in a fully depleted quadruple gate MOSFET, thereby proposing a new structure namely Work Function Engineered Gate Quadruple Gate MOSFET (WFEG QG MOSFET). A detailed analytical modeling of this novel WFEG QG MOS structure has been formulated to present a quasi 3D threshold voltage model based on 3D scaling equation instead of the tedious solution of 3D Poisson's equation. The device short channel effects have been included by calculating the natural length of the proposed QG device using the effective number of gate (ENG) concept. An overall comparative performance analysis of the WFEG QG MOS and normal QG MOSFET has been done to establish the superiority of the proposed WFEG structure over its QG equivalent in terms of reduced Short Channel Effects (SCEs), Drain Induced Barrier Lowering (DIBL) and Threshold Voltage Roll Off (TVRO). The results of our analytical modeling are found to be in good agreement with the simulation results, thereby establishing the accuracy of our modeling.
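
    The sketch below illustrates the work-function grading idea in its simplest form: the gate work function is interpolated linearly along the channel between the work functions of the two alloy end-point metals, mirroring a linear variation of mole fraction. The metal work functions, channel length, and linear (Vegard-like) interpolation are assumptions for illustration.

```python
import numpy as np

L = 30.0             # nm, channel length (assumed)
W_A, W_B = 4.4, 5.0  # eV, work functions of the two alloy end-point metals (assumed)

def graded_work_function(x_nm):
    """Gate work function at position x along the channel, assuming the mole fraction
    of metal B rises linearly from 0 at the source end to 1 at the drain end."""
    mole_fraction_B = np.clip(x_nm / L, 0.0, 1.0)
    return (1.0 - mole_fraction_B) * W_A + mole_fraction_B * W_B

for x in (0.0, 10.0, 20.0, 30.0):
    print(f"x = {x:4.1f} nm   gate work function = {graded_work_function(x):.2f} eV")
```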

  10. "A violation of the conditional independence assumption in the two-high-threshold Model of recognition memory": Correction to Chen, Starns, and Rotello (2015).

    PubMed

    2016-01-01

    Reports an error in "A violation of the conditional independence assumption in the two-high-threshold model of recognition memory" by Tina Chen, Jeffrey J. Starns and Caren M. Rotello (Journal of Experimental Psychology: Learning, Memory, and Cognition, 2015[Jul], Vol 41[4], 1215-1222). In the article, Chen et al. compared three models: a continuous signal detection model (SDT), a standard two-high-threshold discrete-state model in which detect states always led to correct responses (2HT), and a full-mapping version of the 2HT model in which detect states could lead to either correct or incorrect responses. After publication, Rani Moran (personal communication, April 21, 2015) identified two errors that impact the reported fit statistics for the Bayesian information criterion (BIC) metric of all models as well as the Akaike information criterion (AIC) results for the full-mapping model. The errors are described in the erratum. (The following abstract of the original article appeared in record 2014-56216-001.) The 2-high-threshold (2HT) model of recognition memory assumes that test items result in distinct internal states: they are either detected or not, and the probability of responding at a particular confidence level that an item is "old" or "new" depends on the state-response mapping parameters. The mapping parameters are independent of the probability that an item yields a particular state (e.g., both strong and weak items that are detected as old have the same probability of producing a highest-confidence "old" response). We tested this conditional independence assumption by presenting nouns 1, 2, or 4 times. To maximize the strength of some items, "superstrong" items were repeated 4 times and encoded in conjunction with pleasantness, imageability, anagram, and survival processing tasks. The 2HT model failed to simultaneously capture the response rate data for all item classes, demonstrating that the data violated the conditional independence assumption. In
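
    For readers unfamiliar with the 2HT architecture discussed in this correction, the sketch below gives the standard binary-response version of the model, in which detect states always lead to correct responses; the parameter values are arbitrary, and the full-mapping and confidence-rating variants examined in the article are not reproduced.

```python
def two_high_threshold(d_old, d_new, g):
    """Standard 2HT predictions for binary old/new recognition.
    d_old: P(detect an old item as old); d_new: P(detect a new item as new);
    g: P(guess "old") when no detection occurs (all values assumed)."""
    hit_rate = d_old + (1.0 - d_old) * g      # P("old" | old item)
    false_alarm_rate = (1.0 - d_new) * g      # P("old" | new item)
    return hit_rate, false_alarm_rate

hr, far = two_high_threshold(d_old=0.6, d_new=0.5, g=0.4)
print(f"hit rate = {hr:.2f}, false-alarm rate = {far:.2f}")
```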

  11. A threshold lung volume for optimal mechanical effects on upper airway airflow dynamics: studies in an anesthetized rabbit model.

    PubMed

    Kairaitis, Kristina; Verma, Manisha; Amatoury, Jason; Wheatley, John R; White, David P; Amis, Terence C

    2012-04-01

    Increasing lung volume improves upper airway airflow dynamics via passive mechanisms such as reducing upper airway extraluminal tissue pressures (ETP) and increasing longitudinal tension via tracheal displacement. We hypothesized a threshold lung volume for optimal mechanical effects on upper airway airflow dynamics. Seven supine, anesthetized, spontaneously breathing New Zealand White rabbits were studied. Extrathoracic pressure was altered, and lung volume change, airflow, pharyngeal pressure, ETP laterally (ETPlat) and anteriorly (ETPant), tracheal displacement, and sternohyoid muscle activity (EMG%max) monitored. Airflow dynamics were quantified via peak inspiratory airflow, flow limitation upper airway resistance, and conductance. Every 10-ml lung volume increase resulted in caudal tracheal displacement of 2.1 ± 0.4 mm (mean ± SE), decreased ETPlat by 0.7 ± 0.3 cmH(2)O, increased peak inspiratory airflow of 22.8 ± 2.6% baseline (all P < 0.02), and no significant change in ETPant or EMG%max. Flow limitation was present in most rabbits at baseline, and abolished 15.7 ± 10.5 ml above baseline. Every 10-ml lung volume decrease resulted in cranial tracheal displacement of 2.6 ± 0.4 mm, increased ETPant by 0.9 ± 0.2 cmH(2)O, ETPlat was unchanged, increased EMG%max of 11.1 ± 0.3%, and a reduction in peak inspiratory airflow of 10.8 ± 1.0%baseline (all P < 0.01). Lung volume, resistance, and conductance relationships were described by exponential functions. In conclusion, increasing lung volume displaced the trachea caudally, reduced ETP, abolished flow limitation, but had little effect on resistance or conductance, whereas decreasing lung volume resulted in cranial tracheal displacement, increased ETP and increased resistance, and reduced conductance, and flow limitation persisted despite increased muscle activity. We conclude that there is a threshold for lung volume influences on upper airway airflow dynamics. PMID:22241061

  12. Prevalence of microcephaly in Europe: population based study

    PubMed Central

    Rankin, Judith; Garne, Ester; Loane, Maria; Greenlees, Ruth; Addor, Marie-Claude; Arriola, Larraitz; Barisic, Ingeborg; Bergman, Jorieke E H; Csaky-Szunyogh, Melinda; Dias, Carlos; Draper, Elizabeth S; Gatt, Miriam; Khoshnood, Babak; Klungsoyr, Kari; Kurinczuk, Jennifer J; Lynch, Catherine; McDonnell, Robert; Nelen, Vera; Neville, Amanda J; O’Mahony, Mary T; Pierini, Anna; Randrianaivo, Hanitra; Rissmann, Anke; Tucker, David; Verellen-Dumoulin, Christine; de Walle, Hermien E K; Wellesley, Diana; Wiesel, Awi; Dolk, Helen

    2016-01-01

    Objectives To provide contemporary estimates of the prevalence of microcephaly in Europe, determine if the diagnosis of microcephaly is consistent across Europe, and evaluate whether changes in prevalence would be detected using the current European surveillance performed by EUROCAT (the European Surveillance of Congenital Anomalies). Design Questionnaire and population based observational study. Setting 24 EUROCAT registries covering 570 000 births annually in 15 countries. Participants Cases of microcephaly not associated with a genetic condition among live births, fetal deaths from 20 weeks’ gestation, and terminations of pregnancy for fetal anomaly at any gestation. Main outcome measures Prevalence of microcephaly (1 Jan 2003-31 Dec 2012) analysed with random effects Poisson regression models to account for heterogeneity across registries. Results 16 registries responded to the questionnaire, of which 44% (7/16) used the EUROCAT definition of microcephaly (a reduction in the size of the brain with a skull circumference more than 3 SD below the mean for sex, age, and ethnic origin), 19% (3/16) used a 2 SD cut off, 31% (5/16) were reliant on the criteria used by individual clinicians, and one changed criteria between 2003 and 2012. Prevalence of microcephaly in Europe was 1.53 (95% confidence interval 1.16 to 1.96) per 10 000 births, with registries varying from 0.4 (0.2 to 0.7) to 4.3 (3.6 to 5.0) per 10 000 (χ2=338, df=23, I2=93%). Registries with a 3 SD cut off reported a prevalence of 1.74 per 10 000 (0.86 to 2.93) compared with those with the less stringent 2 SD cut off of 1.21 per 10 000 (0.21 to 2.93). The prevalence of microcephaly would need to increase in one year by over 35% in Europe or by over 300% in a single registry to reach statistical significance (P<0.01). Conclusions EUROCAT could detect increases in the prevalence of microcephaly from the Zika virus of a similar magnitude to those observed in Brazil. Because of the rarity

  13. The prevalence of ADHD in a population-based sample

    PubMed Central

    Rowland, Andrew S.; Skipper, Betty J.; Umbach, David M.; Rabiner, David L.; Campbell, Richard A.; Naftel, A. Jack; Sandler, Dale P.

    2014-01-01

    Objective Few studies of ADHD prevalence have used population-based samples, multiple informants, and DSM-IV criteria. In addition, children who are asymptomatic while receiving ADHD medication often have been misclassified. Therefore, we conducted a population-based study to estimate the prevalence of ADHD in elementary school children using DSM-IV criteria. Methods We screened 7587 children for ADHD. Teachers of 81% of the children completed a DSM-IV checklist. We then interviewed parents using a structured interview (DISC); 72% of them participated. Parent and teacher ratings were combined to determine ADHD status. We also estimated the proportion of cases attributable to other conditions. Results Overall, 15.5% of our sample (95% confidence interval (C.I.) 14.6%-16.4%) met DSM-IV-TR criteria for ADHD. Over 40% of cases reported no previous diagnosis. With additional information, other conditions explained about 9% of cases. Conclusions The prevalence of ADHD in this population-based sample was higher than the 3-7% commonly reported. To compare study results, the methods used to implement the DSM criteria need to be standardized. PMID:24336124

  14. 40 CFR 98.441 - Reporting threshold.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) MANDATORY GREENHOUSE GAS REPORTING Geologic Sequestration of Carbon Dioxide § 98.441 Reporting threshold. (a... amount of CO2 for long-term containment in subsurface geologic formations. There is no threshold. (b... that current monitoring and model(s) show that the injected CO2 stream is not expected to migrate...

  15. New states above charm threshold

    SciTech Connect

    Eichten, Estia J.; Lane, Kenneth; Quigg, Chris; /Fermilab

    2005-11-01

    We revise and extend expectations for the properties of charmonium states that lie above charm threshold, in light of new experimental information. We refine the Cornell coupled-channel model for the coupling of c{bar c} levels to two-meson states, defining resonance masses and widths by pole positions in the complex energy plane, and suggest new targets for experiment.

  16. A case–control study relating railroad worker mortality to diesel exhaust exposure using a threshold regression model

    PubMed Central

    Lee, Mei-Ling Ting; Whitmore, G.A.; Laden, Francine; Hart, Jaime E.; Garshick, Eric

    2008-01-01

    A case–control study of lung cancer mortality in U.S. railroad workers in jobs with and without diesel exhaust exposure is reanalyzed using a new threshold regression methodology. The study included 1256 workers who died of lung cancer and 2385 controls who died primarily of circulatory system diseases. Diesel exhaust exposure was assessed using railroad job history from the US Railroad Retirement Board and an industrial hygiene survey. Smoking habits were available from next-of-kin and potential asbestos exposure was assessed by job history review. The new analysis reassesses lung cancer mortality and examines circulatory system disease mortality. Jobs with regular exposure to diesel exhaust had a survival pattern characterized by an initial delay in mortality, followed by a rapid deterioration of health prior to death. The pattern is seen in subjects dying of lung cancer, circulatory system diseases, and other causes. The unique pattern is illustrated using a new type of Kaplan–Meier survival plot in which the time scale represents a measure of disease progression rather than calendar time. The disease progression scale accounts for a healthy-worker effect when describing the effects of cumulative exposures on mortality. PMID:19221608
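    In threshold regression of this kind, the time to death is modeled as the first time a latent health process, started at a level y0 and drifting at a rate mu, crosses a failure threshold; for a Wiener process that hitting time follows an inverse Gaussian law. The sketch below evaluates this density for illustrative parameter values only; they are not the estimates from this study.

```python
# Threshold-regression building block: a latent Wiener health process y0 + mu*t + W(t)
# (sigma fixed at 1) triggers the event when it first reaches zero, and that hitting time
# has an inverse Gaussian density. Parameter values are illustrative, not study estimates.
import numpy as np

def fht_density(t, y0, mu):
    """Density of the first time a Wiener process starting at y0 > 0 with drift mu hits 0."""
    t = np.asarray(t, dtype=float)
    return y0 / np.sqrt(2.0 * np.pi * t**3) * np.exp(-(y0 + mu * t) ** 2 / (2.0 * t))

t = np.linspace(0.1, 60.0, 300)                    # follow-up time (arbitrary units)
unexposed = fht_density(t, y0=10.0, mu=-0.15)
exposed   = fht_density(t, y0=10.0, mu=-0.25)      # faster drift toward the threshold
print("modal failure time, exposed vs unexposed:",
      round(t[np.argmax(exposed)], 1), "vs", round(t[np.argmax(unexposed)], 1))
```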

  17. Pausing at the Threshold

    ERIC Educational Resources Information Center

    Morgan, Patrick K.

    2015-01-01

    Since about 2003, the notion of threshold concepts--the central ideas in any field that change how learners think about other ideas--has become difficult to escape at library conferences and in general information literacy discourse. Their visibility will likely only increase because threshold concepts figure prominently in the Framework for…

  18. Threshold Concepts in Economics

    ERIC Educational Resources Information Center

    Shanahan, Martin

    2016-01-01

    Purpose: The purpose of this paper is to examine threshold concepts in the context of teaching and learning first-year university economics. It outlines some of the arguments for using threshold concepts and provides examples using opportunity cost as an exemplar in economics. Design/Methodology/Approach: The paper provides an overview of the…

  19. The positive effects of population-based preferential sampling in environmental epidemiology.

    PubMed

    Antonelli, Joseph; Cefalu, Matthew; Bornn, Luke

    2016-10-01

    In environmental epidemiology, exposures are not always available at subject locations and must be predicted using monitoring data. The monitor locations are often outside the control of researchers, and previous studies have shown that "preferential sampling" of monitoring locations can adversely affect exposure prediction and subsequent health effect estimation. We adopt a slightly different definition of preferential sampling than is typically seen in the literature, which we call population-based preferential sampling. Population-based preferential sampling occurs when the location of the monitors is dependent on the subject locations. We show the impact that population-based preferential sampling has on exposure prediction and health effect estimation using analytic results and a simulation study. A simple, one-parameter model is proposed to measure the degree to which monitors are preferentially sampled with respect to population density. We then discuss these concepts in the context of PM2.5 and the EPA Air Quality System monitoring sites, which are generally placed in areas of higher population density to capture the population's exposure.
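    The abstract does not reproduce the paper's exact one-parameter model, but its spirit can be sketched by making the probability that a site receives a monitor proportional to population density raised to a power gamma: gamma = 0 gives uniform placement, while larger gamma gives stronger population-based preferential sampling. All quantities below are simulated.

```python
# Illustrative sketch (not necessarily the paper's exact formulation): a monitor is placed
# at a site with probability proportional to population density raised to a power gamma,
# so gamma = 0 is uniform placement and larger gamma means stronger population-based
# preferential sampling. All quantities are simulated.
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_monitors, gamma = 500, 40, 1.5
density = rng.lognormal(mean=0.0, sigma=1.0, size=n_sites)   # hypothetical population density

weights = density ** gamma
weights /= weights.sum()
monitor_sites = rng.choice(n_sites, size=n_monitors, replace=False, p=weights)

print("mean density at monitor sites:", density[monitor_sites].mean().round(2))
print("mean density over all sites:  ", density.mean().round(2))
```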

  20. An obesity/cardiometabolic risk reduction disease management program: a population-based approach.

    PubMed

    Villagra, Victor G

    2009-04-01

    Obesity is a critical health concern that has captured the attention of public and private healthcare payers who are interested in controlling costs and mitigating the long-term economic consequences of the obesity epidemic. Population-based approaches to obesity management have been proposed that take advantage of a chronic care model (CCM), including patient self-care, the use of community-based resources, and the realization of care continuity through ongoing communications with patients, information technology, and public policy changes. Payer-sponsored disease management programs represent an important conduit to delivering population-based care founded on similar CCM concepts. Disease management is founded on population-based disease identification, evidence-based care protocols, and collaborative practices between clinicians. While substantial clinician training, technology infrastructure commitments, and financial support at the payer level will be needed for the success of disease management programs in obesity and cardiometabolic risk reduction, these barriers can be overcome with the proper commitment. Disease management programs represent an important tool to combat the growing societal risks of overweight and obesity.

  1. Macrolide-induced digoxin toxicity: a population-based study.

    PubMed

    Gomes, T; Mamdani, M M; Juurlink, D N

    2009-10-01

    In this 15-year, population-based, nested case-control study, we investigated the association between hospitalization for digoxin toxicity and recent exposure to individual macrolide antibiotics. Clarithromycin was associated with the highest risk of digoxin toxicity (adjusted odds ratio (OR) 14.8; 95% confidence interval (CI) 7.9-27.9), whereas erythromycin and azithromycin were associated with much lower risk (adjusted OR 3.7; 95% CI 1.7-7.9; and adjusted OR 3.7; 95% CI 1.1-12.5, respectively). We found no increased risk with a neutral comparator, cefuroxime (adjusted OR 0.8; 95% CI 0.2-3.4).
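    The study reports adjusted odds ratios from its matched design; as a simplified illustration of the measure itself, the sketch below computes a crude odds ratio and Wald 95% confidence interval from an invented 2x2 exposure-by-outcome table.

```python
# Crude odds ratio and Wald 95% CI from a 2x2 exposure-by-outcome table (counts invented);
# the study's estimates are adjusted ORs from a matched nested case-control analysis, so
# this only shows the arithmetic behind the measure being reported.
import numpy as np

a, b = 60, 40      # cases exposed / cases unexposed (invented)
c, d = 30, 300     # controls exposed / controls unexposed (invented)

or_hat = (a * d) / (b * c)
se_log = np.sqrt(1/a + 1/b + 1/c + 1/d)
lo, hi = np.exp(np.log(or_hat) + np.array([-1.96, 1.96]) * se_log)
print(f"OR = {or_hat:.1f} (95% CI {lo:.1f} to {hi:.1f})")
```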

  2. Invasion Threshold in Heterogeneous Metapopulation Networks

    NASA Astrophysics Data System (ADS)

    Colizza, Vittoria; Vespignani, Alessandro

    2007-10-01

    We study the dynamics of epidemic and reaction-diffusion processes in metapopulation models with heterogeneous connectivity patterns. In susceptible-infected-removed-like processes, along with the standard local epidemic threshold, the system exhibits a global invasion threshold. We provide an explicit expression for the threshold that sets a critical value of the diffusion/mobility rate below which the epidemic is not able to spread to a macroscopic fraction of subpopulations. The invasion threshold is found to be affected by the topological fluctuations of the metapopulation network. The results presented provide a general framework for understanding the effect of travel restrictions on epidemic containment.

  3. Optical information processing based on an associative-memory model of neural nets with thresholding and feedback.

    PubMed

    Psaltis, D; Farhat, N

    1985-02-01

    The remarkable collective computational properties of the Hopfield model for neural networks [Proc. Nat. Acad. Sci. USA 79, 2554 (1982)] are reviewed. These include recognition from partial input, robustness, and error-correction capability. Features of the model that make its optical implementation attractive are discussed, and specific optical implementation schemes are given.
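    A minimal sketch of the associative-memory behaviour reviewed above: binary patterns are stored by a Hebbian outer-product rule and recalled by repeated thresholding with feedback, so a corrupted input relaxes toward the nearest stored memory. The pattern dimensions, corruption level, and synchronous update scheme are simplifying choices for illustration.

```python
# Minimal Hopfield associative memory: Hebbian (outer-product) storage of +/-1 patterns
# and thresholded feedback recall from a corrupted input. Synchronous updates are used
# here for brevity; the original model updates units asynchronously.
import numpy as np

def store(patterns):
    p = np.asarray(patterns, dtype=float)
    w = p.T @ p / p.shape[1]          # Hebbian weights, normalized by the number of units
    np.fill_diagonal(w, 0.0)          # no self-connections
    return w

def recall(w, state, n_iter=20):
    s = np.asarray(state, dtype=float).copy()
    for _ in range(n_iter):
        s = np.where(w @ s >= 0, 1.0, -1.0)   # hard threshold with feedback
    return s

rng = np.random.default_rng(1)
patterns = rng.choice([-1.0, 1.0], size=(3, 64))      # three stored 64-unit memories
w = store(patterns)

probe = patterns[0].copy()
probe[:20] *= -1                                       # corrupt part of the input
print("recovered stored pattern:", np.array_equal(recall(w, probe), patterns[0]))
```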

  4. Threshold enhancement of diphoton resonances

    NASA Astrophysics Data System (ADS)

    Bharucha, Aoife; Djouadi, Abdelhak; Goudelis, Andreas

    2016-10-01

    We revisit a mechanism to enhance the decay width of (pseudo-)scalar resonances to photon pairs when the process is mediated by loops of charged fermions produced near threshold. Motivated by the recent LHC data, indicating the presence of an excess in the diphoton spectrum at approximately 750 GeV, we illustrate this threshold enhancement mechanism in the case of a 750 GeV pseudoscalar boson A with a two-photon decay mediated by a charged and uncolored fermion having a mass at the 1/2MA threshold and a small decay width, < 1 MeV. The implications of such a threshold enhancement are discussed in two explicit scenarios: i) the Minimal Supersymmetric Standard Model in which the A state is produced via the top quark mediated gluon fusion process and decays into photons predominantly through loops of charginos with masses close to 1/2MA and ii) a two Higgs doublet model in which A is again produced by gluon fusion but decays into photons through loops of vector-like charged heavy leptons. In both these scenarios, while the mass of the charged fermion has to be adjusted to be extremely close to half of the A resonance mass, the small total widths are naturally obtained if only suppressed three-body decay channels occur. Finally, the implications of some of these scenarios for dark matter are discussed.

  5. Population based mortality surveillance in carbon products manufacturing plants.

    PubMed Central

    Teta, M J; Ott, M G; Schnatter, A R

    1987-01-01

    The utility of a population based, corporate wide mortality surveillance system was evaluated after a 10 year observation period of one of the company's divisions. The subject population, 2219 white male, long term employees from Union Carbide Corporation's carbon based electrode and specialty products operations, was followed up for mortality from 1974 to 1983. External comparisons with the United States male population were supplemented with internal comparisons among subgroups of the study population, defined by broad job categories and time related variables, adjusting for important correlates of the healthy worker effect. Significant deficits of deaths were observed for all causes and the major non-cancer causes of death. The numbers of deaths due to malignant neoplasms and respiratory cancer were less than, but not statistically different from, expected. There was a non-significant excess of deaths from lymphopoietic cancer, occurring predominantly among salaried employees. When specific locations were examined, operations with potential exposure to coal tar products exhibited a mortality pattern similar to that of the total cohort. The risk for lung cancer was significantly raised (five observed, 1.4 expected) in one small, but older, location which did not involve coal tar products during the period of employment of these individuals, but which historically used asbestos materials for several unique applications. Although these findings are limited by small numbers and a short observation period, the population based surveillance strategy has provided valuable information regarding the mortality experience of the population, directions for future research, and the allocation of epidemiological resources. PMID:3593661

  6. Population-based incidence and prevalence of facioscapulohumeral dystrophy

    PubMed Central

    Arnts, Hisse; van der Maarel, Silvère M.; Padberg, George W.; Verschuuren, Jan J.G.M.; Bakker, Egbert; Weinreich, Stephanie S.; Verbeek, André L.M.; van Engelen, Baziel G.M.

    2014-01-01

    Objective: To determine the incidence and prevalence of facioscapulohumeral muscular dystrophy (FSHD) in the Netherlands. Methods: Using 3-source capture-recapture methodology, we estimated the total yearly number of newly found symptomatic individuals with FSHD, including those not registered in any of the 3 sources. To this end, symptomatic individuals with FSHD were available from 3 large population-based registries in the Netherlands if diagnosed within a 10-year period (January 1, 2001 to December 31, 2010). Multiplication of the incidence and disease duration delivered the prevalence estimate. Results: On average, 52 people are newly diagnosed with FSHD every year. This results in an incidence rate of 0.3/100,000 person-years in the Netherlands. The prevalence rate was 12/100,000, equivalent to 2,000 affected individuals. Conclusions: We present population-based incidence and prevalence estimates regarding symptomatic individuals with FSHD, including an estimation of the number of symptomatic individuals not present in any of the 3 used registries. This study shows that the total number of symptomatic persons with FSHD in the population may well be underestimated and a considerable number of affected individuals remain undiagnosed. This suggests that FSHD is one of the most prevalent neuromuscular disorders. PMID:25122204
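    The prevalence figure follows from the incidence estimate multiplied by the mean disease duration, as stated above; the short calculation below reproduces that arithmetic, with the Dutch population size entered as a rough assumption.

```python
# Prevalence as incidence times mean disease duration, reproducing the arithmetic above.
incidence_per_100k_yr = 0.3
duration_yr = 40                     # mean symptomatic duration implied by 12 / 0.3
prevalence_per_100k = incidence_per_100k_yr * duration_yr          # = 12 per 100,000

population = 16.7e6                  # approximate Dutch population at the time (assumption)
print(prevalence_per_100k, "per 100,000 ->",
      round(prevalence_per_100k / 1e5 * population), "affected individuals")
```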

  7. The perils of thresholding

    NASA Astrophysics Data System (ADS)

    Font-Clos, Francesc; Pruessner, Gunnar; Moloney, Nicholas R.; Deluca, Anna

    2015-04-01

    The thresholding of time series of activity or intensity is frequently used to define and differentiate events. This is either implicit, for example due to resolution limits, or explicit, in order to filter certain small scale physics from the supposed true asymptotic events. Thresholding the birth-death process, however, introduces a scaling region into the event size distribution, which is characterized by an exponent that is unrelated to the actual asymptote and is rather an artefact of thresholding. As a result, numerical fits of simulation data produce a range of exponents, with the true asymptote visible only in the tail of the distribution. This tail is increasingly difficult to sample as the threshold is increased. In the present case, the exponents and the spurious nature of the scaling region can be determined analytically, thus demonstrating the way in which thresholding conceals the true asymptote. The analysis also suggests a procedure for detecting the influence of the threshold by means of a data collapse involving the threshold-imposed scale.
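    The basic operation the paper scrutinizes can be sketched directly: a time series is compared against a threshold, contiguous excursions above it are treated as events, and the integrated excess defines the event size whose distribution is then fitted. The signal below is synthetic noise, used only to show how the event-size sample depends on the chosen threshold.

```python
# Defining events by thresholding a (synthetic) activity series: contiguous excursions
# above the threshold are events, and the integrated excess is the event size. Raising
# the threshold reshapes the resulting size distribution, which is the paper's point.
import numpy as np

rng = np.random.default_rng(2)
signal = np.abs(np.cumsum(rng.normal(size=5000)))     # toy activity time series
threshold = 10.0

above = np.r_[False, signal > threshold, False]       # pad so every event has two edges
starts = np.flatnonzero(~above[:-1] & above[1:])      # first index of each excursion
ends   = np.flatnonzero(above[:-1] & ~above[1:])      # one past the last index
sizes  = np.array([np.sum(signal[s:e] - threshold) for s, e in zip(starts, ends)])

print(len(sizes), "events; largest size:", sizes.max() if len(sizes) else None)
```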

  8. Population based evaluation of a multi-parametric steroid profiling on administered endogenous steroids in single low dose.

    PubMed

    Van Renterghem, Pieter; Van Eenoo, Peter; Delbeke, Frans T

    2010-12-12

    Steroid profiling provides valuable information to detect doping with endogenous steroids. Apart from the traditionally monitored steroids, minor metabolites can play an important role in increasing the specificity and efficiency of current detection methods. The applicability of several minor steroid metabolites was tested on administration studies with low doses of oral testosterone (T), T gel, dihydrotestosterone (DHT) gel and oral dehydroepiandrosterone (DHEA). The collected data for all monitored parameters were evaluated against the respective population based reference ranges. Besides the traditional markers T/E, T and DHT, the minor metabolites 4-OH-Adion and 6α-OH-Adion were found to be the most sensitive metabolites for detecting oral T administration. The most sensitive metabolites for the detection of DHEA were identified as 16α-OH-DHEA and 7β-OH-DHEA, but the longest detection window, up to three days (after oral administration of 50 mg), was obtained with non-specific 5β-steroids and their ratios. Steroids applied as a gel had longer effects on the metabolism but were generally not detectable with universal decision criteria. It can be concluded that population based reference ranges show limited overall performance in detecting misuse of small doses of natural androgens. Although some minor metabolites provide additional information for the oral testosterone and DHEA formulations, the topically administered steroids could not be detected for all volunteers using universal reference limits. Application of other population based threshold limits did not lead to longer detection times. PMID:20688095

  9. Thresholds in chemical respiratory sensitisation.

    PubMed

    Cochrane, Stella A; Arts, Josje H E; Ehnes, Colin; Hindle, Stuart; Hollnagel, Heli M; Poole, Alan; Suto, Hidenori; Kimber, Ian

    2015-07-01

    There is a continuing interest in determining whether it is possible to identify thresholds for chemical allergy. Here allergic sensitisation of the respiratory tract by chemicals is considered in this context. This is an important occupational health problem, being associated with rhinitis and asthma, and in addition provides toxicologists and risk assessors with a number of challenges. In common with all forms of allergic disease chemical respiratory allergy develops in two phases. In the first (induction) phase exposure to a chemical allergen (by an appropriate route of exposure) causes immunological priming and sensitisation of the respiratory tract. The second (elicitation) phase is triggered if a sensitised subject is exposed subsequently to the same chemical allergen via inhalation. A secondary immune response will be provoked in the respiratory tract resulting in inflammation and the signs and symptoms of a respiratory hypersensitivity reaction. In this article attention has focused on the identification of threshold values during the acquisition of sensitisation. Current mechanistic understanding of allergy is such that it can be assumed that the development of sensitisation (and also the elicitation of an allergic reaction) is a threshold phenomenon; there will be levels of exposure below which sensitisation will not be acquired. That is, all immune responses, including allergic sensitisation, have threshold requirement for the availability of antigen/allergen, below which a response will fail to develop. The issue addressed here is whether there are methods available or clinical/epidemiological data that permit the identification of such thresholds. This document reviews briefly relevant human studies of occupational asthma, and experimental models that have been developed (or are being developed) for the identification and characterisation of chemical respiratory allergens. The main conclusion drawn is that although there is evidence that the

  10. Prediction model for cadmium transfer from soil to carrot (Daucus carota L.) and its application to derive soil thresholds for food safety.

    PubMed

    Ding, Changfeng; Zhang, Taolin; Wang, Xingxiang; Zhou, Fen; Yang, Yiru; Huang, Guifeng

    2013-10-30

    At present, soil quality standards used for agriculture do not fully consider the influence of soil properties on cadmium (Cd) uptake by crops. This study aimed to develop prediction models for Cd transfer from a wide range of Chinese soils to carrot (Daucus carota L.) using soil properties and the total or available soil Cd content. Path analysis showed soil pH and organic carbon (OC) content were the two most significant properties exhibiting direct effects on Cd uptake factor (ratio of Cd concentration in carrot to that in soil). Stepwise multiple linear regression analysis also showed that total soil Cd, pH, and OC were significant variables contributing to carrot Cd concentration, explaining 90% of the variance across the 21 soils. Soil thresholds for carrot (cultivar New Kuroda) cropping based on added or total Cd were then derived from the food safety standard and were presented as continuous or scenario criteria.
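    A sketch of the kind of soil-to-plant transfer model described: carrot Cd is regressed on log-transformed total soil Cd, pH, and organic carbon, and the fitted model can be inverted at the food-safety limit to derive a soil threshold. The data and coefficients below are simulated, not the study's values (statsmodels is assumed to be available).

```python
# Sketch of a transfer model of this kind: regress log carrot Cd on log total soil Cd,
# pH, and organic carbon. All data and coefficients are simulated, not the study's values.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 21
soil_cd = rng.uniform(0.1, 2.0, n)            # total soil Cd, mg/kg (simulated)
ph      = rng.uniform(4.5, 8.0, n)
oc      = rng.uniform(0.5, 3.0, n)            # organic carbon, %
log_carrot_cd = 0.9*np.log(soil_cd) - 0.35*ph + 0.2*oc + rng.normal(0, 0.2, n)

X = sm.add_constant(np.column_stack([np.log(soil_cd), ph, oc]))
fit = sm.OLS(log_carrot_cd, X).fit()
print(fit.params)   # inverting the fitted model at the food-safety limit gives a soil threshold
```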

  11. When Do Adults Entering Higher Education Begin to Identify Themselves as Students? The Threshold-of-Induction Model

    ERIC Educational Resources Information Center

    Blair, Erik; Cline, Tony; Wallis, Jill

    2010-01-01

    A previous study suggested that there are six stages that adults move through before they feel ready to participate in higher education and proposed a chain-of-response (COR) model to describe the process. In this study we examine the reported experiences of nine adult entrants during the second year of a work-related degree…

  12. Bayesian inference of the groundwater depth threshold in a vegetation dynamic model: a case study, lower reach, Tarim River

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The responses of eco-hydrological systems to anthropogenic and natural disturbances have attracted much attention in recent years. The coupling and simulating feedback between hydrological and ecological components have been realized in several recently developed eco-hydrological models. However, li...

  13. National nephrectomy registries: Reviewing the need for population-based data.

    PubMed

    Pearson, John; Williamson, Timothy; Ischia, Joseph; Bolton, Damien M; Frydenberg, Mark; Lawrentschuk, Nathan

    2015-09-01

    Nephrectomy is the cornerstone therapy for renal cell carcinoma (RCC) and continued refinement of the procedure through research may enhance patient outcomes. A national nephrectomy registry may provide the key information needed to assess the procedure at a national level. The aim of this study was to review nephrectomy data available at a population-based level in Australia and to benchmark these data against data from the rest of the world as an examination of the national nephrectomy registry model. A PubMed search identified records pertaining to RCC nephrectomy in Australia. A similar search identified records relating to established nephrectomy registries internationally and other surgical registries of clinical importance. These records were reviewed to address the stated aims of this article. Population-based data within Australia for nephrectomy were lacking. Key issues identified were the difficulty in benchmarking outcomes and no ongoing monitoring of trends. The care centralization debate, which questions whether small-volume centers provide comparable outcomes to high-volume centers, is ongoing. Patterns of adherence and the effectiveness of existing protocols are uncertain. A review of established international registries demonstrated that the registry model can effectively address issues comparable to those identified in the Australian literature. A national nephrectomy registry could address deficiencies identified in a given nation's nephrectomy field. The model is supported by evidence from international examples and will provide the population-based data needed for studies. Scope exists for possible integration with other registries to develop a more encompassing urological or surgical registry. Need remains for further exploration of the feasibility and practicalities of initiating such a registry including a minimum data set, outcome indicators, and auditing of data.

  14. A Threshold Continuum for Aeolian Sand Transport

    NASA Astrophysics Data System (ADS)

    Swann, C.; Ewing, R. C.; Sherman, D. J.

    2015-12-01

    The threshold of motion for aeolian sand transport marks the initial entrainment of sand particles by the force of the wind. This is typically defined and modeled as a singular wind speed for a given grain size and is based on field and laboratory experimental data. However, the definition of threshold varies significantly between these empirical models, largely because the definition is based on visual observations of initial grain movement. For example, in his seminal experiments, Bagnold defined the threshold of motion as the point at which he observed that 100% of the bed was in motion. Others have used 50% and lesser values. Differences in threshold models, in turn, result in large errors in predicting the fluxes associated with sand and dust transport. Here we use a wind tunnel and novel sediment trap to capture the fractions of sand in creep, reptation and saltation at Earth and Mars pressures and show that the threshold of motion for aeolian sand transport is best defined as a continuum in which grains progress through stages defined by the proportion of grains in creep and saltation. We propose the use of scale-dependent thresholds modeled by distinct probability distribution functions that differentiate the threshold based on micro- to macro-scale applications. For example, a geologic-timescale application corresponds to a threshold at which 100% of the bed is in motion, whereas a sub-second application corresponds to a threshold at which a single particle is set in motion. We provide quantitative measurements (number and mode of particle movement) corresponding to visual observations, percent of bed in motion and degrees of transport intermittency for Earth and Mars. Understanding transport as a continuum provides a basis for re-evaluating sand transport thresholds on Earth, Mars and Titan.

  15. Predictors of Childhood Anxiety: A Population-Based Cohort Study

    PubMed Central

    2015-01-01

    Background Few studies have explored predictors of early childhood anxiety. Objective To determine the prenatal, postnatal, and early life predictors of childhood anxiety by age 5. Methods Population-based, provincial administrative data (N = 19,316) from Manitoba, Canada were used to determine the association between demographic, obstetrical, psychosocial, medical, behavioral, and infant factors on childhood anxiety. Results Risk factors for childhood anxiety by age 5 included maternal psychological distress from birth to 12 months and 13 months to 5 years post-delivery and an infant 5-minute Apgar score of ≤7. Factors associated with decreased risk included maternal age < 20 years, multiparity, and preterm birth. Conclusion Identifying predictors of childhood anxiety is a key step to early detection and prevention. Maternal psychological distress is an early, modifiable risk factor. Future research should aim to disentangle early life influences on childhood anxiety occurring in the prenatal, postnatal, and early childhood periods. PMID:26158268

  16. Assessment of military population-based psychological resilience programs.

    PubMed

    Morgan, Brenda J; Bibb, Sandra C Garmon

    2011-09-01

    Active duty service members' (ADSMs) seemingly poor adaptability to traumatic stressors is a risk to force health. Enhancing the psychological resilience of ADSMs has become a key focus of Department of Defense (DoD) leaders and the numbers of military programs for enhancing psychological resilience have increased. The purpose of this article is to describe the results of an assessment conducted to determine comprehensiveness of current psychological resilience building programs that target ADSMs. A modified six-step, population-based needs assessment was used to evaluate resilience programs designed to meet the psychological needs of the ADSM population. The assessment results revealed a gap in published literature regarding program outcomes. DoD leaders may benefit from targeted predictive research that assesses program effectiveness outcomes. The necessity of including preventive, evidence-based interventions in new programs, such as positive emotion interventions shown to enhance psychological resilience in civilian samples, is also recommended.

  17. Increasing incidence of Barrett's oesophagus: a population-based study.

    PubMed

    Coleman, Helen G; Bhat, Shivaram; Murray, Liam J; McManus, Damian; Gavin, Anna T; Johnston, Brian T

    2011-09-01

    Oesophageal adenocarcinoma, a highly fatal cancer, has risen in incidence in Western societies, but it is unclear whether this is due to increasing incidence of its precursor condition, Barrett's oesophagus (BO), or whether the proportion of BO patients undergoing malignant progression has increased in the face of unchanged BO incidence. Data from population-based studies of BO incidence are limited, and the equivocal results to date are difficult to distinguish from changes in endoscopic practices. The aim of this study was to assess population trends in Barrett's oesophagus (BO) diagnoses in relation to endoscopy and biopsy rates over a 13 year period. The Northern Ireland Barrett's oesophagus Register (NIBR) is a population-based register of all 9,329 adults diagnosed with columnar epithelium of the oesophagus in Northern Ireland between 1993 and 2005, of whom 58.3% were male. European age-standardised annual BO incidence rates were calculated per 100,000 of the population, per 100 endoscopies and per 100 endoscopies including an oesophageal biopsy. Average annual BO incidence rates rose by 159% during the study period, increasing from 23.9/100,000 during 1993-1997 to 62.0/100,000 during 2002-2005. This elevation far exceeded corresponding increases in rates of endoscopies and oesophageal biopsies being conducted. BO incidence increased most markedly in individuals aged < 60 years, and most notably amongst males aged < 40 years. This study points towards a true increase in the incidence of BO which would appear to be most marked in young males. These findings have significant implications for future rates of oesophageal adenocarcinoma and surveillance programmes.

  18. Photodissociation spectroscopy of stored CH+ ions: Detection, assignment, and close-coupled modeling of near-threshold Feshbach resonances

    NASA Astrophysics Data System (ADS)

    Hechtfischer, Ulrich; Williams, Carl J.; Lange, Michael; Linkemann, Joachim; Schwalm, Dirk; Wester, Roland; Wolf, Andreas; Zajfman, Daniel

    2002-11-01

    We have measured and theoretically analyzed a photodissociation spectrum of the CH+ molecular ion in which most observed energy levels lie within the fine-structure splitting of the C+ fragment and predissociate, and where the observed irregular line shapes and dipole-forbidden transitions indicate that nonadiabatic interactions lead to multichannel dynamics. The molecules were prepared in low rotational levels J″ = 0-9 of the vibrational ground state X ¹Σ⁺ (v″ = 0) by storing a CH+ beam at 7.1 MeV in the heavy-ion storage ring TSR for up to 30 s, which was sufficient for the ions to rovibrationally thermalize to room temperature by spontaneous infrared emission. The internally cold molecules were irradiated with a dye laser at photon energies between 31 600 and 33 400 cm⁻¹, and the resulting C+ fragments were counted with a particle detector. The photodissociation cross section displays the numerous Feshbach resonances between the two C+ fine-structure states predicted by theory for low rotation. The data are analyzed in two steps. First, from the overall structure of the spectrum, by identifying branches, and by a Le Roy-Bernstein analysis of level spacings we determine the dissociation energy D0 = (32 946.7 ± 1.1) cm⁻¹ (with respect to the lower fine-structure limit) and assign the strongest features to the vibrational levels v′ = 11-14 of the dipole-allowed A ¹Π state. The majority of the 66 observed resonances cannot be assigned in this way. Therefore, in a second step, the complete spectrum is simulated with a close-coupling model, starting from recent ab initio Born-Oppenheimer potentials. For the long-range induction, dispersion and exchange energies, we propose an analytical expression and derive the C6 coefficients. After a systematic variation of just the vibrational defects of the four Born-Oppenheimer potentials involved, the close-coupling model yields a quantitative fit to the measured cross section in all detail, and is used to assign most of

  19. A threshold hazard model for estimating serious infection risk following anti-tumor necrosis factor therapy in rheumatoid arthritis patients.

    PubMed

    Fu, Bo; Lunt, Mark; Galloway, James; Dixon, Will; Hyrich, Kimme; Symmons, Deborah

    2013-03-11

    Over recent years novel biologic agents have been developed for the treatment of rheumatoid arthritis. The most common type of biologic agent in use in the United Kingdom is the anti-tumor necrosis factor inhibitor class. To fully appreciate the potential risks of anti-tumor necrosis factor therapy in patients, knowledge about the baseline hazard (risk pattern) and the characteristics of patients associated with serious infection is important. We propose a nonproportional hazard model for estimating the infection risk, by including the drug exposure history information into the baseline hazard. We found that the infection risk reaches a peak within 1 month after drug exposure starts and then declines steadily for nearly 2 years before stabilizing out.

  20. Effective theories and thresholds in particle physics

    SciTech Connect

    Gaillard, M.K.

    1991-06-07

    The role of effective theories in probing a more fundamental underlying theory and in indicating new physics thresholds is discussed, with examples from the standard model and more speculative applications to superstring theory. 38 refs.

  1. Roots at the Percolation Threshold

    NASA Astrophysics Data System (ADS)

    Kroener, E.; Ahmed, M. A.; Kaestner, A.; Vontobel, P.; Zarebanadkouki, M.; Carminati, A.

    2014-12-01

    Much of the carbon assimilated by plants during photosynthesis is lost to the soil via rhizodeposition. One component of rhizodeposition is mucilage, a hydrogel that dramatically alters the soil physical properties. Mucilage was assumed to explain unexpectedly low rhizosphere rewetting rates during irrigation (Carminati et al. 2010) and temporary water repellency in the rhizosphere after severe drying (Moradi et al. 2012). Here, we present an experimental and theoretical study of the rewetting behaviour of a soil mixed with mucilage, which was used as an analogue of the rhizosphere. Our samples were made of two layers of untreated soil separated by a thin layer (ca. 1 mm) of soil treated with mucilage. We prepared soil columns of varying particle size, mucilage concentration and height of the middle layer above the water table. The dry soil columns were re-wetted by capillary rise from the bottom. The rewetting of the middle layer showed a distinct dual behaviour. For mucilage concentrations lower than a certain threshold, water could cross the thin layer almost immediately after rewetting of the bulk soil. At slightly higher mucilage concentrations, the thin layer was almost impermeable. The mucilage concentration at the threshold strongly depended on particle size: the smaller the particle size, the larger the soil specific surface and the more mucilage was needed to cover the entire particle surface and induce water repellency. We applied a classic pore network model to simulate the experimental observations. In the model a certain fraction of nodes were randomly disconnected to reproduce the effect of mucilage in temporarily blocking the flow. The percolation model qualitatively reproduced the threshold characteristics of the experiments well. Our experiments, together with former observations of water dynamics in the rhizosphere, suggest that the rhizosphere is near the percolation threshold, where small variations in mucilage concentration sensitively
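    The blocking behaviour described above can be caricatured with plain site percolation: nodes conduct unless they are randomly blocked, and connectivity across the layer disappears once the blocked fraction passes the percolation threshold (roughly 0.41 blocked on a square lattice with nearest-neighbour connectivity). The grid sketch below is an illustration under those assumptions, not the pore network model used in the study.

```python
# Toy site-percolation picture of the blocking effect: each node conducts unless blocked,
# and we test whether an open cluster still connects the top and bottom of the layer.
# On a square lattice with nearest-neighbour connectivity the transition sits near a
# blocked fraction of about 0.41. This is an illustration, not the study's pore network.
import numpy as np
from scipy import ndimage

def spans(block_fraction, size=100, seed=0):
    rng = np.random.default_rng(seed)
    open_sites = rng.random((size, size)) > block_fraction
    labels, _ = ndimage.label(open_sites)                 # 4-connected open clusters
    top, bottom = set(labels[0]) - {0}, set(labels[-1]) - {0}
    return bool(top & bottom)                             # a cluster touches both edges

for f in (0.2, 0.35, 0.45, 0.6):
    print(f"blocked fraction {f:.2f}: layer conducts = {spans(f)}")
```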

  2. SU-D-9A-02: Relative Effects of Threshold Choice and Spatial Resolution Modeling On SUV and Volume Quantification in F18-FDG PET Imaging of Anal Cancer Patients

    SciTech Connect

    Zhao, F; Bowsher, J; Palta, M; Czito, B; Willett, C; Yin, F

    2014-06-01

    Purpose: PET imaging with F18-FDG is utilized for treatment planning, treatment assessment, and prognosis. A region of interest (ROI) encompassing the tumor may be determined on the PET image, often by a threshold T on the PET standard uptake values (SUVs). Several studies have shown prognostic value for relevant ROI properties including maximum SUV value (SUVmax), metabolic tumor volume (MTV), and total glycolytic activity (TGA). The choice of threshold T may affect mean SUV value (SUVmean), MTV, and TGA. Recently spatial resolution modeling (SRM) has been introduced on many PET systems. SRM may also affect these ROI properties. The purpose of this work is to investigate the relative influence of SRM and threshold choice T on SUVmean, MTV, TGA, and SUVmax. Methods: For 9 anal cancer patients, 18F-FDG PET scans were performed prior to treatment. PET images were reconstructed by 2 iterations of Ordered Subsets Expectation Maximization (OSEM), with and without SRM. ROI contours were generated by 5 different SUV threshold values T: 2.5, 3.0, 30%, 40%, and 50% of SUVmax. Paired-samples t tests were used to compare SUVmean, MTV, and TGA (a) for SRM on versus off and (b) between each pair of threshold values T. SUVmax was also compared for SRM on versus off. Results: For almost all (57/60) comparisons of 2 different threshold values, SUVmean, MTV, and TGA showed statistically significant variation. For comparison of SRM on versus off, there were no statistically significant changes in SUVmax and TGA, but there were statistically significant changes in MTV for T=2.5 and T=3.0 and in SUVmean for all T. Conclusion: The near-universal statistical significance of threshold choice T suggests that, regarding harmonization across sites, threshold choice may be a greater concern than choice of SRM. However, broader study is warranted, e.g. other iterations of OSEM should be considered.
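    How the threshold choice T feeds into the reported quantities can be shown directly: a mask of voxels with SUV at or above T defines the ROI, SUVmean is the mean inside the mask, MTV is the mask volume, and TGA is their product. The SUV array and voxel volume below are synthetic stand-ins for a reconstructed PET image.

```python
# How threshold choice T propagates into the reported ROI metrics: SUVmean, metabolic
# tumor volume (MTV) and total glycolytic activity (TGA = SUVmean x MTV). The SUV array
# and voxel volume are synthetic stand-ins for a reconstructed PET image.
import numpy as np

rng = np.random.default_rng(4)
suv = rng.gamma(shape=2.0, scale=1.5, size=(40, 40, 20))   # synthetic SUV volume
voxel_ml = 0.016                                           # voxel volume in mL (assumed)

def roi_metrics(suv, threshold):
    mask = suv >= threshold
    suv_mean = suv[mask].mean()
    mtv = mask.sum() * voxel_ml
    return suv_mean, mtv, suv_mean * mtv

suv_max = suv.max()
for t in (2.5, 3.0, 0.3 * suv_max, 0.4 * suv_max, 0.5 * suv_max):
    m, v, g = roi_metrics(suv, t)
    print(f"T = {t:5.2f}: SUVmean = {m:.2f}, MTV = {v:.1f} mL, TGA = {g:.1f}")
```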

  3. Conflict Resolution as Near-Threshold Decision-Making: A Spiking Neural Circuit Model with Two-Stage Competition for Antisaccadic Task.

    PubMed

    Lo, Chung-Chuan; Wang, Xiao-Jing

    2016-08-01

    Automatic responses enable us to react quickly and effortlessly, but they often need to be inhibited so that an alternative, voluntary action can take place. To investigate the brain mechanism of controlled behavior, we investigated a biologically-based network model of spiking neurons for inhibitory control. In contrast to a simple race between pro- versus anti-response, our model incorporates a sensorimotor remapping module, and an action-selection module endowed with a "Stop" process through tonic inhibition. Both are under the modulation of rule-dependent control. We tested the model by applying it to the well known antisaccade task in which one must suppress the urge to look toward a visual target that suddenly appears, and shift the gaze diametrically away from the target instead. We found that the two-stage competition is crucial for reproducing the complex behavior and neuronal activity observed in the antisaccade task across multiple brain regions. Notably, our model demonstrates two types of errors: fast and slow. Fast errors result from failing to inhibit the quick automatic responses and therefore exhibit very short response times. Slow errors, in contrast, are due to incorrect decisions in the remapping process and exhibit long response times comparable to those of correct antisaccade responses. The model thus reveals a circuit mechanism for the empirically observed slow errors and broad distributions of erroneous response times in antisaccade. Our work suggests that selecting between competing automatic and voluntary actions in behavioral control can be understood in terms of near-threshold decision-making, sharing a common recurrent (attractor) neural circuit mechanism with discrimination in perception. PMID:27551824

  4. Conflict Resolution as Near-Threshold Decision-Making: A Spiking Neural Circuit Model with Two-Stage Competition for Antisaccadic Task

    PubMed Central

    Wang, Xiao-Jing

    2016-01-01

    Automatic responses enable us to react quickly and effortlessly, but they often need to be inhibited so that an alternative, voluntary action can take place. To investigate the brain mechanism of controlled behavior, we investigated a biologically-based network model of spiking neurons for inhibitory control. In contrast to a simple race between pro- versus anti-response, our model incorporates a sensorimotor remapping module, and an action-selection module endowed with a “Stop” process through tonic inhibition. Both are under the modulation of rule-dependent control. We tested the model by applying it to the well known antisaccade task in which one must suppress the urge to look toward a visual target that suddenly appears, and shift the gaze diametrically away from the target instead. We found that the two-stage competition is crucial for reproducing the complex behavior and neuronal activity observed in the antisaccade task across multiple brain regions. Notably, our model demonstrates two types of errors: fast and slow. Fast errors result from failing to inhibit the quick automatic responses and therefore exhibit very short response times. Slow errors, in contrast, are due to incorrect decisions in the remapping process and exhibit long response times comparable to those of correct antisaccade responses. The model thus reveals a circuit mechanism for the empirically observed slow errors and broad distributions of erroneous response times in antisaccade. Our work suggests that selecting between competing automatic and voluntary actions in behavioral control can be understood in terms of near-threshold decision-making, sharing a common recurrent (attractor) neural circuit mechanism with discrimination in perception. PMID:27551824

  5. Microstructures, percolation thresholds, and rock physical properties

    NASA Astrophysics Data System (ADS)

    Guéguen, Y.; Chelidze, T.; Le Ravalec, M.

    1997-09-01

    The physical properties (transport properties and mechanical properties) of porous/cracked rocks are mainly functions of their microstructure. In this connection the problem of critical (threshold) porosity for transport, elasticity and mechanical strength is especially important. Two dominant mathematical formalisms — effective medium theory (EMT) and percolation theory — claim to give answers to this problem. Some of the EMT models do not predict any threshold (differential effective medium). Other EMT models (self-consistent models) do predict thresholds, but it is shown that these thresholds are fictitious and result from an extension of a theory beyond its limit of validity. The failure of EMT methods at high pore/crack concentrations is the result of clustering effects. The appropriate formalism to correctly describe the phenomenon of clustering of pores and cracks and the behaviour of a system close to its critical porosity is percolation theory. Percolation thresholds can be predicted in that case from classical site or bond percolation on regular or random lattices. The threshold values depend on the density and average size of pores/cracks, so that porosity is not sufficient in general to characterize the threshold for a specific physical property. The general term 'critical porosity' should thus be used with caution and it is preferable to specify which property is concerned and what kind of microstructure is present. This term can be more safely used for a population of rocks which have an identical average shape of pores/cracks and for a given physical property.

  6. A threshold for dissipative fission

    SciTech Connect

    Thoennessen, M.; Bertsch, G.F.

    1993-09-21

    The empirical domain of validity of statistical theory is examined as applied to fission data on pre-fission neutron, charged particle, and γ-ray multiplicities. Systematics of the threshold excitation energy for the appearance of nonstatistical fission are found. From the data on systems with not too high fissility, the relevant phenomenological parameter is the ratio of the threshold temperature T_thresh to the (temperature-dependent) fission barrier height E_Bar(T). The statistical model reproduces the data for T_thresh/E_Bar(T) < 0.26 ± 0.05, but underpredicts the multiplicities at higher T_thresh/E_Bar(T), independent of the mass and fissility of the systems.

  7. Prediction-based threshold for medication alert.

    PubMed

    Kawazoe, Yoshimasa; Miyo, Kengo; Kurahashi, Issei; Sakurai, Ryota; Ohe, Kazuhiko

    2013-01-01

    This study presents a prediction-based approach to determining thresholds for a medication alert in a computerized physician order entry system. Traditional static thresholds can sometimes lead to physicians' alert fatigue or overlook potentially excessive medication even if the doses are below the configured threshold. To address this problem, we applied a random forest algorithm to develop a prediction model for medication doses, and applied a boxplot to determine the thresholds based on the prediction results. An evaluation of the eight drugs most frequently causing alerts in our hospital showed that the performance of the prediction was high, except for two drugs. It was also found that using the thresholds based on the predictions would reduce the alerts to half of those generated when using the static thresholds. Notably, some cases were detected only by the prediction thresholds. The significance of the thresholds should be discussed in terms of the trade-offs between gains and losses; however, our approach, which relies on physicians' collective experiences, has practical advantages. PMID:23920550
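    A sketch of the prediction-based alert idea under stated assumptions: a random forest predicts a patient-specific dose from covariates, and a boxplot-style fence on the training residuals decides how far an ordered dose may exceed the prediction before an alert fires. The features, data, and the 1.5 x IQR fence below are illustrative choices, not the paper's exact configuration (scikit-learn is assumed to be available).

```python
# Sketch of a prediction-based alert: a random forest predicts a patient-specific dose
# from covariates, and a boxplot-style fence on the training residuals decides how far
# an ordered dose may exceed the prediction before an alert fires. Features, data and
# the 1.5 x IQR fence are illustrative assumptions, not the paper's exact configuration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
X = rng.normal(size=(500, 3))                       # e.g. age, weight, renal function (synthetic)
dose = 50 + 10 * X[:, 1] - 5 * X[:, 2] + rng.normal(0, 5, 500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, dose)
residuals = dose - model.predict(X)
q1, q3 = np.percentile(residuals, [25, 75])
upper_fence = q3 + 1.5 * (q3 - q1)                  # boxplot upper fence on residuals

new_patient = rng.normal(size=(1, 3))
ordered_dose = 120.0
alert = ordered_dose > model.predict(new_patient)[0] + upper_fence
print("alert raised:", alert)
```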

  8. Estimating glomerular filtration rate in a population-based study

    PubMed Central

    Shankar, Anoop; Lee, Kristine E; Klein, Barbara EK; Muntner, Paul; Brazy, Peter C; Cruickshanks, Karen J; Nieto, F Javier; Danforth, Lorraine G; Schubert, Carla R; Tsai, Michael Y; Klein, Ronald

    2010-01-01

    Background: Glomerular filtration rate (GFR)-estimating equations are used to determine the prevalence of chronic kidney disease (CKD) in population-based studies. However, it has been suggested that since the commonly used GFR equations were originally developed from samples of patients with CKD, they underestimate GFR in healthy populations. Few studies have made side-by-side comparisons of the effect of various estimating equations on the prevalence estimates of CKD in a general population sample. Patients and methods: We examined a population-based sample comprising adults from Wisconsin (age, 43–86 years; 56% women). We compared the prevalence of CKD, defined as a GFR of <60 mL/min per 1.73 m2 estimated from serum creatinine, by applying various commonly used equations including the modification of diet in renal disease (MDRD) equation, Cockcroft–Gault (CG) equation, and the Mayo equation. We compared the performance of these equations against the CKD definition of cystatin C >1.23 mg/L. Results: We found that the prevalence of CKD varied widely among different GFR equations. Although the prevalence of CKD was 17.2% with the MDRD equation and 16.5% with the CG equation, it was only 4.8% with the Mayo equation. Only 24% of those identified to have GFR in the range of 50–59 mL/min per 1.73 m2 by the MDRD equation had cystatin C levels >1.23 mg/L; their mean cystatin C level was only 1 mg/L (interquartile range, 0.9–1.2 mg/L). This finding was similar for the CG equation. For the Mayo equation, 62.8% of those patients with GFR in the range of 50–59 mL/min per 1.73 m2 had cystatin C levels >1.23 mg/L; their mean cystatin C level was 1.3 mg/L (interquartile range, 1.2–1.5 mg/L). The MDRD and CG equations showed a false-positive rate of >10%. Discussion: We found that the MDRD and CG equations, the current standard to estimate GFR, appeared to overestimate the prevalence of CKD in a general population sample. PMID:20730018
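    For reference, the two creatinine-based estimates compared above are commonly written as the 4-variable MDRD study equation and the Cockcroft-Gault formula; the functions below use the widely published coefficients, which may differ slightly from the calibration used in this particular study.

```python
# The two creatinine-based estimates compared above, written with the commonly published
# coefficients (4-variable MDRD study equation and Cockcroft-Gault). These are textbook
# values and may differ slightly from the calibration used in this particular study.
def mdrd_gfr(scr_mg_dl, age, female, black):
    """Estimated GFR in mL/min per 1.73 m^2 (IDMS-traceable 4-variable MDRD equation)."""
    gfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        gfr *= 0.742
    if black:
        gfr *= 1.212
    return gfr

def cockcroft_gault(scr_mg_dl, age, weight_kg, female):
    """Estimated creatinine clearance in mL/min (not body-surface-area normalized)."""
    crcl = (140 - age) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

print(round(mdrd_gfr(1.1, 65, female=True, black=False)),           # ~50 mL/min/1.73 m^2
      round(cockcroft_gault(1.1, 65, weight_kg=70, female=True)))   # ~56 mL/min
```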

  9. Optimal inverse functions created via population-based optimization.

    PubMed

    Jennings, Alan L; Ordóñez, Raúl

    2014-06-01

    Finding optimal inputs for a multiple-input, single-output system is taxing for a system operator. Population-based optimization is used to create sets of functions that produce a locally optimal input based on a desired output. An operator or higher level planner could use one of the functions in real time. For the optimization, each agent in the population uses the cost and output gradients to take steps lowering the cost while maintaining their current output. When an agent reaches an optimal input for its current output, additional agents are generated in the output gradient directions. The new agents then settle to the local optima for the new output values. The set of associated optimal points forms an inverse function, via spline interpolation, from a desired output to an optimal input. In this manner, multiple locally optimal functions can be created. These functions are naturally clustered in input and output spaces allowing for a continuous inverse function. The operator selects the best cluster over the anticipated range of desired outputs and adjusts the set point (desired output) while maintaining optimality. This reduces the demand from controlling multiple inputs, to controlling a single set point with no loss in performance. Results are demonstrated on a sample set of functions and on a robot control problem. PMID:24235281

  10. A population-based study of large granular lymphocyte leukemia

    PubMed Central

    Shah, M V; Hook, C C; Call, T G; Go, R S

    2016-01-01

    Large granular lymphocyte (LGL) leukemia is a lymphoproliferative disorder of cytotoxic cells. T-cell LGL (T-LGL) leukemia is characterized by accumulation of cytotoxic T cells in blood and infiltration of the bone marrow, liver or spleen. Population-based studies have not been reported in LGL leukemia. We present clinical characteristics, natural history and risk factors for poor survival in patients with LGL leukemia using the Surveillance, Epidemiology, and End Results Program (SEER) and the United States National Cancer Data Base (NCDB). LGL leukemia is an extremely rare disease, with an incidence of 0.2 cases per 1 000 000 individuals. The median age at diagnosis was 66.5 years, with females diagnosed about 3 years earlier than males. Analysis of patient-level data using the NCDB (n=978) showed that 45% of patients with T-LGL leukemia required some form of systemic treatment at the time of diagnosis. T-LGL leukemia patients have reduced survival compared with the general population, with a median overall survival of 9 years. Multivariate analysis showed that age >60 years at the time of diagnosis and the presence of significant comorbidities were independent predictors of poor survival. PMID:27494824

  11. Epidemiology of Rett syndrome: a population-based registry.

    PubMed

    Kozinetz, C A; Skender, M L; MacNaughton, N; Almes, M J; Schultz, R J; Percy, A K; Glaze, D G

    1993-02-01

    The Texas Rett Syndrome Registry maintains the largest population-based registry of cases and potential cases of Rett syndrome in the world. The most precise estimate of the prevalence of Rett syndrome, 1 per 22,800 (0.44/10,000) females aged 2 through 18 years, was generated from this Registry. In addition, the first prevalence figures for black and Hispanic female cases were estimated. Registry cases are actively ascertained from multiple sources. Registry staff identify presumptive cases from review of information provided to the Registry by the parent or guardian. Preliminary diagnostic evaluation includes standardized review of medical records and videotape of key behaviors. Diagnosis is confirmed at clinical evaluation. The active surveillance system is monitored with the two-source capture-recapture methodology and case ascertainment is projected. The 1990 prevalence estimate of Rett syndrome indicates that the syndrome occurs less frequently than previously estimated. Until a biologic marker for Rett syndrome is identified or a standard definition for an incident case of Rett syndrome is designated, the prevalence of Rett syndrome will remain a major investigative issue of its epidemiology, and the Registry will be an important, systematic means of gathering case material for clinical and laboratory studies providing the foundation for the development of preventive interventions.
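    The two-source capture-recapture monitoring mentioned above is usually summarized with the Chapman estimator: from the counts found by each source and their overlap it estimates the total number of cases and hence the completeness of ascertainment. The counts below are hypothetical.

```python
# Chapman's two-source capture-recapture estimate, the monitoring approach mentioned above:
# from the counts found by each source and their overlap it estimates the total number of
# cases and hence the completeness of ascertainment. Counts are hypothetical.
n1 = 90      # cases found by source 1
n2 = 70      # cases found by source 2
m  = 60      # cases found by both sources

n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
completeness = (n1 + n2 - m) / n_hat
print(f"estimated total cases: {n_hat:.0f}; ascertainment completeness: {completeness:.0%}")
```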

  12. Histocompatibility antigens in a population based silicosis series.

    PubMed Central

    Kreiss, K; Danilovs, J A; Newman, L S

    1989-01-01

    Individual susceptibility to silicosis is suggested by the lack of a uniform dose response relation and by the presence of immunological epiphenomena, such as increased antibody levels and associated diseases that reflect altered immune regulation. Human leucocyte antigens (HLA) are linked with immune response capability and might indicate a possible genetic susceptibility to silicosis. Forty nine silicotic subjects were identified from chest radiographs in a population based study in Leadville, Colorado. They were interviewed for symptoms and occupational history and gave a blood specimen for HLA-A, -B, -DR, and -DQ typing and for antinuclear antibody, immune complexes, immunoglobulins, and rheumatoid factor. Silicotic subjects had twice the prevalence of B44 (45%) of the reference population and had triple the prevalence of A29 (20%), both of which were statistically significant when corrected for the number of comparisons made. No perturbations in D-region antigen frequencies were detected. B44-positive subjects were older at diagnosis and had less dyspnoea than other subjects. A29-positive subjects were more likely to have abnormal levels of IgA and had higher levels of immune complexes. This study is the first to find significant HLA antigen excesses among a series of silicotic cases and extends earlier reported hypotheses that were based on groups of antigens of which B44 and A29 are components. PMID:2818968

  13. Optimal inverse functions created via population-based optimization.

    PubMed

    Jennings, Alan L; Ordóñez, Raúl

    2014-06-01

    Finding optimal inputs for a multiple-input, single-output system is taxing for a system operator. Population-based optimization is used to create sets of functions that produce a locally optimal input based on a desired output. An operator or higher level planner could use one of the functions in real time. For the optimization, each agent in the population uses the cost and output gradients to take steps lowering the cost while maintaining their current output. When an agent reaches an optimal input for its current output, additional agents are generated in the output gradient directions. The new agents then settle to the local optima for the new output values. The set of associated optimal points forms an inverse function, via spline interpolation, from a desired output to an optimal input. In this manner, multiple locally optimal functions can be created. These functions are naturally clustered in input and output spaces allowing for a continuous inverse function. The operator selects the best cluster over the anticipated range of desired outputs and adjusts the set point (desired output) while maintaining optimality. This reduces the demand from controlling multiple inputs, to controlling a single set point with no loss in performance. Results are demonstrated on a sample set of functions and on a robot control problem.

  14. Crossing the Writing Threshold.

    ERIC Educational Resources Information Center

    Clark, Carol Lea

    What pushes a writer over the edge of thought into text production--over what may be called "the writing threshold?" This is the moment when the thoughts in a writer's mind, the writing situation, and personal motivations create a momentum that results in a pattern of written words. There is evidence that not everyone crosses the writing threshold…

  15. Long-term daily vibration exposure alters current perception threshold (CPT) sensitivity and myelinated axons in a rat-tail model of vibration-induced injury.

    PubMed

    Krajnak, Kristine; Raju, Sandya G; Miller, G Roger; Johnson, Claud; Waugh, Stacey; Kashon, Michael L; Riley, Danny A

    2016-01-01

    Repeated exposure to hand-transmitted vibration through the use of powered hand tools may result in pain and progressive reductions in tactile sensitivity. The goal of the present study was to use an established animal model of vibration-induced injury to characterize changes in sensory nerve function and cellular mechanisms associated with these alterations. Sensory nerve function was assessed weekly using the current perception threshold test and tail-flick analgesia test in male Sprague-Dawley rats exposed to 28 d of tail vibration. After 28 d of exposure, Aβ fiber sensitivity was reduced. This reduction in sensitivity was partly attributed to structural disruption of myelin. In addition, the decrease in sensitivity was also associated with a reduction in myelin basic protein and 2',3'-cyclic nucleotide phosphodiesterase (CNPase) staining in tail nerves, and an increase in circulating calcitonin gene-related peptide (CGRP) concentrations. Changes in Aβ fiber sensitivity and CGRP concentrations may serve as early markers of vibration-induced injury in peripheral nerves. It is conceivable that these markers may be utilized to monitor sensorineural alterations in workers exposed to vibration to potentially prevent additional injury. PMID:26852665

  16. Identification of Molecular Fingerprints in Human Heat Pain Thresholds by Use of an Interactive Mixture Model R Toolbox (AdaptGauss).

    PubMed

    Ultsch, Alfred; Thrun, Michael C; Hansen-Goos, Onno; Lötsch, Jörn

    2015-10-28

    Biomedical data obtained during cell experiments, laboratory animal research, or human studies often display a complex distribution. Statistical identification of subgroups in research data poses an analytical challenge. Here we introduce an interactive R-based bioinformatics tool, called "AdaptGauss". It enables a valid identification of a biologically-meaningful multimodal structure in the data by fitting a Gaussian mixture model (GMM) to the data. The interface allows a supervised selection of the number of subgroups. This enables the expectation maximization (EM) algorithm to fit more complex GMMs than are usually obtained with a noninteractive approach. Interactively fitting a GMM to heat pain threshold data acquired from human volunteers revealed a distribution pattern with four Gaussian modes located at temperatures of 32.3, 37.2, 41.4, and 45.4 °C. Noninteractive fitting was unable to identify a meaningful data structure. Obtained results are compatible with known activity temperatures of different TRP ion channels suggesting the mechanistic contribution of different heat sensors to the perception of thermal pain. Thus, sophisticated analysis of the modal structure of biomedical data provides a basis for the mechanistic interpretation of the observations. As it may reflect the involvement of different TRP thermosensory ion channels, the analysis provides a starting point for hypothesis-driven laboratory experiments.
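
    A rough non-interactive analogue of the analysis (this is not AdaptGauss, which is an interactive R tool): fit a four-component 1-D Gaussian mixture with the mode count fixed in advance, here to simulated heat pain thresholds.

    ```python
    # Sketch: 4-component GMM fit to simulated heat pain thresholds.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    temps = np.concatenate([rng.normal(m, 1.0, n) for m, n in
                            [(32.3, 80), (37.2, 120), (41.4, 160), (45.4, 60)]])

    gmm = GaussianMixture(n_components=4, random_state=0).fit(temps.reshape(-1, 1))
    for mu, var, w in zip(gmm.means_.ravel(), gmm.covariances_.ravel(), gmm.weights_):
        print(f"mode at {mu:.1f} °C  sd {np.sqrt(var):.2f}  weight {w:.2f}")
    ```

    The supervised step in the paper corresponds to choosing n_components (and starting values) by inspection rather than letting an automatic criterion decide.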

  17. Long-term daily vibration exposure alters current perception threshold (CPT) sensitivity and myelinated axons in a rat-tail model of vibration-induced injury.

    PubMed

    Krajnak, Kristine; Raju, Sandya G; Miller, G Roger; Johnson, Claud; Waugh, Stacey; Kashon, Michael L; Riley, Danny A

    2016-01-01

    Repeated exposure to hand-transmitted vibration through the use of powered hand tools may result in pain and progressive reductions in tactile sensitivity. The goal of the present study was to use an established animal model of vibration-induced injury to characterize changes in sensory nerve function and cellular mechanisms associated with these alterations. Sensory nerve function was assessed weekly using the current perception threshold test and tail-flick analgesia test in male Sprague-Dawley rats exposed to 28 d of tail vibration. After 28 d of exposure, Aβ fiber sensitivity was reduced. This reduction in sensitivity was partly attributed to structural disruption of myelin. In addition, the decrease in sensitivity was also associated with a reduction in myelin basic protein and 2',3'-cyclic nucleotide phosphodiesterase (CNPase) staining in tail nerves, and an increase in circulating calcitonin gene-related peptide (CGRP) concentrations. Changes in Aβ fiber sensitivity and CGRP concentrations may serve as early markers of vibration-induced injury in peripheral nerves. It is conceivable that these markers may be utilized to monitor sensorineural alterations in workers exposed to vibration to potentially prevent additional injury.

  18. Secondary flow structures in the presence of Type-IV stent fractures through a bent tube model for curved arteries: Effect of circulation thresholding

    NASA Astrophysics Data System (ADS)

    Hussain, Shadman; Bulusu, Kartik V.; Plesniak, Michael W.

    2013-11-01

    A common treatment for atherosclerosis is the opening of narrowed arteries resulting from obstructive lesions by angioplasty and stent implantation to restore unrestricted blood flow. "Type-IV" stent fractures involve complete transverse, linear fracture of stent struts, along with displacement of the stent fragments. Experimental data pertaining to secondary flows in the presence of stents that underwent "Type-IV" fractures in a bent artery model under physiological inflow conditions were obtained through a two-component, two-dimensional (2C-2D) PIV technique. Concomitant stent-induced flow perturbations result in secondary flow structures with complex, multi-scale morphologies and varying size-strength characteristics. Ultimately, these flow structures may have a role to play in restenosis and progression of atherosclerotic plaque. Vortex circulation thresholds were established with the goal of resolving and tracking iso-circulation secondary flow vortical structures and their morphological changes. This allowed for a parametric evaluation and quantitative representation of secondary flow structures undergoing deformation and spatial reorganization. Supported by NSF Grant No. CBET-0828903 and GW Center for Biomimetics and Bioinspired Engineering.
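
    The basic thresholding step can be illustrated on a synthetic planar velocity field (not the experimental PIV data): compute vorticity, segment regions above a vorticity cutoff, and keep only structures whose circulation exceeds a chosen threshold. The field and threshold values below are assumptions.

    ```python
    # Sketch: circulation thresholding of vortical structures in a 2-D field.
    import numpy as np
    from scipy import ndimage

    x, y = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128))
    dx = x[0, 1] - x[0, 0]
    r2 = x**2 + y**2
    u, v = -y * np.exp(-r2 / 0.05), x * np.exp(-r2 / 0.05)   # assumed test field

    # Vorticity omega = dv/dx - du/dy (central differences)
    omega = np.gradient(v, dx, axis=1) - np.gradient(u, dx, axis=0)

    labels, n = ndimage.label(np.abs(omega) > 0.2 * np.abs(omega).max())
    for k in range(1, n + 1):
        gamma = omega[labels == k].sum() * dx * dx       # circulation of region k
        if abs(gamma) > 0.05:                            # circulation threshold (assumed)
            print(f"structure {k}: circulation = {gamma:.3f}")
    ```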

  19. Real external predictivity of QSAR models. Part 2. New intercomparable thresholds for different validation criteria and the need for scatter plot inspection.

    PubMed

    Chirico, Nicola; Gramatica, Paola

    2012-08-27

    The evaluation of regression QSAR model performance, in fitting, robustness, and external prediction, is of pivotal importance. Over the past decade, different external validation parameters have been proposed: Q²F1, Q²F2, Q²F3, r²m, and the Golbraikh-Tropsha method. Recently, the concordance correlation coefficient (CCC, Lin), which simply verifies how small the differences are between experimental data and external data set predictions, independently of their range, was proposed by our group as an external validation parameter for use in QSAR studies. In our preliminary work, we demonstrated with thousands of simulated models that CCC is in good agreement with the compared validation criteria (except r²m) using the cutoff values normally applied for the acceptance of QSAR models as externally predictive. In this new work, we have studied and compared the general trends of the various criteria relative to different possible biases (scale and location shifts) in external data distributions, using a wide range of different simulated scenarios. This study, further supported by visual inspection of experimental vs predicted data scatter plots, has highlighted problems related to some criteria. Indeed, if based on the cutoff suggested by the proponent, r²m could also accept non-predictive models in two of the possible biases (location, location plus scale), while in the case of scale shift bias, it appears to be the most restrictive. Moreover, Q²F1 and Q²F2 showed some problems in one of the possible biases (scale shift). This analysis allowed us to also propose recalibrated, and intercomparable for the same data scatter, new thresholds for each criterion in defining a QSAR model as really externally predictive in a more precautionary approach. An analysis of the results revealed that the scatter plot of experimental vs predicted external data must always be evaluated to support the statistical criteria values: in some cases high
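
    For readers who want to compute the criteria being compared, a minimal sketch of the standard published formulas (the paper's contribution is the recalibrated thresholds, not the formulas themselves):

    ```python
    # Sketch: external-validation statistics Q2_F1, Q2_F2, Q2_F3 and Lin's CCC.
    import numpy as np

    def external_metrics(y_train, y_ext, y_ext_pred):
        y_train, y_ext, y_ext_pred = map(np.asarray, (y_train, y_ext, y_ext_pred))
        press = np.sum((y_ext - y_ext_pred) ** 2)
        q2f1 = 1 - press / np.sum((y_ext - y_train.mean()) ** 2)
        q2f2 = 1 - press / np.sum((y_ext - y_ext.mean()) ** 2)
        q2f3 = 1 - (press / y_ext.size) / np.var(y_train)     # variance about training mean
        # Lin's concordance correlation coefficient (experimental vs predicted)
        sxy = np.mean((y_ext - y_ext.mean()) * (y_ext_pred - y_ext_pred.mean()))
        ccc = 2 * sxy / (y_ext.var() + y_ext_pred.var()
                         + (y_ext.mean() - y_ext_pred.mean()) ** 2)
        return {"Q2_F1": q2f1, "Q2_F2": q2f2, "Q2_F3": q2f3, "CCC": ccc}
    ```

    As the abstract stresses, whatever numbers these functions return, the experimental-versus-predicted scatter plot still has to be inspected.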

  20. Recurrent Wheezing in Infants: A Population-Based Study.

    PubMed

    Belhassen, Manon; De Blic, Jacques; Laforest, Laurent; Laigle, Valérie; Chanut-Vogel, Céline; Lamezec, Liliane; Brouard, Jacques; Fauroux, Brigitte; de Pouvourville, Gérard; Ginoux, Marine; Van Ganse, Eric

    2016-04-01

    Recurrent wheezing (RW) has a significant impact on infants, caregivers, and society, but morbidity and related medical resource utilization (MRU) have not been thoroughly explored. The burden of RW needs to be documented with population-based data. The objective was to assess the characteristics, medical management, and MRU of RW infants identified from national claims data. Infants aged from 6 to 24 months, receiving ≥2 dispensations of respiratory drugs within 3 months, and presenting a marker of poor control (index date), were selected. During the 6 months after index date, MRU was described in the cohort and among 3 subgroups with more severe RW, defined as ≥4 dispensations of respiratory drugs, ≥3 dispensations of oral corticosteroids (OCS), or ≥1 hospitalization for respiratory symptoms. A total of 115,489 infants had RW, corresponding to 8.2% of subjects in this age group. During follow-up, 68.7% of infants received inhaled corticosteroids, but only 1.8 U (unit) were dispensed over 6 months, suggesting discontinuous use. Control was mostly inadequate: 61.7% of subjects received OCS, 80.2% antibiotics, and 71.2% short-acting beta-agonists, and medical/paramedical visits were numerous, particularly for physiotherapy. Severe RW concerned 39.0% of the cohort; 32.8% and 11.7% of infants had repeated use of respiratory drugs and OCS, respectively, and 5.5% were hospitalized for respiratory symptoms. In this real-life nation-wide study, RW was common and infants had poor control and high MRU. Interventions are needed to support adequate use of controller therapy, and to improve medical care. PMID:27082618

  1. Network problem threshold

    NASA Technical Reports Server (NTRS)

    Gejji, Raghvendra R.

    1992-01-01

    Network transmission errors such as collisions, CRC errors, misalignment, etc. are statistical in nature. Although errors can vary randomly, a high level of errors does indicate specific network problems, e.g. equipment failure. In this project, we have studied the random nature of collisions theoretically as well as by gathering statistics, and established a numerical threshold above which a network problem is indicated with high probability.
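
    A minimal sketch of the idea under an assumed Poisson model for collision counts (not the project's actual statistics): estimate the normal rate from history and flag an interval whose count exceeds a high quantile of that model.

    ```python
    # Sketch: numerical threshold on collision counts from a Poisson model.
    import numpy as np
    from scipy import stats

    history = np.array([3, 5, 2, 4, 6, 3, 5, 4, 2, 7])   # collisions per interval (assumed)
    lam = history.mean()
    threshold = stats.poisson.ppf(0.999, lam)            # exceeded ~0.1% of the time by chance

    current = 14
    if current > threshold:
        print(f"{current} collisions > threshold {threshold:.0f}: probable network problem")
    ```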

  2. Deterministic estimation of hydrological thresholds for shallow landslide initiation and slope stability models: case study from the Somma-Vesuvius area of southern Italy

    USGS Publications Warehouse

    Baum, Rex L.; Godt, Jonathan W.; De Vita, P.; Napolitano, E.

    2012-01-01

    interrupted. These results lead to the identification of a comprehensive hydrogeomorphological model of susceptibility to initial landslides that links morphological, stratigraphical and hydrological conditions. The calculation of intensities and durations of rainfall necessary for slope instability allowed the identification of deterministic hydrological thresholds that account for uncertainty in properties and observed rainfall intensities.

  3. Cyberbullying among Finnish adolescents – a population-based study

    PubMed Central

    2012-01-01

    Background Cyberbullying, threatening or harassing another via the internet or mobile phones, does not cause physical harm and thus the consequences are less visible. Little research has been performed on the occurrence of cyberbullying among adolescents or the perception of its seriousness. Only a few population-based studies have been published, none of which included research on the witnessing of cyberbullying. Here, we examined exposure to cyberbullying during the last year, and its frequency and perceived seriousness among 12- to 18-year-old adolescents in Finland. We studied four dimensions of cyberbullying: being a victim, bully, or both victim and bully of cyberbullying, and witnessing the cyberbullying of friends. Methods Self-administered questionnaires, including four questions on cyberbullying, were mailed to a representative sample of 12-, 14-, 16-, and 18-year-old Finns in 2009 (the Adolescent Health and Lifestyle Survey). The respondents could answer via the internet or paper questionnaire. Results The number of respondents was 5516 and the response rate was 56%. Girls more often than boys reported experiencing at least one dimension of cyberbullying during the last year. The proportion was highest among 14-year-olds and lowest among 18-year-olds of both sexes. Among girls, the most commonly encountered dimension was witnessing the cyberbullying of friends (16%); and being a victim was slightly more common than being a bully (11% vs. 9%). Among boys, an equal proportion, approximately 10%, had been a victim, a bully, or had witnessed cyberbullying. The proportion of bully-victims was 4%. Serious and disruptive cyberbullying was experienced by 2% of respondents and weekly cyberbullying by 1%; only 0.5% of respondents had been bullied weekly and considered bullying serious and disruptive. Conclusions Adolescents are commonly exposed to cyberbullying, but it is rarely frequent or considered serious or disruptive. Cyberbullying exposure differed between

  4. Familial risk of cerebral palsy: population based cohort study

    PubMed Central

    Wilcox, Allen J; Lie, Rolv T; Moster, Dag

    2014-01-01

    Objective To investigate risks of recurrence of cerebral palsy in family members with various degrees of relatedness to elucidate patterns of hereditability. Design Population based cohort study. Setting Data from the Medical Birth Registry of Norway, linked to the Norwegian social insurance scheme to identify cases of cerebral palsy and to databases of Statistics Norway to identify relatives. Participants 2 036 741 Norwegians born during 1967-2002, 3649 of whom had a diagnosis of cerebral palsy; 22 558 pairs of twins, 1 851 144 pairs of first degree relatives, 1 699 856 pairs of second degree relatives, and 5 165 968 pairs of third degree relatives were identified. Main outcome measure Cerebral palsy. Results If one twin had cerebral palsy, the relative risk of recurrence of cerebral palsy was 15.6 (95% confidence interval 9.8 to 25) in the other twin. In families with an affected singleton child, risk was increased 9.2 (6.4 to 13)-fold in a subsequent full sibling and 3.0 (1.1 to 8.6)-fold in a half sibling. Affected parents were also at increased risk of having an affected child (6.5 (1.6 to 26)-fold). No evidence was found of differential transmission through mothers or fathers, although the study had limited power to detect such differences. For people with an affected first cousin, only weak evidence existed for an increased risk (1.5 (0.9 to 2.7)-fold). Risks in siblings or cousins were independent of sex of the index case. After exclusion of preterm births (an important risk factor for cerebral palsy), familial risks remained and were often stronger. Conclusions People born into families in which someone already has cerebral palsy are themselves at elevated risk, depending on their degree of relatedness. Elevated risk may extend even to third degree relatives (first cousins). The patterns of risk suggest multifactorial inheritance, in which multiple genes interact with each other and with environmental factors. These data offer additional

  5. Estimating HIV Prevalence in Zimbabwe Using Population-Based Survey Data

    PubMed Central

    Chinomona, Amos; Mwambi, Henry Godwell

    2015-01-01

    Estimates of HIV prevalence computed using data obtained from sampling a subgroup of the national population may lack the representativeness of all the relevant domains of the population. These estimates are often computed on the assumption that HIV prevalence is uniform across all domains of the population. Use of appropriate statistical methods together with population-based survey data can enhance better estimation of national and subgroup level HIV prevalence and can provide improved explanations of the variation in HIV prevalence across different domains of the population. In this study we computed design-consistent estimates of HIV prevalence, and their respective 95% confidence intervals at both the national and subgroup levels. In addition, we provided a multivariable survey logistic regression model from a generalized linear modelling perspective for explaining the variation in HIV prevalence using demographic, socio-economic, socio-cultural and behavioural factors. Essentially, this study borrows from the proximate determinants conceptual framework which provides guiding principles upon which socio-economic and socio-cultural variables affect HIV prevalence through biological behavioural factors. We utilize the 2010–11 Zimbabwe Demographic and Health Survey (2010–11 ZDHS) data (which are population based) to estimate HIV prevalence in different categories of the population and for constructing the logistic regression model. It was established that HIV prevalence varies greatly with age, gender, marital status, place of residence, literacy level, belief on whether condom use can reduce the risk of contracting HIV and level of recent sexual activity whereas there was no marked variation in HIV prevalence with social status (measured using a wealth index), method of contraceptive and an individual’s level of education. PMID:26624280

  6. Estimating HIV Prevalence in Zimbabwe Using Population-Based Survey Data.

    PubMed

    Chinomona, Amos; Mwambi, Henry Godwell

    2015-01-01

    Estimates of HIV prevalence computed using data obtained from sampling a subgroup of the national population may lack the representativeness of all the relevant domains of the population. These estimates are often computed on the assumption that HIV prevalence is uniform across all domains of the population. Use of appropriate statistical methods together with population-based survey data can enhance better estimation of national and subgroup level HIV prevalence and can provide improved explanations of the variation in HIV prevalence across different domains of the population. In this study we computed design-consistent estimates of HIV prevalence, and their respective 95% confidence intervals at both the national and subgroup levels. In addition, we provided a multivariable survey logistic regression model from a generalized linear modelling perspective for explaining the variation in HIV prevalence using demographic, socio-economic, socio-cultural and behavioural factors. Essentially, this study borrows from the proximate determinants conceptual framework which provides guiding principles upon which socio-economic and socio-cultural variables affect HIV prevalence through biological behavioural factors. We utilize the 2010-11 Zimbabwe Demographic and Health Survey (2010-11 ZDHS) data (which are population based) to estimate HIV prevalence in different categories of the population and for constructing the logistic regression model. It was established that HIV prevalence varies greatly with age, gender, marital status, place of residence, literacy level, belief on whether condom use can reduce the risk of contracting HIV and level of recent sexual activity whereas there was no marked variation in HIV prevalence with social status (measured using a wealth index), method of contraceptive and an individual's level of education. PMID:26624280
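
    A minimal illustration of a design-weighted prevalence estimate by subgroup, with synthetic records and weights (proper design-based variances require a survey package and are not attempted here):

    ```python
    # Sketch: survey-weighted HIV prevalence by subgroup (synthetic data).
    import numpy as np
    import pandas as pd

    df = pd.DataFrame({
        "hiv":    [1, 0, 0, 1, 0, 1, 0, 0, 1, 0],
        "sex":    ["f", "f", "m", "m", "f", "f", "m", "m", "f", "m"],
        "weight": [1.2, 0.8, 1.5, 0.9, 1.1, 1.3, 0.7, 1.0, 0.6, 1.4],
    })

    for sex, g in df.groupby("sex"):
        prev = np.average(g["hiv"], weights=g["weight"])
        print(f"{sex}: weighted prevalence = {prev:.3f}")
    ```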

  7. Threshold altitude resulting in decompression sickness

    NASA Technical Reports Server (NTRS)

    Kumar, K. V.; Waligora, James M.; Calkins, Dick S.

    1990-01-01

    A review of case reports, hypobaric chamber training data, and experimental evidence indicated that the threshold for incidence of altitude decompression sickness (DCS) was influenced by various factors such as prior denitrogenation, exercise or rest, and period of exposure, in addition to individual susceptibility. Fitting these data with appropriate statistical models makes it possible to examine the influence of various factors on the threshold for DCS. This approach was illustrated by logistic regression analysis on the incidence of DCS below 9144 m. Estimations using these regressions showed that, under a no-prebreathe, 6-h exposure, simulated EVA profile, the threshold for symptoms occurred at approximately 3353 m; while under a no-prebreathe, 2-h exposure profile with knee-bends exercise, the threshold occurred at 7925 m.
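
    The threshold-from-logistic-regression step can be sketched as follows; the incidence data below are hypothetical, not the paper's dataset, and the 5% risk level is an assumed definition of "threshold".

    ```python
    # Sketch: logistic model of DCS incidence vs altitude, solved for the
    # altitude at a chosen threshold probability (hypothetical data).
    import numpy as np
    import statsmodels.api as sm

    alt_m = np.array([3000, 4000, 5000, 6000, 7000, 8000, 9000], dtype=float)
    n     = np.array([ 120,  110,  100,   90,   95,   80,   70])
    cases = np.array([   0,    1,    2,    5,   12,   20,   28])

    X = sm.add_constant(alt_m)
    fit = sm.GLM(np.column_stack([cases, n - cases]), X,
                 family=sm.families.Binomial()).fit()

    p_thresh = 0.05
    b0, b1 = fit.params
    alt_threshold = (np.log(p_thresh / (1 - p_thresh)) - b0) / b1
    print(f"altitude at {p_thresh:.0%} predicted DCS risk ≈ {alt_threshold:.0f} m")
    ```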

  8. Comparison between intensity-duration thresholds and cumulative rainfall thresholds for the forecasting of landslides

    NASA Astrophysics Data System (ADS)

    Lagomarsino, Daniela; Rosi, Ascanio; Rossi, Guglielmo; Segoni, Samuele; Catani, Filippo

    2014-05-01

    This work makes a quantitative comparison between the results of landslide forecasting obtained using two different rainfall threshold models, one using intensity-duration thresholds and the other based on cumulative rainfall thresholds, in an area of northern Tuscany of 116 km². The first methodology identifies rainfall intensity-duration thresholds by means of software called MaCumBA (Massive CUMulative Brisk Analyzer) that analyzes rain-gauge records, extracts the intensities (I) and durations (D) of the rainstorms associated with the initiation of landslides, plots these values on a diagram, and identifies thresholds that define the lower bounds of the I-D values. A back analysis using data from past events can be used to identify the threshold conditions associated with the fewest false alarms. The second method (SIGMA) is based on the hypothesis that anomalous or extreme values of rainfall are responsible for landslide triggering: the statistical distribution of the rainfall series is analyzed, and multiples of the standard deviation (σ) are used as thresholds to discriminate between ordinary and extraordinary rainfall events. The name of the model, SIGMA, reflects the central role of the standard deviations in the proposed methodology. The definition of intensity-duration rainfall thresholds requires the combined use of rainfall measurements and an inventory of dated landslides, whereas the SIGMA model can be implemented using only rainfall data. These two methodologies were applied in an area of 116 km² where a database of 1200 landslides was available for the period 2000-2012. The results obtained are compared and discussed. Although several examples of visual comparisons between different intensity-duration rainfall thresholds are reported in the international literature, a quantitative comparison between thresholds obtained in the same area using different techniques and approaches is a relatively undebated research topic.
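
    A schematic version of the standard-deviation idea behind SIGMA (heavily simplified: one fixed cumulation window, synthetic daily rainfall, and an uncalibrated sigma multiple):

    ```python
    # Sketch: cumulative-rainfall threshold as mean + k*sigma of a rainfall record.
    import numpy as np

    rng = np.random.default_rng(1)
    rain = rng.gamma(shape=0.3, scale=8.0, size=20 * 365)   # assumed daily rainfall record, mm

    window = 3                                               # cumulation window in days
    cum = np.convolve(rain, np.ones(window), mode="valid")   # 3-day cumulative rainfall

    k = 2.5                                                  # sigma multiple (calibrated in practice)
    threshold = cum.mean() + k * cum.std()

    recent = cum[-1]
    print(f"3-day rainfall {recent:.1f} mm vs threshold {threshold:.1f} mm "
          f"-> {'alert' if recent > threshold else 'ordinary'}")
    ```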

  9. Can we infer the magma overpressure threshold before an eruption? Insights from ground deformation time series and numerical modeling of reservoir failure.

    NASA Astrophysics Data System (ADS)

    Albino, F.; Gregg, P. M.; Amelung, F.

    2015-12-01

    Overpressure within a magma chamber is a key parameter to understanding the onset of an eruption. Recent investigations indicate that surface inflation at a volcanic edifice does not always precede eruption (Chaussard and Amelung, 2012; Biggs et al., 2014), suggesting that the overpressure threshold may differ between volcanoes. To understand the failure conditions of a magma reservoir, mechanical models were developed to quantify the range of overpressure that a reservoir can sustain in a given situation. Although the choice of failure criterion is still debated, most investigators agree that the overpressure required to fail the magma reservoir is at first order a function of the crustal stress field and the shape of the magma reservoir. Radar interferometry (InSAR) provides a large dataset of ground deformation worldwide, but many of these InSAR studies continue to use point or dislocation sources (Mogi, Okada) to explain deformation on volcanoes. Although these simple solutions often fit the data and estimate the depth and the volume change of the source of deformation, key parameters such as the magma overpressure or the mechanical properties of the rocks cannot be derived. We use mechanical numerical models of reservoir failure combined with ground deformation data. It has been observed that the volume change before an eruption can easily span one to two orders of magnitude, from 1 to 100×10⁶ m³. The first goal of this study is to understand which parameter(s) control the critical volume changes just before the failure of the reservoir. First, a parametric study is performed to quantify the effect of the geometry of the reservoir (radius, depth), the local stress (compressive/extensive) and even the crust rheology (elastic/viscoelastic). We then compare modeling results with several active volcanoes where long time series of volume change are available: Okmok and Westdahl in Alaska, Sinabung and Agung in Indonesia and Galapagos. For each case, the maximum
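
    For context, the point ("Mogi") source mentioned above is commonly fit to InSAR data in terms of depth and volume change; the sketch below shows the textbook surface-uplift expression for such a source, with parameter values chosen purely for illustration. As the abstract notes, this parameterization yields depth and dV but says nothing about overpressure or rock properties.

    ```python
    # Sketch: vertical surface displacement of a point pressure (Mogi-type) source.
    import numpy as np

    def mogi_uz(r, depth, dV, nu=0.25):
        """Vertical displacement (m) at radial distance r (m) from the source axis,
        for a point source at `depth` (m) with volume change dV (m^3)."""
        return (1.0 - nu) * dV / np.pi * depth / (r**2 + depth**2) ** 1.5

    r = np.linspace(0.0, 10_000.0, 5)
    print(mogi_uz(r, depth=3000.0, dV=5e6))   # illustrative values only
    ```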

  10. Genetic analysis of the twenty-one-day pregnancy rate in US Holsteins using an ordinal censored threshold model with unknown voluntary waiting period.

    PubMed

    Chang, Y M; González-Recio, O; Weigel, K A; Fricke, P M

    2007-04-01

    Genetic variation in the number of 21-d opportunity periods required to achieve pregnancy after the voluntary waiting period (VWP) had passed was examined using 44,901 lactation records of 29,422 lactating Holstein cows on 61 large commercial dairy farms in the United States. Cows were allowed a maximum of 8 opportunity periods, and the cumulative percentages of cows that became pregnant by the end of the first, second, third, fourth, and fifth opportunity periods were 19, 29, 37, 43, and 47%, respectively. In addition, 38% of records were censored because of culling or failure to achieve pregnancy after 8 opportunity periods. Mean days open was 128 d for complete records, whereas mean days to last service was 148 d for censored records. An ordinal censored threshold model was developed, in which duration of the VWP was estimated simultaneously with prediction of sire breeding values. The posterior mean of intraherd-year heritability for the number of 21-d opportunity periods required to achieve pregnancy was 0.06, with a posterior standard deviation of 0.01. Posterior means for duration of the VWP ranged from 28 to 74 d postpartum among the 116 herd-parity classes represented in the study, whereas farmer-reported survey values for duration of the VWP ranged from 30 to 78 d postpartum. Sires' predicted transmitting abilities were computed, assuming an unknown VWP (i.e., estimated from the data), a VWP fixed at 60 d postpartum, or a VWP fixed at farmer survey values. Correlations among sire predicted transmitting abilities from different models were > or = 0.98, although some reranking occurred among top sires. In summary, the proposed model for genetic evaluation of female fertility can accommodate heterogeneity in duration of the VWP between herds, as well as heterogeneity that may arise within herds owing to management practices such as intentional delay of first insemination in high-producing cows or cows with poor body condition, and it can also accommodate

  11. Thresholded Power law Size Distributions of Instabilities in Astrophysics

    NASA Astrophysics Data System (ADS)

    Aschwanden, Markus J.

    2015-11-01

    Power-law-like size distributions are ubiquitous in astrophysical instabilities. There are at least four natural effects that cause deviations from ideal power law size distributions, which we model here in a generalized way: (1) a physical threshold of an instability; (2) incomplete sampling of the smallest events below a threshold x0; (3) contamination by an event-unrelated background xb; and (4) truncation effects at the largest events due to a finite system size. These effects can be modeled in the simplest terms with a “thresholded power law” distribution function (also called generalized Pareto [type II] or Lomax distribution), N(x) dx ∝ (x + x0)^(−a) dx, where x0 > 0 is positive for a threshold effect, while x0 < 0 is negative for background contamination. We analytically derive the functional shape of this thresholded power law distribution function from an exponential growth evolution model, which produces avalanches only when a disturbance exceeds a critical threshold x0. We apply the thresholded power law distribution function to terrestrial, solar (HXRBS, BATSE, RHESSI), and stellar flare (Kepler) data sets. We find that the thresholded power law model provides an adequate fit to most of the observed data. Major advantages of this model are the automated choice of the power law fitting range, diagnostics of background contamination, physical instability thresholds, instrumental detection thresholds, and finite system size limits. When testing self-organized criticality models that predict ideal power laws, we suggest including these natural truncation effects.
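
    Since the thresholded power law is the Lomax (Pareto type II) distribution, a quick way to experiment with it is via scipy; the exponent a and threshold x0 below are assumed values, and the Lomax shape parameter relates to the abstract's exponent by c = a − 1.

    ```python
    # Sketch: sample from N(x) ∝ (x + x0)^(-a) and recover a, x0 by maximum likelihood.
    import numpy as np
    from scipy import stats

    a, x0 = 1.8, 5.0
    samples = stats.lomax.rvs(c=a - 1.0, scale=x0, size=20000, random_state=0)

    c_hat, loc_hat, x0_hat = stats.lomax.fit(samples, floc=0.0)
    print(f"fitted power-law slope a ≈ {c_hat + 1:.2f}, threshold x0 ≈ {x0_hat:.2f}")
    ```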

  12. The Effects of Social Reforms on Mental Disability in China: Population-Based Study.

    PubMed

    Wang, Zhenjie; Zhang, Lei; Li, Ning; Guo, Chao; Chen, Gong; Zheng, Xiaoying

    2016-04-01

    Few studies have explored how mental disabilities have changed with the waves of Chinese social reforms that occurred between 1912 and 2006. The present study evaluated population-based data from the Second China National Sample Survey on Disability to investigate these trends and their effects on mental disabilities. The Cox proportional hazards model was used to estimate the association between social reforms and mental disabilities. The confounding variables considered were as follows: survey age, gender, residence in 2006, ethnicity, and living arrangements in 2006. The highest risks of mental disabilities were observed in subjects born during the Mao Zedong era. Subjects who experienced social turbulence during their early development may have increased risks of mental disabilities in adulthood. The results and discussion herein contribute to our understanding of mental disabilities in China within the context of changing political, socioeconomic, and health system conditions and a developing mental health system.

  13. Waiting time disparities in breast cancer diagnosis and treatment: a population-based study in France.

    PubMed

    Molinié, F; Leux, C; Delafosse, P; Ayrault-Piault, S; Arveux, P; Woronoff, A S; Guizard, A V; Velten, M; Ganry, O; Bara, S; Daubisse-Marliac, L; Tretarre, B

    2013-10-01

    Waiting times are key indicators of a health system's performance, but are not routinely available in France. We studied waiting times for diagnosis and treatment according to patients' characteristics, tumours' characteristics and medical management options in a sample of 1494 breast cancers recorded in population-based registries. The median waiting time from the first imaging detection to the treatment initiation was 34 days. Older age, co-morbidity, smaller size of tumour, detection by organised screening, biopsy, increasing number of specimens removed, multidisciplinary consulting meetings and surgery as initial treatment were related to increased waiting times in multivariate models. Many of these factors were related to good practice guidelines. However, the strong influence of the organised screening programme and the disparity of waiting times according to geographical areas were of concern. Better scheduling of diagnostic tests and treatment propositions should improve waiting times in the management of breast cancer in France.

  14. ASSOCIATION BETWEEN INTIMATE PARTNER VIOLENCE AND IRRITABLE BOWEL SYNDROME: A POPULATION-BASED STUDY IN NICARAGUA

    PubMed Central

    Becker-Dreps, Sylvia; Morgan, Douglas; Peña, Rodolfo; Cortes, Loreto; Martin, Christopher F.; Valladares, Eliette

    2010-01-01

    Irritable bowel syndrome (IBS) is a disabling functional gastrointestinal disorder, which serves as a model for abdominal pain syndromes. An association between intimate partner violence and IBS has been shown among Caucasian women in the industrialized world. To determine whether this relationship transcends cultural boundaries, we conducted a population-based, cross-sectional survey in Nicaragua, using the innovative Health and Demographic Surveillance System in the León province. Women who had experienced physical intimate partner violence had significantly increased risk of IBS (OR 2.08, 95% CI, 1.35, 3.21), as did those who had experienced sexual intimate partner violence (OR 2.85, 95% CI 1.45, 5.59). These findings argue for intimate partner violence screening among Latina women with IBS. PMID:20558772

  15. Improving the reliability of diagnostic tests in population-based agreement studies

    PubMed Central

    Nelson, Kerrie P.; Edwards, Don

    2016-01-01

    Many large-scale studies have recently been carried out to assess the reliability of diagnostic procedures, such as mammography for the detection of breast cancer. The large numbers of raters and subjects involved raise new challenges in how to measure agreement in these types of studies. An important motivator of these studies is the identification of factors that contribute to the often wide discrepancies observed between raters’ classifications, such as a rater’s experience, in order to improve the reliability of the diagnostic process of interest. Incorporating covariate information into the agreement model is a key component in addressing these questions. Few agreement models are currently available that jointly model larger numbers of raters and subjects and incorporate covariate information. In this paper, we extend a recently developed population-based model and measure of agreement for binary ratings to incorporate covariate information using the class of generalized linear mixed models with a probit link function. Important information on factors related to the subjects and raters can be included as fixed and/or random effects in the model. We demonstrate how agreement can be assessed between subgroups of the raters and/or subjects, for example, comparing agreement between experienced and less experienced raters. Simulation studies are carried out to test the performance of the proposed models and measures of agreement. Application to a large-scale breast cancer study is presented. PMID:20128018

  16. Percolation Threshold in Polycarbonate Nanocomposites

    NASA Astrophysics Data System (ADS)

    Ahuja, Suresh

    2014-03-01

    Nanocomposites have unique mechanical, electrical, magnetic, optical and thermal properties. Many methods could be applied to prepare polymer-inorganic nanocomposites, such as sol-gel processing, in-situ polymerization, particle in-situ formation, blending, and radiation synthesis. The analytical composite models that have been put forth include the Voigt and Reuss bounds. Polymer nanocomposites offer the possibility of substantial improvements in material properties such as shear and bulk modulus, yield strength, toughness, film scratch resistance, optical properties, electrical conductivity, gas and solvent transport, with only very small amounts of nanoparticles. Experimental results are compared against the Hashin-Shtrikman bounds, the Halpin-Tsai model, the Cox model, and various Mori-Tanaka models. Examples of numerical modeling are molecular dynamics modeling and finite element modeling of reduced modulus and hardness that take into account the modulus of the components and the effect of the interface between the hard filler and the relatively soft polymer, polycarbonate. Higher nanoparticle concentration results in poor dispersion and adhesion to the polymer matrix, which results in lower modulus and hardness and departure from the existing composite models. As the level of silica increases beyond a threshold level, aggregates form, which weakens the structure. The polymer-silica interface is found to be weak, as the non-interacting silica promotes interfacial slip at silica-matrix junctions. Our experimental results compare favorably with those of nanocomposites of polyesters, where the effect of nanoclay on composite hardness and modulus depended on the dispersion of nanoclay in the polyester.
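
    One of the analytical models named above, Halpin-Tsai, is simple enough to show directly; the moduli and shape factor below are illustrative values, and the formula deliberately ignores the dispersion and interface effects the abstract identifies as the source of departures from such models.

    ```python
    # Sketch: Halpin-Tsai estimate of composite modulus vs filler volume fraction.
    def halpin_tsai(E_m, E_f, phi, zeta=2.0):
        """Composite modulus from matrix modulus E_m, filler modulus E_f,
        filler volume fraction phi, and shape factor zeta."""
        eta = (E_f / E_m - 1.0) / (E_f / E_m + zeta)
        return E_m * (1.0 + zeta * eta * phi) / (1.0 - eta * phi)

    E_polycarbonate, E_silica = 2.3, 70.0      # GPa, nominal illustrative values
    for phi in (0.01, 0.03, 0.05):
        print(f"phi = {phi:.2f}: E_c ≈ {halpin_tsai(E_polycarbonate, E_silica, phi):.2f} GPa")
    ```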

  17. Survival of women with inflammatory breast cancer: a large population-based study†

    PubMed Central

    Dawood, S.; Lei, X.; Dent, R.; Gupta, S.; Sirohi, B.; Cortes, J.; Cristofanilli, M.; Buchholz, T.; Gonzalez-Angulo, A. M.

    2014-01-01

    Background Our group has previously reported that women with inflammatory breast cancer (IBC) continue to have worse outcome compared with those with non-IBC. We undertook this population-based study to see if there have been improvements in survival among women with stage III IBC, over time. Patient and methods We searched the Surveillance, Epidemiology and End Results Registry to identify female patients diagnosed with stage III IBC between 1990 and 2010. Patients were divided into four groups according to year of diagnosis: 1990–1995, 1996–2000, 2001–2005, and 2006–2010. Breast cancer-specific survival (BCSS) was estimated using the Kaplan–Meier method and compared across groups using the log-rank test. Cox models were then fit to determine the association of year of diagnosis and BCSS after adjusting for patient and tumor characteristics. Results A total of 7679 patients with IBC were identified of whom 1084 patients (14.1%) were diagnosed between 1990 and 1995, 1614 patients (21.0%) between 1996 and 2000, 2683 patients (34.9%) between 2001 and 2005, and 2298 patients (29.9%) between 2006 and 2010. The 2-year BCSS for the whole cohort was 71%. Two-year BCSS were 62%, 67%, 72%, and 76% for patients diagnosed between 1990–1995, 1996–2000, 2001–2005, and 2006–2010, respectively (P < 0.0001). In the multivariable analysis, increasing year of diagnosis (modeled as a continuous variable) was associated with decreasing risks of death from breast cancer (HR = 0.98, 95% confidence interval 0.97–0.99, P < 0.0001). Conclusion There has been a significant improvement in survival of patients diagnosed with IBC over a two-decade time span in this large population-based study. This suggests that therapeutic strategies researched and evolved in the context of non-IBC have also had a positive impact in women with IBC. PMID:24669011

  18. Cardiovascular Disease Mortality After Chemotherapy or Surgery for Testicular Nonseminoma: A Population-Based Study

    PubMed Central

    Fung, Chunkit; Fossa, Sophie D.; Milano, Michael T.; Sahasrabudhe, Deepak M.; Peterson, Derick R.; Travis, Lois B.

    2015-01-01

    Purpose Increased risks of incident cardiovascular disease (CVD) in patients with testicular cancer (TC) given chemotherapy in European studies were largely restricted to long-term survivors and included patients from the 1960s. Few population-based investigations have quantified CVD mortality during, shortly after, and for two decades after TC diagnosis in the era of cisplatin-based chemotherapy. Patients and Methods Standardized mortality ratios (SMRs) for CVD and absolute excess risks (AERs; number of excess deaths per 10,000 person-years) were calculated for 15,006 patients with testicular nonseminoma reported to the population-based Surveillance, Epidemiology, and End Results program (1980 to 2010) who initially received chemotherapy (n = 6,909) or surgery (n = 8,097) without radiotherapy and accrued 60,065 and 81,227 person-years of follow-up, respectively. Multivariable modeling evaluated effects of age, treatment, extent of disease, and other factors on CVD mortality. Results Significantly increased CVD mortality occurred after chemotherapy (SMR, 1.36; 95% CI, 1.03 to 1.78; n = 54) but not surgery (SMR, 0.81; 95% CI, 0.60 to 1.07; n = 50). Significant excess deaths after chemotherapy were restricted to the first year after TC diagnosis (SMR, 5.31; AER, 13.90; n = 11) and included cerebrovascular disease (SMR, 21.72; AER, 7.43; n = 5) and heart disease (SMR, 3.45; AER, 6.64; n = 6). In multivariable analyses, increased CVD mortality after chemotherapy was confined to the first year after TC diagnosis (hazard ratio, 4.86; 95% CI, 1.25 to 32.08); distant disease (P < .05) and older age at diagnosis (P < .01) were independent risk factors. Conclusion This is the first population-based study, to our knowledge, to quantify short- and long-term CVD mortality after TC diagnosis. The increased short-term risk of CVD deaths should be further explored in analytic studies that enumerate incident events and can serve to develop comprehensive evidence-based approaches
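
    As a worked example of the two summary measures used above, the arithmetic is simply observed versus expected deaths; the counts and person-years in this sketch are made up, not taken from the study.

    ```python
    # Sketch: standardized mortality ratio (SMR) and absolute excess risk (AER).
    observed_deaths = 11
    expected_deaths = 2.1        # from general-population rates (hypothetical)
    person_years = 6400.0        # follow-up accrued in the exposed group (hypothetical)

    smr = observed_deaths / expected_deaths
    aer = (observed_deaths - expected_deaths) / person_years * 10_000   # per 10,000 PY
    print(f"SMR = {smr:.2f}, AER = {aer:.1f} excess deaths per 10,000 person-years")
    ```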

  19. Laser threshold magnetometry

    NASA Astrophysics Data System (ADS)

    Jeske, Jan; Cole, Jared H.; Greentree, Andrew D.

    2016-01-01

    We propose a new type of sensor, which uses diamond containing the optically active nitrogen-vacancy (NV⁻) centres as a laser medium. The magnetometer can be operated at room temperature and generates light that can be readily fibre coupled, thereby permitting use in industrial applications and remote sensing. By combining laser pumping with a radio-frequency Rabi-drive field, an external magnetic field changes the fluorescence of the NV⁻ centres. We use this change in fluorescence level to push the laser above threshold, turning it on with an intensity controlled by the external magnetic field, which provides a coherent amplification of the readout signal with very high contrast. This mechanism is qualitatively different from conventional NV⁻-based magnetometers which use fluorescence measurements, based on incoherent photon emission. We term our approach laser threshold magnetometry (LTM). We predict that an NV⁻-based LTM with a volume of 1 mm³ can achieve shot-noise limited dc sensitivity of 1.86 fT/√Hz and ac sensitivity of 3.97 fT/√Hz.

  20. Initiation Pressure Thresholds from Three Sources

    SciTech Connect

    Souers, P C; Vitello, P

    2007-02-28

    Pressure thresholds are minimum pressures needed to start explosive initiation that ends in detonation. We obtain pressure thresholds from three sources. Run-to-detonation times are the poorest source, but the fitting of a function gives rough results. Flyer-induced initiation gives the best results because the initial conditions are the best known. However, very thick flyers are needed to give the lowest, asymptotic pressure thresholds used in modern models, and this kind of data is rarely available. Gap test data is in much larger supply, but the various test sizes and materials are confusing. We find that explosive pressures are almost the same if the distances in the gap test spacers are expressed in units of the donor explosive radius. Calculated half-width time pulses in the spacers may be used to create a pressure-time curve similar to that of the flyers. The very large Eglin gap tests give asymptotic thresholds comparable to extrapolated flyer results. The three sources are assembled into a much-expanded set of near-asymptotic pressure thresholds. These thresholds vary greatly with density: for TATB/LX-17/PBX 9502, we find values of 4.9 and 8.7 GPa at 1.80 and 1.90 g/cm³, respectively.

  1. High-Resolution Association Mapping of Quantitative Trait Loci: A Population-Based Approach

    PubMed Central

    Fan, Ruzong; Jung, Jeesun; Jin, Lei

    2006-01-01

    In this article, population-based regression models are proposed for high-resolution linkage disequilibrium mapping of quantitative trait loci (QTL). Two regression models, the “genotype effect model” and the “additive effect model,” are proposed to model the association between the markers and the trait locus. The marker can be either diallelic or multiallelic. If only one marker is used, the method is similar to a classical setting by Nielsen and Weir, and the additive effect model is equivalent to the haplotype trend regression (HTR) method by Zaykin et al. If two/multiple marker data with phase ambiguity are used in the analysis, the proposed models can be used to analyze the data directly. By analytical formulas, we show that the genotype effect model can be used to model the additive and dominance effects simultaneously; the additive effect model takes care of the additive effect only. On the basis of the two models, F-test statistics are proposed to test association between the QTL and markers. By a simulation study, we show that the two models have reasonable type I error rates for a data set of moderate sample size. The noncentrality parameter approximations of F-test statistics are derived to make power calculation and comparison. By a simulation study, it is found that the noncentrality parameter approximations of F-test statistics work very well. Using the noncentrality parameter approximations, we compare the power of the two models with that of the HTR. In addition, a simulation study is performed to make a comparison on the basis of the haplotype frequencies of 10 SNPs of angiotensin-1 converting enzyme (ACE) genes. PMID:16172503
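
    A toy version of the "genotype effect model" idea, with a simulated diallelic marker (not the article's data or its noncentrality derivations): regress the trait on genotype as a categorical factor and F-test the association.

    ```python
    # Sketch: F-test of marker-trait association under a genotype effect model.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    geno = rng.choice(["AA", "Aa", "aa"], size=300, p=[0.25, 0.5, 0.25])
    effect = {"AA": 0.0, "Aa": 0.4, "aa": 1.0}       # assumed additive + dominance effects
    trait = np.array([effect[g] for g in geno]) + rng.normal(0, 1, 300)

    df = pd.DataFrame({"trait": trait, "geno": geno})
    fit = smf.ols("trait ~ C(geno)", data=df).fit()
    print(sm.stats.anova_lm(fit, typ=2))             # F-test of the genotype effect
    ```

    Restricting the genotype coding to allele counts (0/1/2) would give the additive effect model described in the abstract.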

  2. Predictors of Cerebral Palsy in Very Preterm Infants: The EPIPAGE Prospective Population-Based Cohort Study

    ERIC Educational Resources Information Center

    Beaino, Ghada; Khoshnood, Babak; Kaminski, Monique; Pierrat, Veronique; Marret, Stephane; Matis, Jacqueline; Ledesert, Bernard; Thiriez, Gerard; Fresson, Jeanne; Roze, Jean-Christophe; Zupan-Simunek, Veronique; Arnaud, Catherine; Burguet, Antoine; Larroque, Beatrice; Breart, Gerard; Ancel, Pierre-Yves

    2010-01-01

    Aim: The aim of this study was to assess the independent role of cerebral lesions on ultrasound scan, and several other neonatal and obstetric factors, as potential predictors of cerebral palsy (CP) in a large population-based cohort of very preterm infants. Method: As part of EPIPAGE, a population-based prospective cohort study, perinatal data…

  3. Heterogeneity in ALSFRS-R decline and survival: a population-based study in Italy.

    PubMed

    Mandrioli, Jessica; Biguzzi, Sara; Guidi, Carlo; Sette, Elisabetta; Terlizzi, Emilio; Ravasio, Alessandro; Casmiro, Mario; Salvi, Fabrizio; Liguori, Rocco; Rizzi, Romana; Pietrini, Vladimiro; Borghi, Annamaria; Rinaldi, Rita; Fini, Nicola; Chierici, Elisabetta; Santangelo, Mario; Granieri, Enrico; Mussuto, Vittoria; De Pasqua, Silvia; Georgoulopoulou, Eleni; Fasano, Antonio; Ferro, Salvatore; D'Alessandro, Roberto

    2015-12-01

    Very few studies have examined the trend over time of the revised Amyotrophic Lateral Sclerosis Functional Rating Scale (ALSFRS-R) and the factors influencing it; moreover, previous studies included only patients attending tertiary ALS Centres. We studied ALSFRS-R decline, factors influencing this trend, and survival in a population-based setting. From 2009 onwards, a prospective registry records all incident ALS cases among residents in Emilia Romagna (population: 4.4 million). For each patient, demographic and clinical details (including ALSFRS-R) are collected by caring physicians at each follow-up. Analysis was performed on 402 incident cases (1279 ALSFRS-R assessments). The average decline of the ALSFRS-R was 0.60 points/month during the first year after diagnosis and 0.34 points/month in the second year. ALSFRS-R decline was heterogeneous among subgroups. A repeated-measures mixed model showed that ALSFRS-R score decline was influenced by age at onset (p < 0.01), phenotype (p = 0.01), body mass index (BMI) (p < 0.01), progression rate at diagnosis (ΔFS) (p < 0.01), El Escorial Criteria-Revised (p < 0.01), and FVC% at diagnosis (p < 0.01). Among these factors, at multivariate analysis, only age, site of onset and ΔFS independently influenced survival. In this first population-based study on the ALSFRS-R trend, we confirm that ALSFRS-R decline is not homogeneous among ALS patients and during the disease. Factors influencing ALSFRS-R decline may not match those affecting survival. These disease modifiers should be taken into consideration in trial design and in clinical practice during discussions with patients on prognosis.

  4. Idiopathic pulmonary fibrosis: survival in population based and hospital based cohorts

    PubMed Central

    Mapel, D.; Hunt, W.; Utton, R.; Baumgartner, K.; Samet, J.; Coultas, D.

    1998-01-01

    BACKGROUND—To ascertain whether findings from hospital based clinical series can be extended to patients with idiopathic pulmonary fibrosis (IPF) in the general population, the survival of patients with IPF in a population based registry was compared with that of a cohort of patients with IPF treated at major referral hospitals and the factors influencing survival in the population based registry were identified.
METHODS—The survival of 209 patients with IPF from the New Mexico Interstitial Lung Disease Registry and a cohort of 248 patients with IPF who were participating in a multicentre case-control study was compared. The determinants of survival for the patients from the Registry were determined using life table and proportional hazard modelling methods.
RESULTS—The median survival times of patients with IPF in the Registry and case-control cohorts were similar (4.2 years and 4.1 years, respectively), although the average age at diagnosis of the Registry patients was greater (71.7 years versus 60.6 years, p < 0.01). After adjusting for differences in age, sex, and ethnicity, the death rate within six months of diagnosis was found to be greater in the Registry patients (relative hazard (RH) 6.32, 95% CI 2.19 to 18.22) but more than 18 months after diagnosis the death rate was less (RH 0.35, 95% CI 0.19 to 0.66) than in the patients in the case-control study. Factors associated with poorer prognosis in the Registry included advanced age, severe radiographic abnormalities, severe reduction in forced vital capacity, and a history of corticosteroid treatment.
CONCLUSIONS—The adjusted survival of patients with IPF in the general population is different from that of hospital referrals which suggests that selection biases affect the survival experience of referral hospitals.

 PMID:9713446

  5. Hip Fracture in People with Erectile Dysfunction: A Nationwide Population-Based Cohort Study

    PubMed Central

    Wu, Chieh-Hsin; Tung, Yi-Ching; Lin, Tzu-Kang; Chai, Chee-Yin; Su, Yu-Feng; Tsai, Tai-Hsin; Tsai, Cheng-Yu; Lu, Ying-Yi; Lin, Chih-Lung

    2016-01-01

    The aims of this study were to investigate the risk of hip fracture and contributing factors in patients with erectile dysfunction (ED). This population-based study was performed using the Taiwan National Health Insurance Research Database. The analysis included 4636 patients aged ≥ 40 years who had been diagnosed with ED (International Classification of Diseases, Ninth Revision, Clinical Modification codes 302.72, 607.84) during 1996–2010. The control group included 18,544 randomly selected age-matched patients without ED (1:4 ratio). The association between ED and hip fracture risk was estimated using a Cox proportional hazard regression model. During the follow-up period, 59 (1.27%) patients in the ED group and 140 (0.75%) patients in the non-ED group developed hip fracture. After adjusting for covariates, the overall incidence of hip fracture was 3.74 times higher in the ED group than in the non-ED group (2.03 vs. 0.50 per 1000 person-years, respectively). The difference in the overall incidence of hip fracture was largest during the 3-year follow-up period (hazard ratio = 7.85; 95% confidence interval = 2.94–20.96; P < 0.0001). To the best of our knowledge, this nationwide population-based study is the first to investigate the relationship between ED and subsequent hip fracture in an Asian population. The results showed that ED patients had a higher risk of developing hip fracture. Patients with ED, particularly those aged 40–59 years, should undergo bone mineral density examinations as early as possible and should take measures to reduce the risk of falls. PMID:27078254
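
    The Cox proportional hazards comparison used here (and in several of the other cohort studies in this list) can be sketched generically with the lifelines package; the data below are simulated, not the NHIRD cohort, and the covariate effects are assumed.

    ```python
    # Sketch: Cox proportional hazards model for an exposed vs unexposed cohort.
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(2)
    n = 2000
    ed = rng.integers(0, 2, n)                            # 1 = exposed (e.g., ED) cohort
    age = rng.normal(55, 8, n)
    baseline = rng.exponential(scale=40.0, size=n)
    time = baseline / np.exp(0.6 * ed + 0.03 * (age - 55))   # assumed true effects
    event = (time < 12).astype(int)                       # outcome within follow-up
    time = np.minimum(time, 12.0)                         # administrative censoring at 12 years

    df = pd.DataFrame({"time": time, "event": event, "ed": ed, "age": age})
    cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
    print(cph.summary[["coef", "exp(coef)", "p"]])        # exp(coef) is the hazard ratio
    ```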

  6. Osteoporosis-related fracture case definitions for population-based administrative data

    PubMed Central

    2012-01-01

    Background Population-based administrative data have been used to study osteoporosis-related fracture risk factors and outcomes, but there has been limited research about the validity of these data for ascertaining fracture cases. The objectives of this study were to: (a) compare fracture incidence estimates from administrative data with estimates from population-based clinically-validated data, and (b) test for differences in incidence estimates from multiple administrative data case definitions. Methods Thirty-five case definitions for incident fractures of the hip, wrist, humerus, and clinical vertebrae were constructed using diagnosis codes in hospital data and diagnosis and service codes in physician billing data from Manitoba, Canada. Clinically-validated fractures were identified from the Canadian Multicentre Osteoporosis Study (CaMos). Generalized linear models were used to test for differences in incidence estimates. Results For hip fracture, sex-specific differences were observed in the magnitude of under- and over-ascertainment of administrative data case definitions when compared with CaMos data. The length of the fracture-free period to ascertain incident cases had a variable effect on over-ascertainment across fracture sites, as did the use of imaging, fixation, or repair service codes. Case definitions based on hospital data resulted in under-ascertainment of incident clinical vertebral fractures. There were no significant differences in trend estimates for wrist, humerus, and clinical vertebral case definitions. Conclusions The validity of administrative data for estimating fracture incidence depends on the site and features of the case definition. PMID:22537071

  7. Organization of population-based cancer control programs: Europe and the world.

    PubMed

    Otter, Renée; Qiao, You-Lin; Burton, Robert; Samiei, Massoud; Parkin, Max; Trapido, Edward; Weller, David; Magrath, Ian; Sutcliffe, Simon

    2009-01-01

    As cancer is to a large extent avoidable and treatable, a cancer control program should be able to reduce mortality and morbidity and improve the quality of life of cancer patients and their families. However, the extent to which the goals of a cancer control program can be achieved will depend on the resource constraints a country faces. Such population-based cancer control plans should prioritize effective interventions and programs that are beneficial to the largest part of the population, and should include activities devoted to prevention, screening and early detection, treatment, palliation and end-of-life care, and rehabilitation. In order to develop a successful cancer control program, leadership and the relevant stakeholders, including patient organizations, need to be identified early on in the process so that all partners can take ownership and responsibility for the program. Various tools have been developed to aid them in the planning and implementation process. However, countries developing a national cancer control program would benefit from a discussion of different models for planning and delivery of population-based cancer control in settings with differing levels of resource commitment, in order to determine how best to proceed given their current level of commitment, political engagement and resources. As the priority assigned to different components of cancer control will differ depending on available resources and the burden and pattern of cancer, it is important to consider the relative roles of prevention, early detection, diagnosis, treatment, rehabilitation and palliative care in a cancer control program, as well as how to align available resources to meet prioritized needs. Experiences from countries with differing levels of resources are presented and serve to illustrate the difficulties in developing and implementing cancer control programs, as well as the innovative strategies that are being used to maximize available resources and

  8. A Nationwide Population-Based Cohort Study of Migraine and Organic-Psychogenic Erectile Dysfunction.

    PubMed

    Wu, Szu-Hsien; Chuang, Eric; Chuang, Tien-Yow; Lin, Cheng-Li; Lin, Ming-Chia; Yen, Der-Jen; Kao, Chia-Hung

    2016-03-01

    As chronic illnesses and chronic pain are related to erectile dysfunction (ED), migraine, a prevalent chronic disorder worldwide, may negatively affect quality of life as well as sexual function. However, large-scale population-based studies of erectile dysfunction and other comorbidities in patients with migraine are scarce. This longitudinal cohort study aimed to estimate the association between migraine and ED using a nationwide population-based database in Taiwan. The data for this cohort study were retrieved from the Longitudinal Health Insurance Database 2000 in Taiwan. We identified 5015 patients with migraine and 20,060 frequency-matched controls without migraine from 2000 to 2011. The occurrence of ED was followed up until the end of 2011. We used Cox proportional hazard regression models to analyze the risk of ED. The overall incidence of ED was 1.78-fold greater in the migraine cohort than in the comparison cohort (23.3 vs 10.5 per 10,000 person-years; 95% confidence interval [CI] = 1.31-2.41). Furthermore, patients with migraine were 1.75-fold more likely to develop organic ED (95% CI = 1.27-2.41) than the comparison cohort. Migraine patients with anxiety had a 3.6-fold higher hazard ratio of being diagnosed with ED than comparison subjects without anxiety (95% CI, 2.10-6.18). The results indicate that patients with migraine have a higher incidence of being diagnosed with ED, particularly those with comorbid anxiety.
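
    A minimal sketch of the kind of Cox proportional hazards fit reported above, using the lifelines package on a tiny, entirely hypothetical person-level table (the NHI data are not reproduced here); the exponentiated coefficient on the migraine indicator is the adjusted hazard ratio for ED.

    ```python
    import pandas as pd
    from lifelines import CoxPHFitter

    # Tiny hypothetical person-level table: follow-up time in years, ED event flag,
    # migraine exposure and one covariate (real analyses adjust for many more).
    df = pd.DataFrame({
        "time":     [3.2, 8.0, 5.1, 11.0, 2.4, 9.6, 7.3, 10.0],
        "ed_event": [1,   0,   1,   0,    1,   0,   1,   0],
        "migraine": [1,   0,   1,   0,    1,   1,   0,   0],
        "age":      [45,  44,  52,  50,   38,  41,  47,  39],
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="ed_event")  # remaining columns are covariates
    cph.print_summary()  # exp(coef) on 'migraine' is the adjusted hazard ratio for ED
    ```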

  9. Exploratory factor analysis of self-reported symptoms in a large, population-based military cohort

    PubMed Central

    2010-01-01

    Background US military engagements have consistently raised concern over the array of health outcomes experienced by service members postdeployment. Exploratory factor analysis has been used in studies of 1991 Gulf War-related illnesses, and may increase understanding of symptoms and health outcomes associated with current military conflicts in Iraq and Afghanistan. The objective of this study was to use exploratory factor analysis to describe the correlations among numerous physical and psychological symptoms in terms of a smaller number of unobserved variables or factors. Methods The Millennium Cohort Study collects extensive self-reported health data from a large, population-based military cohort, providing a unique opportunity to investigate the interrelationships of numerous physical and psychological symptoms among US military personnel. This study used data from the Millennium Cohort Study, a large, population-based military cohort. Exploratory factor analysis was used to examine the covariance structure of symptoms reported by approximately 50,000 cohort members during 2004-2006. Analyses incorporated 89 symptoms, including responses to several validated instruments embedded in the questionnaire. Techniques accommodated the categorical and sometimes incomplete nature of the survey data. Results A 14-factor model accounted for 60 percent of the total variance in symptoms data and included factors related to several physical, psychological, and behavioral constructs. A notable finding was that many factors appeared to load in accordance with symptom co-location within the survey instrument, highlighting the difficulty in disassociating the effects of question content, location, and response format on factor structure. Conclusions This study demonstrates the potential strengths and weaknesses of exploratory factor analysis to heighten understanding of the complex associations among symptoms. Further research is needed to investigate the relationship between
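
    A minimal sketch of an exploratory factor analysis of symptom data along these lines, using scikit-learn on simulated ratings; the 89-item survey responses are not public, so the array below is random and purely illustrative of the mechanics (analyses of categorical survey items would in practice use polychoric correlations or related methods, as the abstract notes).

    ```python
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    n_respondents, n_symptoms, n_factors = 1000, 89, 14

    # Simulated 0-3 symptom ratings standing in for the survey responses.
    X = rng.integers(0, 4, size=(n_respondents, n_symptoms)).astype(float)

    fa = FactorAnalysis(n_components=n_factors, rotation="varimax", random_state=0)
    scores = fa.fit_transform(X)      # respondent-level factor scores
    loadings = fa.components_.T       # (n_symptoms x n_factors) loading matrix

    # Rough share of modeled variance attributed to the common factors.
    common = (fa.components_ ** 2).sum()
    print(loadings.shape, round(float(common / (common + fa.noise_variance_.sum())), 3))
    ```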

  10. A Nationwide Population-Based Cohort Study of Migraine and Organic-Psychogenic Erectile Dysfunction.

    PubMed

    Wu, Szu-Hsien; Chuang, Eric; Chuang, Tien-Yow; Lin, Cheng-Li; Lin, Ming-Chia; Yen, Der-Jen; Kao, Chia-Hung

    2016-03-01

    As chronic illnesses and chronic pain are related to erectile dysfunction (ED), migraine, a prevalent chronic disorder worldwide, may negatively affect quality of life as well as sexual function. However, large-scale population-based studies of erectile dysfunction and other comorbidities in patients with migraine are scarce. This longitudinal cohort study aimed to estimate the association between migraine and ED using a nationwide population-based database in Taiwan. The data for this cohort study were retrieved from the Longitudinal Health Insurance Database 2000 in Taiwan. We identified 5015 patients with migraine and 20,060 frequency-matched controls without migraine from 2000 to 2011. The occurrence of ED was followed up until the end of 2011. We used Cox proportional hazard regression models to analyze the risk of ED. The overall incidence of ED was 1.78-fold greater in the migraine cohort than in the comparison cohort (23.3 vs 10.5 per 10,000 person-years; 95% confidence interval [CI] = 1.31-2.41). Furthermore, patients with migraine were 1.75-fold more likely to develop organic ED (95% CI = 1.27-2.41) than the comparison cohort. Migraine patients with anxiety had a 3.6-fold higher hazard ratio of being diagnosed with ED than comparison subjects without anxiety (95% CI, 2.10-6.18). The results indicate that patients with migraine have a higher incidence of being diagnosed with ED, particularly those with comorbid anxiety. PMID:26962838

  11. A Nationwide Population-Based Cohort Study of Migraine and Organic-Psychogenic Erectile Dysfunction

    PubMed Central

    Wu, Szu-Hsien; Chuang, Eric; Chuang, Tien-Yow; Lin, Cheng-Li; Lin, Ming-Chia; Yen, Der-Jen; Kao, Chia-Hung

    2016-01-01

    Abstract As chronic illnesses and chronic pain are related to erectile dysfunction (ED), migraine, a prevalent chronic disorder worldwide, may negatively affect quality of life as well as sexual function. However, large-scale population-based studies of erectile dysfunction and other comorbidities in patients with migraine are scarce. This longitudinal cohort study aimed to estimate the association between migraine and ED using a nationwide population-based database in Taiwan. The data for this cohort study were retrieved from the Longitudinal Health Insurance Database 2000 in Taiwan. We identified 5015 patients with migraine and 20,060 frequency-matched controls without migraine from 2000 to 2011. The occurrence of ED was followed up until the end of 2011. We used Cox proportional hazard regression models to analyze the risk of ED. The overall incidence of ED was 1.78-fold greater in the migraine cohort than in the comparison cohort (23.3 vs 10.5 per 10,000 person-years; 95% confidence interval [CI] = 1.31–2.41). Furthermore, patients with migraine were 1.75-fold more likely to develop organic ED (95% CI = 1.27–2.41) than the comparison cohort. Migraine patients with anxiety had a 3.6-fold higher hazard ratio of being diagnosed with ED than comparison subjects without anxiety (95% CI, 2.10–6.18). The results indicate that patients with migraine have a higher incidence of being diagnosed with ED, particularly those with comorbid anxiety. PMID:26962838

  12. Viral population dynamics and virulence thresholds.

    PubMed

    Lancaster, Karen Z; Pfeiffer, Julie K

    2012-08-01

    Viral factors and host barriers influence virally induced disease, and asymptomatic versus symptomatic infection is governed by a 'virulence threshold'. Understanding modulation of virulence thresholds could lend insight into disease outcome and aid in rational therapeutic and vaccine design. RNA viruses are an excellent system to study virulence thresholds in the context of quasispecies population dynamics. RNA viruses have high error frequencies and our understanding of viral population dynamics has been shaped by quasispecies evolutionary theory. In turn, research using RNA viruses as replicons with short generation times and high mutation rates has been an invaluable tool to test models of quasispecies theory. The challenge and new frontier of RNA virus population dynamics research is to combine multiple theoretical models and experimental data to describe viral population behavior as it changes, moving within and between hosts, to predict disease and pathogen emergence. Several excellent studies have begun to undertake this challenge using novel approaches.

  13. Estimating the personal cure rate of cancer patients using population-based grouped cancer survival data.

    PubMed

    Binbing Yu; Tiwari, Ram C; Feuer, Eric J

    2011-06-01

    Cancer patients are subject to multiple competing risks of death and may die from causes other than the cancer diagnosed. The probability of not dying from the cancer diagnosed, which is one of the patients' main concerns, is sometimes called the 'personal cure' rate. Two approaches, namely the cause-specific hazards approach and the mixture model approach, have been used to model competing-risk survival data. In this article, we first show the connection and differences between crude cause-specific survival in the presence of other causes and net survival in the absence of other causes. The mixture survival model is extended to population-based grouped survival data to estimate the personal cure rate. Using the colorectal cancer survival data from the Surveillance, Epidemiology and End Results Programme, we estimate the probabilities of dying from colorectal cancer, heart disease, and other causes by age at diagnosis, race and American Joint Committee on Cancer stage.
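
    A minimal sketch of the mixture-cure idea, assuming a deliberately simple parametric form: a cured fraction pi plus exponential survival for the uncured, fit by maximum likelihood to simulated censored data. This illustrates the modelling concept only, not the grouped-data estimator used in the article.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)

    # Simulate: 40% "cured" (never die of the cancer); uncured times ~ Exp(rate 0.3);
    # everyone administratively censored at 12 years of follow-up.
    n, true_pi, true_rate, cens = 2000, 0.4, 0.3, 12.0
    cured = rng.random(n) < true_pi
    t_cancer = rng.exponential(1 / true_rate, n)
    time = np.where(cured, cens, np.minimum(t_cancer, cens))
    died = (~cured) & (t_cancer < cens)

    def neg_loglik(params):
        pi, lam = params
        if not (0 < pi < 1 and lam > 0):
            return np.inf
        surv = pi + (1 - pi) * np.exp(-lam * time)   # S(t), used for censored subjects
        dens = (1 - pi) * lam * np.exp(-lam * time)  # density, used for observed deaths
        return -(np.log(dens[died]).sum() + np.log(surv[~died]).sum())

    fit = minimize(neg_loglik, x0=[0.5, 0.1], method="Nelder-Mead")
    print("estimated cure fraction and rate:", fit.x)  # should land near (0.4, 0.3)
    ```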

  14. A two-dimensional analytical modeling for channel potential and threshold voltage of short channel triple material symmetrical gate Stack (TMGS) DG-MOSFET

    NASA Astrophysics Data System (ADS)

    Tripathi, Shweta

    2016-10-01

    In the present work, a two-dimensional (2D) analytical framework for the triple material symmetrical gate stack (TMGS) DG-MOSFET is presented with the aim of suppressing short-channel effects. A lightly doped channel, together with a triple-material gate having different work functions and a symmetrical gate-stack structure, provides a substantial improvement in suppressing short-channel effects. Device behavior improves in terms of reduced threshold-voltage roll-off, thereby mitigating short-channel effects. The influence of the relevant device parameters on the threshold voltage of the proposed structure is examined in detail. The key results are compared with numerical simulation data obtained using the 2D ATLAS™ device simulator to validate the proposed device structure.

  15. Threshold concepts in finance: conceptualizing the curriculum

    NASA Astrophysics Data System (ADS)

    Hoadley, Susan; Tickle, Leonie; Wood, Leigh N.; Kyng, Tim

    2015-08-01

    Graduates with well-developed capabilities in finance are invaluable to our society and in increasing demand. Universities face the challenge of designing finance programmes to develop these capabilities and the essential knowledge that underpins them. Our research responds to this challenge by identifying threshold concepts that are central to the mastery of finance and by exploring their potential for informing curriculum design and pedagogical practices to improve student outcomes. In this paper, we report the results of an online survey of finance academics at multiple institutions in Australia, Canada, New Zealand, South Africa and the United Kingdom. The outcomes of our research are recommendations for threshold concepts in finance endorsed by quantitative evidence, as well as a model of the finance curriculum incorporating finance, modelling and statistics threshold concepts. In addition, we draw conclusions about the application of threshold concept theory supported by both quantitative and qualitative evidence. Our methodology and findings have general relevance to the application of threshold concept theory as a means to investigate and inform curriculum design and delivery in higher education.

  16. Predictability of threshold exceedances in dynamical systems

    NASA Astrophysics Data System (ADS)

    Bódai, Tamás

    2015-12-01

    In a low-order model of the general circulation of the atmosphere we examine the predictability of threshold exceedance events of certain observables. The likelihood of such binary events, which is also the cornerstone of the categoric (as opposed to probabilistic) prediction of threshold exceedances, is established from long time series of one or more observables of the same system. The prediction skill is measured by a summary index of the ROC curve that relates the hit and false-alarm rates. Our results for the examined systems suggest that exceedances of higher thresholds are more predictable; in other words, rare large-magnitude, i.e., extreme, events are more predictable than frequent typical events. We find this to hold provided that the bin size for binning the time series data is optimized, but not necessarily otherwise. This can be viewed as a confirmation of a counterintuitive (and seemingly counterfactual) statement previously formulated for simpler autoregressive stochastic processes. However, we argue that for dynamical systems in general it may be typical only, but not universally true. We also argue that, given a sufficient amount of data relative to the precision of observation, the skill of a class of data-driven categoric predictions of threshold exceedances approximates the skill of the analogous model-driven prediction, assuming strictly no model errors. Therefore, stronger extremes in terms of higher threshold levels are more predictable in both data- and model-driven prediction. Furthermore, we show that a quantity commonly regarded as a measure of predictability, the finite-time maximal Lyapunov exponent, does not correspond directly to the ROC-based measure of prediction skill when they are viewed as functions of the prediction lead time and the threshold level. This points to the fact that even if the Lyapunov exponent, as an intrinsic property of the system measuring the instability of trajectories, determines predictability
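
    A minimal sketch of the ROC-based skill measure for binary exceedance events, substituting a synthetic autoregressive series for the circulation model: the present value serves as the predictive score, the event is exceedance of a quantile threshold a few steps ahead, and skill is summarized by the area under the ROC curve for a moderate versus a rare threshold.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(2)

    # AR(1) surrogate for an observable of the system
    n, phi = 100_000, 0.9
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal()

    lead = 5                 # prediction lead time (steps)
    score = x[:-lead]        # present value used as the predictive score
    future = x[lead:]        # value at the lead time

    # For an AR(1) process the AUC typically rises with the threshold, i.e. rarer
    # exceedances are better discriminated, as discussed in the abstract.
    for q in (0.90, 0.99):
        events = future > np.quantile(x, q)
        print(f"quantile {q}: ROC AUC = {roc_auc_score(events, score):.3f}")
    ```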

  17. Absolute Cerebral Blood Flow Infarction Threshold for 3-Hour Ischemia Time Determined with CT Perfusion and 18F-FFMZ-PET Imaging in a Porcine Model of Cerebral Ischemia

    PubMed Central

    Cockburn, Neil; Kovacs, Michael

    2016-01-01

    CT Perfusion (CTP) derived cerebral blood flow (CBF) thresholds have been proposed as the optimal parameter for distinguishing the infarct core prior to reperfusion. Previous threshold-derivation studies have been limited by uncertainties introduced by infarct expansion between the acute phase of stroke and follow-up imaging, or DWI lesion reversibility. In this study a model is proposed for determining infarction CBF thresholds at a 3-hour ischemia time by comparing contemporaneously acquired CTP derived CBF maps to 18F-FFMZ-PET imaging, with the objective of deriving a CBF threshold for infarction after 3 hours of ischemia. Endothelin-1 (ET-1) was injected into the brain of Duroc-Cross pigs (n = 11) through a burr hole in the skull. CTP images were acquired 10 and 30 minutes post ET-1 injection and then every 30 minutes for 150 minutes. 370 MBq of 18F-FFMZ was injected ~120 minutes post ET-1 injection and PET images were acquired for 25 minutes starting ~155–180 minutes post ET-1 injection. CBF maps from each CTP acquisition were co-registered and converted into a median CBF map. The median CBF map was co-registered to blood volume maps for vessel exclusion, an average CT image for grey/white matter segmentation, and 18F-FFMZ-PET images for infarct delineation. Logistic regression and ROC analysis were performed on infarcted and non-infarcted pixel CBF values for each animal that developed an infarct. Six of the eleven animals developed infarction. The mean CBF value corresponding to the optimal operating point of the ROC curves for the 6 animals was 12.6 ± 2.8 mL·min-1·100g-1 for infarction after 3 hours of ischemia. The porcine ET-1 model of cerebral ischemia is easier to implement than other large animal models of stroke, and performs similarly as long as CBF is monitored using CTP to prevent reperfusion. PMID:27347877

  18. Absolute Cerebral Blood Flow Infarction Threshold for 3-Hour Ischemia Time Determined with CT Perfusion and 18F-FFMZ-PET Imaging in a Porcine Model of Cerebral Ischemia.

    PubMed

    Wright, Eric A; d'Esterre, Christopher D; Morrison, Laura B; Cockburn, Neil; Kovacs, Michael; Lee, Ting-Yim

    2016-01-01

    CT Perfusion (CTP) derived cerebral blood flow (CBF) thresholds have been proposed as the optimal parameter for distinguishing the infarct core prior to reperfusion. Previous threshold-derivation studies have been limited by uncertainties introduced by infarct expansion between the acute phase of stroke and follow-up imaging, or DWI lesion reversibility. In this study a model is proposed for determining infarction CBF thresholds at a 3-hour ischemia time by comparing contemporaneously acquired CTP derived CBF maps to 18F-FFMZ-PET imaging, with the objective of deriving a CBF threshold for infarction after 3 hours of ischemia. Endothelin-1 (ET-1) was injected into the brain of Duroc-Cross pigs (n = 11) through a burr hole in the skull. CTP images were acquired 10 and 30 minutes post ET-1 injection and then every 30 minutes for 150 minutes. 370 MBq of 18F-FFMZ was injected ~120 minutes post ET-1 injection and PET images were acquired for 25 minutes starting ~155-180 minutes post ET-1 injection. CBF maps from each CTP acquisition were co-registered and converted into a median CBF map. The median CBF map was co-registered to blood volume maps for vessel exclusion, an average CT image for grey/white matter segmentation, and 18F-FFMZ-PET images for infarct delineation. Logistic regression and ROC analysis were performed on infarcted and non-infarcted pixel CBF values for each animal that developed an infarct. Six of the eleven animals developed infarction. The mean CBF value corresponding to the optimal operating point of the ROC curves for the 6 animals was 12.6 ± 2.8 mL·min-1·100g-1 for infarction after 3 hours of ischemia. The porcine ET-1 model of cerebral ischemia is easier to implement than other large animal models of stroke, and performs similarly as long as CBF is monitored using CTP to prevent reperfusion. PMID:27347877
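
    A minimal sketch of deriving a CBF cutoff from the optimal operating point of an ROC curve, as in the pixel-level analysis above, using simulated CBF values for infarcted and non-infarcted tissue (the numbers are illustrative, not the study data); the cutoff is taken where Youden's J (sensitivity + specificity - 1) is maximal.

    ```python
    import numpy as np
    from sklearn.metrics import roc_curve

    rng = np.random.default_rng(3)

    # Simulated pixel CBF values in mL/min/100 g; infarcted tissue has lower CBF.
    cbf_infarct = rng.normal(10, 4, 3000).clip(min=0)
    cbf_healthy = rng.normal(40, 12, 12000).clip(min=0)

    cbf = np.concatenate([cbf_infarct, cbf_healthy])
    infarcted = np.concatenate([np.ones_like(cbf_infarct), np.zeros_like(cbf_healthy)])

    # Low CBF predicts infarction, so use -CBF as the classification score.
    fpr, tpr, thresholds = roc_curve(infarcted, -cbf)
    best = (tpr - fpr).argmax()                 # Youden's J maximum
    print("optimal CBF cutoff ~", round(-thresholds[best], 1), "mL/min/100 g")
    ```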

  19. Epidemic thresholds for bipartite networks

    NASA Astrophysics Data System (ADS)

    Hernández, D. G.; Risau-Gusman, S.

    2013-11-01

    It is well known that sexually transmitted diseases (STD) spread across a network of human sexual contacts. This network is most often bipartite, as most STD are transmitted between men and women. Even though network models in epidemiology have quite a long history now, there are few general results about bipartite networks. One of them is the simple dependence, predicted using the mean field approximation, between the epidemic threshold and the average and variance of the degree distribution of the network. Here we show that going beyond this approximation can lead to qualitatively different results that are supported by numerical simulations. One of the new features, that can be relevant for applications, is the existence of a critical value for the infectivity of each population, below which no epidemics can arise, regardless of the value of the infectivity of the other population.
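
    For orientation, the mean-field baseline that the paper goes beyond can be written down explicitly: in a configuration-model, SIR-type description of a bipartite contact network, an epidemic is possible when the product of the two per-edge transmission probabilities exceeds <k>_m<k>_f / (<k(k-1)>_m<k(k-1)>_f). The sketch below computes this critical product for illustrative degree distributions; it is the standard approximation, not the paper's refined result.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def excess_branching(degrees):
        """Mean excess degree <k(k-1)>/<k> of a configuration-model network."""
        k = np.asarray(degrees, dtype=float)
        return (k * (k - 1)).mean() / k.mean()

    # Illustrative partner-number distributions for the two populations
    deg_m = rng.poisson(3, 100_000) + 1          # men: Poisson-like
    deg_f = rng.geometric(0.3, 100_000)          # women: heavier-tailed

    # Mean-field/configuration-model condition: epidemic possible when
    # T_mf * T_fm exceeds this critical product.
    critical = 1.0 / (excess_branching(deg_m) * excess_branching(deg_f))
    print("critical T_mf * T_fm ~", round(critical, 4))
    ```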

  20. Predictors of Colorectal Cancer Survival in Golestan, Iran: A Population-based Study

    PubMed Central

    Aryaie, Mohammad; Roshandel, Gholamreza; Semnani, Shahryar; Asadi-Lari, Mohsen; Aarabi, Mohsen; Vakili, Mohammad Ali; Kazemnejhad, Vahideh; Sedaghat, Seyed Mehdi

    2013-01-01

    OBJECTIVES We aimed to investigate factors associated with colorectal cancer survival in Golestan, Iran. METHODS We used a population-based cancer registry to recruit study subjects. All patients registered since 2004 were contacted and data were collected using structured questionnaires and trained interviewers. All existing evidence for determining cancer stage was also collected. The time from first diagnosis to death was compared in patients according to their stage of cancer using the Kaplan-Meier method. A Cox proportional hazard model was built to examine their survival experience by taking into account other covariates. RESULTS Out of a total of 345 subjects, 227 were traced. Median age of the subjects was 54 and more than 42% were under 50 years old. We found 132 deaths among these patients, 5 of which were non-colorectal-related deaths. The median survival time for the entire cohort was 3.56 years. A borderline significant difference in survival experience was detected for ethnicity (log rank test, p=0.053). Using Cox proportional hazard modeling, only cancer stage remained significantly associated with time of death in the final model. CONCLUSIONS Colorectal cancer occurs at a younger age among people living in Golestan province. A very young age at presentation and what appears to be a high proportion of patients presenting with late stage in this area suggest this population might benefit substantially from early diagnosis through the introduction of age-adapted screening programs. PMID:23807907
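
    A minimal sketch of the Kaplan-Meier and log-rank comparison by stage described above, using the lifelines package on made-up survival times rather than the Golestan registry data.

    ```python
    import pandas as pd
    from lifelines import KaplanMeierFitter
    from lifelines.statistics import logrank_test

    # Hypothetical registry extract: years from diagnosis, death indicator, stage group
    df = pd.DataFrame({
        "years":      [0.8, 2.5, 3.6, 5.0, 6.2, 1.1, 1.9, 2.8, 0.5, 4.4],
        "died":       [1,   1,   1,   0,   0,   1,   1,   1,   1,   0],
        "late_stage": [0,   0,   0,   0,   0,   1,   1,   1,   1,   1],
    })
    early, late = df[df.late_stage == 0], df[df.late_stage == 1]

    km = KaplanMeierFitter()
    km.fit(early["years"], early["died"], label="early stage")
    print("median survival, early stage:", km.median_survival_time_)

    res = logrank_test(early["years"], late["years"],
                       event_observed_A=early["died"], event_observed_B=late["died"])
    print("log-rank p-value:", round(res.p_value, 3))
    ```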

  1. A new method for the analysis of the dynamics of the molecular genetic control systems. II. Application of the method of generalized threshold models in the investigation of concrete genetic systems.

    PubMed

    Prokudina, E I; Valeev, R Yu; Tchuraev, R N

    1991-07-01

    Mathematical models of the prokaryotic control systems of tryptophan biosynthesis (both normal and with cloned blocks) and arabinose catabolism have been built using the method of generalized threshold models. Kinetic curves for molecular components (mRNAs, proteins, metabolites) of the systems considered are obtained. It has been shown that the method of generalized threshold models gives a more detailed qualitative picture of the dynamics of the molecular genetic control systems in comparison with the heuristic method of threshold models. The qualitative analysis of the functioning of the following mechanisms of control of the tryptophan biosynthesis: (1) inhibition of the activity of anthranilate synthetase by tryptophan, (2) repression and (3) attenuation of transcription of the tryptophan operon on the basis of the mathematical model of the control system of the tryptophan biosynthesis demonstrates that feedback inhibition is the most operative of the considered mechanisms while repression allows the bacterium to economize intracellular resources. As regards the control system of the arabinose catabolism the results of modelling enable us to state the following. The induction by arabinose within a wide range of parameter values causes two subsystems (araBAD and transport operons) of the arabinose regulon with a low rate of arabinose utilization to pass into a stationary regime and one subsystem (araC operon) to pass into a stable periodical regime. A study of the system characterized by the effective utilization of arabinose has shown that under induction by arabinose stable oscillations with small amplitudes of the concentration of regulatory protein and oscillations with large amplitudes of the concentrations of arabinose-isomerase and transport protein may occur. The period of the oscillation depends on the mean lifetime of the "activator-DNA" complex and on the rate constant of arabinoseisomerase degradation.
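
    A generic illustration of threshold-type regulation of the kind discussed above (end-product repression that switches transcription off near a threshold), integrated with SciPy. The equations and parameters below are invented for illustration and are not the authors' generalized threshold model of the trp or ara systems.

    ```python
    import numpy as np
    from scipy.integrate import odeint

    # Invented two-variable circuit: transcription of mRNA (m) is repressed steeply
    # once the end product (p) crosses a threshold theta (a smooth relay/step switch).
    def circuit(y, t, theta=5.0, k_m=1.0, k_p=2.0, d_m=0.5, d_p=0.2, n=8):
        m, p = y
        transcription = k_m / (1.0 + (p / theta) ** n)
        return [transcription - d_m * m, k_p * m - d_p * p]

    t = np.linspace(0.0, 200.0, 4000)
    traj = odeint(circuit, [0.0, 0.0], t)
    print("end product settles near the repression threshold:", round(traj[-1, 1], 2))
    ```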

  2. Probabilistic Threshold Criterion

    SciTech Connect

    Gresshoff, M; Hrousis, C A

    2010-03-09

    The Probabilistic Shock Threshold Criterion (PSTC) Project at LLNL develops phenomenological criteria for estimating safety or performance margin on high explosive (HE) initiation in the shock initiation regime, creating tools for safety assessment and design of initiation systems and HE trains in general. Until recently, there has been little foundation for probabilistic assessment of HE initiation scenarios. This work attempts to use probabilistic information that is available from both historic and ongoing tests to develop a basis for such assessment. Current PSTC approaches start with the functional form of the James Initiation Criterion as a backbone, and generalize to include varying areas of initiation and provide a probabilistic response based on test data for 1.8 g/cc (Ultrafine) 1,3,5-triamino-2,4,6-trinitrobenzene (TATB) and LX-17 (92.5% TATB, 7.5% Kel-F 800 binder). Application of the PSTC methodology is presented investigating the safety and performance of a flying plate detonator and the margin of an Ultrafine TATB booster initiating LX-17.

  3. Threshold phenomena in soft matter

    NASA Astrophysics Data System (ADS)

    Huang, Zhibin

    Although two different fields are covered, this thesis focuses mainly on threshold behaviors in liquid crystals and in fluid dynamic systems. A method of rubbed polyimide is used to obtain pretilt. Sufficiently strong rubbing of a polyimide (SE-1211) results in a large polar pretilt of the liquid crystal director with respect to the homeotropic orientation. There exists a threshold rubbing strength required to induce nonzero pretilt. For the homologous liquid crystal series alkyl-cyanobiphenyl, we found that the threshold rubbing strength is a monotonic function of the number of methylene units. A dual easy-axis model is then used to explain the results. Freedericksz transition measurements have been used to determine the quadratic and quartic coefficients associated with the molecules' tilt with respect to the layer normal in surface-induced smectic layers in the nematic phase above the smectic-A-nematic phase transition temperature. Both the quadratic and quartic coefficients are consistent with the scaling relationship predicted in theory, and their ratio is approximately constant. A Rayleigh-Taylor instability experiment is performed by using a magnetic field gradient to draw down a low-density but highly paramagnetic fluid below a denser fluid in a Hele-Shaw cell. When the magnetic field is turned off, the RT instability occurs in situ and the growth of the most unstable wavevector is measured as a function of time. The wavelength of the RT instability, along with the growth rate, was measured as a function of capillary number (which is related to the density difference and interfacial tension between the two fluids). A theory for the instability that permits different viscosities for the two immiscible fluids was developed, and good agreement was found with the experimental results. The technique of magnetic levitation promises to significantly broaden the accessible parameter space of gravitational interfacial instability experiments. A method is

  4. Passive-aggressive (negativistic) personality disorder: a population-based twin study.

    PubMed

    Czajkowski, Nikolai; Kendler, Kenneth S; Jacobson, Kristen C; Tambs, Kristian; Røysamb, Espen; Reichborn-Kjennerud, Ted

    2008-02-01

    The objective of this study was to investigate the familial aggregation of passive-aggressive personality disorder (PAPD), and explore issues regarding PAPD raised by the DSM-IV Personality Disorder Work Group. Two thousand seven hundred and ninety-four Norwegian twins from the population-based Norwegian Institute of Public Health Twin Panel were interviewed with the Structured Interview for DSM-IV Personality (SIDP-IV). Because of the rarity of twins meeting full diagnostic criteria for PAPD, a dimensional representation of the disorder was used for the analyses. Overlap with other axis II disorders was assessed by polychoric correlations, while familial aggregation was explored by structural equation twin models. Overlap was highest with paranoid (r = 0.52) and borderline personality disorder (r = 0.53), and lowest with schizoid (r = 0.26). Significant familial aggregation was found for PAPD. The twin correlations and parameter estimates in the full model indicated genetic and shared environmental effects for females, and only shared environmental effects for males, but the prevalence of endorsed PAPD criteria in this community sample was too low to permit us to conclude with confidence regarding the relative influence of genetic and shared environmental factors on the familial aggregation of PAPD.
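
    For orientation, twin correlations can be translated into rough genetic and shared-environmental components with the classical Falconer decomposition (h2 = 2(rMZ - rDZ), c2 = 2rDZ - rMZ). The study above fitted full structural-equation twin models to polychoric correlations; the sketch below, with made-up correlations, is only a back-of-the-envelope approximation of that idea.

    ```python
    def falconer(r_mz: float, r_dz: float) -> dict:
        """Rough ACE decomposition from monozygotic/dizygotic twin correlations."""
        a2 = 2 * (r_mz - r_dz)   # additive genetic component (heritability)
        c2 = 2 * r_dz - r_mz     # shared environment
        e2 = 1 - r_mz            # unique environment plus measurement error
        return {"A": round(a2, 2), "C": round(c2, 2), "E": round(e2, 2)}

    # Hypothetical twin correlations for a dimensional PAPD score (illustrative only)
    print(falconer(r_mz=0.35, r_dz=0.25))   # -> {'A': 0.2, 'C': 0.15, 'E': 0.65}
    ```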

  5. Genotyping hepatitis B virus dual infections using population-based sequence data.

    PubMed

    Beggel, Bastian; Neumann-Fraune, Maria; Döring, Matthias; Lawyer, Glenn; Kaiser, Rolf; Verheyen, Jens; Lengauer, Thomas

    2012-09-01

    The hepatitis B virus (HBV) is classified into distinct genotypes A-H that are characterized by different progression of hepatitis B and sensitivity to interferon treatment. Previous computational genotyping methods are not robust enough regarding HBV dual infections with different genotypes. The correct classification of HBV sequences into the present genotypes is impaired due to multiple ambiguous sequence positions. We present a computational model that is able to identify and genotype inter- and intragenotype dual infections using population-based sequencing data. Model verification on synthetic data showed 100% accuracy for intergenotype dual infections and 36.4% sensitivity in intragenotype dual infections. Screening patient sera (n = 241) revealed eight putative cases of intergenotype dual infection (one A-D, six A-G and one D-G) and four putative cases of intragenotype dual infection (one A-A, two D-D and one E-E). Clonal experiments from the original patient material confirmed three out of three of our predictions. The method has been integrated into geno2pheno([hbv]), an established web-service in clinical use for analysing HBV sequence data. It offers exact and detailed identification of HBV genotypes in patients with dual infections that helps to optimize antiviral therapy regimens. geno2pheno([hbv]) is available under http://www.genafor.org/g2p_hbv/index.php.

  6. Immigrants’ duration of residence and adverse birth outcomes: a population-based study

    PubMed Central

    Urquia, ML; Frank, JW; Moineddin, R; Glazier, RH

    2010-01-01

    Please cite this paper as: Urquia M, Frank J, Moineddin R, Glazier R. Immigrants’ duration of residence and adverse birth outcomes: a population-based study. BJOG 2010;117:591–601. Objective This study aimed to examine preterm and small-for-gestational-age (SGA) births among immigrants, by duration of residence, and to compare them with the Canadian-born population. Design Population-based cross-sectional study with retrospective assessment of immigration. Setting Metropolitan areas of Ontario, Canada. Population A total of 83 233 singleton newborns born to immigrant mothers and 314 237 newborns born to non-immigrant mothers. Methods We linked a database of immigrants acquiring permanent residence in Ontario, Canada, in the period 1985–2000 with mother–infant hospital records (2002–2007). Duration of residence was measured as completed years from arrival to Canada to delivery/birth. Logistic regression models were used to estimate the effects of duration of residence with adjusted odds ratios and 95% confidence intervals. In analyses restricted to immigrants only, hierarchical models were used to account for the clustering of births into maternal countries of birth. Main outcome measures Preterm birth (PTB) and SGA birth. Results Recent immigrants (<5 years) had a lower risk of PTB (4.7%) than non-immigrants (6.2%), but those with ≥15 years of stay were at higher risk (7.4%). Among immigrants, a 5-year increase in Canadian residence was associated with an increase in PTB (AOR 1.14, 95% CI 1.10–1.19), but not in SGA birth (AOR 0.99, 95% CI 0.96–1.02). Conclusions Time since migration was associated with increases in the risk of PTB, but was not associated with an increase in SGA births. Ignoring duration of residence may mask important disparities in preterm delivery between immigrants and non-immigrants, and between immigrant subgroups categorised by their duration of residence. PMID:20374596

  7. Arsenic Exposure and Impaired Lung Function. Findings from a Large Population-based Prospective Cohort Study

    PubMed Central

    Parvez, Faruque; Chen, Yu; Yunus, Mahbub; Olopade, Christopher; Segers, Stephanie; Slavkovich, Vesna; Argos, Maria; Hasan, Rabiul; Ahmed, Alauddin; Islam, Tariqul; Akter, Mahmud M.; Graziano, Joseph H.

    2013-01-01

    Rationale: Exposure to arsenic through drinking water has been linked to respiratory symptoms, obstructive lung diseases, and mortality from respiratory diseases. Limited evidence for the deleterious effects on lung function exists among individuals exposed to a high dose of arsenic. Objectives: To determine the deleterious effects on lung function that exist among individuals exposed to a high dose of arsenic. Methods: In 950 individuals who presented with any respiratory symptom among a population-based cohort of 20,033 adults, we evaluated the association between arsenic exposure, measured by well water and urinary arsenic concentrations measured at baseline, and post-bronchodilator–administered pulmonary function assessed during follow-up. Measurements and Main Results: For every one SD increase in baseline water arsenic exposure, we observed a lower level of FEV1 (−46.5 ml; P < 0.0005) and FVC (−53.1 ml; P < 0.01) in regression models adjusted for age, sex, body mass index, smoking, socioeconomic status, betel nut use, and arsenical skin lesions status. Similar inverse relationships were observed between baseline urinary arsenic and FEV1 (−48.3 ml; P < 0.005) and FVC (−55.2 ml; P < 0.01) in adjusted models. Our analyses also demonstrated a dose-related decrease in lung function with increasing levels of baseline water and urinary arsenic. This association remained significant in never-smokers and individuals without skin lesions, and was stronger in male smokers. Among male smokers and individuals with skin lesions, every one SD increase in water arsenic was related to a significant reduction of FEV1 (−74.4 ml, P < 0.01; and −116.1 ml, P < 0.05) and FVC (−72.8 ml, P = 0.02; and −146.9 ml, P = 0.004), respectively. Conclusions: This large population-based study confirms that arsenic exposure is associated with impaired lung function and the deleterious effect is evident at low- to moderate-dose range. PMID:23848239

  8. Excitable neurons, firing threshold manifolds and canards.

    PubMed

    Mitry, John; McCarthy, Michelle; Kopell, Nancy; Wechselberger, Martin

    2013-01-01

    We investigate firing threshold manifolds in a mathematical model of an excitable neuron. The model analyzed investigates the phenomenon of post-inhibitory rebound spiking due to propofol anesthesia and is adapted from McCarthy et al. (SIAM J. Appl. Dyn. Syst. 11(4):1674-1697, 2012). Propofol modulates the decay time-scale of an inhibitory GABAa synaptic current. Interestingly, this system gives rise to rebound spiking within a specific range of propofol doses. Using techniques from geometric singular perturbation theory, we identify geometric structures, known as canards of folded saddle-type, which form the firing threshold manifolds. We find that the position and orientation of the canard separatrix is propofol dependent. Thus, the speeds of relevant slow synaptic processes are encoded within this geometric structure. We show that this behavior cannot be understood using a static, inhibitory current step protocol, which can provide a single threshold for rebound spiking but cannot explain the observed cessation of spiking for higher propofol doses. We then compare the analyses of dynamic and static synaptic inhibition, showing how the firing threshold manifolds of each relate, and why a current step approach is unable to fully capture the behavior of this model. PMID:23945278

  9. Health Literacy in Taiwan: A Population-Based Study.

    PubMed

    Duong, Van Tuyen; Lin, I-Feng; Sorensen, Kristine; Pelikan, Jürgen M; Van Den Broucke, Stephan; Lin, Ying-Chin; Chang, Peter Wushou

    2015-11-01

    Data on health literacy (HL) in the population is limited for Asian countries. This study aimed to test the validity of the Mandarin version of the European Health Literacy Survey Questionnaire (HLS-EU-Q) for use in the general public in Taiwan. Multistage stratification random sampling resulted in a sample of 2989 people aged 15 years and above. The HLS-EU-Q was validated by confirmatory factor analysis with excellent model data fit indices. The general HL of the Taiwanese population was 34.4 ± 6.6 on a scale of 50. Multivariate regression analysis showed that higher general HL is significantly associated with the higher ability to pay for medication, higher self-perceived social status, higher frequency of watching health-related TV, and community involvement but associated with younger age. HL is also associated with health status, health behaviors, and health care accessibility and use. The HLS-EU-Q was found to be a useful tool to assess HL and its associated factors in the general population. PMID:26419635

  10. Human immunodeficiency virus testing for patient-based and population-based diagnosis.

    PubMed

    Albritton, W L; Vittinghoff, E; Padian, N S

    1996-10-01

    Laboratory testing for human immunodeficiency virus (HIV) has been introduced for individual patient-based diagnosis as well as high-risk and low-risk population-based screening. The choice of test, confirmatory algorithm, and interpretative criteria used depend on the clinical setting. In the context of general population-based testing, factors affecting test performance will have to be considered carefully in the development of testing policy. PMID:8843247
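
    A small worked example of why interpretation differs between patient-based diagnosis and population-based screening: with sensitivity and specificity held fixed, the positive predictive value falls sharply with prevalence. The assay characteristics below are illustrative, not those of any specific HIV test.

    ```python
    def ppv(sens: float, spec: float, prev: float) -> float:
        """Positive predictive value via Bayes' rule."""
        true_pos = sens * prev
        false_pos = (1 - spec) * (1 - prev)
        return true_pos / (true_pos + false_pos)

    # Illustrative assay: 99.5% sensitivity and 99.5% specificity
    for prevalence in (0.10, 0.01, 0.001):
        print(f"prevalence {prevalence:>5.3f}: PPV = {ppv(0.995, 0.995, prevalence):.3f}")
    ```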

  11. Life below the threshold.

    PubMed

    Castro, C

    1991-01-01

    This article explains that malnutrition, poor health, and limited educational opportunities plague Philippine children, especially girls, from families living below the poverty threshold. Nearly 70% of households in the Philippines do not meet the required daily level of nutritional intake. Because it is often, and incorrectly, assumed that women's nutritional requirements are lower than men's, women suffer higher rates of malnutrition and poor health. A 1987 study revealed that 11.7% of all elementary students were underweight and 13.9% had stunted growth. Among elementary-school girls, 17% were malnourished and 40% suffered from anemia (among lactating mothers, more than half are anemic). A 1988 Program for Decentralized Educational Development study showed that grade VI students learn only about half of what they are supposed to learn. Thirty percent of the children enrolled in grade school drop out before they reach their senior year. The Department of Education, Culture and Sports estimates that some 2.56 million students dropped out of school in 1989. That same year, some 3.7 million children were counted as part of the labor force. In Manila alone, some 60,000 children work the streets, whether doing odd jobs, begging, or turning to crime or prostitution. The article tells the story of a 12-year-old girl named Ging, a 4th grader at a public school and the oldest child in a poor family of 6 children. The undernourished Ging dreams of a good future for her family and sees education as a way out of poverty; unfortunately, her time after school is spent working in the streets or looking after her family. She considers herself luckier than many of the other children working in the streets, since she at least has a family.

  12. The Nature of Psychological Thresholds

    ERIC Educational Resources Information Center

    Rouder, Jeffrey N.; Morey, Richard D.

    2009-01-01

    Following G. T. Fechner (1966), thresholds have been conceptualized as the amount of intensity needed to transition between mental states, such as between a states of unconsciousness and consciousness. With the advent of the theory of signal detection, however, discrete-state theory and the corresponding notion of threshold have been discounted.…

  13. Threshold Concepts and Information Literacy

    ERIC Educational Resources Information Center

    Townsend, Lori; Brunetti, Korey; Hofer, Amy R.

    2011-01-01

    What do we teach when we teach information literacy in higher education? This paper describes a pedagogical approach to information literacy that helps instructors focus content around transformative learning thresholds. The threshold concept framework holds promise for librarians because it grounds the instructor in the big ideas and underlying…

  14. Threshold Hypothesis: Fact or Artifact?

    ERIC Educational Resources Information Center

    Karwowski, Maciej; Gralewski, Jacek

    2013-01-01

    The threshold hypothesis (TH) assumes the existence of complex relations between creative abilities and intelligence: linear associations below 120 points of IQ and weaker or lack of associations above the threshold. However, diverse results have been obtained over the last six decades--some confirmed the hypothesis and some rejected it. In this…

  15. Outcome-Driven Thresholds for Home Blood Pressure Measurement

    PubMed Central

    Niiranen, Teemu J.; Asayama, Kei; Thijs, Lutgarde; Johansson, Jouni K.; Ohkubo, Takayoshi; Kikuya, Masahiro; Boggia, José; Hozawa, Atsushi; Sandoya, Edgardo; Stergiou, George S.; Tsuji, Ichiro; Jula, Antti M.; Imai, Yutaka; Staessen, Jan A.

    2013-01-01

    The lack of outcome-driven operational thresholds limits the clinical application of home blood pressure (BP) measurement. Our objective was to determine an outcome-driven reference frame for home BP measurement. We measured home and clinic BP in 6470 participants (mean age, 59.3 years; 56.9% women; 22.4% on antihypertensive treatment) recruited in Ohasama, Japan (n=2520); Montevideo, Uruguay (n=399); Tsurugaya, Japan (n=811); Didima, Greece (n=665); and nationwide in Finland (n=2075). In multivariable-adjusted analyses of individual subject data, we determined home BP thresholds, which yielded 10-year cardiovascular risks similar to those associated with stages 1 (120/80 mm Hg) and 2 (130/85 mm Hg) prehypertension, and stages 1 (140/90 mm Hg) and 2 (160/100 mm Hg) hypertension on clinic measurement. During 8.3 years of follow-up (median), 716 cardiovascular end points, 294 cardiovascular deaths, 393 strokes, and 336 cardiac events occurred in the whole cohort; in untreated participants these numbers were 414, 158, 225, and 194, respectively. In the whole cohort, outcome-driven systolic/diastolic thresholds for the home BP corresponding with stages 1 and 2 prehypertension and stages 1 and 2 hypertension were 121.4/77.7, 127.4/79.9, 133.4/82.2, and 145.4/86.8 mm Hg; in 5018 untreated participants, these thresholds were 118.5/76.9, 125.2/79.7, 131.9/82.4, and 145.3/87.9 mm Hg, respectively. Rounded thresholds for stages 1 and 2 prehypertension and stages 1 and 2 hypertension amounted to 120/75, 125/80, 130/85, and 145/90 mm Hg, respectively. Population-based outcome-driven thresholds for home BP are slightly lower than those currently proposed in hypertension guidelines. Our current findings could inform guidelines and help clinicians in diagnosing and managing patients. PMID:23129700

  16. Assessing the Validity of a Stage Measure on Physical Activity in a Population-Based Sample of Individuals with Type 1 or Type 2 Diabetes

    ERIC Educational Resources Information Center

    Plotnikoff, Ronald C.; Lippke, Sonia; Reinbold-Matthews, Melissa; Courneya, Kerry S.; Karunamuni, Nandini; Sigal, Ronald J.; Birkett, Nicholas

    2007-01-01

    This study was designed to test the validity of a transtheoretical model's physical activity (PA) stage measure with intention and different intensities of behavior in a large population-based sample of adults living with diabetes (Type 1 diabetes, n = 697; Type 2 diabetes, n = 1,614) and examine different age groups. The overall "specificity"…

  17. Thresholds for Cenozoic bipolar glaciation.

    PubMed

    Deconto, Robert M; Pollard, David; Wilson, Paul A; Pälike, Heiko; Lear, Caroline H; Pagani, Mark

    2008-10-01

    The long-standing view of Earth's Cenozoic glacial history calls for the first continental-scale glaciation of Antarctica in the earliest Oligocene epoch (approximately 33.6 million years ago), followed by the onset of northern-hemispheric glacial cycles in the late Pliocene epoch, about 31 million years later. The pivotal early Oligocene event is characterized by a rapid shift of 1.5 parts per thousand in deep-sea benthic oxygen-isotope values (Oi-1) within a few hundred thousand years, reflecting a combination of terrestrial ice growth and deep-sea cooling. The apparent absence of contemporaneous cooling in deep-sea Mg/Ca records, however, has been argued to reflect the growth of more ice than can be accommodated on Antarctica; this, combined with new evidence of continental cooling and ice-rafted debris in the Northern Hemisphere during this period, raises the possibility that Oi-1 represents a precursory bipolar glaciation. Here we test this hypothesis using an isotope-capable global climate/ice-sheet model that accommodates both the long-term decline of Cenozoic atmospheric CO2 levels and the effects of orbital forcing. We show that the CO2 threshold below which glaciation occurs in the Northern Hemisphere (approximately 280 p.p.m.v.) is much lower than that for Antarctica (approximately 750 p.p.m.v.). Therefore, the growth of ice sheets in the Northern Hemisphere immediately following Antarctic glaciation would have required rapid CO2 drawdown within the Oi-1 timeframe, to levels lower than those estimated by geochemical proxies and carbon-cycle models. Instead of bipolar glaciation, we find that Oi-1 is best explained by Antarctic glaciation alone, combined with deep-sea cooling of up to 4 degrees C and Antarctic ice that is less isotopically depleted (-30 to -35 per thousand) than previously suggested. Proxy CO2 estimates remain above our model's northern-hemispheric glaciation threshold of approximately 280 p.p.m.v. until approximately 25 Myr

  18. Thresholds for Cenozoic bipolar glaciation.

    PubMed

    Deconto, Robert M; Pollard, David; Wilson, Paul A; Pälike, Heiko; Lear, Caroline H; Pagani, Mark

    2008-10-01

    The long-standing view of Earth's Cenozoic glacial history calls for the first continental-scale glaciation of Antarctica in the earliest Oligocene epoch (approximately 33.6 million years ago), followed by the onset of northern-hemispheric glacial cycles in the late Pliocene epoch, about 31 million years later. The pivotal early Oligocene event is characterized by a rapid shift of 1.5 parts per thousand in deep-sea benthic oxygen-isotope values (Oi-1) within a few hundred thousand years, reflecting a combination of terrestrial ice growth and deep-sea cooling. The apparent absence of contemporaneous cooling in deep-sea Mg/Ca records, however, has been argued to reflect the growth of more ice than can be accommodated on Antarctica; this, combined with new evidence of continental cooling and ice-rafted debris in the Northern Hemisphere during this period, raises the possibility that Oi-1 represents a precursory bipolar glaciation. Here we test this hypothesis using an isotope-capable global climate/ice-sheet model that accommodates both the long-term decline of Cenozoic atmospheric CO2 levels and the effects of orbital forcing. We show that the CO2 threshold below which glaciation occurs in the Northern Hemisphere (approximately 280 p.p.m.v.) is much lower than that for Antarctica (approximately 750 p.p.m.v.). Therefore, the growth of ice sheets in the Northern Hemisphere immediately following Antarctic glaciation would have required rapid CO2 drawdown within the Oi-1 timeframe, to levels lower than those estimated by geochemical proxies and carbon-cycle models. Instead of bipolar glaciation, we find that Oi-1 is best explained by Antarctic glaciation alone, combined with deep-sea cooling of up to 4 degrees C and Antarctic ice that is less isotopically depleted (-30 to -35 per thousand) than previously suggested. Proxy CO2 estimates remain above our model's northern-hemispheric glaciation threshold of approximately 280 p.p.m.v. until approximately 25 Myr

  19. Ambient Fine Particulate Matter and Mortality among Survivors of Myocardial Infarction: Population-Based Cohort Study

    PubMed Central

    Chen, Hong; Burnett, Richard T.; Copes, Ray; Kwong, Jeffrey C.; Villeneuve, Paul J.; Goldberg, Mark S.; Brook, Robert D.; van Donkelaar, Aaron; Jerrett, Michael; Martin, Randall V.; Brook, Jeffrey R.; Kopp, Alexander; Tu, Jack V.

    2016-01-01

    Background: Survivors of acute myocardial infarction (AMI) are at increased risk of dying within several hours to days following exposure to elevated levels of ambient air pollution. Little is known, however, about the influence of long-term (months to years) air pollution exposure on survival after AMI. Objective: We conducted a population-based cohort study to determine the impact of long-term exposure to fine particulate matter ≤ 2.5 μm in diameter (PM2.5) on post-AMI survival. Methods: We assembled a cohort of 8,873 AMI patients who were admitted to 1 of 86 hospital corporations across Ontario, Canada in 1999–2001. Mortality follow-up for this cohort extended through 2011. Cumulative time-weighted exposures to PM2.5 were derived from satellite observations based on participants’ annual residences during follow-up. We used standard and multilevel spatial random-effects Cox proportional hazards models and adjusted for potential confounders. Results: Between 1999 and 2011, we identified 4,016 nonaccidental deaths, of which 2,147 were from any cardiovascular disease, 1,650 from ischemic heart disease, and 675 from AMI. For each 10-μg/m3 increase in PM2.5, the adjusted hazard ratio (HR10) of nonaccidental mortality was 1.22 [95% confidence interval (CI): 1.03, 1.45]. The association with PM2.5 was robust to sensitivity analyses and appeared stronger for cardiovascular-related mortality: ischemic heart (HR10 = 1.43; 95% CI: 1.12, 1.83) and AMI (HR10 = 1.64; 95% CI: 1.13, 2.40). We estimated that 12.4% of nonaccidental deaths (or 497 deaths) could have been averted if the lowest measured concentration in an urban area (4 μg/m3) had been achieved at all locations over the course of the study. Conclusions: Long-term air pollution exposure adversely affects the survival of AMI patients. Citation: Chen H, Burnett RT, Copes R, Kwong JC, Villeneuve PJ, Goldberg MS, Brook RD, van Donkelaar A, Jerrett M, Martin RV, Brook JR, Kopp A, Tu JV. 2016. Ambient fine
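
    A small worked example of how a hazard ratio reported per 10-ug/m3 increment rescales to other exposure contrasts under the log-linear Cox assumption, together with the corresponding excess fraction among the exposed; the HR10 of 1.22 is reused from the abstract purely for illustration.

    ```python
    import math

    hr10 = 1.22                    # reported hazard ratio per 10 ug/m3 of PM2.5
    beta = math.log(hr10) / 10.0   # log-hazard per 1 ug/m3 under a log-linear Cox model

    for delta in (1, 5, 10, 15):   # alternative exposure contrasts in ug/m3
        hr = math.exp(beta * delta)
        excess = 1 - 1 / hr        # excess fraction among those exposed to +delta
        print(f"+{delta:>2} ug/m3: HR = {hr:.3f}, excess fraction = {excess:.1%}")
    ```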

  20. Population-based 3D genome structure analysis reveals driving forces in spatial genome organization

    PubMed Central

    Li, Wenyuan; Kalhor, Reza; Dai, Chao; Hao, Shengli; Gong, Ke; Zhou, Yonggang; Li, Haochen; Zhou, Xianghong Jasmine; Le Gros, Mark A.; Larabell, Carolyn A.; Chen, Lin; Alber, Frank

    2016-01-01

    Conformation capture technologies (e.g., Hi-C) chart physical interactions between chromatin regions on a genome-wide scale. However, the structural variability of the genome between cells poses a great challenge to interpreting ensemble-averaged Hi-C data, particularly for long-range and interchromosomal interactions. Here, we present a probabilistic approach for deconvoluting Hi-C data into a model population of distinct diploid 3D genome structures, which facilitates the detection of chromatin interactions likely to co-occur in individual cells. Our approach incorporates the stochastic nature of chromosome conformations and allows a detailed analysis of alternative chromatin structure states. For example, we predict and experimentally confirm the presence of large centromere clusters with distinct chromosome compositions varying between individual cells. The stability of these clusters varies greatly with their chromosome identities. We show that these chromosome-specific clusters can play a key role in the overall chromosome positioning in the nucleus and stabilizing specific chromatin interactions. By explicitly considering genome structural variability, our population-based method provides an important tool for revealing novel insights into the key factors shaping the spatial genome organization. PMID:26951677

  1. Acetaminophen Poisoning and Risk of Acute Pancreatitis: A Population-Based Cohort Study.

    PubMed

    Chen, Sy-Jou; Lin, Chin-Sheng; Hsu, Chin-Wang; Lin, Cheng-Li; Kao, Chia-Hung

    2015-07-01

    The aim of this study was to assess whether acetaminophen poisoning is associated with a higher risk of acute pancreatitis. We conducted a retrospective cohort study by using the longitudinal population-based database of Taiwan's National Health Insurance (NHI) program between 2000 and 2011. The acetaminophen cohort comprised patients aged ≥ 20 years with newly identified acetaminophen poisoning (N = 2958). The comparison cohort comprised randomly selected patients with no history of acetaminophen poisoning. The acetaminophen and comparison cohorts were frequency-matched by age, sex, and index year (N = 11,832) at a 1:4 ratio. Each patient was followed up from the index date until the date of an acute pancreatitis diagnosis, withdrawal from the NHI program, or December 31, 2011. Cox proportional hazard regression models were used to determine the effects of acetaminophen on the risk of acute pancreatitis. The risk of acute pancreatitis was 3.11-fold higher in the acetaminophen cohort than in the comparison cohort (11.2 vs 3.61 per 10,000 person-years), with an adjusted hazard ratio of 2.40 (95% confidence interval, 1.29-4.47). The incidence rate was considerably higher in patients aged 35 to 49 years, in men, in those with comorbidities, and within the first year of follow-up. Acetaminophen poisoning is associated with an increased risk of acute pancreatitis. Additional prospective studies are necessary to verify how acetaminophen poisoning affects the risk of acute pancreatitis.

  2. Predicting successful aging in a population-based sample of georgia centenarians.

    PubMed

    Arnold, Jonathan; Dai, Jianliang; Nahapetyan, Lusine; Arte, Ankit; Johnson, Mary Ann; Hausman, Dorothy; Rodgers, Willard L; Hensley, Robert; Martin, Peter; Macdonald, Maurice; Davey, Adam; Siegler, Ilene C; Jazwinski, S Michal; Poon, Leonard W

    2010-01-01

    We used a population-based sample (Georgia Centenarian Study, GCS) to determine the proportions of centenarians reaching 100 years as (1) survivors (43%) of chronic diseases first experienced between 0 and 80 years of age, (2) delayers (36%) with chronic diseases first experienced between 80 and 98 years of age, or (3) escapers (17%) with chronic diseases only at 98 years of age or older. Diseases fall into two morbidity profiles of 11 chronic diseases; one including cardiovascular disease, cancer, anemia, and osteoporosis, and another including dementia. Centenarians at risk for cancer in their lifetime tended to be escapers (73%), while those at risk for cardiovascular disease tended to be survivors (24%), delayers (39%), or escapers (32%). Approximately half (43%) of the centenarians did not experience dementia. Psychiatric disorders were positively associated with dementia, but prevalence of depression, anxiety, and psychoses did not differ significantly between centenarians and an octogenarian control group. However, centenarians scored higher on the Geriatric Depression Scale (GDS) than octogenarians. Consistent with our model of developmental adaptation in aging, distal life events contribute to predicting survivorship outcome, in which health status as survivor, delayer, or escaper appears as an adaptation variable late in life. PMID:20885919

  3. Inverse Association of Parkinson Disease With Systemic Lupus Erythematosus: A Nationwide Population-based Study.

    PubMed

    Liu, Feng-Cheng; Huang, Wen-Yen; Lin, Te-Yu; Shen, Chih-Hao; Chou, Yu-Ching; Lin, Cheng-Li; Lin, Kuen-Tze; Kao, Chia-Hung

    2015-11-01

    The effects of the inflammatory mediators involved in systemic lupus erythematosus (SLE) on subsequent Parkinson disease have been reported, but no relevant studies have focused on the association between the 2 diseases. This nationwide population-based study evaluated the risk of Parkinson disease in patients with SLE. We identified 12,817 patients in the Taiwan National Health Insurance database diagnosed with SLE between 2000 and 2010 and compared the incidence rate of Parkinson disease among these patients with that among 51,268 randomly selected age- and sex-matched non-SLE patients. A Cox multivariable proportional-hazards model was used to evaluate the risk factors of Parkinson disease in the SLE cohort. We observed an inverse association between a diagnosis of SLE and the risk of subsequent Parkinson disease, with a crude hazard ratio (HR) of 0.60 (95% confidence interval 0.45-0.79) and an adjusted HR of 0.68 (95% confidence interval 0.51-0.90). The cumulative incidence of Parkinson disease was 0.83% lower in the SLE cohort than in the non-SLE cohort. The adjusted HR of Parkinson disease decreased as the follow-up duration increased and was lower among older lupus patients with comorbidities. We determined that patients with SLE had a decreased risk of subsequent Parkinson disease. Further research is required to elucidate the underlying mechanism.

  4. Burden of self-reported acute gastrointestinal illness in China: a population-based survey

    PubMed Central

    2013-01-01

    Background Acute gastrointestinal illness (AGI) is an important public-health problem worldwide. Previous national studies of the incidence of AGI in China were performed decades ago, and detailed information was not available. This study therefore sought to determine the magnitude, distribution, and burden of self-reported AGI in China. Methods Twelve-month, retrospective face-to-face surveys were conducted in 20 sentinel sites from six provinces between July 2010 and July 2011. Results In total, 39,686 interviews were completed. The overall adjusted monthly prevalence of AGI was 4.2% (95% confidence interval, 4.0–4.4), corresponding to 0.56 episodes of AGI per person-year. Rates of AGI were highest in children aged < 5 years. Healthcare was sought by 56.1% of those reporting illness. Of the cases who visited a doctor, 32.7% submitted a stool sample. The use of antibiotics was reported by 49.7% of the cases who sought medical care, and 54.0% took antidiarrhoeals. In the multivariable model, gender, age, education, household type, residence, season, province and travel were significant risk factors for AGI. Conclusions This first population-based study in China indicated that AGI represents a substantial health burden. Further research into the specific pathogens is needed to better estimate the burden of AGI and foodborne disease in China. PMID:23656835

  5. Sleep and academic performance in later adolescence: results from a large population-based study.

    PubMed

    Hysing, Mari; Harvey, Allison G; Linton, Steven J; Askeland, Kristin G; Sivertsen, Børge

    2016-06-01

    The aim of the current study was to assess the association between sleep duration and sleep patterns and academic performance in 16-19 year-old adolescents using registry-based academic grades. A large population-based study from Norway conducted in 2012, the youth@hordaland-survey, surveyed 7798 adolescents aged 16-19 years (53.5% girls). The survey was linked with objective outcome data on school performance. Self-reported sleep measures provided information on sleep duration, sleep efficiency, sleep deficit and bedtime differences between weekday and weekend. School performance [grade point average (GPA)] was obtained from official administrative registries. Most sleep parameters were associated with increased risk for poor school performance. After adjusting for sociodemographic information, short sleep duration and sleep deficit were the sleep measures with the highest odds of poor GPA (lowest quartile). Weekday bedtime was associated significantly with GPA, with adolescents going to bed between 22:00 and 23:00 hours having the best GPA. Also, delayed sleep schedule during weekends was associated with poor academic performance. The associations were somewhat reduced after additional adjustment for non-attendance at school, but remained significant in the fully adjusted models. In conclusion, the demonstrated relationship between sleep problems and poor academic performance suggests that careful assessment of sleep is warranted when adolescents are underperforming at school. Future studies are needed on the association between impaired sleep in adolescence and later functioning in adulthood. PMID:26825591

  6. Associating optical measurements of MEO and GEO objects using Population-Based Meta-Heuristic methods

    NASA Astrophysics Data System (ADS)

    Zittersteijn, M.; Vananti, A.; Schildknecht, T.; Dolado Perez, J. C.; Martinot, V.

    2016-11-01

    Currently several thousands of objects are being tracked in the MEO and GEO regions through optical means. The problem faced in this framework is that of Multiple Target Tracking (MTT). The MTT problem quickly becomes an NP-hard combinatorial optimization problem. This means that the effort required to solve the MTT problem increases exponentially with the number of tracked objects. In an attempt to find an approximate solution of sufficient quality, several Population-Based Meta-Heuristic (PBMH) algorithms are implemented and tested on simulated optical measurements. These first results show that one of the tested algorithms, namely the Elitist Genetic Algorithm (EGA), consistently displays the desired behavior of finding good approximate solutions before reaching the optimum. The results further suggest that the algorithm possesses a polynomial time complexity, as the computation times are consistent with a polynomial model. With the advent of improved sensors and a heightened interest in the problem of space debris, it is expected that the number of tracked objects will grow by an order of magnitude in the near future. This research aims to provide a method that can treat the association and orbit determination problems simultaneously, and is able to efficiently process large data sets with minimal manual intervention.
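
    To make the elitist genetic algorithm mentioned above concrete, the sketch below applies one to a toy measurement-to-object association (assignment) problem; the cost matrix, population size, mutation operator, and all parameters are illustrative assumptions, not the implementation used in the study.

```python
# Minimal sketch of an elitist genetic algorithm (EGA) for an association
# problem encoded as a permutation: measurement i is assigned to object perm[i].
# All numbers are made up for illustration.
import random
import numpy as np

rng = np.random.default_rng(0)
cost = rng.random((8, 8))              # cost[i, j]: mismatch of measurement i with object j

def fitness(perm):
    # Lower total association cost -> higher fitness.
    return -sum(cost[i, j] for i, j in enumerate(perm))

def mutate(perm):
    child = list(perm)
    a, b = random.sample(range(len(child)), 2)    # swap two assignments
    child[a], child[b] = child[b], child[a]
    return child

pop = [random.sample(range(8), 8) for _ in range(50)]     # random permutations
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:5]                                        # elitism: best survive unchanged
    pop = elite + [mutate(random.choice(elite)) for _ in range(45)]

best = max(pop, key=fitness)
print("best association:", best, "total cost:", -fitness(best))
```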

  7. Incidence of Hidradenitis Suppurativa and Associated Factors: A Population-Based Study of Olmsted County, Minnesota

    PubMed Central

    Vazquez, Benjamin G.; Alikhan, Ali; Weaver, Amy L.; Wetter, David A.; Davis, Mark D.

    2012-01-01

    There are no population-based incidence studies of hidradenitis suppurativa (HS). Using the medical records linkage system of the Rochester Epidemiology Project, we sought to determine incidence, as well as other associations and characteristics, for HS patients diagnosed in Olmsted County, Minnesota between 1968 and 2008. Incidence was estimated using the decennial census data for the county. Logistic regression models were fit to evaluate associations between patient characteristics and disease severity. A total of 268 incident cases were identified, with an overall annual age- and sex-adjusted incidence of 6.0 per 100,000. Age-adjusted incidence was significantly higher in women than in men [8.2 (95% CI, 7.0–9.3) vs. 3.8 (95% CI, 3.0–4.7)]. The highest incidence was among young women aged 20–29 (18.4 per 100,000). The incidence has risen over the past four decades, particularly among women. Women were more likely to have axillary and upper anterior torso involvement, while men were more likely to have perineal or perianal disease. Additionally, 54.9% (140/255) of patients were obese; 70.2% were current or former smokers; 42.9% carried a diagnosis of depression; 36.2% carried a diagnosis of acne; and 6% had pilonidal disease. Smoking and gender were significantly associated with more severe disease. PMID:22931916

  8. Colorectal cancer risk following adenoma removal: a large prospective population-based cohort study

    PubMed Central

    Coleman, Helen G.; Loughrey, Maurice B.; Murray, Liam J.; Johnston, Brian T.; Gavin, Anna T.; Shrubsole, Martha J.; Bhat, Shivaram K.; Allen, Patrick B.; McConnell, Vivienne; Cantwell, Marie M.

    2015-01-01

    Background Randomised controlled trials have demonstrated significant reductions in colorectal cancer (CRC) incidence and mortality associated with polypectomy. However, little is known about whether polypectomy is effective at reducing CRC risk in routine clinical practice. The aim of this investigation was to quantify CRC risk following polypectomy in a large prospective population-based cohort study. Methods Patients with incident colorectal polyps between 2000 and 2005 in Northern Ireland (NI) were identified via electronic pathology reports received to the NI Cancer Registry (NICR). Patients were matched to the NICR to detect CRC and deaths up to 31st December 2010. CRC standardised incidence ratios (SIRs) were calculated and Cox proportional hazards modelling applied to determine CRC risk. Results During 44,724 person-years of follow-up, 193 CRC cases were diagnosed amongst 6,972 adenoma patients, representing an annual progression rate of 0.43%. CRC risk was significantly elevated in patients who had an adenoma removed (SIR 2.85; 95% CI: 2.61 to 3.25) compared with the general population. Male sex, older age, rectal site and villous architecture were associated with an increased CRC risk in adenoma patients. Further analysis suggested that not having a full colonoscopy performed at, or following, incident polypectomy contributed to the excess CRC risk. Conclusions CRC risk was elevated in individuals following polypectomy for adenoma, outside of screening programmes. Impact This finding emphasises the need for full colonoscopy and adenoma clearance, and appropriate surveillance, after endoscopic diagnosis of adenoma. PMID:26082403

  9. The uptake of active surveillance for the management of prostate cancer: A population-based analysis

    PubMed Central

    Richard, Patrick O.; Alibhai, Shabbir M.H.; Panzarella, Tony; Klotz, Laurence; Komisarenko, Maria; Fleshner, Neil E.; Urbach, David; Finelli, Antonio

    2016-01-01

    Introduction: Active surveillance (AS) is a strategy for the management of low-risk prostate cancer (PCa). However, few studies have assessed the uptake of AS at a population level and none of these were based on a Canadian population. Therefore, our objectives were to estimate the proportion of men being managed by AS in Ontario and to assess the factors associated with its uptake. Methods: This was a retrospective, population-based study using administrative databases from the province of Ontario to identify men ≤75 years diagnosed with localized PCa between 2002 and 2010. Descriptive statistics were used to estimate the proportion of men managed by AS, whereas mixed models were used to assess the factors associated with the uptake of AS. Results: 45 691 men met our inclusion criteria. Of these, 18% were managed by AS. Over time, the rates of AS increased significantly from 11% to 21% (p<0.001). Older age, residing in an urban centre, being diagnosed in the later years of the study period, having a neighborhood income in the highest quintile, and being managed by urologists were all associated with greater odds of receiving AS. Conclusions: There has been a steady increase in the uptake of AS between 2002 and 2010. However, only 18% of men diagnosed with localized PCa were managed by AS during the study period. The decisions to adopt AS were influenced by several individual and physician characteristics. The data suggest that there is significant opportunity for more widespread adoption of AS. PMID:27800055

  10. UNC13A influences survival in Italian ALS patients: a population-based study

    PubMed Central

    Chiò, Adriano; Mora, Gabriele; Restagno, Gabriella; Brunetti, Maura; Ossola, Irene; Barberis, Marco; Ferrucci, Luigi; Canosa, Antonio; Manera, Umberto; Moglia, Cristina; Fuda, Giuseppe; Traynor, Bryan J.; Calvo, Andrea

    2012-01-01

    The common variant rs12608932, located within an intron of the UNC13A gene on chromosome 19p13.3, has been suggested to influence susceptibility to ALS, as well as survival, in patients of north European descent. To examine this possibility further, we evaluated the association of rs12608932 with susceptibility and survival in a population-based cohort of 500 Italian ALS patients and 1,457 Italian control samples. Although rs12608932 was not associated with ALS susceptibility in our series (p=0.124), it was significantly associated with survival under the recessive model (median survival for AA/AC genotypes = 3.5 years [IQR 2.2–6.4]; CC = 2.5 years [IQR 1.6–4.2]; p=0.017). Furthermore, rs12608932 genotype remained an independent prognostic factor in Cox multivariable analysis adjusting for other factors known to influence survival (p=0.023). Overall, minor allele carrier status of rs12608932 was strongly associated with an ~1-year reduction of survival in ALS patients, making it a significant determinant of phenotype variation. The identification of UNC13A as a modifier of prognosis among sporadic ALS patients potentially provides a new therapeutic target aimed at slowing disease progression. PMID:22921269

  11. Physical Trauma and Amyotrophic Lateral Sclerosis: A Population-Based Study Using Danish National Registries.

    PubMed

    Seals, Ryan M; Hansen, Johnni; Gredal, Ole; Weisskopf, Marc G

    2016-02-15

    Prior studies have suggested that physical trauma might be associated with the development of amyotrophic lateral sclerosis (ALS). We conducted a population-based, individually matched case-control study in Denmark to assess whether hospitalization for trauma is associated with a higher risk of developing ALS. There were 3,650 incident cases of ALS in the Danish National Patient Register from 1982 to 2009. We used risk-set sampling to match each case to 100 age- and sex-matched population controls alive on the date of the case's diagnosis. Odds ratios and 95% confidence intervals were calculated using a conditional logistic regression model. History of trauma diagnosis was also obtained from the Danish Patient Register. When traumas in the 5 years prior to the index date were excluded, there was a borderline association between any trauma and ALS (odds ratio (OR) = 1.09, 95% confidence interval (CI): 0.99, 1.19). A first trauma before age 55 years was associated with ALS (OR = 1.22, 95% CI: 1.08, 1.37), whereas first traumas at older ages were not (OR = 0.97, 95% CI: 0.85, 1.10). Our data suggest that physical trauma at earlier ages is associated with ALS risk. Age at first trauma could help explain discrepancies in results of past studies of trauma and ALS.

  12. Predicting successful aging in a population-based sample of georgia centenarians.

    PubMed

    Arnold, Jonathan; Dai, Jianliang; Nahapetyan, Lusine; Arte, Ankit; Johnson, Mary Ann; Hausman, Dorothy; Rodgers, Willard L; Hensley, Robert; Martin, Peter; Macdonald, Maurice; Davey, Adam; Siegler, Ilene C; Jazwinski, S Michal; Poon, Leonard W

    2010-01-01

    We used a population-based sample (Georgia Centenarian Study, GCS) to determine the proportions of centenarians reaching 100 years as (1) survivors (43%), with chronic diseases first experienced between 0 and 80 years of age; (2) delayers (36%), with chronic diseases first experienced between 80 and 98 years of age; or (3) escapers (17%), with chronic diseases only at 98 years of age or older. The 11 chronic diseases considered fall into two morbidity profiles: one including cardiovascular disease, cancer, anemia, and osteoporosis, and another including dementia. Centenarians at risk for cancer in their lifetime tended to be escapers (73%), while those at risk for cardiovascular disease tended to be survivors (24%), delayers (39%), or escapers (32%). Approximately half (43%) of the centenarians did not experience dementia. Psychiatric disorders were positively associated with dementia, but the prevalence of depression, anxiety, and psychoses did not differ significantly between centenarians and an octogenarian control group. However, centenarians scored higher on the Geriatric Depression Scale (GDS) than octogenarians. Consistent with our model of developmental adaptation in aging, distal life events contribute to predicting survivorship outcome, in which health status as survivor, delayer, or escaper appears as an adaptation variable late in life.

  13. Population-based survival-cure analysis of ER-negative breast cancer.

    PubMed

    Huang, Lan; Johnson, Karen A; Mariotto, Angela B; Dignam, James J; Feuer, Eric J

    2010-08-01

    This study investigated the trends over time in age- and stage-specific population-based survival of estrogen receptor negative (ER-) breast cancer patients by examining the fraction of cured patients and the median survival time for uncured patients. Cause-specific survival data from the Surveillance, Epidemiology, and End Results program for cases diagnosed during 1992-1998 were used in mixed survival cure models to evaluate the cure fraction and the extension in survival for uncured patients. Survival trends were compared with adjuvant chemotherapy data available from an overlapping patterns-of-care study. For stage II N+ disease, the largest increase in cure fraction was from 44% to 60% (P = 0.0257) for women aged ≥70, in contrast to a 7-8 percentage point increase for women aged <50 or 50-69 (P = 0.056 and 0.038, respectively). For women with stage III disease, the increases in the cure fraction were not statistically significant, although women aged 50-69 had a 10 percentage point increase (P = 0.103). Increases in cure fraction correspond with increases in the use of adjuvant chemotherapy, particularly for the oldest age group. In this article, for the first time, we estimate the cure fraction for ER- patients. We notice that at age ≥70, the accelerated increase in cure fraction from 1992 to 1998 for women with stage II N+ compared with stage III suggests a selective benefit for chemotherapy in the lower stage group.
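
    For readers unfamiliar with cure models, the sketch below fits a simple parametric mixture cure model by maximum likelihood: a cured fraction π plus an exponential survival time for uncured patients. The data, the exponential choice, and all parameter names are illustrative assumptions, not the authors' specification.

```python
# Minimal sketch of a parametric mixture cure model (hypothetical data):
# S(t) = pi + (1 - pi) * exp(-lam * t), with pi the cured fraction.
import numpy as np
from scipy.optimize import minimize

t = np.array([1.2, 3.4, 0.8, 6.0, 7.5, 2.1, 9.0, 4.4, 8.2, 5.1])   # years of follow-up
d = np.array([1,   1,   1,   0,   0,   1,   0,   1,   0,   0])     # 1 = died of disease

def neg_log_lik(params):
    logit_pi, log_lam = params
    pi = 1.0 / (1.0 + np.exp(-logit_pi))        # keep pi in (0, 1)
    lam = np.exp(log_lam)                        # keep lam > 0
    surv_uncured = np.exp(-lam * t)
    event_ll = np.log((1 - pi) * lam * surv_uncured)        # density for observed deaths
    censor_ll = np.log(pi + (1 - pi) * surv_uncured)        # survival for censored patients
    return -np.sum(d * event_ll + (1 - d) * censor_ll)

fit = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
logit_pi_hat, log_lam_hat = fit.x
pi_hat = 1.0 / (1.0 + np.exp(-logit_pi_hat))
print(f"estimated cure fraction: {pi_hat:.2f}")
print(f"median survival of uncured: {np.log(2) / np.exp(log_lam_hat):.2f} years")
```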

  14. Europe's Other Poverty Measures: Absolute Thresholds Underlying Social Assistance

    ERIC Educational Resources Information Center

    Bavier, Richard

    2009-01-01

    The first thing many learn about international poverty measurement is that European nations apply a "relative" poverty threshold and that they also do a better job of reducing poverty. Unlike the European model, the "absolute" U.S. poverty threshold does not increase in real value when the nation's standard of living rises, even though it is…

  15. Population-Based Prospective Study of Cigarette Smoking and Risk of Incident Essential Tremor

    PubMed Central

    Louis, Elan D.; Benito-León, Julián; Bermejo-Pareja, Félix

    2009-01-01

    BACKGROUND Smoking cigarettes is associated with lower risk of Parkinson’s disease (PD). Despite the clinical links between PD and essential tremor (ET), there are few data on smoking in ET. One study showed an association between smoking and lower ET prevalence. We now study whether baseline smoking is associated with lower risk of incident ET. METHODS Using a population-based, cohort design, baseline cigarette smoking habits were assessed in 3,348 participants in an epidemiological study in Spain, among whom 77 developed incident ET. RESULTS There were 3,348 participants, among whom 397 (11.9%) were smokers at baseline. Five (6.5%) of 77 incident ET cases had been smokers at baseline compared with 392 (12.0%) of 3,271 controls (p = 0.14). Baseline pack-years were lower in incident ET cases than controls (9.2 ± 17.7 vs. 15.7 ± 28.4, p = 0.002). Participants were stratified into baseline pack-year tertiles and few incident ET cases were in the highest tertile (4 [5.2%] cases vs. 431 [13.2%] controls, p = 0.039). In Cox Proportional Hazards Models, highest baseline pack-year tertile was associated with lower risk of incident ET; those in the highest pack-year tertile were one-third as likely to develop ET when compared to non-smokers (RR = 0.37, 95% CI = 0.14–1.03, p = 0.057 [unadjusted model] and RR = 0.29, 95% CI = 0.09–0.90, p = 0.03 [adjusted model]). CONCLUSIONS We demonstrated an association between baseline heavy cigarette smoking and lower risk of incident ET. The biological basis for this association requires future investigation. PMID:18458228

  16. Prevalence of Hidradenitis Suppurativa (HS): A Population-Based Study in Olmsted County, Minnesota

    PubMed Central

    Shahi, Varun; Alikhan, Ali; Vazquez, Benjamin G.; Weaver, Amy L.; Davis, Mark D.

    2014-01-01

    BACKGROUND/AIMS Hidradenitis suppurativa (HS) is a follicular occlusion disorder occurring in apocrine-rich regions of the skin. Estimates of the prevalence of this disorder have not been population-based. We sought to provide population-based information on the prevalence of HS in Olmsted County, Minnesota, as of January 1, 2009. METHODS The Rochester Epidemiology Project, a unique infrastructure that combines and makes accessible all medical records in Olmsted County since the 1960s, was used to collect population-based data on the prevalence of HS. RESULTS We identified 178 confirmed cases of HS, comprising 135 females and 43 males, and estimated the total sex- and age-adjusted prevalence in Olmsted County to be 127.8 per 100,000, or 0.13%. The total prevalence was significantly higher among women than men. CONCLUSION This study represents the first population-based investigation of the prevalence of HS. In this population-based cohort, HS was less prevalent than previous reports have suggested. PMID:25228133

  17. Roots at the percolation threshold

    NASA Astrophysics Data System (ADS)

    Kroener, Eva; Ahmed, Mutez Ali; Carminati, Andrea

    2015-04-01

    The rhizosphere is the layer of soil around the roots where complex and dynamic interactions between plants and soil affect the capacity of plants to take up water. The physical properties of the rhizosphere are affected by mucilage, a gel exuded by roots. Mucilage can absorb large volumes of water, but it becomes hydrophobic after drying. We use a percolation model to describe the rewetting of dry rhizosphere. We find that at a critical mucilage concentration the rhizosphere becomes impermeable. The critical mucilage concentration depends on the soil particle radius. Capillary rise experiments with neutron radiography prove that for concentrations below the critical mucilage concentration water could easily cross the rhizosphere, while above the critical concentration water could no longer percolate through it. Our studies, together with former observations of water dynamics in the rhizosphere, suggest that the rhizosphere is near the percolation threshold, where small variations in mucilage concentration sensitively alter the soil hydraulic conductivity. Is mucilage exudation a plant mechanism to efficiently control the rhizosphere conductivity and the access to water?
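
    To illustrate the percolation idea invoked here, the sketch below estimates the threshold of a generic 2D site-percolation model: below a critical fraction of open ("wettable") sites no connected path spans the grid, and above it one almost always does. The grid size, trial counts, and the mapping to mucilage concentration are illustrative assumptions, not the authors' model.

```python
# Minimal sketch of 2D site percolation: estimate the open-site fraction at
# which a connected cluster first spans the grid (all numbers are illustrative).
import numpy as np
from scipy.ndimage import label

def spans(p, n=100, rng=None):
    rng = rng or np.random.default_rng(0)
    open_sites = rng.random((n, n)) < p           # True = open (wettable) site
    labels, _ = label(open_sites)                 # 4-connected clusters
    top, bottom = set(labels[0, :]), set(labels[-1, :])
    return len((top & bottom) - {0}) > 0          # a cluster touches both edges

for p in np.arange(0.50, 0.71, 0.02):
    frac = np.mean([spans(p, rng=np.random.default_rng(s)) for s in range(20)])
    print(f"open fraction {p:.2f}: spanning in {frac:.0%} of trials")
# The spanning probability jumps from ~0 to ~1 near p ~ 0.59, the classical
# site-percolation threshold for a square lattice.
```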

  18. Roots at the percolation threshold.

    PubMed

    Kroener, Eva; Ahmed, Mutez Ali; Carminati, Andrea

    2015-04-01

    The rhizosphere is the layer of soil around the roots where complex and dynamic interactions between plants and soil affect the capacity of plants to take up water. The physical properties of the rhizosphere are affected by mucilage, a gel exuded by roots. Mucilage can absorb large volumes of water, but it becomes hydrophobic after drying. We use a percolation model to describe the rewetting of dry rhizosphere. We find that at a critical mucilage concentration the rhizosphere becomes impermeable. The critical mucilage concentration depends on the soil particle radius. Capillary rise experiments with neutron radiography prove that for concentrations below the critical mucilage concentration water could easily cross the rhizosphere, while above the critical concentration water could no longer percolate through it. Our studies, together with former observations of water dynamics in the rhizosphere, suggest that the rhizosphere is near the percolation threshold, where small variations in mucilage concentration sensitively alter the soil hydraulic conductivity. Is mucilage exudation a plant mechanism to efficiently control the rhizosphere conductivity and the access to water? PMID:25974526

  19. Do non-targeted effects increase or decrease low dose risk in relation to the linear-non-threshold (LNT) model?

    PubMed

    Little, M P

    2010-05-01

    In this paper we review the evidence for departure from linearity for malignant and non-malignant disease and in the light of this assess likely mechanisms, and in particular the potential role for non-targeted effects. Excess cancer risks observed in the Japanese atomic bomb survivors and in many medically and occupationally exposed groups exposed at low or moderate doses are generally statistically compatible. For most cancer sites the dose-response in these groups is compatible with linearity over the range observed. The available data on biological mechanisms do not provide general support for the idea of a low dose threshold or hormesis. This large body of evidence does not suggest, indeed is not statistically compatible with, any very large threshold in dose for cancer, or with possible hormetic effects, and there is little evidence of the sorts of non-linearity in response implied by non-DNA-targeted effects. There are also excess risks of various types of non-malignant disease in the Japanese atomic bomb survivors and in other groups. In particular, elevated risks of cardiovascular disease, respiratory disease and digestive disease are observed in the A-bomb data. In contrast with cancer, there is much less consistency in the patterns of risk between the various exposed groups; for example, radiation-associated respiratory and digestive diseases have not been seen in these other (non-A-bomb) groups. Cardiovascular risks have been seen in many exposed populations, particularly in medically exposed groups, but in contrast with cancer there is much less consistency in risk between studies: risks per unit dose in epidemiological studies vary over at least two orders of magnitude, possibly a result of confounding and effect modification by well known (but unobserved) risk factors. In the absence of a convincing mechanistic explanation of epidemiological evidence that is, at present, less than persuasive, a cause-and-effect interpretation of the reported

  20. Parton distributions with threshold resummation

    NASA Astrophysics Data System (ADS)

    Bonvini, Marco; Marzani, Simone; Rojo, Juan; Rottoli, Luca; Ubiali, Maria; Ball, Richard D.; Bertone, Valerio; Carrazza, Stefano; Hartland, Nathan P.

    2015-09-01

    We construct a set of parton distribution functions (PDFs) in which fixed-order NLO and NNLO calculations are supplemented with soft-gluon (threshold) resummation up to NLL and NNLL accuracy respectively, suitable for use in conjunction with any QCD calculation in which threshold resummation is included at the level of partonic cross sections. These resummed PDF sets, based on the NNPDF3.0 analysis, are extracted from deep-inelastic scattering, Drell-Yan, and top quark pair production data, for which resummed calculations can be consistently used. We find that, close to threshold, the inclusion of resummed PDFs can partially compensate the enhancement in resummed matrix elements, leading to resummed hadronic cross-sections closer to the fixed-order calculations. On the other hand, far from threshold, resummed PDFs reduce to their fixed-order counterparts. Our results demonstrate the need for a consistent use of resummed PDFs in resummed calculations.

  1. Increased Risk of Restless Legs Syndrome in Patients With Migraine: A Nationwide Population-Based Cohort Study.

    PubMed

    Yang, Fu-Chi; Lin, Te-Yu; Chen, Hsuan-Ju; Lee, Jiunn-Tay; Lin, Chun-Chieh; Huang, Wen-Yen; Chen, Hsin-Hung; Kao, Chia-Hung

    2016-02-01

    Previous studies suggest that an association between restless legs syndrome (RLS) and migraine exists. However, population-based data are unavailable in Asian cohorts. Our study thus aims to evaluate the association between migraine and RLS in a nationwide, population-based cohort in Taiwan and to examine the effects of age, sex, migraine subtype, and comorbidities on RLS development. Data from the Taiwan National Health Insurance Research Database were used. Patients aged 20 years or older with newly diagnosed migraine from 2000 to 2008 were included; 23,641 patients with newly diagnosed migraine and 94,564 subjects without migraine were randomly selected and followed until RLS development, withdrawal from the National Health Insurance, or the end of 2011. A multivariate Cox proportional hazards regression model was used to explore the risk of RLS in patients with migraine after adjustment for demographic characteristics and comorbidities. Both cohorts were followed for a mean of 7.38 years. After adjustment for covariates, the risk of RLS was 1.42-fold higher (95% confidence interval = 1.13-1.79) in the migraine cohort than in the nonmigraine cohort (7.19 versus 3.42 per 10,000 person-years). The increased risk was more prominent in males in the migraine cohort (1.87-fold increased risk, 95% confidence interval 1.22-2.85). Neither comorbidity status nor migraine subtype influenced the RLS risk. This population-based study demonstrated that patients with migraine have an increased risk of RLS compared with those without migraine, particularly male patients, and regardless of comorbidity status.

  2. Use of BPPV processes in Emergency Department Dizziness Presentations: A Population-Based Study

    PubMed Central

    Kerber, Kevin A.; Burke, James F.; Skolarus, Lesli E.; Meurer, William J.; Callaghan, Brian C.; Brown, Devin L.; Lisabeth, Lynda D.; McLaughlin, Thomas J.; Fendrick, A. Mark; Morgenstern, Lewis B.

    2013-01-01

    Objective A common cause of dizziness, benign paroxysmal positional vertigo (BPPV), is effectively diagnosed and cured with the Dix-Hallpike test (DHT) and the canalith repositioning maneuver (CRM). We aimed to describe the use of these processes in Emergency Departments (ED), to assess for trends in use over time, and to determine provider-level variability in use. Design Prospective population-based surveillance study. Setting EDs in Nueces County, Texas, January 15, 2008 to January 14, 2011. Subjects and Methods Adult patients discharged from EDs with dizziness, vertigo, or imbalance documented at triage. Clinical information was abstracted from source documents. A hierarchical logistic regression model adjusting for patient and provider characteristics was used to estimate trends in DHT use and provider-level variability. Results 3,522 visits for dizziness were identified. A DHT was documented in 137 visits (3.9%). A CRM was documented in 8 visits (0.2%). Among patients diagnosed with BPPV, a DHT was documented in only 21.8% (34 of 156) and a CRM in 3.9% (6 of 156). In the hierarchical model (c statistic = 0.93), DHT was less likely to be used over time (odds ratio, 0.97, 95% CI [0.95, 0.99]) and the provider level explained 50% (ICC, 0.50) of the variance in the probability of DHT use. Conclusion BPPV is seldom examined for, and when diagnosed, infrequently treated in this ED population. DHT use is decreasing over time, and varies substantially by provider. Implementation research focused on BPPV care may be an opportunity to optimize management in ED dizziness presentations. PMID:23264119
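
    The provider-level variance reported above is usually summarized as an intraclass correlation. The sketch below simulates a random-intercept (hierarchical) logistic process and computes the latent-scale ICC; the variance values and simulation setup are illustrative, not the study's data or fitting code.

```python
# Minimal sketch of a random-intercept logistic model: simulate provider-level
# variation in the probability of performing a test and express it as a
# latent-scale intraclass correlation (ICC). All numbers are made up.
import numpy as np

rng = np.random.default_rng(1)
sigma_provider = 1.81                     # SD of provider random intercepts (assumed)
n_providers, visits_per_provider = 50, 70

provider_effects = rng.normal(0.0, sigma_provider, n_providers)
intercept = -3.0                          # baseline log-odds of doing the test

rates = []
for u in provider_effects:
    p = 1.0 / (1.0 + np.exp(-(intercept + u)))           # provider-specific probability
    rates.append(rng.binomial(visits_per_provider, p) / visits_per_provider)

# Latent-scale ICC for a logistic random-intercept model:
# ICC = sigma_u^2 / (sigma_u^2 + pi^2 / 3)
icc = sigma_provider**2 / (sigma_provider**2 + np.pi**2 / 3)
print(f"latent-scale ICC: {icc:.2f}")                     # ~0.50 with these numbers
print(f"observed provider test rates range: {min(rates):.2f}-{max(rates):.2f}")
```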

  3. Air Pollution and Newly Diagnostic Autism Spectrum Disorders: A Population-Based Cohort Study in Taiwan

    PubMed Central

    Jung, Chau-Ren; Lin, Yu-Ting; Hwang, Bing-Fang

    2013-01-01

    There is limited evidence that long-term exposure to ambient air pollution increases the risk of childhood autism spectrum disorder (ASD). The objective of the study was to investigate the associations between long-term exposure to air pollution and newly diagnosed ASD in Taiwan. We conducted a population-based cohort study of 49,073 children aged less than 3 years in 2000, retrieved from the Taiwan National Health Insurance Research Database and followed up from 2000 through 2010. The inverse distance weighting method was used to construct exposure parameters for ozone (O3), carbon monoxide (CO), nitrogen dioxide (NO2), sulfur dioxide (SO2), and particles with aerodynamic diameter less than 10 µm (PM10). A time-dependent Cox proportional hazards (PH) model was used to evaluate the relationship between the yearly average exposure to air pollutants in preceding years and newly diagnosed ASD. The risk of newly diagnosed ASD increased with increasing O3, CO, NO2, and SO2 levels. The effect estimates, indicating an approximately 59% risk increase per 10 ppb increase in O3 (95% CI 1.42–1.79), a 37% risk increase per 10 ppb increase in CO (95% CI 1.31–1.44), a 340% risk increase per 10 ppb increase in NO2 (95% CI 3.31–5.85), and a 17% risk increase per 1 ppb increase in SO2 (95% CI 1.09–1.27), were stable across different combinations of air pollutants in the multi-pollutant models. Our results provide evidence that children's exposure to O3, CO, NO2, and SO2 in the preceding 1 to 4 years may increase the risk of an ASD diagnosis. PMID:24086549
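
    Inverse distance weighting, mentioned above, interpolates a pollutant concentration at a location as a distance-weighted average of monitoring-station readings. The sketch below shows the basic formula; the station coordinates, values, and power parameter are made up for illustration.

```python
# Minimal sketch of inverse distance weighting (IDW) interpolation; station
# locations, readings, and the power parameter are made up for illustration.
import numpy as np

stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])  # km
values = np.array([25.0, 30.0, 20.0, 35.0])                                # e.g. ppb of O3

def idw(point, stations, values, power=2.0):
    d = np.linalg.norm(stations - point, axis=1)
    if np.any(d == 0):                       # exactly at a station: return its reading
        return values[np.argmin(d)]
    w = 1.0 / d**power                       # closer stations receive more weight
    return np.sum(w * values) / np.sum(w)

print(f"interpolated concentration at (3, 4): {idw(np.array([3.0, 4.0]), stations, values):.1f}")
```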

  4. Direct costs in impaired glucose regulation: results from the population-based Heinz Nixdorf Recall study

    PubMed Central

    Bächle, C; Claessen, H; Andrich, S; Brüne, M; Dintsios, C M; Slomiany, U; Roggenbuck, U; Jöckel, K H; Moebus, S; Icks, A

    2016-01-01

    Objective For the first time, this population-based study sought to analyze healthcare utilization and associated costs in people with normal fasting glycemia (NFG), impaired fasting glycemia (IFG), previously undetected diabetes, and previously diagnosed diabetes, linking data from the prospective German Heinz Nixdorf Recall (HNR) study with individual claims data from German statutory health insurances. Research design and methods A total of 1709 participants of the HNR 5-year follow-up (mean age (SD) 64.9 (7.5) years, 44.5% men) were included in the study. Age-standardized and sex-standardized healthcare utilization and associated costs (reported as € for the year 2008, perspective of the statutory health insurance) were stratified by diabetes stage, defined by the participants' self-report and fasting plasma glucose values. Cost ratios (CRs) were estimated using two-part regression models, adjusting for age, sex, sociodemographic variables and comorbidity. Results The mean total direct healthcare costs for previously diagnosed diabetes, previously undetected diabetes, IFG, and NFG were €2761 (95% CI 2378 to 3268), €2210 (1483 to 4279), €2035 (1732 to 2486) and €1810 (1634 to 2035), respectively. Corresponding age-adjusted and sex-adjusted CRs were 1.53 (1.30 to 1.80), 1.16 (0.91 to 1.47), and 1.09 (0.95 to 1.25) (reference: NFG). Inpatient, outpatient and medication costs varied in order between people with IFG and those with previously undetected diabetes. Conclusions The study provides claims-based detailed cost data in well-defined glucose metabolism subgroups. CRs of individuals with IFG and previously undetected diabetes were surprisingly low. The data are important for the model-based evaluation of screening programs and interventions aimed either at preventing diabetes onset or at improving diabetes therapy. PMID:27252871
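
    Two-part models like the one cited decompose expected cost into the probability of incurring any cost and the mean cost among those with positive costs. The sketch below computes a cost ratio from that decomposition with simulated data; the groups, cost distributions, and sample sizes are assumptions, not the study's regression specification.

```python
# Minimal sketch of a two-part cost comparison:
# E[cost] = P(cost > 0) * E[cost | cost > 0], compared between two groups.
# All numbers are made up for illustration.
import numpy as np

rng = np.random.default_rng(2)

def simulate_costs(n, p_any, mean_positive):
    any_cost = rng.random(n) < p_any
    return np.where(any_cost, rng.gamma(shape=2.0, scale=mean_positive / 2.0, size=n), 0.0)

reference = simulate_costs(1000, p_any=0.90, mean_positive=2000.0)   # e.g. NFG
exposed   = simulate_costs(1000, p_any=0.95, mean_positive=2900.0)   # e.g. diagnosed diabetes

def two_part_mean(cost):
    p_any = np.mean(cost > 0)                 # part 1: probability of any cost
    mean_pos = cost[cost > 0].mean()          # part 2: mean cost among users
    return p_any * mean_pos                   # expected total cost per person

cr = two_part_mean(exposed) / two_part_mean(reference)
print(f"cost ratio (exposed vs reference): {cr:.2f}")
```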

  5. Radiotherapy and Survival in Prostate Cancer Patients: A Population-Based Study

    SciTech Connect

    Zhou, Esther H.; Ellis, Rodney J.; Cherullo, Edward; Colussi, Valdir; Xu Fang; Chen Weidong; Gupta, Sanjay; Whalen, Christopher C.; Bodner, Donald; Resnick, Martin I.; Rimm, Alfred A.

    2009-01-01

    Purpose: To investigate the association of overall and disease-specific survival with the five standard treatment modalities for prostate cancer (CaP): radical prostatectomy (RP), brachytherapy (BT), external beam radiotherapy, androgen deprivation therapy, and no treatment (NT) within 6 months after CaP diagnosis. Methods and Materials: The study population included 10,179 men aged 65 years and older with incident CaP diagnosed between 1999 and 2001. Using the linked Ohio Cancer Incidence Surveillance System, Medicare, and death certificate files, overall and disease-specific survival through 2005 among the five clinically accepted therapies were analyzed. Results: Disease-specific survival rates were 92.3% and 23.9% for patients with localized vs. distant disease at 7 years, respectively. Controlling for age, race, comorbidities, stage, and Gleason score, results from the Cox multiple regression models indicated that the risk of CaP-specific death was significantly reduced in patients receiving RP or BT, compared with NT. For localized disease, compared with NT, in the monotherapy cohort, RP and BT were associated with reduced hazard ratios (HR) of 0.25 and 0.45 (95% confidence intervals 0.13-0.48 and 0.23-0.87, respectively), whereas in the combination therapy cohort, HR were 0.40 (0.17-0.94) and 0.46 (0.27-0.80), respectively. Conclusions: The present population-based study indicates that RP and BT are associated with improved survival outcomes. Further studies are warranted to improve clinical determinates in the selection of appropriate management of CaP and to improve predictive modeling for which patient subsets may benefit most from definitive therapy vs. conservative management and/or observation.

  6. Semiclassical approaches to below-threshold harmonics

    SciTech Connect

    Hostetter, James A.; Tate, Jennifer L.; Schafer, Kenneth J.; Gaarde, Mette B.

    2010-08-15

    We study the generation of below-threshold harmonics in a model atom by extending the three-step semiclassical model of harmonic generation to include effects of the atomic potential. We explore the generalization of semiclassical trajectories of the electron in the presence of the combined laser-atom potential and calculate the intensity-dependent dipole phase associated with these trajectories. Our results are in good agreement with fully quantum mechanical calculations, as well as with recent experimental observations. We show that the so-called long trajectory readily generalizes to below-threshold harmonic generation and is relatively insensitive to the choice of initial conditions. We also find that the short trajectory can only lead to low-energy harmonics for electrons that have been released close to the ion core in a process that is closer to multiphoton than to tunnel ionization.

  7. Interaction of noise-induced permanent threshold shift and age-related threshold shift.

    PubMed

    Mills, J H; Boettcher, F A; Dubno, J R

    1997-03-01

    Current medical-legal practices, as well as an international standard (ISO 1999), assume that the permanent threshold shifts produced by exposure to noise add (in dB) to the threshold shifts caused by increased chronological age (presbyacusis). This assumption, known as the additivity rule, was tested in an animal model. Mongolian gerbils, born and raised in a quiet vivarium, were exposed at age 18 months to a 3.5-kHz pure tone for 1 h at 113 dB SPL. At 6 weeks post-exposure, permanent threshold shifts in the exposed ear were approximately 20 dB in the 4- to 8-kHz region. Thresholds in the nonexposed, control ear were unaffected by the exposure. Animals were then allowed to age in the quiet vivarium until age 36 months and then were retested. Thus, in a given animal, aging-only effects were assessed in one ear (internal control) and noise-plus-aging effects were assessed in the other (test) ear. A second control was the mean age-related threshold shift measured in 48 gerbils that were born and raised in the quiet vivarium. This group is referred to as a non-noise-exposed population (population control). Using the additivity rule, predictions with either the internal or population control significantly overestimated noise-plus-aging effects. Use of the ISO 1999 compression factor reduced the overestimations by 0-5 dB. The intensity rule produced the most accurate predictions. These results suggest that the interaction of noise-induced permanent threshold shift and age-related threshold shift is not straightforward and that current medical-legal methods using the additivity rule overestimate the contribution of "noise effects". PMID:9069635
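
    The additivity rule and the ISO 1999-style compression term contrasted above can be written out directly. The sketch below compares the two, assuming the commonly cited ISO 1999 combination H' = A + N - A·N/120; the input threshold-shift values are made up and are not the gerbil data from the study.

```python
# Minimal sketch contrasting simple additivity with an ISO 1999-style
# compression term for combining age-related (A) and noise-induced (N)
# threshold shifts. Input values are illustrative only.
def combined_shift_additive(age_shift_db, noise_shift_db):
    return age_shift_db + noise_shift_db

def combined_shift_iso1999(age_shift_db, noise_shift_db):
    # Compression term keeps the combined shift below the simple sum
    # when both components are large (assumed form: A + N - A*N/120).
    return age_shift_db + noise_shift_db - age_shift_db * noise_shift_db / 120.0

for age_shift, noise_shift in [(10, 20), (30, 20), (50, 40)]:
    add = combined_shift_additive(age_shift, noise_shift)
    iso = combined_shift_iso1999(age_shift, noise_shift)
    print(f"A={age_shift} dB, N={noise_shift} dB -> additive {add} dB, ISO-style {iso:.1f} dB")
```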

  8. Guidelines for Auditory Threshold Measurement for Significant Threshold Shift.

    PubMed

    Campbell, Kathleen; Hammill, Tanisha; Hoffer, Michael; Kil, Jonathan; Le Prell, Colleen

    2016-09-01

    The purpose of this article is to provide guidelines for determining a Significant Noise-Induced Threshold Shift in clinical trials involving human populations. The article reviews recommendations for the standards to be referenced for human subjects, equipment, test environment, and personnel. Additional guidelines for military populations are provided. Guidelines for the calibration of audiometers, sound booth noise levels, and immittance equipment are provided. In addition, the guidance provides specific suggestions for the subject's history before study onset and for otoscopy. Test frequencies for threshold determination and methods of threshold determination are reviewed for both air conduction and bone conduction, for both baseline testing and later determination of either a temporary (TTS) or permanent threshold shift (PTS). Once a Significant Noise-Induced Threshold Shift has been determined, subjects should be retested, a conductive component should be ruled out or addressed, and the subject should be counseled or referred for additional medical evaluation. Guidance for reporting procedures and the computerized study database is described. Finally, experimental designs suggested for noise-induced otoprotection clinical trials are described.

  9. Guidelines for Auditory Threshold Measurement for Significant Threshold Shift.

    PubMed

    Campbell, Kathleen; Hammill, Tanisha; Hoffer, Michael; Kil, Jonathan; Le Prell, Colleen

    2016-09-01

    The purpose of this article is to provide guidelines for determining a Significant Noise-Induced Threshold Shift in clinical trials involving human populations. The article reviews recommendations for the standards to be referenced for human subjects, equipment, test environment, and personnel. Additional guidelines for military populations are provided. Guidelines for the calibration of audiometers, sound booth noise levels, and immittance equipment are provided. In addition, the guidance provides specific suggestions for the subject's history before study onset and for otoscopy. Test frequencies for threshold determination and methods of threshold determination are reviewed for both air conduction and bone conduction, for both baseline testing and later determination of either a temporary (TTS) or permanent threshold shift (PTS). Once a Significant Noise-Induced Threshold Shift has been determined, subjects should be retested, a conductive component should be ruled out or addressed, and the subject should be counseled or referred for additional medical evaluation. Guidance for reporting procedures and the computerized study database is described. Finally, experimental designs suggested for noise-induced otoprotection clinical trials are described. PMID:27518134

  10. Intervention thresholds for osteoporosis in the UK.

    PubMed

    Kanis, John A; Borgstrom, Frederik; Zethraeus, Niklas; Johnell, Olof; Oden, Anders; Jönsson, Bengt

    2005-01-01

    The aim of this study was to determine the threshold of fracture probability at which interventions became cost-effective in women based on data from the UK. We modelled the effects of an intervention costing £350 per year given for 5 years that decreased the risk of all osteoporotic fractures by 35% followed by a waning of effect (offset time) for a further 5 years. Sensitivity analyses included a range of treatment duration (3-10 years), intervention costs (£300-400/year) and offset times (0-15 years). Data on costs and risks were from the UK. Costs included direct costs, but excluded indirect costs due to morbidity. A threshold for cost-effectiveness of £30,000/QALY gained was used. With the base case (£350 per year; 35% efficacy) treatment in women was cost-effective with a 10-year hip fracture probability that ranged from 1.1% at the age of 50 years to 9.0% at the age of 85 years. Intervention thresholds were sensitive to the assumed costs and offset time. The exclusion of osteoporotic fractures other than hip fracture significantly increased the cost-effectiveness ratio because of the substantial morbidity from such other fractures, particularly at younger ages. Cost-effective scenarios were found for women at the threshold for osteoporosis from the age of 60 years. Treatment of established osteoporosis was cost-effective irrespective of age. We conclude that the inclusion of all osteoporotic fractures has a marked effect on intervention thresholds, that these vary with age and that available treatments can be targeted cost-effectively to individuals from the UK at moderately increased fracture risk.
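
    The intervention-threshold logic described above amounts to finding the fracture probability at which the cost per QALY gained falls to the willingness-to-pay limit. The sketch below illustrates that calculation with an entirely made-up QALY-loss-per-fracture assumption; the study's health-economic model is far more detailed, so the numbers here are not comparable to its results.

```python
# Minimal sketch of an intervention-threshold calculation: find the 10-year
# fracture probability at which cost per QALY gained reaches GBP 30,000.
# The QALY loss per fracture and all other inputs are made-up assumptions.
import numpy as np

treatment_cost = 5 * 350.0          # GBP 350/year for 5 years
relative_risk_reduction = 0.35
qaly_loss_per_fracture = 1.0        # assumed average QALY loss per fracture (illustrative)
willingness_to_pay = 30_000.0       # GBP per QALY gained

def cost_per_qaly(ten_year_fracture_prob):
    fractures_prevented = ten_year_fracture_prob * relative_risk_reduction
    qalys_gained = fractures_prevented * qaly_loss_per_fracture
    return treatment_cost / qalys_gained

for p in np.arange(0.02, 0.22, 0.02):
    flag = "<- cost-effective" if cost_per_qaly(p) <= willingness_to_pay else ""
    print(f"10-year fracture probability {p:.0%}: GBP {cost_per_qaly(p):,.0f}/QALY {flag}")
```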

  11. Population-based visual acuity in the presence of defocus well predicted by classical theory

    NASA Astrophysics Data System (ADS)

    Weeber, Henk A.; Featherstone, Kristen A.; Piers, Patricia A.

    2010-07-01

    According to classical theory, visual acuity (VA) can be modeled using the intersection of the eye's modulation transfer function with a retinal threshold function. To date, there have been limited attempts to validate this methodology by comparing theory with actual measured data. We use the methodology to predict the visual acuity in the presence of defocus of a population of cataract patients implanted with diffractive multifocal intraocular lenses. For the prediction, we used a set of physiological eye models that include chromatic and higher order aberrations. We found that the simulations correlated strongly with the clinical outcomes (R2=0.93). While the simulated VA of the eye models was systematically 0.05 logMAR units lower (better acuity) than the clinical results, this difference was independent of defocus (p=0.98). These results show that when the simple and straightforward classical theory is applied using physiological eye models, accurate predictions of the VA and through-focus VA of a population can be made. This method may be suited to predicting the visual performance of new cataract and refractive treatments.
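
    The classical approach sketched in this abstract predicts acuity from the spatial frequency at which the optical MTF crosses a neural contrast threshold. The code below illustrates that idea with simple placeholder curves; the exponential MTF, the threshold function, and the 30 cpd ≈ 20/20 conversion are illustrative assumptions, not the physiological eye models used in the study.

```python
# Minimal sketch of the classical acuity prediction: find the spatial frequency
# where the eye's MTF falls to the neural contrast threshold, then convert the
# cutoff to decimal acuity and logMAR. Curve shapes and parameters are
# placeholders chosen only to show the method.
import numpy as np

freqs = np.linspace(1, 60, 600)                 # spatial frequency, cycles/degree

def mtf(f, defocus_diopters):
    # Placeholder optical MTF that degrades with defocus.
    return np.exp(-f * (0.04 + 0.08 * defocus_diopters))

def neural_threshold(f):
    # Placeholder retinal/neural contrast threshold, rising at high frequencies.
    return 0.02 * np.exp(f / 12.0)

for defocus in [0.0, 0.5, 1.0, 1.5]:
    above = mtf(freqs, defocus) > neural_threshold(freqs)
    cutoff = freqs[above][-1] if above.any() else freqs[0]   # highest resolvable frequency
    decimal_va = cutoff / 30.0                  # assume 30 cpd corresponds to VA = 1.0 (20/20)
    logmar = -np.log10(decimal_va)
    print(f"defocus {defocus:.1f} D: cutoff {cutoff:4.1f} cpd, logMAR {logmar:+.2f}")
```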

  12. Unstable particles near threshold

    NASA Astrophysics Data System (ADS)

    Chway, Dongjin; Jung, Tae Hyun; Kim, Hyung Do

    2016-07-01

    We explore the physics of unstable particles when the mother particle's mass is approximately the sum of the masses of its daughter particles. In this case, the conventional wave function renormalization factor used for the narrow width approximation is ill-defined. We propose a simple resolution of the problem that allows the use of the narrow width approximation by defining the wave function renormalization factor and the branching ratio in terms of the spectral density. We test new definitions by calculating the cross section in the Higgs portal model and a significant improvement is obtained. Meanwhile, no single decay width can be assigned to the unstable particles and non-exponential decay occurs at all time scales.

  13. An Evaluation of a Threshold Theory for Personality Assessment

    ERIC Educational Resources Information Center

    Voyce, Colleen D.; Jackson, Douglas N.

    1977-01-01

    A model designed to account for major factors on personality questionnaires is proposed and evaluated using the Differential Personality Inventory. Two respondent processes are postulated: sensitivity to the underlying desirability of items, and threshold for responding desirably. (Author/JKS)

  14. Population-Based Study of QT Interval Prolongation in Patients with Rheumatoid Arthritis

    PubMed Central

    Chauhan, Krati; Ackerman, Michael J.; Crowson, Cynthia S.; Matteson, Eric L.; Gabriel, Sherine E.

    2015-01-01

    Background Patients with rheumatoid arthritis (RA) are at increased risk of cardiovascular morbidity and mortality. The heart-rate-corrected QT interval (QTc), which is obtained from a 12-lead electrocardiogram (ECG) and reflects ventricular repolarization duration, is a strong predictor of cardiovascular mortality. Our primary purpose was to determine the impact of QTc prolongation on mortality in RA patients. Methods A population-based inception cohort of patients with RA fulfilling 1987 ACR criteria in 1988–2007 was identified, with an age- and sex-matched comparison cohort, and followed until death, migration, or December 31, 2008. Data were collected on ECG variables, medications known to prolong the QT interval, electrolytes, cardiovascular risk factors and disease status, and RA disease characteristics. Cox proportional hazards models were used to examine QTc prolongation as a predictor of mortality. Results QTc prolongation prior to the RA incidence/index date was similar in RA (15%) and non-RA (18%) subjects. During follow-up, the cumulative incidence of QTc prolongation was higher among RA (48% at 20 years after RA incidence) than non-RA subjects (38% at 20 years after index date; p = 0.004). Idiopathic QTc prolongation (excluding prolongations explained by ECG changes, medications, etc.) was marginally associated with all-cause mortality (HR: 1.28; 95% CI: 0.91–1.81, p = 0.16), but was not associated with cardiovascular mortality (HR: 1.10; 95% CI: 0.43–2.86, p = 0.83) in RA. Conclusion RA patients have a significantly elevated risk of developing QTc prolongation. However, idiopathic prolonged QTc was only marginally associated with all-cause mortality in RA patients. The clinical implications of these findings in RA require further study. PMID:25572282
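
    Heart-rate correction of the QT interval, referenced above, is typically done with Bazett's or Fridericia's formula. The sketch below shows both; the example QT and heart-rate values are made up, and the formulas are standard conventions rather than values taken from this study.

```python
# Minimal sketch of heart-rate correction of the QT interval using Bazett's
# and Fridericia's formulas; the example measurements are made up.
def qtc_bazett(qt_ms, rr_s):
    return qt_ms / (rr_s ** 0.5)              # QTc = QT / sqrt(RR), RR in seconds

def qtc_fridericia(qt_ms, rr_s):
    return qt_ms / (rr_s ** (1.0 / 3.0))      # QTc = QT / cube root of RR

heart_rate = 75                                # beats per minute
rr = 60.0 / heart_rate                         # RR interval in seconds
qt = 400.0                                     # measured QT in milliseconds
print(f"Bazett QTc: {qtc_bazett(qt, rr):.0f} ms, Fridericia QTc: {qtc_fridericia(qt, rr):.0f} ms")
```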

  15. Dietary patterns associated with fall-related fracture in elderly Japanese: a population based prospective study

    PubMed Central

    2010-01-01

    Background Diet is considered an important factor for bone health, but it is composed of a wide variety of foods containing complex combinations of nutrients. We therefore investigated the relationship between dietary patterns and fall-related fractures in the elderly. Methods We designed a population-based prospective survey of 1178 elderly people in Japan in 2002. Dietary intake was assessed with a 75-item food frequency questionnaire (FFQ), from which dietary patterns were created by factor analysis from 27 food groups. The frequency of fall-related fracture was investigated based on insurance claim records from 2002 until 2006. The relationship between the incidence of fall-related fracture and modifiable factors, including dietary patterns, was examined. The Cox proportional hazards regression model was used to examine the relationships between dietary patterns and incidence of fall-related fracture with adjustment for age, gender, Body Mass Index (BMI) and energy intake. Results Among 877 participants who agreed to a 4-year follow-up, 28 suffered a fall-related fracture. Three dietary patterns were identified: mainly vegetable, mainly meat and mainly traditional Japanese. The moderately confirmed (see statistical methods) groups with a Meat pattern showed a reduced risk of fall-related fracture (Hazard ratio = 0.36, 95% CI = 0.13 - 0.94) after adjustment for age, gender, BMI and energy intake. The Vegetable pattern showed a significant risk increase (Hazard ratio = 2.67, 95% CI = 1.03 - 6.90) after adjustment for age, gender and BMI. The Traditional Japanese pattern had no relationship to the risk of fall-related fracture. Conclusions The results of this study may help reduce fall-related fracture risk in elderly Japanese. The results should be interpreted in light of the overall low meat intake of the Japanese population. PMID:20513246

  16. Predicting mortality with biomarkers: a population-based prospective cohort study for elderly Costa Ricans

    PubMed Central

    2012-01-01

    Background Little is known about adult health and mortality relationships outside high-income nations, partly because few datasets have contained biomarker data in representative populations. Our objective is to determine the prognostic value of biomarkers with respect to total and cardiovascular mortality in an elderly population of a middle-income country, as well as the extent to which they mediate the effects of age and sex on mortality. Methods This is a prospective population-based study in a nationally representative sample of elderly Costa Ricans. Baseline interviews occurred mostly in 2005 and mortality follow-up went through December 2010. Sample size after excluding observations with missing values: 2,313 individuals and 564 deaths. Main outcome: prospective death rate ratios for 22 baseline biomarkers, which were estimated with hazard regression models. Results Biomarkers significantly predict future death above and beyond demographic and self-reported health conditions. The studied biomarkers account for almost half of the effect of age on mortality. However, the sex gap in mortality became several times wider after controlling for biomarkers. The most powerful predictors were simple physical tests: handgrip strength, pulmonary peak flow, and walking speed. Three blood tests also predicted prospective mortality: C-reactive protein (CRP), glycated hemoglobin (HbA1c), and dehydroepiandrosterone sulfate (DHEAS). Strikingly, high blood pressure (BP) and high total cholesterol showed little or no predictive power. Anthropometric measures also failed to show significant mortality effects. Conclusions This study adds to the growing evidence that blood markers for CRP, HbA1c, and DHEAS, along with organ-specific functional reserve indicators (handgrip, walking speed, and pulmonary peak flow), are valuable tools for identifying vulnerable elderly. The results also highlight the need to better understand an anomaly noted previously in other settings: despite the

  17. Shift-work and cardiovascular disease: a population-based 22-year follow-up study.

    PubMed

    Hublin, Christer; Partinen, Markku; Koskenvuo, Karoliina; Silventoinen, Karri; Koskenvuo, Markku; Kaprio, Jaakko

    2010-05-01

    Studies on the association between shift-work and cardiovascular disease (CVD), in particular coronary heart disease (CHD), have given conflicting results. In this prospective population-based study we assessed the association of shift-work with three endpoints: CHD mortality, disability retirement due to CVD, and incident hypertension. A cohort of 20,142 adults (the Finnish Twin Cohort) was followed from 1982 to 2003. Type of working time (daytime/nighttime/shift-work) was assessed by questionnaires in 1975 (response rate 89%) and in 1981 (84%). Causes of death, information on disability retirement and hypertension medication were obtained from nationwide official registers. Cox proportional hazard models were used to obtain hazard ratios (HR) for each endpoint by type of working time. Adjustments were made for 14 socio-demographic and lifestyle covariates. 76.9% were daytime workers and 9.5% shift-workers both in 1975 and in 1981. During the follow-up, 857 deaths due to CHD, 721 disability retirements due to CVD, and 2,642 new cases of medicated hypertension were observed. However, HRs for shift-work were not significant (mortality HR men 1.09 and women 1.22; retirement 1.15 and 0.96; hypertension 1.15 and 0.98, respectively). The results were essentially similar after full adjustments for all covariates. Within twin pairs, no association between shift work and outcome was observed. Our results do not support an association between shift-work and cardiovascular morbidity. PMID:20229313

  18. Epilepsy in Onchocerciasis Endemic Areas: Systematic Review and Meta-analysis of Population-Based Surveys

    PubMed Central

    Pion, Sébastien D. S.; Kaiser, Christoph; Boutros-Toni, Fernand; Cournil, Amandine; Taylor, Melanie M.; Meredith, Stefanie E. O.; Stufe, Ansgar; Bertocchi, Ione; Kipp, Walter; Preux, Pierre-Marie; Boussinesq, Michel

    2009-01-01

    Objective We sought to evaluate the relationship between onchocerciasis prevalence and that of epilepsy using available data collected at the community level. Design We conducted a systematic review and meta-regression of available data. Data Sources Electronic and paper records on the subject produced up to February 2008. Review Methods We searched for population-based studies reporting on the prevalence of epilepsy in communities for which onchocerciasis prevalence was available or could be estimated. Two authors independently assessed eligibility and study quality and extracted data. The estimation of point prevalence of onchocerciasis was standardized across studies using appropriate correction factors. Variation in epilepsy prevalence was then analyzed as a function of onchocerciasis endemicity using random-effect logistic models. Results Eight studies from west (Benin and Nigeria), central (Cameroon and Central African Republic) and east Africa (Uganda, Tanzania and Burundi) met the criteria for inclusion and analysis. Ninety-one communities with a total population of 79,270 individuals screened for epilepsy were included in the analysis. The prevalence of epilepsy ranged from 0 to 8.7%, whereas that of onchocerciasis ranged from 5.2 to 100%. Variation in epilepsy prevalence was consistent with a logistic function of onchocerciasis prevalence, with epilepsy prevalence being increased, on average, by 0.4% for each 10% increase in onchocerciasis prevalence. Conclusion These results give further evidence that onchocerciasis is associated with epilepsy and that the disease burden of onchocerciasis might have to be re-estimated by taking into account this relationship. PMID:19529767

  19. HIV testing in national population-based surveys: experience from the Demographic and Health Surveys.

    PubMed Central

    Mishra, Vinod; Vaessen, Martin; Boerma, J. Ties; Arnold, Fred; Way, Ann; Barrere, Bernard; Cross, Anne; Hong, Rathavuth; Sangha, Jasbir

    2006-01-01

    OBJECTIVES: To describe the methods used in the Demographic and Health Surveys (DHS) to collect nationally representative data on the prevalence of human immunodeficiency virus (HIV) and assess the value of such data to country HIV surveillance systems. METHODS: During 2001-04, national samples of adult women and men in Burkina Faso, Cameroon, Dominican Republic, Ghana, Mali, Kenya, United Republic of Tanzania and Zambia were tested for HIV. Dried blood spot samples were collected for HIV testing, following internationally accepted ethical standards. The results for each country are presented by age, sex, and urban versus rural residence. To estimate the effects of non-response, HIV prevalence among non-responding males and females was predicted using multivariate statistical models for those who were tested, with a common set of predictor variables. RESULTS: Rates of HIV testing varied from 70% among Kenyan men to 92% among women in Burkina Faso and Cameroon. Despite large differences in HIV prevalence between the surveys (1-16%), fairly consistent patterns of HIV infection were observed by age, sex and urban versus rural residence, with considerably higher rates in urban areas and in women, especially at younger ages. Analysis of non-response bias indicates that although predicted HIV prevalence tended to be higher in non-tested males and females than in those tested, the overall effects of non-response on the observed national estimates of HIV prevalence are insignificant. CONCLUSIONS: Population-based surveys can provide reliable, direct estimates of national and regional HIV seroprevalence among men and women irrespective of pregnancy status. Survey data greatly enhance surveillance systems and the accuracy of national estimates in generalized epidemics. PMID:16878227
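
    The non-response analysis described above amounts to fitting a model for HIV status among tested respondents and applying it to non-tested respondents. A minimal sketch of that idea, with hypothetical file and variable names, could look as follows.

        # Fit HIV status on shared covariates among tested respondents, predict
        # prevalence for non-tested respondents, then combine the two groups.
        import pandas as pd
        import statsmodels.formula.api as smf

        survey = pd.read_csv("dhs_survey.csv")
        tested = survey[survey["tested"] == 1]
        not_tested = survey[survey["tested"] == 0]

        model = smf.logit("hiv_positive ~ age_group + urban + education + wealth",
                          data=tested).fit(disp=False)

        observed = tested["hiv_positive"].mean()
        predicted = model.predict(not_tested).mean()

        # Combine the observed and predicted prevalences, weighted by group size
        n_t, n_nt = len(tested), len(not_tested)
        adjusted = (observed * n_t + predicted * n_nt) / (n_t + n_nt)
        print(f"observed {observed:.3f}, non-response-adjusted {adjusted:.3f}")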

  20. Genocide Exposure and Subsequent Suicide Risk: A Population-Based Study

    PubMed Central

    Levine, Stephen Z.; Levav, Itzhak; Yoffe, Rinat; Becher, Yifat; Pugachova, Inna

    2016-01-01

    The association between periods of genocide-related exposure and suicide risk remains unknown. Our study tests that association using a national population-based study design. The source population comprised all persons born during 1922-1945 in Nazi-occupied or dominated European nations who immigrated to Israel by 1965; they were identified in the Population Register (N = 220,665) and followed up for suicide to 2014, totaling 16,953,602 person-years. The population was disaggregated to compare a trauma gradient among groups that immigrated before (indirect exposure, n = 20,612, 9%), during (partial direct exposure, n = 17,037, 8%), or after (full direct exposure, n = 183,016, 83%) the Nazi era. The direct exposure groups were also examined with regard to pre- or postnatal exposure periods. Cox regression models were used to compute hazard ratios (HR) of suicide risk comparing the exposure groups, adjusting for confounding by gender, residential SES and history of psychiatric hospitalization. In the total population, only the partial direct exposure subgroup was at greater risk than the indirect exposure group (HR = 1.73, 95% CI 1.10-2.73; P < .05). That effect replicated in six sensitivity analyses. Sensitivity analyses also showed that exposure at ages 13 and above among females, and follow-up by years since immigration, were associated with greater risk, whereas in utero exposure among persons with no psychiatric hospitalization and early postnatal exposure among males were associated with reduced risk. Tentative mechanisms invoke biopsychosocial vulnerability and natural selection during early critical periods among males, and feelings of guilt and entrapment or defeat among females. PMID:26901411

  1. Occupational Chronic Obstructive Pulmonary Disease in a Danish Population-Based Study.

    PubMed

    Würtz, Else Toft; Schlünssen, Vivi; Malling, Tine Halsen; Hansen, Jens Georg; Omland, Øyvind

    2015-08-01

    The aim was to explore the impact of occupation on chronic obstructive pulmonary disease (COPD) in a cross-sectional population-based study among subjects aged 45 to 84 years. In a stratified sampling, 89 general practitioner practices (GPP) in Denmark recruited 3106 males and 1636 females through the Danish Civil Registration System. COPD was defined by spirometry using the 2.5th-centile Lower Limit of Normal of FEV1 and FEV1/FVC. Information about smoking, occupational exposure and the respective occupations was obtained from questionnaires. Occupations followed the Danish adaptation of The International Standard Classification of Occupations, revision 1988 (DISCO-88). Exposure to vapour, gas, dust (organic and inorganic), and fume (VGDF) in each occupation (yes/no) was evaluated by two independent specialists in occupational medicine. Exposures were divided into no, low, medium, and high exposure, corresponding to 0, < 5, 5-14, and ≥ 15 years in the job, respectively. Data were analysed with a mixed random effect logistic regression model. The age-standardised COPD study prevalence was 5.0%. Of 372 DISCO-88 codes, 72 were identified with relevant exposure to VGDF. 46% of the participants reported at least one occupation with VGDF exposure. Adjusted for smoking, age, sex, and GPP, a dose-dependent association with COPD was found among workers in jobs with high organic dust exposure, with OR 1.56 (95% CI 1.09-2.24). Restricted to agriculture, the OR was 1.59 (95% CI: 1.08-2.33). No association was observed for workers in jobs with inorganic dust, fume/gas, or vapour exposures. In summary, occupational organic dust exposure was associated with the prevalence of COPD.
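
    The four exposure levels used above are defined purely by years spent in an exposed job. A trivial sketch of that grouping rule (the function name is illustrative; the cut-points are taken from the abstract):

        def exposure_category(years_in_exposed_job: float) -> str:
            """Map years in a VGDF-exposed job to the no/low/medium/high groups."""
            if years_in_exposed_job <= 0:
                return "no"
            if years_in_exposed_job < 5:
                return "low"
            if years_in_exposed_job < 15:
                return "medium"
            return "high"

        assert exposure_category(0) == "no"
        assert exposure_category(3) == "low"
        assert exposure_category(10) == "medium"
        assert exposure_category(20) == "high"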

  2. Healthcare Costs Attributable to Hypertension: Canadian Population-Based Cohort Study.

    PubMed

    Weaver, Colin G; Clement, Fiona M; Campbell, Norm R C; James, Matthew T; Klarenbach, Scott W; Hemmelgarn, Brenda R; Tonelli, Marcello; McBrien, Kerry A

    2015-09-01

    Accurately documenting the current and future costs of hypertension is required to fully understand the potential economic impact of currently available and future interventions to prevent and treat hypertension. The objective of this work was to calculate the healthcare costs attributable to hypertension in Canada and to project these costs to 2020. Using population-based administrative data for the province of Alberta, Canada (>3 million residents) from 2002 to 2010, we identified individuals with and without diagnosed hypertension. We calculated their total healthcare costs and estimated costs attributable to hypertension using a regression model adjusting for comorbidities and sociodemographic factors. We then extrapolated hypertension-attributable costs to the rest of Canada and projected costs to the year 2020. Twenty-one percent of adults in Alberta had diagnosed hypertension in 2010, with a projected increase to 27% by 2020. The average individual with hypertension had annual healthcare costs of $5768, of which $2341 (41%) were attributed to hypertension. In Alberta, the healthcare costs attributable to hypertension were $1.4 billion in 2010. In Canada, the hypertension-attributable costs were estimated to be $13.9 billion in 2010, rising to $20.5 billion by 2020. The increase was ascribed to demographic changes (52%), increasing prevalence (16%), and increasing per-patient costs (32%). Hypertension accounts for a significant proportion of healthcare spending (10.2% of the Canadian healthcare budget) and is projected to rise even further. Interventions to prevent and treat hypertension may play a role in limiting this cost growth.
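
    The per-patient attributable cost reported above comes from a regression of total healthcare costs on a hypertension indicator plus comorbidity and sociodemographic adjusters. A deliberately simplified sketch of that calculation (ordinary least squares standing in for the study's model; file and column names are hypothetical):

        # Regression-based attributable cost: the coefficient on the hypertension
        # indicator approximates the extra annual cost per hypertensive patient.
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("alberta_costs.csv")
        fit = smf.ols("annual_cost ~ hypertension + age + sex + n_comorbidities",
                      data=df).fit()

        per_patient = fit.params["hypertension"]
        total = per_patient * df["hypertension"].sum()
        print(f"attributable cost per patient: ${per_patient:,.0f}")
        print(f"total attributable cost:       ${total:,.0f}")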

  3. Socioeconomic Status and Incidence of Traffic Accidents in Metropolitan Tehran: A Population-based Study

    PubMed Central

    Sehat, Mojtaba; Naieni, Kourosh Holakouie; Asadi-Lari, Mohsen; Foroushani, Abbas Rahimi; Malek-Afzali, Hossein

    2012-01-01

    Background: Population-based estimates of traffic accidents (TAs) are not readily available for developing countries. This study examined the contribution of socioeconomic status (SES) to the risk of TA among Iranian adults. Methods: A total of 64,200 people aged ≥18 years were identified from the 2008 Urban Health Equity Assessment and Response Tool (Urban HEART) survey. 22,128 households were interviewed to estimate the overall annual incidence, severity and socioeconomic determinants of TAs for males and females in the Iranian capital over the preceding year. A wealth index and a house value index were constructed as economic measures. Weighted estimates were computed adjusting for the complex survey design. Logistic regression models were used to examine individual and SES measures as potential determinants of TAs in adults. Results: The overall incidence of traffic accidents was 17.3 (95% CI 16.0, 18.7) per 1000 per year. The TA rate in men and women was 22.6 (95% CI 20.6, 24.8) and 11.8 (95% CI 10.4, 13.2), respectively. The overall TA mortality rate was 26.6 (95% CI 13.4, 39.8) per 100,000 person-years, and was almost three times higher in men than in women (40.4 vs. 12.1 per 100,000 person-years). Lower economic level was associated with increased incidence and mortality of TA. Associations between SES and the incidence, severity and mortality of TA were identified. Conclusion: TAs occur more often in the lower socioeconomic layers of society. This should be taken seriously into consideration by policy makers, so that preventive programs aimed at behavioral modification are promoted to decrease the health and economic burden imposed by TAs. PMID:22448311
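
    The incidence estimates above are survey-weighted. Ignoring the clustering and design-effect adjustments the study applied, a bare-bones weighted estimate with a confidence interval can be sketched as follows (had_accident is a 0/1 indicator and weight the survey weight; both names are hypothetical).

        # Survey-weighted annual incidence per 1000, with a t-based 95% CI
        # (no design-effect correction, so the CI is narrower than it should be).
        import pandas as pd
        from statsmodels.stats.weightstats import DescrStatsW

        df = pd.read_csv("urban_heart.csv")
        stats = DescrStatsW(df["had_accident"], weights=df["weight"])

        rate = 1000 * stats.mean
        lo, hi = (1000 * x for x in stats.tconfint_mean())
        print(f"incidence: {rate:.1f} per 1000 per year (95% CI {lo:.1f}, {hi:.1f})")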

  4. Early Cognitive Deficits in Type 2 Diabetes: A Population-Based Study.

    PubMed

    Marseglia, Anna; Fratiglioni, Laura; Laukka, Erika J; Santoni, Giola; Pedersen, Nancy L; Bäckman, Lars; Xu, Weili

    2016-06-15

    Evidence links type 2 diabetes to dementia risk. However, our knowledge on the initial cognitive deficits in diabetic individuals and the factors that might promote such deficits is still limited. This study aimed to identify the cognitive domains initially impaired by diabetes and the factors that play a role in this first stage. Within the population-based Swedish National Study on Aging and Care-Kungsholmen, 2305 cognitively intact participants aged ≥60 y were identified. Attention/working memory, perceptual speed, category fluency, letter fluency, semantic memory, and episodic memory were assessed. Diabetes (controlled and uncontrolled) and prediabetes were ascertained by clinicians, who also collected information on vascular disorders (hypertension, heart diseases, and stroke) and vascular risk factors (VRFs, including smoking and overweight/obesity). Data were analyzed with linear regression models. Overall, 196 participants (8.5%) had diabetes, of whom 144 (73.5%) had elevated glycaemia (uncontrolled diabetes); 571 (24.8%) persons had prediabetes. Diabetes, mainly uncontrolled, was related to lower performance in perceptual speed (β = −1.10 [95% CI −1.98, −0.23]), category fluency (β = −1.27 [95% CI −2.52, −0.03]), and digit span forward (β = −0.35 [95% CI −0.54, −0.17]). Critically, these associations were present only among APOE ɛ4 non-carriers. The associations of diabetes with perceptual speed and category fluency were present only among participants with VRFs or vascular disorders. Diabetes, especially uncontrolled diabetes, is associated with poorer performance in perceptual speed, category fluency, and attention/primary memory. VRFs, vascular disorders, and APOE status play a role in these associations. PMID:27314527
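
    The APOE-stratified results above correspond to fitting the same adjusted linear model separately in ɛ4 carriers and non-carriers and comparing the diabetes coefficient. A small sketch of that stratified analysis, with hypothetical file and variable names:

        # Same adjusted linear model in each APOE e4 stratum; report the diabetes
        # coefficient (beta) and its 95% CI per stratum.
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("snac_k.csv")
        for carrier, sub in df.groupby("apoe_e4_carrier"):
            fit = smf.ols("perceptual_speed ~ uncontrolled_diabetes + age + sex + education",
                          data=sub).fit()
            beta = fit.params["uncontrolled_diabetes"]
            lo, hi = fit.conf_int().loc["uncontrolled_diabetes"]
            print(f"e4 carrier={carrier}: beta={beta:.2f} (95% CI {lo:.2f}, {hi:.2f})")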

  5. Environmental risk factors in paediatric inflammatory bowel diseases: a population based case control study

    PubMed Central

    Baron, S; Turck, D; Leplat, C; Merle, V; Gower-Rousseau, C; Marti, R; Yzet, T; Lerebours, E; Dupas, J-L; Debeugny, S; Salomez, J-L; Cortot, A; Colombel, J-F

    2005-01-01

    Background: Environmental exposures in early life have been implicated in the aetiology of inflammatory bowel disease. Objective: To examine environmental risk factors prior to the development of inflammatory bowel disease in a paediatric population based case control study. Methods: A total of 222 incident cases of Crohn’s disease and 60 incident cases of ulcerative colitis occurring before 17 years of age between January 1988 and December 1997 were matched with one control subject by sex, age, and geographical location. We recorded 140 study variables in a questionnaire that covered familial history of inflammatory bowel disease, events during the perinatal period, infant and child diet, vaccinations and childhood diseases, household amenities, and the family’s socioeconomic status. Results: In a multivariate model, familial history of inflammatory bowel disease (odds ratio (OR) 4.3 (95% confidence interval 2.3–8)), breast feeding (OR 2.1 (1.3–3.4)), bacille Calmette-Guerin vaccination (OR 3.6 (1.1–11.9)), and history of eczema (OR 2.1 (1–4.5)) were significant risk factors for Crohn’s disease whereas regular drinking of tap water was a protective factor (OR 0.56 (0.3–1)). Familial history of inflammatory bowel disease (OR 12.5 (2.2–71.4)), disease during pregnancy (OR 8.9 (1.5–52)), and bedroom sharing (OR 7.1 (1.9–27.4)) were risk factors for ulcerative colitis whereas appendicectomy was a protective factor (OR 0.06 (0.01–0.36)). Conclusions: While family history and appendicectomy are known risk factors, changes in risk based on domestic promiscuity, certain vaccinations, and dietary factors may provide new aetiological clues. PMID:15710983

  6. Severity of malocclusion in adolescents: populational-based study in the north of Minas Gerais, Brazil

    PubMed Central

    Silveira, Marise Fagundes; Freire, Rafael Silveira; Nepomuceno, Marcela Oliveira; Martins, Andrea Maria Eleutério de Barros Lima; Marcopito, Luiz Francisco

    2016-01-01

    OBJECTIVE To identify the factors associated with severity of malocclusion in a population of adolescents. METHODS In this cross-sectional population-based study, the sample size (n = 761) was calculated considering a prevalence of malocclusion of 50.0%, with a 95% confidence level and a 5.0% precision level. The study adopted a correction for the design effect (deff = 2), and a 20.0% increase to offset losses and refusals. Multistage probability cluster sampling was adopted. Trained and calibrated professionals performed the intraoral examinations and interviews in households. The dependent variable (severity of malocclusion) was assessed using the Dental Aesthetic Index (DAI). The independent variables were grouped into five blocks: demographic characteristics, socioeconomic condition, use of dental services, health-related behavior and oral health subjective conditions. The ordinal logistic regression model was used to identify the factors associated with severity of malocclusion. RESULTS We interviewed and examined 736 adolescents (91.5% response rate), 69.9% of whom showed no abnormalities or slight malocclusion. Defined malocclusion was observed in 17.8% of the adolescents, being severe or very severe in 12.6%, with pressing or essential need of orthodontic treatment. The probabilities of greater severity of malocclusion were higher among adolescents who self-reported as black, indigenous, pardo or yellow, with lower per capita income, having harmful oral habits, negative perception of their appearance and perception of social relationship affected by oral health. CONCLUSIONS Severe or very severe malocclusion was more prevalent among socially disadvantaged adolescents, with reported harmful habits and perception of compromised esthetics and social relationships. Given that malocclusion can interfere with the self-esteem of adolescents, it is essential to improve public policy for the inclusion of orthodontic treatment among health care
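
    Severity of malocclusion is an ordered outcome (none/slight, defined, severe, very severe), which is why an ordinal logistic regression was used. An illustrative fit of that kind of model with statsmodels' OrderedModel (available in recent statsmodels releases); the data file and predictor names are hypothetical placeholders.

        # Ordinal (proportional-odds) logistic regression of DAI severity grade.
        import pandas as pd
        from statsmodels.miscmodels.ordinal_model import OrderedModel

        df = pd.read_csv("adolescents.csv")
        # dai_grade: 0 = none/slight, 1 = defined, 2 = severe, 3 = very severe
        model = OrderedModel(df["dai_grade"],
                             df[["low_income", "harmful_oral_habits", "negative_self_image"]],
                             distr="logit")
        res = model.fit(method="bfgs", disp=False)
        print(res.summary())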

  7. Association between gastroesophageal reflux disease and coronary heart disease: A nationwide population-based analysis.

    PubMed

    Chen, Chien-Hua; Lin, Cheng-Li; Kao, Chia-Hung

    2016-07-01

    In this study, we aimed to determine the association between gastroesophageal reflux disease (GERD) and subsequent coronary heart disease (CHD) development, if any, and to evaluate whether longer use of proton pump inhibitors (PPIs) increases the risk of CHD. Patients diagnosed with GERD between 2000 and 2011 were identified as the study cohort (n = 12,960). Patients without GERD were randomly selected from the general population, frequency-matched with the study group according to age, sex, and index year, and evaluated as the comparison cohort (n = 51,840). Both cohorts were followed up until the end of 2011 to determine the incidence of CHD. The risk of CHD was evaluated in both groups by using Cox proportional hazards regression models. The GERD patients had a greater probability of CHD than the cohort without GERD did (log-rank test, P < 0.001 and 11.8 vs 6.5 per 1000 person-years). The GERD cohort had a higher risk of CHD than the comparison cohort did after adjustment for age, sex, hypertension, diabetes, hyperlipidemia, alcohol-related illness, stroke, chronic obstructive pulmonary disease, asthma, biliary stone, anxiety, depression, chronic kidney disease, and cirrhosis (adjusted hazard ratio [aHR]: 1.49, 95% confidence interval [CI]: 1.34-1.66). The risk of CHD was greater for the patients treated with PPIs for more than 1 year (aHR = 1.67, 95% CI = 1.34-2.08) than for those treated with PPIs for <1 year (aHR = 1.56, 95% CI = 1.39-1.74). Our population-based cohort study results indicate that GERD was associated with an increased risk of developing CHD, and that PPI use for more than 1 year might increase the risk of CHD. PMID:27399102
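
    The comparison cohort above is frequency-matched: within every age, sex and index-year stratum, roughly four non-GERD subjects are drawn per GERD patient (as implied by the cohort sizes). A minimal sketch of that sampling step with pandas (file and column names are hypothetical):

        # Draw a 1:4 frequency-matched comparison cohort by age group, sex and index year.
        import pandas as pd

        gerd = pd.read_csv("gerd_cohort.csv")
        pool = pd.read_csv("general_population.csv")

        keys = ["age_group", "sex", "index_year"]
        pool_groups = pool.groupby(keys)

        sampled = []
        for stratum, grp in gerd.groupby(keys):
            candidates = pool_groups.get_group(stratum)
            n = min(4 * len(grp), len(candidates))
            sampled.append(candidates.sample(n=n, random_state=0))
        comparison = pd.concat(sampled, ignore_index=True)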

  8. Gout increases risk of fracture: A nationwide population-based cohort study.

    PubMed

    Tzeng, Huey-En; Lin, Che-Chen; Wang, I-Kuan; Huang, Po-Hao; Tsai, Chun-Hao

    2016-08-01

    There is still debate on whether high uric acid increases bone mineral density (BMD), protecting against osteoporotic fracture, or whether bone resorption caused by gout inflammation predominates. This study aimed to evaluate whether gout offers a protective effect on bone health or not. We conducted a nationwide population-based retrospective cohort study to evaluate the association between gout history and the risk of fracture. The retrospective cohort was assembled using claims data from the Longitudinal Health Insurance Database (LHID). A total of 43,647 subjects with gout and a cohort of 87,294 comparison subjects without gout were matched in terms of age and sex between 2001 and 2009, and the data were followed until December 31, 2011. The primary outcome of the study was fracture incidence, and the impact of gout on fracture risk was analyzed using the Cox proportional hazards model. After an 11-year follow-up period, 6992 and 11,412 incident fractures were reported in the gout and comparison cohorts, respectively. The overall incidence rate of fracture in individuals with gout was nearly 23% higher than that in individuals without gout (252 vs 205 per 10,000 person-years), with an adjusted hazard ratio of 1.17 (95% confidence interval = 1.14-1.21). Age, sex, and fracture-associated comorbidities were adjusted for accordingly. As for fracture locations, patients with gout were found to be at significantly higher risk of upper/lower limb and spine fractures. In gout patients, users of allopurinol or benzbromarone had a significantly lower risk of fracture than nonusers. Gout history is a risk factor for fractures, particularly in female individuals and for fracture sites located at the spine or upper/lower limbs. PMID:27559970

  9. Increased Risk of Osteoporosis in Patients With Peptic Ulcer Disease: A Nationwide Population-Based Study.

    PubMed

    Wu, Chieh-Hsin; Tung, Yi-Ching; Chai, Chee-Yin; Lu, Ying-Yi; Su, Yu-Feng; Tsai, Tai-Hsin; Kuo, Keng-Liang; Lin, Chih-Lung

    2016-04-01

    To investigate osteoporosis risk in patients with peptic ulcer disease (PUD) using a nationwide population-based dataset. This Taiwan National Health Insurance Research Database (NHIRD) analysis included 27,132 patients aged 18 years and older who had been diagnosed with PUD (International Classification of Diseases, Ninth Revision, Clinical Modification [ICD-9-CM] codes 531-534) during 1996 to 2010. The control group consisted of 27,132 randomly selected, age- and gender-matched patients without PUD. The association between PUD and the risk of developing osteoporosis was estimated using a Cox proportional hazard regression model. During the follow-up period, osteoporosis was diagnosed in 2538 (9.35%) patients in the PUD group and in 2259 (8.33%) participants in the non-PUD group. After adjusting for covariates, osteoporosis risk was 1.85 times greater in the PUD group than in the non-PUD group (13.99 vs 5.80 per 1000 person-years). The difference between the two groups was most pronounced within the first year of follow-up after PUD diagnosis (hazard ratio [HR] = 63.44, 95% confidence interval [CI] = 28.19-142.74, P < 0.001). Osteoporosis risk was significantly higher in PUD patients with proton-pump inhibitor (PPI) use (HR = 1.17, 95% CI = 1.03-1.34) than in PUD patients without PPI use. This study revealed a significant association between PUD and subsequent risk of osteoporosis. Therefore, PUD patients, especially those treated with PPIs, should be evaluated for subsequent risk of osteoporosis to minimize the occurrence of adverse events.

  10. The Risk of Chronic Pancreatitis in Patients with Psoriasis: A Population-Based Cohort Study

    PubMed Central

    Chiang, Yi-Ting; Huang, Weng-Foung; Tsai, Tsen-Fang

    2016-01-01

    Background Psoriasis is a chronic systemic inflammatory disorder, and studies have revealed its association with a variety of comorbidities. However, the risk of chronic pancreatitis (CP) in psoriasis has not been studied. This study aimed to investigate the risk of CP among patients with psoriasis. Methods Using the Taiwan National Health Insurance Research Database, this population-based cohort study enrolled 48430 patients with psoriasis and 193720 subjects without psoriasis. Stratified Cox proportional hazards models were used to compare the risks of CP between the patients with and without psoriasis. Results The incidence of CP was 0.61 per 1000 person-years in patients with psoriasis and 0.34 per 1000 person-years in controls during a mean 6.6-year follow-up period. Before adjustment, patients with psoriasis had a significantly higher risk of CP (crude hazard ratio (HR) = 1.81; 95% confidence interval (CI) = 1.53–2.15), and the risk remained significantly higher after adjustments for gender, age group, medications, and comorbidities (adjusted HR (aHR) = 1.76; 95% CI = 1.47–2.10). All psoriasis patient subgroups other than those with arthritis, including those with mild and severe psoriasis and those without arthritis, had significantly increased aHRs for CP, and the risk increased with increasing psoriasis severity. Psoriasis patients taking nonsteroidal anti-inflammatory drugs (aHR = 0.33; 95% CI = 0.22–0.49) and methotrexate (aHR = 0.28; 95% CI = 0.12–0.64) had a lower risk of developing CP after adjustments. Conclusions Psoriasis is associated with a significantly increased risk of CP. The results of our study call for more research to provide additional insight into the relationship between psoriasis and CP. PMID:27467265

  11. Spontaneous Abortion, Stillbirth and Hyperthyroidism: A Danish Population-Based Study

    PubMed Central

    Andersen, Stine Linding; Olsen, Jørn; Wu, Chun Sen; Laurberg, Peter

    2014-01-01

    Objectives Pregnancy loss in women suffering from hyperthyroidism has been described in case reports, but the risk of pregnancy loss caused by maternal hyperthyroidism in a population is unknown. We aimed to evaluate the association between maternal hyperthyroidism and pregnancy loss in a population-based cohort study. Study Design All pregnancies in Denmark from 1997 to 2008 leading to hospital visits (n = 1,062,862) were identified in nationwide registers together with information on maternal hyperthyroidism for up to 2 years after the pregnancy [hospital diagnosis/prescription of antithyroid drug (ATD)]. The Cox proportional hazards model was used to estimate adjusted hazard ratios (aHR) with 95% confidence intervals (CI) for spontaneous abortion (gestational age <22 weeks) and stillbirth (≥22 weeks), reference: no maternal thyroid dysfunction. Results When maternal hyperthyroidism was diagnosed before/during the pregnancy (n = 5,229), spontaneous abortion occurred more often both in women treated before the pregnancy alone [aHR 1.28 (95% CI 1.18-1.40)] and in women treated with ATD in early pregnancy [1.18 (1.07-1.31)]. When maternal hyperthyroidism was diagnosed and treated for the first time in the 2-year period after the pregnancy (n = 2,361), the pregnancy under study was at increased risk of having ended in a stillbirth [2.12 (1.30-3.47)]. Conclusions Both early (spontaneous abortion) and late (stillbirth) pregnancy loss were more common in women suffering from hyperthyroidism. Inadequately treated hyperthyroidism in early pregnancy may have been involved in spontaneous abortion, and undetected high maternal thyroid hormone levels present in late pregnancy may have contributed to an increased risk of stillbirth. PMID:25538898

  12. Ethnic Differences in Gestational Weight Gain: A Population-Based Cohort Study in Norway.

    PubMed

    Kinnunen, Tarja I; Waage, Christin W; Sommer, Christine; Sletner, Line; Raitanen, Jani; Jenum, Anne Karen

    2016-07-01

    Objectives To explore ethnic differences in gestational weight gain (GWG). Methods This was a population-based cohort study conducted in primary care child health clinics in Groruddalen, Oslo, Norway. Participants were healthy pregnant women (n = 632) categorised to six ethnic groups (43 % were Western European women, the reference group). Body weight was measured at 15 and 28 weeks' gestation on average. Data on pre-pregnancy weight and total GWG until delivery were self-reported. The main method of analysis was linear regression adjusting for age, weeks' gestation, pre-pregnancy body mass index, education and severe nausea. Results No ethnic differences were observed in GWG by 15 weeks' gestation. By 28 weeks' gestation, Eastern European women had gained 2.71 kg (95 % confidence interval, CI 1.10-4.33) and Middle Eastern women 1.32 kg (95 % CI 0.14-2.50) more weight on average than the Western European women in the fully adjusted model. Among Eastern European women, the total adjusted GWG was 3.47 kg (95 % CI 1.33-5.61) above the reference group. Other ethnic groups (South Asian, East Asian and African) did not differ from the reference group. When including non-smokers (n = 522) only, observed between-group differences increased and Middle Eastern women gained more weight than the reference group by all time points. Conclusions Eastern European and Middle Eastern women had higher GWG on average than Western European women, especially among the non-smokers. Although prevention of excessive GWG is important for all pregnant women, these ethnic groups might need special attention during pregnancy.

  13. The Prevalence of Peyronie's Disease in the United States: A Population-Based Study

    PubMed Central

    Stuntz, Mark; Perlaky, Anna; des Vignes, Franka; Kyriakides, Tassos; Glass, Dan

    2016-01-01

    Peyronie’s disease (PD) is a connective tissue disorder which can result in penile deformity. The prevalence of diagnosed PD in the United States (US) has been estimated to be 0.5% in adult males, but there is limited additional information comparing definitive and probable PD cases. We conducted a population-based survey to assess PD prevalence using a convenience sample of adult men participating in the ResearchNow general population panel. Respondents were categorized according to PD status (definitive, probable, no PD) and segmented by US geographic region, education, and income levels. Of the 7,711 respondents, 57 (0.7%) had definitive PD while 850 (11.0%) had probable PD. Using univariate logistic regression modeling, older age (18–24 vs 24+) (OR = 0.721; 95% CI = 0.570, 0.913), Midwest/Northeast/West geographic region (South vs Midwest/Northeast/West) (OR = 0.747; 95% CI = 0.646, 0.864), and higher income level (<25K vs 25K+) (OR = 0.820; 95% CI = 0.673, 0.997) were each significantly associated with reduced odds of having a definitive/probable PD diagnosis compared with no PD diagnosis. When all three variables were entered in a stepwise multivariable logistic regression, only age (OR = 0.642; 95% CI = 0.497, 0.828) and region (OR = 0.752; 95% CI = 0.647, 0.872) remained significant. This study is the first to report PD prevalence by geographic region and income, and it suggests that the prevalence of PD in the US may be higher than previously cited. Further, given the large discrepancy between definitive PD cases diagnosed by a physician and probable cases not diagnosed by a physician, much more needs to be done to raise awareness of this disease. PMID:26907743

  14. The Association Between Peptic Ulcer Disease and Ischemic Stroke: A Population-Based Longitudinal Study.

    PubMed

    Cheng, Tain-Junn; Guo, How-Ran; Chang, Chia-Yu; Weng, Shih-Feng; Li, Pi-I; Wang, Jhi-Joung; Wu, Wen-Shiann

    2016-05-01

    Stroke is a common cause of death worldwide, but about 30% of ischemic stroke (IS) patients have no identifiable contributing risk factors. Because peptic ulcer disease (PUD) and vascular events share some common risk factors, we conducted a population-based study to evaluate the association between PUD and IS. We followed up a representative sample of 1 million residents of Taiwan using the National Health Insurance Research Database from 1997 to 2011. We defined patients who received medications for PUD and had related diagnosis codes as the PUD group, and a reference group matched by age and sex was sampled from those who did not have PUD. We also collected data on medical history and monthly income. IS events that occurred after enrollment were compared between the 2 groups. The data were analyzed using Cox proportional hazard models at the 2-tailed significance level of 0.05. The PUD group had a higher income and a higher prevalence of hypertension, diabetes mellitus (DM), heart disease, and hyperlipidemia. They also had a higher risk of developing IS, with an adjusted hazard ratio of 1.31 (95% confidence interval: 1.20-1.41). Other independent risk factors included male sex, older age, lower income, and co-morbidity of hypertension, diabetes mellitus (DM), and heart disease. PUD is a risk factor for IS, independent of conventional risk factors such as male sex, older age, lower income, and co-morbidity of hypertension, DM, and heart disease. Prevention strategies taking into account PUD should be developed and evaluated. PMID:27258514
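
    The adjusted hazard ratio and confidence interval quoted above are obtained by exponentiating the Cox model's log hazard ratio and its confidence limits. A one-line reminder of that arithmetic, using made-up numbers chosen only to land near the reported estimate:

        import math

        beta, se = 0.27, 0.041                     # hypothetical log-HR and its SE
        hr = math.exp(beta)
        lo, hi = math.exp(beta - 1.96 * se), math.exp(beta + 1.96 * se)
        print(f"HR {hr:.2f} (95% CI {lo:.2f}-{hi:.2f})")   # about 1.31 (1.21-1.42)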

  15. Antithyroid drug-related hepatotoxicity in hyperthyroidism patients: a population-based cohort study

    PubMed Central

    Wang, Meng-Ting; Lee, Wan-Ju; Huang, Tien-Yu; Chu, Che-Li; Hsieh, Chang-Hsun

    2014-01-01

    Aims The evidence of hepatotoxicity of antithyroid drugs (ATDs) is limited to case reports or spontaneous reporting. This study aimed to quantify the incidence and comparative risks of hepatotoxicity for methimazole (MMI)/carbimazole (CBM) vs. propylthiouracil (PTU) in a population-based manner. Methods We conducted a cohort study of hyperthyroidism patients initially receiving MMI/CBM or PTU between 1 January 2004 and 31 December 2008 using the Taiwan National Health Insurance Research Database. The examined hepatotoxicity consisted of cholestasis, non-infectious hepatitis, acute liver failure and liver transplant, with the incidences and relative risks being quantified by Poisson exact methods and Cox proportional hazard models, respectively. Results The study cohort comprised 71 379 ATD initiators, with a median follow-up of 196 days. MMI/CBM vs. PTU users had a higher hepatitis incidence rate (3.17/1000 vs. 1.19/1000 person-years) but a lower incidence of acute liver failure (0.32/1000 vs. 0.68/1000 person-years). The relative risk analysis indicated that any use of MMI/CBM was associated with a 2.89-fold (95% CI 1.81, 4.60) increased hepatitis risk compared with PTU, with the risk increasing to 5.08-fold for high dose MMI/CBM (95% CI 3.15, 8.18). However, any MMI/CBM use vs. PTU was not related to an increased risk of cholestasis (adjusted hazard ratio [HR] 1.14, 95% CI 0.40, 3.72) or acute liver failure (adjusted HR 0.54, 95% CI 0.24, 1.22). Conclusions MMI/CBM and PTU exert dissimilar incidence rates of hepatotoxicity. Compared to PTU, MMI/CBM are associated in a dose-dependent manner with an increased risk for hepatitis while the risks are similar for acute liver failure and cholestasis. PMID:25279406
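
    The "Poisson exact" incidence rates above refer to exact (Garwood) confidence intervals built from the chi-squared distribution. A short sketch of that calculation with made-up event counts and person-years:

        # Exact Poisson (Garwood) 95% CI for an incidence rate per 1000 person-years.
        from scipy.stats import chi2

        events, person_years = 45, 14_200          # hypothetical values
        lower = chi2.ppf(0.025, 2 * events) / 2 if events > 0 else 0.0
        upper = chi2.ppf(0.975, 2 * (events + 1)) / 2

        rate = 1000 * events / person_years
        lo, hi = 1000 * lower / person_years, 1000 * upper / person_years
        print(f"{rate:.2f} per 1000 person-years (95% CI {lo:.2f}-{hi:.2f})")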

  16. Effect of radical prostatectomy surgeon volume on complication rates from a large population-based cohort

    PubMed Central

    Almatar, Ashraf; Wallis, Christopher J.D.; Herschorn, Sender; Saskin, Refik; Kulkarni, Girish S.; Kodama, Ronald T.; Nam, Robert K.

    2016-01-01

    Introduction: Surgical volume can affect several outcomes following radical prostatectomy (RP). We examined if surgical volume was associated with novel categories of treatment-related complications following RP. Methods: We examined a population-based cohort of men treated with RP in Ontario, Canada between 2002 and 2009. We used Cox proportional hazard modeling to examine the effect of physician, hospital and patient demographic factors on rates of treatment-related hospital admissions, urologic procedures, and open surgeries. Results: Over the study interval, 15 870 men were treated with RP. A total of 196 surgeons performed a median of 15 cases per year (range: 1–131). Patients treated by surgeons in the highest quartile of annual case volume (>39/year) had a lower risk of hospital admission (hazard ratio [HR]=0.54, 95% CI 0.47–0.61) and urologic procedures (HR=0.69, 95% CI 0.64–0.75), but not open surgeries (HR=0.83, 95% CI 0.47–1.45) than patients treated by surgeons in the lowest quartile (<15/year). Treatment at an academic hospital was associated with a decreased risk of hospitalization (HR=0.75, 95% CI 0.67–0.83), but not of urologic procedures (HR=0.94, 95% CI 0.88–1.01) or open surgeries (HR=0.87, 95% CI 0.54–1.39). There was no significant trend in any of the outcomes by population density. Conclusions: The annual case volume of the treating surgeon significantly affects a patient’s risk of requiring hospitalization or urologic procedures (excluding open surgeries) to manage treatment-related complications. PMID:26977206

  17. Childhood ADHD and Risk for Substance Dependence in Adulthood: A Longitudinal, Population-Based Study

    PubMed Central

    Levy, Sharon; Katusic, Slavica K.; Colligan, Robert C.; Weaver, Amy L.; Killian, Jill M.; Voigt, Robert G.; Barbaresi, William J.

    2014-01-01

    Background Adolescents with attention-deficit/hyperactivity disorder (ADHD) are known to be at significantly greater risk for the development of substance use disorders (SUD) compared to peers. Impulsivity, which could lead to higher levels of drug use, is a known symptom of ADHD and likely accounts, in part, for this relationship. Other factors, such as a biologically increased susceptibility to substance dependence (addiction), may also play a role. Objective This report further examines the relationships between childhood ADHD, adolescent-onset SUD, and substance abuse and substance dependence in adulthood. Method Individuals with childhood ADHD and non-ADHD controls from the same population-based birth cohort were invited to participate in a prospective outcome study. Participants completed a structured neuropsychiatric interview with modules for SUD and a psychosocial questionnaire. Information on adolescent SUD was obtained retrospectively, in a previous study, from medical and school records. Associations were summarized using odds ratios (OR) and 95% CIs estimated from logistic regression models adjusted for age and gender. Results A total of 232 ADHD cases and 335 non-ADHD controls participated (mean age, 27.0 and 28.6 years, respectively). ADHD cases were more likely than controls to have a SUD diagnosed in adolescence and were more likely to have alcohol (adjusted OR 14.38, 95% CI 1.49–138.88) and drug (adjusted OR 3.48, 95% CI 1.38–8.79) dependence in adulthood. The subgroup of participating ADHD cases who did not have SUD during adolescence was no more likely than controls to develop new onset alcohol dependence as adults, although they were significantly more likely to develop new onset drug dependence. Conclusions Our study found preliminary evidence that adults with childhood ADHD are more susceptible than peers to developing drug dependence, a disorder associated with neurological changes in the brain. The relationship between ADHD and
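
    The adjusted odds ratios above come from logistic regression: the model's coefficients and their confidence limits are exponentiated to give ORs with 95% CIs. A compact sketch of that step, with hypothetical file and variable names:

        # Adjusted ORs with 95% CIs from a logistic regression (placeholder names).
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("adhd_cohort.csv")
        fit = smf.logit("drug_dependence ~ childhood_adhd + age + male",
                        data=df).fit(disp=False)

        or_table = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
        or_table.columns = ["OR", "ci_low", "ci_high"]
        print(or_table)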

  18. Sex differences in the outcomes of peripheral arterial disease: a population-based cohort study

    PubMed Central

    Hussain, Mohamad A.; Lindsay, Thomas F.; Mamdani, Muhammad; Wang, Xuesong; Verma, Subodh; Al-Omran, Mohammed

    2016-01-01

    Background: The role of sex in the outcomes of patients with peripheral arterial disease (PAD) has been poorly studied. We sought to investigate differences in the long-term adverse cardiovascular and limb outcomes between men and women with PAD. Methods: We conducted a population-based cohort study with up to 7 years of follow-up using linked administrative databases in Ontario, Canada. Patients aged 40 years or older who visited a vascular surgeon between Apr. 1, 2004, and Mar. 31, 2007 (index date), and carried a diagnosis of PAD comprised the study cohort. The primary outcome was a composite of death or hospital admission for stroke or myocardial infarction. Secondary outcomes included lower limb amputation or revascularization. We used Cox proportional hazards modelling to compute unadjusted hazard ratios (HRs) and HRs adjusted for baseline covariates. Results: A total of 6915 patients were studied, of whom 2461 (35.6%) were women. No significant differences in the risk of the primary outcome were observed between men and women (adjusted HR 0.99 [95% confidence interval (CI) 0.92-1.05]). Women were less likely than men to undergo minor amputation (adjusted HR 0.73 [95% CI 0.62-0.85]) and arterial bypass surgery (adjusted HR 0.82 [95% CI 0.71-0.94]) but were more likely to be admitted to hospital for acute myocardial infarction (adjusted HR 1.15 [95% CI 1.00-1.31]). There were no sex differences in the rates of major amputation or transluminal percutaneous angioplasty. Interpretation: We identified no significant differences in the composite risk of major adverse cardiovascular events between women and men with PAD, although our findings suggest men may be at increased risk for adverse limb events compared with women. Cardiovascular health campaigns should focus on both women and men to promote early diagnosis and management of PAD. PMID:27280110

  19. Metformin use and survival after colorectal cancer: A population-based cohort study.

    PubMed

    Mc Menamin, Úna C; Murray, Liam J; Hughes, Carmel M; Cardwell, Chris R

    2016-01-15

    Preclinical evidence suggests that metformin could delay cancer progression. Previous epidemiological studies however have been limited by small sample sizes and certain time-related biases. This study aimed to investigate whether colorectal cancer patients with type 2 diabetes who were exposed to metformin had reduced cancer-specific mortality. We conducted a retrospective cohort study of 1,197 colorectal cancer patients newly diagnosed from 1998 to 2009 (identified from English cancer registries) with type 2 diabetes (based upon Clinical Practice Research Datalink, CPRD, prescription and diagnosis records). In this cohort 382 colorectal cancer-specific deaths occurred up to 2012 from the Office of National Statistics (ONS) mortality data. Metformin use was identified from CPRD prescription records. Using time-dependent Cox regression models, unadjusted and adjusted hazard ratios (HR) and 95% CIs were calculated for the association between post-diagnostic exposure to metformin and colorectal cancer-specific mortality. Overall, there was no evidence of an association between metformin use and cancer-specific death before or after adjustment for potential confounders (adjusted HR 1.06, 95% CI 0.80, 1.40). In addition, after adjustment for confounders, there was also no evidence of associations between other diabetic medications and cancer-specific mortality including sulfonylureas (HR 1.14, 95% CI 0.86, 1.51), insulin use (HR 1.35, 95% CI 0.95, 1.93) or other anti-diabetic medications including thiazolidinediones (HR 0.73, 95% CI 0.46, 1.14). Similar associations were observed by duration of use and for all-cause mortality. This population-based study, the largest to date, does not support a protective association between metformin and survival in colorectal cancer patients.
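
    The time-dependent Cox regression mentioned above is one standard way to avoid time-related (immortal-time) bias when exposure, here post-diagnostic metformin use, can start and stop during follow-up: follow-up is split into intervals and the exposure indicator is allowed to change between them. An illustrative sketch with lifelines' time-varying Cox fitter; the long-format file and column names are hypothetical.

        # Time-varying Cox model: one row per patient-interval, exposure can change
        # between intervals (columns: patient_id, start, stop, on_metformin, age,
        # stage, crc_death as the event flag for the interval).
        import pandas as pd
        from lifelines import CoxTimeVaryingFitter

        long_df = pd.read_csv("crc_metformin_long.csv")

        ctv = CoxTimeVaryingFitter()
        ctv.fit(long_df, id_col="patient_id", start_col="start", stop_col="stop",
                event_col="crc_death")
        ctv.print_summary()   # HR for on_metformin, adjusted for the other columns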

  20. Aspirin Use Associated With Amyotrophic Lateral Sclerosis: a Total Population-Based Case-Control Study

    PubMed Central

    Tsai, Ching-Piao; Lin, Feng-Cheng; Lee, Johnny Kuang-Wu; Lee, Charles Tzu-Chi

    2015-01-01

    Background The association of aspirin use and nonsteroidal anti-inflammatory drug (NSAID) use with amyotrophic lateral sclerosis (ALS) risk is unclear. This study determined whether use of any individual compound is associated with ALS risk by conducting a total population-based case-control study in Taiwan. Methods A total of 729 patients with newly diagnosed ALS who had a severely disabling disease certificate between January 1, 2002, and December 1, 2008, comprised the case group. These cases were compared with 7290 sex-, age-, residence-, and insurance premium-matched controls. Drug use by each Anatomical Therapeutic Chemical code was analyzed using conditional logistic regression models. False discovery rate (FDR)-adjusted P values were reported in order to avoid inflating false positives. Results Of the 1336 compounds, only the 266 with more than 30 recorded uses in our database were included in the screening analysis. Without controlling for steroid use, the analysis failed to reveal any compound that was inversely associated with ALS risk according to FDR criteria. After controlling for steroid use, we found use of the following compounds to be associated with ALS risk: aspirin, diphenhydramine (an antihistamine), and mefenamic acid (an NSAID). A multivariate analysis revealed that aspirin was independently inversely associated with ALS risk after controlling for diphenhydramine, mefenamic acid, and steroid use. The inverse association between aspirin and ALS was present predominantly in patients older than 55 years. Conclusions The results of this study suggested that aspirin use might reduce the risk of ALS, and the benefit might be more prominent for older people. PMID:25721071
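
    The false discovery rate adjustment mentioned above applies a correction to one P value per screened compound; one common way to implement such an adjustment is the Benjamini-Hochberg procedure. A minimal sketch of that step (the P values below are random placeholders, not the study's results):

        # Benjamini-Hochberg FDR adjustment across the screened compounds.
        import numpy as np
        from statsmodels.stats.multitest import multipletests

        rng = np.random.default_rng(0)
        p_values = rng.uniform(0, 1, size=266)      # one placeholder P value per compound
        reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
        print(f"{reject.sum()} compounds pass the FDR threshold")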

  3. Psychotropic drugs and the risk of fractures in old age: a prospective population-based study

    PubMed Central

    2010-01-01

    Background There is evidence that the use of any psychotropic drug and the concomitant use of two or more benzodiazepines are related to an increased risk of fractures in old age. However, conflicting results also exist. The aim was to describe associations between the use of a psychotropic drug, or the concomitant use of two or more of these drugs, and the risk of fractures in a population aged 65 years or over. Methods This study was a part of a prospective longitudinal population-based study carried out in the municipality of Lieto, South-Western Finland. The objective was to describe gender-specific associations between the use of one psychotropic drug [benzodiazepine (BZD), antipsychotic (AP) or antidepressant (AD)] or the concomitant use of two or more psychotropic drugs and the risk of fractures in a population 65 years or over. Subjects were participants in the first wave of the Lieto study in 1990-1991, and they were followed up until the end of 1996. Information about fractures confirmed with radiology reports in 1,177 subjects (482 men and 695 women) during the follow-up was collected from medical records. Two follow-up periods (three and six years) were used, and previously found risk factors of f