Threshold models in radiation carcinogenesis
Hoel, D.G.; Li, P.
1998-09-01
Cancer incidence and mortality data from the atomic bomb survivors cohort have been analyzed to allow for the possibility of a threshold dose response. The same dose-response models as used in the original papers were fit to the data. The estimated cancer incidence from the fitted models over-predicted the observed cancer incidence in the lowest exposure group, which is consistent with a threshold or nonlinear dose response at low doses. Thresholds were added to the dose-response models, and the range of possible thresholds is shown for solid tumor cancers as well as for the different leukemia types. This analysis suggests that the A-bomb cancer incidence data agree better with a threshold or nonlinear dose-response model than with a purely linear model, although the linear model is statistically equivalent. This observation is not found with the mortality data. For both the incidence and mortality data, the addition of a threshold term significantly improves the fit of the linear or linear-quadratic dose response for total leukemias as well as for the leukemia subtypes ALL, AML, and CML.
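The threshold idea above can be sketched numerically: add an unknown threshold to a linear dose-response term, profile it over a grid, and compare the fit against a purely linear model. This is a minimal illustration on synthetic data; the dose range, slope, and true threshold below are invented, not values from the A-bomb cohort analysis.

```python
import numpy as np

# Hypothetical synthetic data: excess risk vs dose (Gy), generated with a
# true threshold at 0.2 Gy (illustration only, not the cohort data).
rng = np.random.default_rng(0)
dose = np.linspace(0.0, 2.0, 40)
true_thresh, slope = 0.2, 0.5
risk = slope * np.clip(dose - true_thresh, 0.0, None) + rng.normal(0.0, 0.02, dose.size)

def sse_linear(d, y):
    b = np.sum(d * y) / np.sum(d * d)        # no-intercept linear fit
    return np.sum((y - b * d) ** 2)

def sse_threshold(d, y, t):
    x = np.clip(d - t, 0.0, None)            # dose above the threshold t
    b = np.sum(x * y) / np.sum(x * x)
    return np.sum((y - b * x) ** 2)

# Profile the threshold over a grid and keep the best-fitting value.
grid = np.linspace(0.0, 1.0, 101)
best_t = min(grid, key=lambda t: sse_threshold(dose, risk, t))
sse_t = sse_threshold(dose, risk, best_t)
```

Because the grid includes t = 0 (which reduces to the linear model), the threshold fit can never be worse than the linear fit in-sample; a likelihood-ratio or information-criterion comparison would be needed to judge statistical equivalence, as the abstract notes.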
Universal Screening for Emotional and Behavioral Problems: Fitting a Population-Based Model
ERIC Educational Resources Information Center
Schanding, G. Thomas, Jr.; Nowell, Kerri P.
2013-01-01
Schools have begun to adopt a population-based method to conceptualizing assessment and intervention of students; however, little empirical evidence has been gathered to support this shift in service delivery. The present study examined the fit of a population-based model in identifying students' behavioral and emotional functioning using a…
POPULATION-BASED EXPOSURE MODELING FOR AIR POLLUTANTS AT EPA'S NATIONAL EXPOSURE RESEARCH LABORATORY
The US EPA's National Exposure Research Laboratory (NERL) has been developing, applying, and evaluating population-based exposure models to improve our understanding of the variability in personal exposure to air pollutants. Estimates of population variability are needed for E...
Population based models of cortical drug response: insights from anaesthesia
Bojak, Ingo; Liley, David T. J.
2008-01-01
A great explanatory gap lies between the molecular pharmacology of psychoactive agents and the neurophysiological changes they induce, as recorded by neuroimaging modalities. Causally relating the cellular actions of psychoactive compounds to their influence on population activity is experimentally challenging. Recent developments in the dynamical modelling of neural tissue have attempted to span this explanatory gap between microscopic targets and their macroscopic neurophysiological effects via a range of biologically plausible dynamical models of cortical tissue. Such theoretical models allow exploration of neural dynamics, in particular their modification by drug action. The ability to theoretically bridge scales is due to a biologically plausible averaging of cortical tissue properties. In the resulting macroscopic neural field, individual neurons need not be explicitly represented (as in neural networks). The following paper aims to provide a non-technical introduction to the mean field population modelling of drug action and its recent successes in modelling anaesthesia. PMID:19003456
Validation of population-based disease simulation models: a review of concepts and methods
2010-01-01
Background Computer simulation models are used increasingly to support public health research and policy, but questions about their quality persist. The purpose of this article is to review the principles and methods for validation of population-based disease simulation models. Methods We developed a comprehensive framework for validating population-based chronic disease simulation models and used this framework in a review of published model validation guidelines. Based on the review, we formulated a set of recommendations for gathering evidence of model credibility. Results Evidence of model credibility derives from examining: 1) the process of model development, 2) the performance of a model, and 3) the quality of decisions based on the model. Many important issues in model validation are insufficiently addressed by current guidelines. These issues include a detailed evaluation of different data sources, graphical representation of models, computer programming, model calibration, between-model comparisons, sensitivity analysis, and predictive validity. The role of external data in model validation depends on the purpose of the model (e.g., decision analysis versus prediction). More research is needed on the methods of comparing the quality of decisions based on different models. Conclusion As the role of simulation modeling in population health is increasing and models are becoming more complex, there is a need for further improvements in model validation methodology and common standards for evaluating model credibility. PMID:21087466
Models of population-based analyses for data collected from large extended families
Lee, Elisa T.; Howard, Barbara V.; Fabsitz, Richard R.; Devereux, Richard B.; MacCluer, Jean W.; Laston, Sandra; Comuzzie, Anthony G.; Shara, Nawar M.; Welty, Thomas K.
2014-01-01
Large studies of extended families usually collect valuable phenotypic data that may have scientific value for purposes other than testing genetic hypotheses, provided the families were not selected in a biased manner. These purposes include assessing population-based associations of diseases with risk factors/covariates and estimating population characteristics such as disease prevalence and incidence. Relatedness among participants, however, violates the traditional assumption of independent observations in these classic analyses. The commonly used adjustment method for relatedness in population-based analyses is to use marginal models, in which clusters (families) are assumed to be independent (unrelated) with a simple and identical covariance (family) structure, such as the independent, exchangeable, and unstructured covariance structures. However, these simple covariance structures may not be appropriate for outcomes collected from large extended families, and may under- or over-estimate the variances of estimators and thus lead to uncertainty in inferences. Moreover, the assumption that families are unrelated with an identical family structure in a marginal model may not be satisfied in family studies with large extended families. The aim of this paper is to propose models that combine marginal-model approaches with a covariance structure for assessing population-based associations of diseases with their risk factors/covariates and for estimating population characteristics in epidemiological studies, while adjusting for the complicated relatedness among outcomes (continuous/categorical, normally/non-normally distributed) collected from large extended families. We also discuss theoretical issues of the proposed models and show that the proposed models and covariance structure are appropriate for, and capable of achieving, the aim. PMID:20882324
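The marginal-model adjustment discussed above can be sketched with the simplest of the covariance structures it mentions: generalized least squares with an exchangeable within-family correlation. All parameters and the equal family size below are invented for illustration; the paper's point is precisely that such a simple identical structure may be inadequate for large extended families.

```python
import numpy as np

# Sketch: GLS fit with an exchangeable within-family covariance
# (synthetic data; family size, rho, and effects are invented).
rng = np.random.default_rng(8)
n_fam, fam_size, rho, sigma = 100, 6, 0.4, 1.0
V = sigma**2 * ((1 - rho) * np.eye(fam_size) + rho)   # exchangeable covariance
L = np.linalg.cholesky(V)

beta_true = np.array([1.0, 0.5])
fams_X, fams_y = [], []
for _ in range(n_fam):
    X = np.column_stack([np.ones(fam_size), rng.normal(size=fam_size)])
    y = X @ beta_true + L @ rng.normal(size=fam_size)  # correlated family errors
    fams_X.append(X)
    fams_y.append(y)

# GLS estimator: (sum_f X_f' V^-1 X_f) beta = sum_f X_f' V^-1 y_f
Vinv = np.linalg.inv(V)
A = sum(X.T @ Vinv @ X for X in fams_X)
b = sum(X.T @ Vinv @ y for X, y in zip(fams_X, fams_y))
beta_hat = np.linalg.solve(A, b)
```

Replacing V with a kinship-informed covariance per family is the direction the paper argues for when relatedness is more complicated than this exchangeable sketch.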
Error Threshold of Fully Random Eigen Model
NASA Astrophysics Data System (ADS)
Li, Duo-Fang; Cao, Tian-Guang; Geng, Jin-Peng; Qiao, Li-Hua; Gu, Jian-Zhong; Zhan, Yong
2015-01-01
Species evolution is essentially a random process of interaction between biological populations and their environments. As a result, some physical parameters in evolution models are subject to statistical fluctuations. In this work, two important parameters in the Eigen model, the fitness and the mutation rate, are treated as Gaussian-distributed random variables simultaneously to examine the properties of the error threshold. Numerical simulation results show that the error threshold in the fully random model appears as a crossover region instead of a phase transition point, and that as the fluctuation strength increases the crossover region becomes progressively smoother. Furthermore, it is shown that the randomization of the mutation rate plays the dominant role in changing the error threshold in the fully random model, which is consistent with existing experimental data. The implications of this threshold change for antiviral strategies are discussed.
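The smearing of the error threshold can be illustrated with the textbook single-peak Eigen model, where a master sequence of fitness A competes with an error tail of fitness 1 and Q is the probability of error-free copying: the stationary master fraction is x* = (AQ - 1)/(A - 1) when positive, else 0. Averaging x* over Gaussian fitness draws (values below are invented for illustration, not the paper's parameters) turns the sharp threshold into a crossover region.

```python
import numpy as np

# Single-peak Eigen model: stationary master-sequence fraction.
def master_fraction(A, Q):
    return max((A * Q - 1.0) / (A - 1.0), 0.0)

# Gaussian-fluctuating fitness (per the "fully random" idea) smears the
# sharp threshold: average x* over fitness draws (illustrative parameters).
rng = np.random.default_rng(1)
def mean_master_fraction(A0, sigma, Q, n=2000):
    A = rng.normal(A0, sigma, n)
    A = A[A > 1.0]                        # keep viable fitness draws
    return float(np.mean([master_fraction(a, Q) for a in A]))

Qs = np.linspace(0.05, 1.0, 96)
sharp = [master_fraction(10.0, q) for q in Qs]          # fixed fitness
smeared = [mean_master_fraction(10.0, 3.0, q) for q in Qs]  # random fitness
```

In the fixed-fitness curve x* hits zero exactly at Q = 1/A; the averaged curve decays smoothly through that point, which is the crossover behaviour the abstract describes.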
Scalable Entity-Based Modeling of Population-Based Systems, Final LDRD Report
Cleary, A J; Smith, S G; Vassilevska, T K; Jefferson, D R
2005-01-27
The goal of this project has been to develop tools, capabilities and expertise in the modeling of complex population-based systems via scalable entity-based modeling (EBM). Our initial focal application domain has been the dynamics of large populations exposed to disease-causing agents, a topic of interest to the Department of Homeland Security in the context of bioterrorism. In the academic community, discrete simulation technology based on individual entities has shown initial success, but the technology has not been scaled to the problem sizes or computational resources of LLNL. Our developmental emphasis has been on the extension of this technology to parallel computers and maturation of the technology from an academic to a lab setting.
Comprehensive, Population-Based Sensitivity Analysis of a Two-Mass Vocal Fold Model.
Robertson, Daniel; Zañartu, Matías; Cook, Douglas
2016-01-01
Previous vocal fold modeling studies have generally focused on generating detailed data regarding a narrow subset of possible model configurations. These studies can be interpreted to be the investigation of a single subject under one or more vocal conditions. In this study, a broad population-based sensitivity analysis is employed to examine the behavior of a virtual population of subjects and to identify trends between virtual individuals as opposed to investigating a single subject or model instance. Four different sensitivity analysis techniques were used in accomplishing this task. Influential relationships between model input parameters and model outputs were identified, and an exploration of the model's parameter space was conducted. Results indicate that the behavior of the selected two-mass model is largely dominated by complex interactions, and that few input-output pairs have a consistent effect on the model. Results from the analysis can be used to increase the efficiency of optimization routines of reduced-order models used to investigate voice abnormalities. Results also demonstrate the types of challenges and difficulties to be expected when applying sensitivity analyses to more complex vocal fold models. Such challenges are discussed and recommendations are made for future studies. PMID:26845452
Differential equation models for sharp threshold dynamics.
Schramm, Harrison C; Dimitrov, Nedialko B
2014-01-01
We develop an extension to differential equation models of dynamical systems to allow us to analyze probabilistic threshold dynamics that fundamentally and globally change system behavior. We apply our novel modeling approach to two cases of interest: a model of infectious disease modified for malware where a detection event drastically changes dynamics by introducing a new class in competition with the original infection; and the Lanchester model of armed conflict, where the loss of a key capability drastically changes the effectiveness of one of the sides. We derive and demonstrate a step-by-step, repeatable method for applying our novel modeling approach to an arbitrary system, and we compare the resulting differential equations to simulations of the system's random progression. Our work leads to a simple and easily implemented method for analyzing probabilistic threshold dynamics using differential equations. PMID:24184349
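One common deterministic surrogate for a probabilistic threshold event of this kind is to accumulate the event hazard along the trajectory and switch the dynamics when the integrated hazard reaches 1 (the expected event time). The sketch below applies that idea to an SI-type malware model with a detection event; the rates and the hazard form are invented for illustration and are not the paper's exact formulation.

```python
# Hedged sketch: SI malware spread whose dynamics change at a detection event.
# Before detection: dS/dt = -b*S*I, dI/dt = b*S*I.
# After detection, a removal class R competes: dI/dt = b*S*I - g*I, dR/dt = g*I.
# The switch fires when the accumulated hazard H = integral of c*I dt reaches 1.
def simulate(b=0.5, g=0.8, c=0.2, dt=0.001, t_end=40.0):
    S, I, R, H = 0.99, 0.01, 0.0, 0.0
    detected = False
    for _ in range(int(t_end / dt)):        # forward Euler integration
        new_inf = b * S * I * dt
        S -= new_inf
        I += new_inf
        if not detected:
            H += c * I * dt                 # accumulate detection hazard
            if H >= 1.0:
                detected = True             # threshold crossed: new dynamics
        else:
            rec = g * I * dt
            I -= rec
            R += rec
    return S, I, R, detected

S, I, R, detected = simulate()
```

Comparing such hybrid ODE trajectories with averages over many random simulations, where the detection time is drawn from the hazard, is the kind of validation the abstract describes.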
Toxicogenetics: population-based testing of drug and chemical safety in mouse models.
Rusyn, Ivan; Gatti, Daniel M; Wiltshire, Timothy; Kleeberger, Steven R; Threadgill, David W
2010-08-01
The rapid decline in the cost of dense genotyping is paving the way for new DNA sequence-based laboratory tests to move quickly into clinical practice, and to ultimately help realize the promise of 'personalized' therapies. These advances are based on the growing appreciation of genetics as an important dimension in science and the practice of investigative pharmacology and toxicology. On the clinical side, both the regulators and the pharmaceutical industry hope that the early identification of individuals prone to adverse drug effects will keep advantageous medicines on the market for the benefit of the vast majority of prospective patients. On the environmental health protection side, there is a clear need for better science to define the range and causes of susceptibility to adverse effects of chemicals in the population, so that the appropriate regulatory limits are established. In both cases, most of the research effort is focused on genome-wide association studies in humans where de novo genotyping of each subject is required. At the same time, the power of population-based preclinical safety testing in rodent models (e.g., mouse) remains to be fully exploited. Here, we highlight the approaches available to utilize the knowledge of DNA sequence and genetic diversity of the mouse as a species in mechanistic toxicology research. We posit that appropriate genetically defined mouse models may be combined with the limited data from human studies to not only discover the genetic determinants of susceptibility, but to also understand the molecular underpinnings of toxicity. PMID:20704464
Jones, Morgan H.; Walia, Piyush; Fening, Stephen D.; Miniaci, Anthony
2016-01-01
computed as a ratio of horizontal reaction force to the compressive load. Results: The individual specimen-specific model results showed good agreement with the experimental data for %IT, as the values were similar for the created defects. However, SR was over-predicted by the FE model, although both the specimen-specific and cadaveric models showed similar linear decreasing trends. In addition, a humeral head defect size of 44% reduced the %IT from 100% to nearly 0% for all three models. The results for the comparison of all three models with increasing humeral defect size and a 20% glenoid defect are shown in Figure 1 at three arm positions. Conclusion: This study proposed a simple population-based model that can be used to estimate the loss in stability due to combined defects and to determine a threshold for defect augmentation in clinical practice. It was demonstrated that a glenoid defect as small as 10% combined with a 19% humeral head defect can cause significant instability. Consistent with past studies, it was also shown that a glenoid defect leads to loss of translation and a humeral head defect leads to instability at a functional arm position of increased abduction and external rotation [5-6]. All three models predicted similar results during validation, which shows that the population-based model can be used to estimate stability instead of requiring patient-specific FE models. The limitation of the study is the absence of soft tissue restraints.
Threshold modeling of extreme spatial rainfall
NASA Astrophysics Data System (ADS)
Thibaud, E.; Davison, A.
2013-12-01
Complex events such as sustained extreme precipitation have major effects on human populations and environmental sustainability, and there is a growing interest in modeling them realistically. For risk assessment based on spatial quantities such as the total amount of rainfall falling over a region, it is necessary to properly model the dependence among extremes over that region, based on data from perhaps only a few sites within it. We propose an approach to spatial modeling of extreme rainfall, based on max-stable processes fitted using partial duration series and a censored threshold likelihood function. The resulting models are coherent with classical extreme-value theory and allow the consistent treatment of spatial dependence of rainfall using ideas related to those of classical geostatistics. The method can be used to produce simulations needed for hydrological models, and in particular for the generation of spatially heterogeneous extreme rainfall fields over catchments. We illustrate the ideas through data from the Val Ferret watershed in the Swiss Alps, based on daily cumulative rainfall totals recorded at 24 stations for four summers, augmented by a longer series from nearby. References: Davison, A. C., Huser, R. and Thibaud, E. (2013). Geostatistics of dependent and asymptotically independent extremes. Mathematical Geosciences, 45(5), 511-529. doi:10.1007/s11004-013-9469-y. Thibaud, E., Mutzner, R. and Davison, A. C. (2013, to appear). Threshold modeling of extreme spatial rainfall. Water Resources Research. doi:10.1002/wrcr.20329
Neural Field Models with Threshold Noise.
Thul, Rüdiger; Coombes, Stephen; Laing, Carlo R
2016-12-01
The original neural field model of Wilson and Cowan is often interpreted as the averaged behaviour of a network of switch-like neural elements with a distribution of switch thresholds, giving rise to the classic sigmoidal population firing-rate function so prevalent in large-scale neuronal modelling. In this paper we explore the effects of such threshold noise without recourse to averaging, and show that spatial correlations can have a strong effect on the behaviour of waves and patterns in continuum models. Moreover, for a prescribed spatial covariance function we explore the differences in behaviour that can emerge when the underlying stationary distribution is changed from Gaussian to non-Gaussian. For travelling front solutions, in a system with exponentially decaying spatial interactions, we make use of an interface approach to calculate the instantaneous wave speed analytically as a series expansion in the noise strength. From this we find that, for weak noise, the spatially averaged speed depends only on the choice of covariance function and not on the shape of the stationary distribution. For a system with a Mexican-hat spatial connectivity we further find that noise can induce localised bump solutions, and using an interface stability argument show that there can be multiple stable solution branches. PMID:26936267
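The averaging interpretation in the first sentence is easy to check directly: the population firing rate of switch-like units with random thresholds is just the threshold CDF evaluated at the input, f(u) = P(theta < u). The Monte Carlo sketch below compares Gaussian thresholds with a skewed (lognormal) alternative; the particular distributions and parameters are illustrative choices, not taken from the paper.

```python
import numpy as np

# Firing rate of switch-like units = fraction of units whose threshold
# lies below the input u, i.e. the threshold CDF (Wilson-Cowan averaging).
rng = np.random.default_rng(2)
n = 200_000
theta_gauss = rng.normal(1.0, 0.3, n)                     # Gaussian thresholds
theta_logn = rng.lognormal(mean=0.0, sigma=0.3, size=n)   # skewed, median 1

u = np.linspace(-0.5, 2.5, 61)
f_gauss = np.array([(theta_gauss < x).mean() for x in u])  # sigmoidal rate
f_logn = np.array([(theta_logn < x).mean() for x in u])    # skewed sigmoid
```

The Gaussian case recovers the familiar symmetric sigmoid; the non-Gaussian case yields an asymmetric one, which is the kind of distributional difference the paper then studies without averaging.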
A threshold model of investor psychology
NASA Astrophysics Data System (ADS)
Cross, Rod; Grinfeld, Michael; Lamba, Harbir; Seaman, Tim
2005-08-01
We introduce a class of agent-based market models founded upon simple descriptions of investor psychology. Agents are subject to various psychological tensions induced by market conditions and endowed with a minimal ‘personality’. This personality consists of a threshold level for each of the tensions being modeled, and the agent reacts whenever a tension threshold is reached. This paper considers an elementary model including just two such tensions. The first is ‘cowardice’, which is the stress caused by remaining in a minority position with respect to overall market sentiment and leads to herding-type behavior. The second is ‘inaction’, which is the increasing desire to act or re-evaluate one's investment position. There is no inductive learning by agents and they are only coupled via the global market price and overall market sentiment. Even incorporating just these two psychological tensions, important stylized facts of real market data, including fat-tails, excess kurtosis, uncorrelated price returns and clustered volatility over the timescale of a few days are reproduced. By then introducing an additional parameter that amplifies the effect of externally generated market noise during times of extreme market sentiment, long-time volatility correlations can also be recovered.
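The cowardice tension can be sketched as a toy agent-based loop: each agent holds a position, accumulates tension while in the minority, and switches side when the tension exceeds a personal threshold. The update rule, thresholds, and noise term below are invented simplifications in the spirit of the model, not the authors' exact specification (which also includes the inaction tension).

```python
import numpy as np

# Toy threshold-herding sketch: positions are +1/-1, minority agents
# accumulate tension, and a threshold crossing flips their position.
rng = np.random.default_rng(3)
N, T = 200, 2000
pos = rng.choice([-1, 1], N)
tension = np.zeros(N)
threshold = rng.uniform(1.0, 5.0, N)          # minimal 'personality'
prices = [0.0]
for _ in range(T):
    sentiment = pos.mean()                    # overall market sentiment
    minority = pos * np.sign(sentiment) < 0   # agents against the majority
    tension[minority] += abs(sentiment)       # cowardice tension grows
    flip = tension > threshold
    pos[flip] *= -1                           # threshold reached: herd
    tension[flip] = 0.0
    prices.append(prices[-1] + sentiment + 0.1 * rng.normal())
returns = np.diff(prices)
```

Diagnostics such as the kurtosis and autocorrelation of `returns` are what one would inspect to look for the fat tails and clustered volatility the abstract reports.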
On the two steps threshold selection for over-threshold modelling of extreme events
NASA Astrophysics Data System (ADS)
Bernardara, Pietro; Mazas, Franck; Weiss, Jerome; Andreewsky, Marc; Kergadallan, Xavier; Benoit, Michel; Hamm, Luc
2013-04-01
The estimation of the probability of occurrence of extreme events is traditionally achieved by fitting a probability distribution to a sample of extreme observations. In particular, extreme value theory (EVT) states that values exceeding a given threshold converge to a Generalized Pareto Distribution (GPD) if the original sample is composed of independent and identically distributed values. However, time series of sea and ocean variables usually show strong temporal autocorrelation. Traditionally, in order to select independent events for the subsequent statistical analysis, the concept of a physical threshold is introduced: events that exceed this threshold are defined as "extreme events". This is the so-called "Peaks Over Threshold (POT)" sampling, widespread in the literature and currently used for engineering applications among many others. In the past, the threshold for the statistical sampling of extreme values asymptotically convergent toward the GPD and the threshold for the physical selection of independent extreme events were conflated, as the same threshold was used both to sample the data and to meet the hypothesis of extreme value convergence, leading to some incoherencies. In particular, if the two steps are performed simultaneously, the number of peaks over the threshold can either increase or decrease as the threshold decreases. This is logical from a physical point of view, since the definition of the sample of "extreme events" changes, but it is not coherent with the statistical theory. We introduce a two-step threshold selection for over-threshold modelling, aiming to discriminate (i) a physical threshold for the selection of extreme and independent events, and (ii) a statistical threshold for optimizing coherence with the hypotheses of the EVT. The former is a physical event identification procedure (also called "declustering") aiming at selecting independent extreme events. The latter is a purely statistical optimization
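The two-step separation can be sketched on synthetic data: a physical threshold plus declustering first picks independent event peaks, and a higher statistical threshold is then applied to those peaks for the GPD fit. Both threshold values, the cluster gap, and the synthetic series below are illustrative assumptions, not the paper's recommended procedure for choosing them.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(4)
series = rng.exponential(1.0, 20_000)          # stand-in for an observed series

def decluster(x, u_phys, gap):
    """Step (i): keep one peak per cluster of exceedances of the physical
    threshold, clusters being separated by at least `gap` time steps."""
    idx = np.flatnonzero(x > u_phys)
    peaks, cluster = [], [idx[0]]
    for i in idx[1:]:
        if i - cluster[-1] < gap:
            cluster.append(i)
        else:
            peaks.append(max(cluster, key=lambda j: x[j]))
            cluster = [i]
    peaks.append(max(cluster, key=lambda j: x[j]))
    return x[peaks]

peaks = decluster(series, u_phys=2.0, gap=6)   # independent extreme events
u_stat = 3.0                                   # step (ii): statistical threshold
excesses = peaks[peaks > u_stat] - u_stat
shape, loc, scale = genpareto.fit(excesses, floc=0.0)  # GPD fit to excesses
```

Raising `u_stat` with `u_phys` fixed changes only the statistical sample, not the definition of an event, which is exactly the decoupling the abstract argues for.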
Simulation of Population-Based Commuter Exposure to NO2 Using Different Air Pollution Models
Ragettli, Martina S.; Tsai, Ming-Yi; Braun-Fahrländer, Charlotte; de Nazelle, Audrey; Schindler, Christian; Ineichen, Alex; Ducret-Stich, Regina E.; Perez, Laura; Probst-Hensch, Nicole; Künzli, Nino; Phuleria, Harish C.
2014-01-01
We simulated commuter routes and long-term exposure to traffic-related air pollution during commute in a representative population sample in Basel (Switzerland), and evaluated three air pollution models with different spatial resolution for estimating commute exposures to nitrogen dioxide (NO2) as a marker of long-term exposure to traffic-related air pollution. Our approach includes spatially and temporally resolved data on actual commuter routes, travel modes and three air pollution models. Annual mean NO2 commuter exposures were similar between models. However, we found more within-city and within-subject variability in annual mean (±SD) NO2 commuter exposure with a high resolution dispersion model (40 ± 7 µg m−3, range: 21–61) than with a dispersion model with a lower resolution (39 ± 5 µg m−3; range: 24–51), and a land use regression model (41 ± 5 µg m−3; range: 24–54). Highest median cumulative exposures were calculated along motorized transport and bicycle routes, and the lowest for walking. For estimating commuter exposure within a city and being interested also in small-scale variability between roads, a model with a high resolution is recommended. For larger scale epidemiological health assessment studies, models with a coarser spatial resolution are likely sufficient, especially when study areas include suburban and rural areas. PMID:24823664
Troughs under threshold modeling of minimum flows in perennial streams
NASA Astrophysics Data System (ADS)
Önöz, B.; Bayazit, M.
2002-02-01
Troughs under threshold analysis has so far found little application in the modeling of minimum streamflows. In this study, all the troughs under a certain threshold level are considered in deriving the probability distribution of annual minima through the total probability theorem. For the occurrence of minima under the threshold, Poissonian, binomial or negative binomial processes are assumed. The magnitude of minima follows the generalized Pareto, exponential or power distribution. It is shown that asymptotic extreme value distributions for minima or the two-parameter Weibull distribution is obtained for the annual minima, depending on which models are assumed for the occurrence and magnitude of troughs under the threshold. Derived distributions can be used for modeling the minimum flows in streams which do not have zero flows. Expressions for the T-year annual minimum flow are obtained. An example illustrates the application of the troughs under threshold model to the minimum flows observed in a stream.
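The end product described above, a two-parameter Weibull for annual minima and the T-year minimum read off as the 1/T quantile, can be sketched as follows. The synthetic daily flows and their distribution are invented for illustration; the paper derives the annual-minimum distribution analytically from the trough occurrence and magnitude models rather than fitting it directly.

```python
import numpy as np
from scipy.stats import weibull_min

# Illustrative sketch: fit a two-parameter Weibull to annual minimum flows
# and compute the T-year annual minimum as the 1/T quantile.
rng = np.random.default_rng(5)
years, days = 60, 365
flows = rng.lognormal(mean=3.0, sigma=0.5, size=(years, days))  # synthetic stream
annual_min = flows.min(axis=1)                 # one minimum per year

# Two-parameter Weibull: fix the location at zero (perennial stream, no zero flows).
c, loc, scale = weibull_min.fit(annual_min, floc=0.0)
T = 10
q10 = weibull_min.ppf(1.0 / T, c, loc=loc, scale=scale)  # 10-year minimum flow
```

The 1/T quantile convention says the annual minimum falls below q10 on average once every T years, the low-flow analogue of a T-year return level.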
Yao, Jiayun; Eyamie, Jeff; Henderson, Sarah B
2016-05-01
Exposure to forest fire smoke (FFS) is associated with multiple adverse health effects, mostly respiratory. Findings for cardiovascular effects have been inconsistent, possibly related to the limitations of conventional methods to assess FFS exposure. In previous work, we developed an empirical model to estimate smoke-related fine particulate matter (PM2.5) for all populated areas in British Columbia (BC), Canada. Here, we evaluate the utility of our model by comparing epidemiologic associations between modeled and measured PM2.5. For each local health area (LHA), we used Poisson regression to estimate the effects of PM2.5 estimates and measurements on counts of medication dispensations and outpatient physician visits. We then used meta-regression to estimate the overall effects. A 10 μg/m(3) increase in modeled PM2.5 was associated with increased salbutamol dispensations (RR=1.04, 95% CI 1.03-1.06), and physician visits for asthma (1.06, 1.04-1.08), COPD (1.02, 1.00-1.03), lower respiratory infections (1.03, 1.00-1.05), and otitis media (1.05, 1.03-1.07), all comparable to measured PM2.5. Effects on cardiovascular outcomes were only significant using model estimates in all LHAs during extreme fire days. This suggests that the exposure model is a promising tool for increasing the power of epidemiologic studies to detect the health effects of FFS via improved spatial coverage and resolution. PMID:25294305
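The core estimate above, a rate ratio per 10 μg/m3 from a Poisson regression of daily counts on PM2.5, can be sketched without any GLM library via Newton-Raphson on the Poisson log-likelihood. The data below are synthetic with an invented true RR of 1.05; the study's actual models were fit per local health area and pooled by meta-regression.

```python
import numpy as np

# Minimal Poisson regression: counts ~ Poisson(exp(b0 + b1 * pm25/10)),
# so exp(b1) is the rate ratio per 10 ug/m3 increase in PM2.5.
rng = np.random.default_rng(6)
n = 5000
pm25 = rng.gamma(2.0, 10.0, n)                 # synthetic daily PM2.5, ug/m3
eta = np.log(3.0) + np.log(1.05) * pm25 / 10.0
y = rng.poisson(np.exp(eta))                   # daily dispensation counts

X = np.column_stack([np.ones(n), pm25 / 10.0])
beta = np.zeros(2)
for _ in range(25):                            # Newton-Raphson iterations
    mu = np.exp(X @ beta)
    grad = X.T @ (y - mu)                      # score vector
    hess = X.T @ (X * mu[:, None])             # Fisher information
    beta += np.linalg.solve(hess, grad)
rr_per_10 = np.exp(beta[1])                    # rate ratio per 10 ug/m3
```

Confidence intervals follow from the inverse of `hess` at convergence, which is how the reported 95% CIs would be obtained.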
The adverse effect of spasticity on 3-month poststroke outcome using a population-based model.
Belagaje, S R; Lindsell, C; Moomaw, C J; Alwell, K; Flaherty, M L; Woo, D; Dunning, K; Khatri, P; Adeoye, O; Kleindorfer, D; Broderick, J; Kissela, B
2014-01-01
Several devices and medications have been used to address poststroke spasticity. Yet, spasticity's impact on outcomes remains controversial. Using data from a cohort of 460 ischemic stroke patients, we previously published a validated multivariable regression model for predicting 3-month modified Rankin Score (mRS) as an indicator of functional outcome. Here, we tested whether including spasticity improved model fit and estimated the effect spasticity had on the outcome. Spasticity was defined by a positive response to the question "Did you have spasticity following your stroke?" on direct interview at 3 months from stroke onset. Patients who had expired by 90 days (n = 30) or did not have spasticity data available (n = 102) were excluded. Spasticity affected the 3-month functional status (β = 0.420, 95% CI = 0.194 to 0.645) after accounting for age, diabetes, leukoaraiosis, and retrospective NIHSS. Using spasticity as a covariable, the model's R2 changed from 0.599 to 0.622. In our model, the presence of spasticity in the cohort was associated with a worsened 3-month mRS by an average of 0.4 after adjusting for known covariables. This significant adverse effect on functional outcomes adds predictive value beyond previously established factors. PMID:25147752
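The model-fit comparison reported above (R2 moving from 0.599 to 0.622 when spasticity is added) is an example of comparing nested linear models. The sketch below shows the mechanics on synthetic data; the covariates, effect sizes, and the treatment of mRS as continuous are invented simplifications, not the study's data or exact model.

```python
import numpy as np

# Nested-model comparison: does adding a binary covariate (stand-in for
# spasticity) raise R^2?  Synthetic data with an invented effect of 0.42.
rng = np.random.default_rng(9)
n = 328                                        # cohort size after exclusions
age = rng.normal(65.0, 10.0, n)
nihss = rng.poisson(6, n).astype(float)
spastic = (rng.random(n) < 0.3).astype(float)
mrs = 0.02 * age + 0.15 * nihss + 0.42 * spastic + rng.normal(0.0, 0.8, n)

def r2(X, y):
    """Coefficient of determination for an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

base = np.column_stack([np.ones(n), age, nihss])       # without spasticity
full = np.column_stack([base, spastic])                # with spasticity
r2_base, r2_full = r2(base, mrs), r2(full, mrs)
```

Because the models are nested, R2 can only increase; whether the increase is significant is what an F-test or the reported CI on the spasticity coefficient addresses.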
Balaguer, Francesc; Balmaña, Judith; Castellví-Bel, Sergi; Steyerberg, Ewout W.; Andreu, Montserrat; Llor, Xavier; Jover, Rodrigo; Syngal, Sapna; Castells, Antoni
2008-01-01
Summary Background and aims Early recognition of patients at risk for Lynch syndrome is critical but often difficult. Recently, a predictive algorithm -the PREMM1,2 model- has been developed to quantify the risk of carrying a germline mutation in the mismatch repair (MMR) genes, MLH1 and MSH2. However, its performance in an unselected, population-based colorectal cancer population as well as its performance in combination with tumor MMR testing are unknown. Methods We included all colorectal cancer cases from the EPICOLON study, a prospective, multicenter, population-based cohort (n=1,222). All patients underwent tumor microsatellite instability analysis and immunostaining for MLH1 and MSH2, and those with MMR deficiency (n=91) underwent tumor BRAF V600E mutation analysis and MLH1/MSH2 germline testing. Results The PREMM1,2 model with a ≥5% cut-off had a sensitivity, specificity and positive predictive value (PPV) of 100%, 68% and 2%, respectively. The use of a higher PREMM1,2 cut-off provided a higher specificity and PPV, at the expense of a lower sensitivity. The combination of a ≥5% cut-off with tumor MMR testing maintained 100% sensitivity with an increased specificity (97%) and PPV (21%). The PPV of a PREMM1,2 score ≥20% alone (16%) approached the PPV obtained with a PREMM1,2 score ≥5% combined with tumor MMR testing. In addition, a PREMM1,2 score of <5% was associated with a high likelihood of a BRAF V600E mutation. Conclusions The PREMM1,2 model is useful for identifying MLH1/MSH2 mutation carriers among unselected colorectal cancer patients. Quantitative assessment of the genetic risk might be useful to decide on subsequent tumor MMR and germline testing. PMID:18061181
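The screening metrics the abstract reports (sensitivity, specificity, PPV) follow directly from a 2x2 confusion table. A minimal sketch; the counts below are hypothetical, chosen only to echo the high-sensitivity/low-PPV pattern of the ≥5% cut-off:

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and positive predictive value from a
    2x2 confusion table of predicted vs. confirmed mutation carriers."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
    }

# Hypothetical counts for a low cut-off: every carrier is flagged
# (perfect sensitivity) but most flagged patients are non-carriers,
# which is why the PPV at a permissive cut-off is so low.
m = screening_metrics(tp=10, fp=480, fn=0, tn=510)
print(m)  # sensitivity 1.0, specificity ~0.515, ppv ~0.020
```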
Mathematical model for adaptive evolution of populations based on a complex domain
Ibrahim, Rabha W.; Ahmad, M.Z.; Al-Janaby, Hiba F.
2015-01-01
A mutation is ultimately essential for adaptive evolution in all populations. Mutations arise all the time, but most are repaired by enzymes. Moreover, evolution is generally considered to proceed by natural selection acting on variation among organisms that arises from random changes in their DNA, and the evidence for this is overwhelming. A mutation is an alteration of the structure of a gene, resulting in a variant form that may be transmitted to succeeding generations; it is caused by the modification of single base units in DNA, or by the deletion, insertion, or rearrangement of larger units of chromosomes or genes. In this paper, a mathematical model is introduced for this process. The model describes the time and space of the evolution, with the space represented by a complex domain. We show that the evolution is distributed according to the hypergeometric function, and boundedness of the evolution is imposed by utilizing the Koebe function. PMID:26858564
Meisner, Søren; Lehur, Paul-Antoine; Moran, Brendan; Martins, Lina; Jemec, Gregor Borut Ernst
2012-01-01
Background Peristomal skin complications (PSCs) are the most common post-operative complications following creation of a stoma. Living with a stoma is a challenge, not only for the patient and their carers, but also for society as a whole. Due to methodological problems of PSC assessment, the associated health-economic burden of medium- to long-term complications has been poorly described. Aim The aim of the present study was to create a model to estimate treatment costs of PSCs using the standardized assessment Ostomy Skin Tool as a reference. The resultant model was applied to a real-life global data set of stoma patients (n = 3017) to determine the prevalence and financial burden of PSCs. Methods Eleven experienced stoma care nurses were interviewed to get a global understanding of a treatment algorithm that formed the basis of the cost analysis. The estimated costs were based on a seven-week treatment period. PSC costs were estimated for five underlying diagnostic categories and three levels of severity. The estimated treatment costs of severe cases of PSCs were increased 2–5 fold for the different diagnostic categories of PSCs compared with mild cases. French unit costs were applied to the global data set. Results The estimated total average cost for a seven-week treatment period (including appliances and accessories) was 263€ for those with PSCs (n = 1742) compared to 215€ for those without PSCs (n = 1172). A covariance analysis showed that leakage level had a significant impact on PSC cost from ‘rarely/never’ to ‘always/often’ p<0.00001 and from ‘rarely/never’ to ‘sometimes’ p = 0.0115. Conclusion PSCs are common and troublesome and the consequences are substantial, both for the patient and from a health economic viewpoint. PSCs should be diagnosed and treated at an early stage to prevent long-term, debilitating and expensive complications. PMID:22679479
Analysis of amyotrophic lateral sclerosis as a multistep process: a population-based modelling study
Al-Chalabi, Ammar; Calvo, Andrea; Chio, Adriano; Colville, Shuna; Ellis, Cathy M; Hardiman, Orla; Heverin, Mark; Howard, Robin S; Huisman, Mark H B; Keren, Noa; Leigh, P Nigel; Mazzini, Letizia; Mora, Gabriele; Orrell, Richard W; Rooney, James; Scott, Kirsten M; Scotton, William J; Seelen, Meinie; Shaw, Christopher E; Sidle, Katie S; Swingler, Robert; Tsuda, Miho; Veldink, Jan H; Visser, Anne E; van den Berg, Leonard H; Pearce, Neil
2014-01-01
Summary Background Amyotrophic lateral sclerosis shares characteristics with some cancers, such as onset being more common in later life, progression usually being rapid, the disease affecting a particular cell type, and showing complex inheritance. We used a model originally applied to cancer epidemiology to investigate the hypothesis that amyotrophic lateral sclerosis is a multistep process. Methods We generated incidence data by age and sex from amyotrophic lateral sclerosis population registers in Ireland (registration dates 1995–2012), the Netherlands (2006–12), Italy (1995–2004), Scotland (1989–98), and England (2002–09), and calculated age and sex-adjusted incidences for each register. We regressed the log of age-specific incidence against the log of age with least squares regression. We did the analyses within each register, and also did a combined analysis, adjusting for register. Findings We identified 6274 cases of amyotrophic lateral sclerosis from a catchment population of about 34 million people. We noted a linear relationship between log incidence and log age in all five registers: England r2=0·95, Ireland r2=0·99, Italy r2=0·95, the Netherlands r2=0·99, and Scotland r2=0·97; overall r2=0·99. All five registers gave similar estimates of the linear slope ranging from 4·5 to 5·1, with overlapping confidence intervals. The combination of all five registers gave an overall slope of 4·8 (95% CI 4·5–5·0), with similar estimates for men (4·6, 4·3–4·9) and women (5·0, 4·5–5·5). Interpretation A linear relationship between the log incidence and log age of onset of amyotrophic lateral sclerosis is consistent with a multistage model of disease. The slope estimate suggests that amyotrophic lateral sclerosis is a six-step process. Identification of these steps could lead to preventive and therapeutic avenues. Funding UK Medical Research Council; UK Economic and Social Research Council; Ireland Health Research Board; The
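Under the Armitage-Doll-style multistage model used in this study, log incidence is linear in log age with slope equal to the number of steps minus one. A minimal least-squares sketch on synthetic data (not the registry values):

```python
import math

def loglog_slope(ages, incidence):
    """Least-squares slope of log(incidence) against log(age)."""
    xs = [math.log(a) for a in ages]
    ys = [math.log(r) for r in incidence]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

# Synthetic incidence following a pure age^5 power law: a multistage
# process with s steps predicts slope s - 1, so slope 5 implies 6 steps.
ages = [45, 55, 65, 75, 85]
rates = [1e-9 * a**5 for a in ages]
print(loglog_slope(ages, rates))  # ~5.0, i.e. a six-step process
```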
Watson, Diane E.; Broemeling, Anne-Marie; Wong, Sabrina T.
2009-01-01
A conceptual framework for population-based information systems is needed if these data are to be created and used to generate information to support healthcare policy, management and practice communities that seek to improve quality and account for progress in primary healthcare (PHC) renewal. This paper describes work conducted in British Columbia since 2003 to (1) create a Results-Based Logic Model for PHC using the approach of the Treasury Board of Canada in designing management and accountability frameworks, together with a literature review, policy analysis and broad consultation with approximately 650 people, (2) identify priorities for information within that logic model, (3) use the logic model and priorities within it to implement performance measurement and research and (4) identify how information systems need to be structured to assess the impact of variation or change in PHC inputs, activities and outputs on patient, population and healthcare system outcomes. The resulting logic model distinguishes among outcomes for which the PHC sector should be held more or less accountable. PMID:21037902
Effect of model uncertainty on failure detection - The threshold selector
NASA Technical Reports Server (NTRS)
Emami-Naeini, Abbas; Akhter, Muhammad M.; Rock, Stephen M.
1988-01-01
The performance of all failure detection, isolation, and accommodation (DIA) algorithms is influenced by the presence of model uncertainty. A unique framework is presented to incorporate a knowledge of modeling error in the analysis and design of failure detection systems. The tools being used are very similar to those in robust control theory. A concept is introduced called the threshold selector, which is a nonlinear inequality whose solution defines the set of detectable sensor failure signals. The threshold selector represents an innovative tool for analysis and synthesis of DIA algorithms. It identifies the optimal threshold to be used in innovations-based DIA algorithms. The optimal threshold is shown to be a function of the bound on modeling errors, the noise properties, the speed of DIA filters, and the classes of reference and failure signals. The size of the smallest detectable failure is also determined. The results are applied to a multivariable turbofan jet engine example, which demonstrates improvements compared to previous studies.
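A drastically simplified stand-in for the threshold-selector idea: an innovations-based detector whose fixed threshold combines a worst-case model-error bound with a noise margin. The bound, noise level, residual sequence, and the scalar form itself are all assumptions for illustration, not the paper's inequality:

```python
def detect_failure(innovations, model_error_bound, noise_sigma, k=3.0):
    """Innovations-based failure detection with a fixed threshold.

    The threshold combines a worst-case modelling-error bound with a
    k-sigma noise margin -- a simplified, scalar stand-in for the
    threshold-selector inequality (which also accounts for filter
    speed and the classes of reference and failure signals)."""
    threshold = model_error_bound + k * noise_sigma
    return [abs(e) > threshold for e in innovations]

# Hypothetical innovations sequence: a sensor bias appears halfway.
residuals = [0.1, -0.2, 0.15, 0.05, 1.4, 1.6, 1.5]
print(detect_failure(residuals, model_error_bound=0.5, noise_sigma=0.1))
```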
Uncertainties in the Modelled CO2 Threshold for Antarctic Glaciation
NASA Technical Reports Server (NTRS)
Gasson, E.; Lunt, D. J.; DeConto, R.; Goldner, A.; Heinemann, M.; Huber, M.; LeGrande, A. N.; Pollard, D.; Sagoo, N.; Siddall, M.; Winguth, A.; Valdes, P. J.
2014-01-01
The frequently cited atmospheric CO2 threshold for the onset of Antarctic glaciation of approximately 780 parts per million by volume is based on the study of DeConto and Pollard (2003) using an ice sheet model and the GENESIS climate model. Proxy records suggest that atmospheric CO2 concentrations passed through this threshold across the Eocene-Oligocene transition approximately 34 million years ago. However, atmospheric CO2 concentrations may have been close to this threshold earlier than this transition, which is used by some to suggest the possibility of Antarctic ice sheets during the Eocene. Here we investigate the climate model dependency of the threshold for Antarctic glaciation by performing offline ice sheet model simulations using the climate from 7 different climate models with Eocene boundary conditions (HadCM3L, CCSM3, CESM1.0, GENESIS, FAMOUS, ECHAM5 and GISS_ER). These climate simulations are sourced from a number of independent studies, and as such the boundary conditions, which are poorly constrained during the Eocene, are not identical between simulations. The results of this study suggest that the atmospheric CO2 threshold for Antarctic glaciation is highly dependent on the climate model used and the climate model configuration. A large discrepancy between the climate model and ice sheet model grids for some simulations leads to a strong sensitivity to the lapse rate parameter.
Octave-Band Thresholds for Modeled Reverberant Fields
NASA Technical Reports Server (NTRS)
Begault, Durand R.; Wenzel, Elizabeth M.; Tran, Laura L.; Anderson, Mark R.; Trejo, Leonard J. (Technical Monitor)
1998-01-01
Auditory thresholds for 10 subjects were obtained for speech stimuli in reverberation. The reverberation was produced and manipulated by 3-D audio modeling based on an actual room. The independent variables were octave-band filtering (bypassed, 0.25-2.0 kHz Fc) and reverberation time (0.2-1.1 sec). An ANOVA revealed significant effects (threshold range: -19 to -35 dB re 60 dB SRL).
Setting conservation management thresholds using a novel participatory modeling approach.
Addison, P F E; de Bie, K; Rumpff, L
2015-10-01
We devised a participatory modeling approach for setting management thresholds that show when management intervention is required to address undesirable ecosystem changes. This approach was designed to be used when management thresholds: must be set for environmental indicators in the face of multiple competing objectives; need to incorporate scientific understanding and value judgments; and will be set by participants with limited modeling experience. We applied our approach to a case study where management thresholds were set for a mat-forming brown alga, Hormosira banksii, in a protected area management context. Participants, including management staff and scientists, were involved in a workshop to test the approach, and set management thresholds to address the threat of trampling by visitors to an intertidal rocky reef. The approach involved trading off the environmental objective, to maintain the condition of intertidal reef communities, with social and economic objectives to ensure management intervention was cost-effective. Ecological scenarios, developed using scenario planning, were a key feature that provided the foundation for where to set management thresholds. The scenarios developed represented declines in percent cover of H. banksii that may occur under increased threatening processes. Participants defined 4 discrete management alternatives to address the threat of trampling and estimated the effect of these alternatives on the objectives under each ecological scenario. A weighted additive model was used to aggregate participants' consequence estimates. Model outputs (decision scores) clearly expressed uncertainty, which can be considered by decision makers and used to inform where to set management thresholds. This approach encourages a proactive form of conservation, where management thresholds and associated actions are defined a priori for ecological indicators, rather than reacting to unexpected ecosystem changes in the future. PMID:26040608
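The weighted additive aggregation used for the decision scores can be sketched directly; the objectives, weights, and consequence estimates below are hypothetical, not the workshop's elicited values:

```python
def decision_score(weights, consequences):
    """Weighted additive aggregation of consequence estimates.

    weights: objective -> weight (should sum to 1)
    consequences: objective -> estimated consequence, normalised to [0, 1]
    """
    return sum(weights[k] * consequences[k] for k in weights)

# Hypothetical example: trade off an environmental objective against
# social and economic ones for two management alternatives.
weights = {"environmental": 0.5, "social": 0.25, "economic": 0.25}
alt_a = {"environmental": 0.8, "social": 0.6, "economic": 0.4}
alt_b = {"environmental": 0.3, "social": 0.9, "economic": 0.9}
print(decision_score(weights, alt_a))  # ~0.65
print(decision_score(weights, alt_b))  # ~0.60
```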
Cascades in the Threshold Model for varying system sizes
NASA Astrophysics Data System (ADS)
Karampourniotis, Panagiotis; Sreenivasan, Sameet; Szymanski, Boleslaw; Korniss, Gyorgy
2015-03-01
A classical model in opinion dynamics is the Threshold Model (TM) aiming to model the spread of a new opinion based on the social drive of peer pressure. Under the TM a node adopts a new opinion only when the fraction of its first neighbors possessing that opinion exceeds a pre-assigned threshold. Cascades in the TM depend on multiple parameters, such as the number and selection strategy of the initially active nodes (initiators), and the threshold distribution of the nodes. For a uniform threshold in the network there is a critical fraction of initiators for which a transition from small to large cascades occurs, which for ER graphs is largely independent of the system size. Here, we study the spread contribution of each newly assigned initiator under the TM for different initiator selection strategies for synthetic graphs of various sizes. We observe that for ER graphs when large cascades occur, the spread contribution of the added initiator on the transition point is independent of the system size, while the contribution of the rest of the initiators converges to zero at infinite system size. This property is used for the identification of large transitions for various threshold distributions. Supported in part by ARL NS-CTA, ARO, ONR, and DARPA.
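A minimal sketch of the Threshold Model dynamics described above: a uniform-threshold cascade on an Erdos-Renyi graph with randomly chosen initiators. All parameter values are illustrative, not those of the study:

```python
import random

def threshold_cascade(n, p, theta, initiators, seed=1):
    """Watts-style Threshold Model on an Erdos-Renyi graph G(n, p).

    A node adopts the new opinion when the fraction of its neighbours
    that have adopted exceeds the uniform threshold theta. Returns the
    final active fraction of the network."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    active = set(initiators)
    changed = True
    while changed:  # iterate to a fixed point
        changed = False
        for v in range(n):
            if v in active or not adj[v]:
                continue
            if len(adj[v] & active) / len(adj[v]) > theta:
                active.add(v)
                changed = True
    return len(active) / n

# Hypothetical run: cascade size vs. number of random initiators.
random.seed(0)
for k in (5, 50):
    init = random.sample(range(200), k)
    print(k, threshold_cascade(200, 0.05, 0.3, init))
```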
Modeling threshold detection and search for point and extended sources
NASA Astrophysics Data System (ADS)
Friedman, Melvin
2016-05-01
This paper deals with three separate topics. 1) The Berek extended object threshold detection model is described, calibrated against a portion of Blackwell's 1946 naked eye threshold detection data for extended objects against an unstructured background, and then the remainder of Blackwell's data is used to verify and validate the model. A range equation is derived from Berek's model which allows threshold detection range to be predicted for extended to point objects against an uncluttered background as a function of target size and adapting luminance levels. The range equation is then used to model threshold detection of stationary reflective and self-luminous targets against an uncluttered background. 2) There is uncertainty whether Travnikova's search data for point source detection against an uncluttered background is described by Rayleigh or exponential distributions. A model which explains the Rayleigh distribution for barely perceptible objects and the exponential distribution for brighter objects is given. 3) A technique is presented which allows a specific observer's target acquisition capability to be characterized. Then a model is presented which describes how individual target acquisition probability grows when a specific observer or combination of specific observers search for targets. Applications for the three topics are discussed.
Inflection, canards and excitability threshold in neuronal models.
Desroches, M; Krupa, M; Rodrigues, S
2013-10-01
A technique is presented, based on the differential geometry of planar curves, to evaluate the excitability threshold of neuronal models. The aim is to determine regions of the phase plane where solutions to the model equations have zero local curvature, thereby defining a zero-curvature (inflection) set that discerns between sub-threshold and spiking electrical activity. This transition can arise through a Hopf bifurcation, via the so-called canard explosion that happens in an exponentially small parameter variation, and this is typical for a large class of planar neuronal models (FitzHugh-Nagumo, reduced Hodgkin-Huxley), namely, type II neurons (resonators). This transition can also correspond to the crossing of the stable manifold of a saddle equilibrium, in the case of type I neurons (integrators). We compute inflection sets and study how well they approximate the excitability threshold of these neuron models, that is, both in the canard and in the non-canard regime, using tools from invariant manifold theory and singularity theory. With the latter, we investigate the topological changes that inflection sets undergo upon parameter variation. Finally, we show that the concept of inflection set gives a good approximation of the threshold in both the so-called resonator and integrator neuronal cases. PMID:22945512
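The zero-curvature (inflection) set can be located numerically: along a solution of a planar field F = (f, g), the curvature is proportional to f(dg/dt) - g(df/dt) with d/dt taken along the flow, and it vanishes exactly on the set. A sketch for the FitzHugh-Nagumo model with illustrative parameter values (not the paper's computation):

```python
def fhn(v, w, I=0.5, eps=0.08, a=0.7, b=0.8):
    """FitzHugh-Nagumo vector field (illustrative parameter values)."""
    return v - v**3 / 3 - w + I, eps * (v + a - b * w)

def trajectory_curvature(v, w, h=1e-5):
    """Proportional to the curvature of the solution through (v, w):
    kappa ~ f*(dg/dt) - g*(df/dt), derivatives taken along the flow.
    It vanishes on the zero-curvature (inflection) set."""
    f, g = fhn(v, w)
    # finite-difference partial derivatives of the vector field
    fv = (fhn(v + h, w)[0] - fhn(v - h, w)[0]) / (2 * h)
    fw = (fhn(v, w + h)[0] - fhn(v, w - h)[0]) / (2 * h)
    gv = (fhn(v + h, w)[1] - fhn(v - h, w)[1]) / (2 * h)
    gw = (fhn(v, w + h)[1] - fhn(v, w - h)[1]) / (2 * h)
    return f * (gv * f + gw * g) - g * (fv * f + fw * g)

# Scan a horizontal line in the (v, w) phase plane for sign changes,
# which mark crossings of the inflection set.
prev = None
for i in range(-20, 21):
    v = i * 0.1
    k = trajectory_curvature(v, 1.0)
    if prev is not None and prev * k < 0:
        print("inflection-set crossing near v =", round(v, 2))
    prev = k
```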
Modeling the Interactions Between Multiple Crack Closure Mechanisms at Threshold
NASA Technical Reports Server (NTRS)
Newman, John A.; Riddell, William T.; Piascik, Robert S.
2003-01-01
A fatigue crack closure model is developed that includes interactions between the three closure mechanisms most likely to occur at threshold; plasticity, roughness, and oxide. This model, herein referred to as the CROP model (for Closure, Roughness, Oxide, and Plasticity), also includes the effects of out-of plane cracking and multi-axial loading. These features make the CROP closure model uniquely suited for, but not limited to, threshold applications. Rough cracks are idealized here as two-dimensional sawtooths, whose geometry induces mixed-mode crack- tip stresses. Continuum mechanics and crack-tip dislocation concepts are combined to relate crack face displacements to crack-tip loads. Geometric criteria are used to determine closure loads from crack-face displacements. Finite element results, used to verify model predictions, provide critical information about the locations where crack closure occurs.
Effect of threshold disorder on the quorum percolation model.
Monceau, Pascal; Renault, Renaud; Métens, Stéphane; Bottani, Samuel
2016-07-01
We study the modifications induced in the behavior of the quorum percolation model on neural networks with Gaussian in-degree by taking into account an uncorrelated Gaussian threshold variability. We derive a mean-field approach and show its relevance by carrying out explicit Monte Carlo simulations. It turns out that such a disorder shifts the position of the percolation transition, impacts the size of the giant cluster, and can even destroy the transition. Moreover, we highlight the occurrence of disorder-independent fixed points above the quorum critical value. The mean-field approach enables us to interpret these effects in terms of activation probability. A finite-size analysis enables us to show that the order parameter is weakly self-averaging with an exponent independent of the threshold disorder. Last, we show that the effects of the threshold and connectivity disorders cannot be easily discriminated from the measured averaged physical quantities. PMID:27575157
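A simplified mean-field sketch in the spirit of the above, with a fixed in-degree k rather than the paper's Gaussian in-degree: each neuron fires if it is initially active or if at least theta of its k inputs are active, and thresholds follow a discretised distribution. All numbers are illustrative assumptions:

```python
import math

def binom_tail(k, p, m):
    """P[Binomial(k, p) >= m]."""
    return sum(math.comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range(m, k + 1))

def quorum_mean_field(k, thetas, weights, f, iters=200):
    """Self-consistent network activity for quorum percolation with
    fixed in-degree k (a simplification of the model above).

    A neuron with integer threshold theta fires if it is initially
    active (probability f) or at least theta of its k inputs fire.
    thetas/weights give a discretised threshold distribution."""
    phi = f
    for _ in range(iters):
        fire = sum(w * binom_tail(k, phi, t) for t, w in zip(thetas, weights))
        phi = f + (1 - f) * fire
    return phi

# Sharp threshold vs. disordered (spread) thresholds, same mean:
sharp = quorum_mean_field(10, [5], [1.0], f=0.2)
spread = quorum_mean_field(10, [3, 5, 7], [0.25, 0.5, 0.25], f=0.2)
print(round(sharp, 3), round(spread, 3))
```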
Oscillation threshold of a clarinet model: a numerical continuation approach.
Karkar, Sami; Vergez, Christophe; Cochelin, Bruno
2012-01-01
This paper focuses on the oscillation threshold of single reed instruments. Several characteristics such as blowing pressure at threshold, regime selection, and playing frequency are known to change radically when taking into account the reed dynamics and the flow induced by the reed motion. Previous works have shown interesting tendencies, using analytical expressions with simplified models. In the present study, a more elaborated physical model is considered. The influence of several parameters, depending on the reed properties, the design of the instrument or the control operated by the player, are studied. Previous results on the influence of the reed resonance frequency are confirmed. New results concerning the simultaneous influence of two model parameters on oscillation threshold, regime selection and playing frequency are presented and discussed. The authors use a numerical continuation approach. Numerical continuation consists in following a given solution of a set of equations when a parameter varies. Considering the instrument as a dynamical system, the oscillation threshold problem is formulated as a path following of Hopf bifurcations, generalizing the usual approach of the characteristic equation, as used in previous works. The proposed numerical approach proves to be useful for the study of musical instruments. It is complementary to analytical analysis and direct time-domain or frequency-domain simulations since it allows one to derive information that is hardly reachable through simulation, without the approximations needed for an analytical approach. PMID:22280691
Grebenstein, Patricia E.; Burroughs, Danielle; Roiko, Samuel A.; Pentel, Paul R.; LeSage, Mark G.
2015-01-01
Background The FDA is considering reducing the nicotine content in tobacco products as a population-based strategy to reduce tobacco addiction. Research is needed to determine the threshold level of nicotine needed to maintain smoking and the extent of compensatory smoking that could occur during nicotine reduction. Sources of variability in these measures across sub-populations also need to be identified so that policies can take into account the risks and benefits of nicotine reduction in vulnerable populations. Methods The present study examined these issues in a rodent nicotine self-administration model of nicotine reduction policy to characterize individual differences in nicotine reinforcement thresholds, degree of compensation, and elasticity of demand during progressive reduction of the unit nicotine dose. The ability of individual differences in baseline nicotine intake and nicotine pharmacokinetics to predict responses to dose reduction was also examined. Results Considerable variability in the reinforcement threshold, compensation, and elasticity of demand was evident. High baseline nicotine intake was not correlated with the reinforcement threshold, but predicted less compensation and less elastic demand. Higher nicotine clearance predicted low reinforcement thresholds, greater compensation, and less elastic demand. Less elastic demand also predicted lower reinforcement thresholds. Conclusions These findings suggest that baseline nicotine intake, nicotine clearance, and the essential value of nicotine (i.e. elasticity of demand) moderate the effects of progressive nicotine reduction in rats and warrant further study in humans. They also suggest that smokers with fast nicotine metabolism may be more vulnerable to the risks of nicotine reduction. PMID:25891231
Discrete threshold versus continuous strength models of perceptual recognition.
Paap, K R; Chun, E; Vonnahme, P
1999-12-01
Two experiments were designed to test discrete-threshold models of letter and word recognition against models that assume that decision criteria are applied to measures of continuous strength. Although our goal is to adjudicate this matter with respect to broad classes of models, some of the specific predictions for discrete-threshold models are generated from Grainger and Jacobs' (1994) Dual-Readout Model (DROM) and some of the predictions for continuous-strength models are generated from a revised version of the Activation-Verification Model (Paap, Newsome, McDonald, & Schvaneveldt, 1982). Experiment 1 uses a two-alternative forced-choice task that is followed by an assessment of confidence and then a whole report if a word is recognized. Factors are manipulated to assess the presence or magnitude of a neighbourhood-frequency effect, a lexical-bias effect, a word-superiority effect, and a pseudoword advantage. Several discrepancies between DROM's predictions and the obtained data are noted. Both types of models were also used to predict the distribution of responses across the levels of confidence for each individual participant. The predictions based on continuous strength were superior. Experiment 2 used a same-different task and confidence ratings to enable the generation of receiver operating characteristics (ROCs). The shapes of the ROCs are more consistent with the continuous strength assumption than with a discrete threshold. PMID:10646200
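The qualitative difference the ROC analysis exploits: a discrete high-threshold model predicts straight-line ROCs, while a continuous-strength (equal-variance Gaussian) model predicts curved ones. A minimal sketch of both predictions, with illustrative parameter values:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def roc_continuous(d_prime, criteria):
    """Equal-variance Gaussian (continuous-strength) model:
    (false-alarm, hit) pairs trace a curved ROC."""
    return [(1 - phi(c), 1 - phi(c - d_prime)) for c in criteria]

def roc_high_threshold(p_detect, guess_rates):
    """Discrete high-threshold model: hits = detect or guess,
    false alarms = guess only, so the ROC is a straight line
    from (0, p_detect) to (1, 1)."""
    return [(g, p_detect + (1 - p_detect) * g) for g in guess_rates]

print(roc_high_threshold(0.5, [0.0, 0.5, 1.0]))
print([(round(f, 3), round(h, 3))
       for f, h in roc_continuous(1.5, [-1.0, 0.0, 1.0])])
```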
A phenomenological model of myelinated nerve with a dynamic threshold.
Morse, R P; Allingham, D; Stocks, N G
2015-10-01
To evaluate coding strategies for cochlear implants a model of the human cochlear nerve is required. Nerve models based on voltage-clamp experiments, such as the Frankenhaeuser-Huxley model of myelinated nerve, can have over forty parameters and are not amenable for fitting to physiological data from a different animal or type of nerve. Phenomenological nerve models, such as leaky integrate-and-fire (LIF) models, have fewer parameters but have not been validated with a wide range of stimuli. In the absence of substantial cochlear nerve data, we have used data from a toad sciatic nerve for validation (50 Hz to 2 kHz with levels up to 20 dB above threshold). We show that the standard LIF model with fixed refractory properties and a single set of parameters cannot adequately predict the toad rate-level functions. Given the deficiency of this standard model, we have abstracted the dynamics of the sodium inactivation variable in the Frankenhaeuser-Huxley model to develop a phenomenological LIF model with a dynamic threshold. This nine-parameter model predicts the physiological rate-level functions much more accurately than the standard LIF model. Because of the low number of parameters, we expect to be able to optimize the model parameters so that the model is more appropriate for cochlear implant simulations. PMID:26141642
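A minimal sketch of a leaky integrate-and-fire unit with a dynamic threshold that jumps at each spike and relaxes back, loosely abstracting sodium inactivation as the paper does. All parameter values are illustrative, not the nine fitted values:

```python
def lif_dynamic_threshold(current, dt=1e-4, t_end=0.1, tau_m=0.01,
                          v_rest=0.0, theta_rest=1.0, tau_th=0.02,
                          dtheta=0.5):
    """Leaky integrate-and-fire neuron with a dynamic threshold.

    After each spike the threshold jumps by dtheta and relaxes back
    to theta_rest with time constant tau_th. Returns spike times (s).
    Parameter values are illustrative assumptions."""
    v, theta = v_rest, theta_rest
    spikes, t = [], 0.0
    while t < t_end:
        v += dt * (-(v - v_rest) + current) / tau_m   # leak + drive
        theta += dt * (theta_rest - theta) / tau_th   # threshold decay
        if v >= theta:
            spikes.append(t)
            v = v_rest
            theta += dtheta
        t += dt
    return spikes

# Crude rate-level function: spike rate vs. input drive. The dynamic
# threshold makes the rate grow sublinearly at high drive.
for drive in (0.5, 1.5, 6.0):
    print(drive, len(lif_dynamic_threshold(drive)) / 0.1, "spikes/s")
```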
NASA Astrophysics Data System (ADS)
Chai, Xiangfei; van Herk, Marcel; Betgen, Anja; Hulshof, Maarten; Bel, Arjan
2012-12-01
The aim of this study is to develop a novel semiautomatic bladder segmentation approach for selecting the appropriate plan from the library of plans for a multiple-plan adaptive radiotherapy (ART) procedure. A population-based statistical bladder model was first built from a training data set (95 bladder contours from 8 patients). This model was then used as constraint to segment the bladder in an independent validation data set (233 CBCT scans from the remaining 22 patients). All 3D bladder contours were converted into parametric surface representations using spherical harmonic expansion. Principal component analysis (PCA) was applied in the spherical harmonic-based shape parameter space to calculate the major variation of bladder shapes. The number of dominating PCA modes was chosen such that 95% of the total shape variation of the training data set was described. The automatic segmentation started from the bladder contour of the planning CT of each patient, which was modified by changing the weight of each PCA mode. As a result, the segmentation contour was deformed consistently with the training set to best fit the bladder boundary in the localization CBCT image. A cost function was defined to measure the goodness of fit of the segmentation on the localization CBCT image. The segmentation was obtained by minimizing this cost function using a simplex optimizer. After automatic segmentation, a fast manual correction method was provided to correct those bladders (parts) that were poorly segmented. Volume- and distance-based metrics and the accuracy of plan selection from multiple plans were evaluated to quantify the performance of the automatic and semiautomatic segmentation methods. For the training data set, only seven PCA modes were needed to represent 95% of the bladder shape variation. The mean CI overlap and residual error (SD) of automatic bladder segmentation over all of the validation data were 70.5% and 0.39 cm, respectively. The agreement of plan
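Choosing the number of dominating PCA modes reduces to a cumulative-variance cut on the eigenvalue spectrum of the shape-parameter covariance. The eigenvalues below are invented, arranged so that seven modes reach the 95% criterion, matching the count reported for the training set:

```python
def modes_for_variance(eigenvalues, fraction=0.95):
    """Smallest number of leading PCA modes whose cumulative variance
    reaches `fraction` of the total (eigenvalues sorted descending)."""
    total = sum(eigenvalues)
    cum = 0.0
    for k, ev in enumerate(eigenvalues, start=1):
        cum += ev
        if cum / total >= fraction:
            return k
    return len(eigenvalues)

# Hypothetical eigenvalue spectrum of the shape-parameter covariance:
eigs = [40.0, 25.0, 12.0, 8.0, 6.0, 3.0, 2.5, 1.5, 1.0, 1.0]
print(modes_for_variance(eigs))  # 7 modes cover >= 95% of the variance
```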
Compact modeling of perpendicular nanomagnetic logic based on threshold gates
NASA Astrophysics Data System (ADS)
Breitkreutz, Stephan; Eichwald, Irina; Kiermaier, Josef; Csaba, Gyorgy; Schmitt-Landsiedel, Doris; Becherer, Markus
2014-05-01
In this work, we show that physics-based compact modeling of perpendicular Nanomagnetic Logic is crucial for the design and simulation of complex circuitry. A compact model for field-coupled nanomagnets based on an Arrhenius switching model and finite element calculations is introduced. As physical parameters have an enormous influence on the behavior of the circuit, their modeling is of great importance. As an example, a 1-bit full adder based on threshold logic gates is analyzed with respect to its reliability. The obtained findings are used to design a purely magnetic arithmetic logic unit, which can be used for basic Boolean and logic operations.
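The Arrhenius switching behaviour at the heart of such compact models can be illustrated as follows. This is a hedged sketch: the attempt frequency `f0` and the barrier/temperature values used below are illustrative assumptions, not parameters from the paper.

```python
import math

# Arrhenius-type switching probability for a thermally activated magnet:
# rate = f0 * exp(-E / kT), and P(switch within dwell time) follows a
# Poisson (exponential waiting time) model. Values are illustrative only.
def switching_probability(barrier_joules, temperature_k, dwell_s, f0=1e9):
    k_b = 1.380649e-23  # Boltzmann constant, J/K
    rate = f0 * math.exp(-barrier_joules / (k_b * temperature_k))
    return 1.0 - math.exp(-rate * dwell_s)
```

Higher barriers suppress switching, which is the handle a compact model uses to encode clocking-field and coupling effects.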
The interplay between cooperativity and diversity in model threshold ensembles.
Cervera, Javier; Manzanares, José A; Mafe, Salvador
2014-10-01
The interplay between cooperativity and diversity is crucial for biological ensembles because single molecule experiments show a significant degree of heterogeneity and also for artificial nanostructures because of the high individual variability characteristic of nanoscale units. We study the cross-effects between cooperativity and diversity in model threshold ensembles composed of individually different units that show a cooperative behaviour. The units are modelled as statistical distributions of parameters (the individual threshold potentials here) characterized by central and width distribution values. The simulations show that the interplay between cooperativity and diversity results in ensemble-averaged responses of interest for the understanding of electrical transduction in cell membranes, the experimental characterization of heterogeneous groups of biomolecules and the development of biologically inspired engineering designs with individually different building blocks. PMID:25142516
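The diversity side of the model (units as a statistical distribution of threshold potentials with a centre and a width) can be sketched in a few lines. This is an assumption-laden toy, not the authors' model, and it omits the cooperative coupling between units; it only shows how the ensemble average smooths the single-unit step response.

```python
import random

# Each unit switches on when the input potential exceeds its own threshold;
# thresholds are drawn from a normal distribution with given centre and
# width (the distribution shape is an illustrative choice).
def ensemble_response(v_input, centre, width, n_units=10000, seed=1):
    rng = random.Random(seed)
    on = sum(1 for _ in range(n_units) if v_input > rng.gauss(centre, width))
    return on / n_units  # ensemble-averaged fraction of active units
```

A wider threshold distribution produces a shallower ensemble response around the centre, which is the basic diversity effect the paper builds on.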
Selection Strategies for Social Influence in the Threshold Model
NASA Astrophysics Data System (ADS)
Karampourniotis, Panagiotis; Szymanski, Boleslaw; Korniss, Gyorgy
The ubiquity of online social networks makes the study of social influence extremely significant for its applications to marketing, politics and security. Maximizing the spread of influence by strategically selecting nodes as initiators of a new opinion or trend is a challenging problem. We study the performance of various strategies for selection of large fractions of initiators on a classical social influence model, the Threshold model (TM). Under the TM, a node adopts a new opinion only when the fraction of its first neighbors possessing that opinion exceeds a pre-assigned threshold. The strategies we study are of two kinds: strategies based solely on the initial network structure (Degree-rank, Dominating Sets, PageRank etc.) and strategies that take into account the change of the states of the nodes during the evolution of the cascade, e.g. the greedy algorithm. We find that the performance of these strategies depends largely on both the network structure properties, e.g. the assortativity, and the distribution of the thresholds assigned to the nodes. We conclude that the optimal strategy needs to combine the network specifics and the model specific parameters to identify the most influential spreaders. Supported in part by ARL NS-CTA, ARO, and ONR.
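A minimal sketch of the Threshold Model cascade with degree-rank initiator selection follows. The example graphs, threshold value, and initiator fraction are illustrative assumptions, not settings from the study.

```python
# Threshold Model (TM): a node adopts the new opinion when the fraction
# of its neighbours holding it exceeds the node's threshold. Initiators
# are chosen by degree rank (one of the structure-based strategies).
def run_cascade(adjacency, threshold, initiator_fraction):
    """adjacency: dict node -> list of neighbours. Returns final active fraction."""
    by_degree = sorted(adjacency, key=lambda n: len(adjacency[n]), reverse=True)
    k = max(1, int(initiator_fraction * len(by_degree)))
    active = set(by_degree[:k])            # degree-rank initiators
    changed = True
    while changed:                         # iterate until no node flips
        changed = False
        for n in adjacency:
            if n in active:
                continue
            neigh = adjacency[n]
            if neigh and sum(v in active for v in neigh) / len(neigh) > threshold:
                active.add(n)
                changed = True
    return len(active) / len(adjacency)
```

On a star graph a single high-degree initiator triggers a full cascade, while on a ring with high thresholds the cascade stalls at the initiators, illustrating why performance depends on both structure and threshold distribution.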
Kapsokalivas, L; Gan, X; Albrecht, A A; Steinhöfel, K
2009-08-01
We present experimental results on benchmark problems in 3D cubic lattice structures with the Miyazawa-Jernigan energy function for two local search procedures that utilise the pull-move set: (i) population-based local search (PLS) that traverses the energy landscape with greedy steps towards (potential) local minima followed by upward steps up to a certain level of the objective function; (ii) simulated annealing with a logarithmic cooling schedule (LSA). The parameter settings for PLS are derived from short LSA-runs executed in pre-processing, and the procedure utilises tabu lists generated for each member of the population. In terms of the total number of energy function evaluations, both methods perform equally well; however, PLS has the potential of being parallelised with an expected speed-up in the region of the population size. Furthermore, both methods require a significantly smaller number of function evaluations when compared to Monte Carlo simulations with kink-jump moves. PMID:19647489
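The logarithmic cooling schedule used by LSA can be sketched on a toy landscape. This is an illustration under stated assumptions: the one-dimensional integer landscape and neighbour move stand in for the lattice-protein conformations and pull moves, and the schedule constant `c` is arbitrary.

```python
import math
import random

# Simulated annealing with a logarithmic cooling schedule T_k = c / ln(2 + k),
# Metropolis acceptance, and best-so-far tracking. The toy energy used in the
# test has its minimum at n = 7.
def lsa_minimise(energy, start, neighbours, steps=5000, c=1.0, seed=0):
    rng = random.Random(seed)
    x = best = start
    for k in range(steps):
        t = c / math.log(2 + k)            # logarithmic cooling
        y = rng.choice(neighbours(x))      # propose a move (pull-move analogue)
        delta = energy(y) - energy(x)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = y                          # Metropolis acceptance
            if energy(x) < energy(best):
                best = x
    return best
```

Logarithmic cooling decreases the temperature slowly enough that uphill escapes remain possible late in the run, which underlies LSA's convergence guarantees.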
A damage model based on failure threshold weakening
NASA Astrophysics Data System (ADS)
Gran, Joseph D.; Rundle, John B.; Turcotte, Donald L.; Holliday, James R.; Klein, William
2011-04-01
A variety of studies have modeled the physics of material deformation and damage as examples of generalized phase transitions, involving either critical phenomena or spinodal nucleation. Here we study a model for frictional sliding with long-range interactions and recurrent damage that is parameterized by a process of damage and partial healing during sliding. We introduce a failure threshold weakening parameter into the cellular automaton slider-block model which allows blocks to fail at a reduced failure threshold for all subsequent failures during an event. We show that a critical point is reached beyond which the probability of a system-wide event scales with this weakening parameter. We provide a mapping to the percolation transition, and show that the values of the scaling exponents approach the values for mean-field percolation (spinodal nucleation) as lattice size L is increased for fixed R. We also examine the effect of the weakening parameter on the frequency-magnitude scaling relationship and the ergodic behavior of the model.
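The weakening rule can be sketched in a toy one-dimensional automaton. Note the simplifications: nearest-neighbour stress transfer with dissipation replaces the paper's long-range interactions, and all parameter values below are illustrative assumptions.

```python
# Failure-threshold weakening: once a block fails during an event, it fails
# at a reduced threshold (1 - epsilon) * threshold for the rest of that event.
# Failed blocks drop their stress to zero and pass a fraction to neighbours.
def run_event(stress, threshold, epsilon, transfer=0.4):
    stress = list(stress)
    n = len(stress)
    thresh = [threshold] * n
    failures = 0
    unstable = [i for i in range(n) if stress[i] >= thresh[i]]
    while unstable:
        i = unstable.pop()
        failures += 1
        released, stress[i] = stress[i], 0.0
        thresh[i] = threshold * (1 - epsilon)   # weakened for this event
        for j in (i - 1, i + 1):                # share part of the load
            if 0 <= j < n:
                stress[j] += transfer * released
                if stress[j] >= thresh[j] and j not in unstable:
                    unstable.append(j)
    return failures
```

With weakening switched on (`epsilon > 0`), previously failed blocks can re-fail at the reduced threshold, so events grow larger than in the unweakened case.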
Threshold dynamics of a malaria transmission model in periodic environment
NASA Astrophysics Data System (ADS)
Wang, Lei; Teng, Zhidong; Zhang, Tailei
2013-05-01
In this paper, we propose a malaria transmission model with periodic environment. The basic reproduction number R0 is computed for the model and it is shown that the disease-free periodic solution of the model is globally asymptotically stable when R0<1, that is, the disease goes extinct when R0<1, while the disease is uniformly persistent and there is at least one positive periodic solution when R0>1. It indicates that R0 is the threshold value determining the extinction and the uniform persistence of the disease. Finally, some examples are given to illustrate the main theoretical results. The numerical simulations show that, when the disease is uniformly persistent, different dynamic behaviors may be found in this model, such as the global attractivity and the chaotic attractor.
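The threshold role of R0 can be demonstrated with a much simpler stand-in: a seasonally forced SIS model, where the time-averaged reproduction number mean(beta)/gamma decides between extinction and persistence. This toy is an illustration of the threshold phenomenon only, not the paper's malaria model.

```python
import math

# Discrete-time (Euler, dt = 1 day) SIS model with sinusoidal seasonal
# forcing of the transmission rate. Parameter values are illustrative.
def simulate_sis(beta_mean, beta_amp, gamma, i0=0.01, days=2000):
    i = i0
    for t in range(days):
        beta = beta_mean * (1 + beta_amp * math.sin(2 * math.pi * t / 365))
        i += beta * i * (1 - i) - gamma * i
        i = min(max(i, 0.0), 1.0)
    return i  # infected fraction at the end of the run
```

With mean(beta)/gamma = 3 the infection persists and oscillates around an endemic level; with mean(beta)/gamma = 0.5 it decays to zero, mirroring the R0 < 1 versus R0 > 1 dichotomy.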
Model to Estimate Threshold Mechanical Stability of Lower Lateral Cartilage
Kim, James Hakjune; Hamamoto, Ashley; Kiyohara, Nicole; Wong, Brian J. F.
2015-01-01
IMPORTANCE In rhinoplasty, techniques used to alter the shape of the nasal tip often compromise the structural stability of the cartilage framework in the nose. Determining the minimum threshold level of cartilage stiffness required to maintain long-term structural stability is a critical aspect in performing these surgical maneuvers. OBJECTIVE To quantify the minimum threshold mechanical stability (elastic modulus) of lower lateral cartilage (LLC) according to expert opinion. METHODS Five anatomically correct LLC phantoms were made from urethane via a 3-dimensional computer modeling and injection molding process. All 5 had identical geometry but varied in stiffness along the intermediate crural region (0.63–30.6 MPa). DESIGN, SETTING, AND PARTICIPANTS A focus group of experienced rhinoplasty surgeons (n = 33) was surveyed at a regional professional meeting on October 25, 2013. Each survey participant was presented the 5 phantoms in a random order and asked to arrange the phantoms in order of increasing stiffness based on their sense of touch. Then, they were asked to select a single phantom out of the set that they believed to have the minimum acceptable mechanical stability for LLC to maintain proper form and function. MAIN OUTCOMES AND MEASURES A binary logistic regression was performed to calculate the probability of mechanical acceptability as a function of the elastic modulus of the LLC based on survey data. A Hosmer-Lemeshow test was performed to measure the goodness of fit between the logistic regression and survey data. The minimum threshold mechanical stability for LLC was taken at a 50% acceptability rating. RESULTS Phantom 4 was selected most frequently by the participants as having the minimum acceptable stiffness for the intermediate crural region of the LLC. The minimum threshold mechanical stability for LLC was determined to be 3.65 MPa. The Hosmer-Lemeshow test revealed good fit between the logistic regression and survey data (χ2 = 0.92, df = 3, P = .82). CONCLUSIONS AND
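The 50%-acceptability read-out step can be made concrete. With a fitted binary logistic model P(acceptable | E) = 1 / (1 + exp(-(b0 + b1·E))), the threshold modulus is simply the E at which P = 0.5, i.e. E = -b0/b1. The coefficients below are hypothetical values chosen so the threshold lands near the reported 3.65 MPa; they are not the study's fit.

```python
import math

# Logistic acceptability curve and its 50% crossing point.
def acceptability(e_modulus, b0, b1):
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * e_modulus)))

def threshold_modulus(b0, b1):
    return -b0 / b1     # solves b0 + b1 * E = 0, where P = 0.5

b0, b1 = -2.92, 0.8     # hypothetical coefficients, for illustration only
```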
Terrestrial Microgravity Model and Threshold Gravity Simulation using Magnetic Levitation
NASA Technical Reports Server (NTRS)
Ramachandran, N.
2005-01-01
What is the threshold gravity (minimum gravity level) required for the nominal functioning of the human system? What dosage is required? Do human cell lines behave differently in microgravity in response to an external stimulus? The critical need for such a gravity simulator is emphasized by recent experiments on human epithelial cells and lymphocytes on the Space Shuttle clearly showing that cell growth and function are markedly different from those observed terrestrially. Those differences are also dramatic between cells grown in space and those in Rotating Wall Vessels (RWV), or NASA bioreactor often used to simulate microgravity, indicating that although morphological growth patterns (three dimensional growth) can be successfully simulated using RWVs, cell function performance is not reproduced - a critical difference. If cell function is dramatically affected by gravity off-loading, then cell response to stimuli such as radiation, stress, etc. can be very different from terrestrial cell lines. Yet, we have no good gravity simulator for use in study of these phenomena. This represents a profound shortcoming for countermeasures research. We postulate that we can use magnetic levitation of cells and tissue, through the use of strong magnetic fields and field gradients, as a terrestrial microgravity model to study human cells. Specific objectives of the research are: 1. To develop a tried, tested and benchmarked terrestrial microgravity model for cell culture studies; 2. Gravity threshold determination; 3. Dosage (magnitude and duration) of g-level required for nominal functioning of cells; 4. Comparisons of magnetic levitation model to other models such as RWV, hind limb suspension, etc. and 5. Cellular response to reduced gravity levels of Moon and Mars. The paper will discuss experiments and modeling work to date in support of this project.
A model based rule for selecting spiking thresholds in neuron models.
Mikkelsen, Frederik Riis
2016-06-01
Determining excitability thresholds in neuronal models is of high interest due to its applicability in separating spiking from non-spiking phases of neuronal membrane potential processes. However, excitability thresholds are known to depend on various auxiliary variables, including any conductance or gating variables. Such dependences pose a double-edged sword: they are natural consequences of the complexity of the model, but they prove difficult to apply in practice, since gating variables are rarely measured. In this paper a technique for finding excitability thresholds, based on the local behaviour of the flow in dynamical systems, is presented. The technique incorporates the dynamics of the auxiliary variables, yet only produces thresholds for the membrane potential. The method is applied to several classical neuron models and the threshold's dependence upon external parameters is studied, along with a general evaluation of the technique. PMID:27106187
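The underlying notion of an excitability threshold can be illustrated with a cruder method than the paper's flow-based technique: bisect on the amplitude of a voltage perturbation from rest in the FitzHugh-Nagumo model and check whether a full spike follows. All numerical choices below (Euler integration, spike criterion v > 1, parameter values) are illustrative assumptions.

```python
# FitzHugh-Nagumo: v' = v - v^3/3 - w, w' = eps * (v + a - b * w).
# Resting state below was computed numerically for a = 0.7, b = 0.8.
V_REST, W_REST = -1.1994, -0.6243

def fhn_spikes(amplitude, a=0.7, b=0.8, eps=0.08, dt=0.01, steps=5000):
    v, w = V_REST + amplitude, W_REST
    peak = v
    for _ in range(steps):               # forward-Euler integration
        v, w = v + dt * (v - v ** 3 / 3 - w), w + dt * eps * (v + a - b * w)
        peak = max(peak, v)
    return peak > 1.0                    # a full spike overshoots v = 1

def threshold_amplitude(lo=0.0, hi=2.0, iters=30):
    for _ in range(iters):               # bisection on perturbation size
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if fhn_spikes(mid) else (mid, hi)
    return 0.5 * (lo + hi)
```

This brute-force approach finds a membrane-potential threshold only for one fixed state of the auxiliary variable, which is exactly the limitation the paper's technique addresses by incorporating the auxiliary dynamics.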
NASA Astrophysics Data System (ADS)
Solari, S.; Losada, M. A.
2012-10-01
This paper explores the use of a mixture model for determining the marginal distribution of hydrological variables, consisting of a truncated central distribution that is representative of the central or main-mass regime, which for the cases studied is a lognormal distribution, and of two generalized Pareto distributions for the maximum and minimum regimes, representing the upper and lower tails, respectively. The thresholds defining the limits between these regimes and the central regime are parameters of the model and are calculated together with the remaining parameters by maximum likelihood. After testing the model with a simulation study we concluded that the upper threshold of the model can be used when applying the peak over threshold method. This will yield an automatic and objective identification of the threshold presenting an alternative to existing methods. The model was also applied to four hydrological data series: two mean daily flow series, the Thames at Kingston (United Kingdom), and the Guadalfeo River at Orgiva (Spain); and two daily precipitation series, Fort Collins (CO, USA), and Orgiva (Spain). It was observed that the model improved the fit of the data series with respect to the fit obtained with the lognormal (LN) and, in particular, provided a good fit for the upper tail. Moreover, we concluded that the proposed model is able to accommodate the entire range of values of some significant hydrological variables.
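The peaks-over-threshold step can be sketched as follows. Two simplifications relative to the paper: the upper threshold is supplied by hand rather than estimated jointly by maximum likelihood, and the generalized Pareto parameters are fit by the method of moments rather than ML.

```python
import random
import statistics

# Method-of-moments fit of a generalized Pareto distribution to the
# exceedances over a fixed threshold: with exceedance mean m and
# variance v, xi = (1 - m^2/v) / 2 and sigma = m * (m^2/v + 1) / 2.
def gpd_moment_fit(data, threshold):
    exc = [x - threshold for x in data if x > threshold]
    m = statistics.mean(exc)
    v = statistics.variance(exc)
    xi = 0.5 * (1 - m * m / v)          # shape
    sigma = 0.5 * m * (m * m / v + 1)   # scale
    return xi, sigma

# Demonstration on synthetic unit-exponential data, whose exceedances are
# again unit-exponential (a GPD with xi = 0, sigma = 1).
rng = random.Random(42)
sample = [rng.expovariate(1.0) for _ in range(50000)]
xi_hat, sigma_hat = gpd_moment_fit(sample, threshold=1.0)
```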
A random graph model of density thresholds in swarming cells.
Jena, Siddhartha G
2016-03-01
Swarming behaviour is a type of bacterial motility that has been found to be dependent on reaching a local density threshold of cells. With this in mind, the process through which cell-to-cell interactions develop and how an assembly of cells reaches collective motility becomes increasingly important to understand. Additionally, populations of cells and organisms have been modelled through graphs to draw insightful conclusions about population dynamics on a spatial level. In the present study, we make use of analogous random graph structures to model the formation of large chain subgraphs, representing interactions between multiple cells, as a random graph Markov process. Using numerical simulations and analytical results on how quickly paths of certain lengths are reached in a random graph process, metrics for intercellular interaction dynamics at the swarm layer that may be experimentally evaluated are proposed. PMID:26893102
Stylized facts from a threshold-based heterogeneous agent model
NASA Astrophysics Data System (ADS)
Cross, R.; Grinfeld, M.; Lamba, H.; Seaman, T.
2007-05-01
A class of heterogeneous agent models is investigated where investors switch trading position whenever their motivation to do so exceeds some critical threshold. These motivations can be psychological in nature or reflect behaviour suggested by the efficient market hypothesis (EMH). By introducing different propensities into a baseline model that displays EMH behaviour, one can attempt to isolate their effects upon the market dynamics. The simulation results indicate that the introduction of a herding propensity results in excess kurtosis and power-law decay consistent with those observed in actual return distributions, but not in significant long-term volatility correlations. Possible alternatives for introducing such long-term volatility correlations are then identified and discussed.
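The "excess kurtosis" stylized fact mentioned above is easy to check numerically. This sketch computes excess kurtosis of a return series (zero for a Gaussian, positive for fat tails); the Gaussian-mixture sample stands in for model output and is an illustrative assumption.

```python
import random
import statistics

# Excess kurtosis: standardized fourth moment minus 3 (the Gaussian value).
def excess_kurtosis(returns):
    m = statistics.mean(returns)
    s = statistics.pstdev(returns)
    n = len(returns)
    return sum(((r - m) / s) ** 4 for r in returns) / n - 3.0

rng = random.Random(7)
gaussian = [rng.gauss(0.0, 1.0) for _ in range(50000)]
# Fat-tailed stand-in: occasional high-volatility draws, loosely mimicking
# the herding-induced bursts described in the abstract.
heavy = [rng.gauss(0.0, 5.0 if rng.random() < 0.05 else 1.0) for _ in range(50000)]
```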
Towards A Complete Model Of Photopic Visual Threshold Performance
NASA Astrophysics Data System (ADS)
Overington, I.
1982-02-01
Based on a wide variety of fragmentary evidence taken from psychophysics, neurophysiology and electron microscopy, it has been possible to put together a very widely applicable conceptual model of photopic visual threshold performance. Such a model is so complex that a single comprehensive mathematical version would be excessively cumbersome. It is, however, possible to set up a suite of related mathematical models, each of limited application but with a strictly known envelope of usage. Such models may be used to assess a variety of facets of visual performance when using display imagery, including the effects and interactions of image quality, random and discrete display noise, viewing distance, image motion, etc., both for foveal interrogation tasks and for visual search tasks. The specific model may be selected from the suite according to the assessment task in hand. The paper discusses in some depth the major facets of preperceptual visual processing and their interaction with instrumental image quality and noise. It then highlights the statistical nature of visual performance before going on to consider a number of specific mathematical models of partial visual function. Where appropriate, these are compared with widely popular empirical models of visual function.
A threshold model of content knowledge transfer for socioscientific argumentation
NASA Astrophysics Data System (ADS)
Sadler, Troy D.; Fowler, Samantha R.
2006-11-01
This study explores how individuals make use of scientific content knowledge for socioscientific argumentation. More specifically, this mixed-methods study investigates how learners apply genetics content knowledge as they justify claims relative to genetic engineering. Interviews are conducted with 45 participants, representing three distinct groups: high school students with variable genetics knowledge, college nonscience majors with little genetics knowledge, and college science majors with advanced genetics knowledge. During the interviews, participants advance positions concerning three scenarios dealing with gene therapy and cloning. Arguments are assessed in terms of the number of justifications offered as well as justification quality, based on a five-point rubric. Multivariate analysis of variance results indicate that college science majors outperformed the other groups in terms of justification quality and frequency. Argumentation does not differ among nonscience majors or high school students. Follow-up qualitative analyses of interview responses suggest that all three groups tend to focus on similar, sociomoral themes as they negotiate socially complex, genetic engineering issues, but that the science majors frequently reference specific science content knowledge in the justification of their claims. Results support the Threshold Model of Content Knowledge Transfer, which proposes two knowledge thresholds around which argumentation quality can reasonably be expected to increase. Research and educational implications of these findings are discussed.
Devenyi, Ryan A; Sobie, Eric A
2016-07-01
While many ion channels and transporters involved in cardiac cellular physiology have been identified and described, the relative importance of each in determining emergent cellular behaviors remains unclear. Here we address this issue with a novel approach that combines population-based mathematical modeling with experimental tests to systematically quantify the relative contributions of different ion channels and transporters to the amplitude of the cellular Ca(2+) transient. Sensitivity analysis of a mathematical model of the rat ventricular cardiomyocyte quantified the response of cell behaviors to changes in the level of each ion channel and transporter, and experimental tests of these predictions were performed to validate or invalidate the predictions. The model analysis found that partial inhibition of the transient outward current in rat ventricular epicardial myocytes was predicted to have a greater impact on Ca(2+) transient amplitude than either: (1) inhibition of the same current in endocardial myocytes, or (2) comparable inhibition of the sarco/endoplasmic reticulum Ca(2+) ATPase (SERCA). Experimental tests confirmed the model predictions qualitatively but showed some quantitative disagreement. This guided us to recalibrate the model by adjusting the relative importance of several Ca(2+) fluxes, thereby improving the consistency with experimental data and producing a more predictive model. Analysis of human cardiomyocyte models suggests that the relative importance of outward currents to Ca(2+) transporters is generalizable to human atrial cardiomyocytes, but not ventricular cardiomyocytes. Overall, our novel approach of combining population-based mathematical modeling with experimental tests has yielded new insight into the relative importance of different determinants of cell behavior. PMID:26235057
Xie, Feng; Luo, Nan; Lee, Hin-Peng
2008-01-01
AIM: To compare the costs and effectiveness of no screening and no eradication therapy, the population-based Helicobacter pylori (H pylori) serology screening with eradication therapy and 13C-Urea breath test (UBT) with eradication therapy. METHODS: A Markov model simulation was carried out in all 237 900 Chinese males with age between 35 and 44 from the perspective of the public healthcare provider in Singapore. The main outcome measures were the costs, number of gastric cancer cases prevented, life years saved, and quality-adjusted life years (QALYs) gained from screening age to death. The uncertainty surrounding the cost-effectiveness ratio was addressed by one-way sensitivity analyses. RESULTS: Compared to no screening, the incremental cost-effectiveness ratio (ICER) was $16 166 per life year saved or $13 571 per QALY gained for the serology screening, and $38 792 per life year saved and $32 525 per QALY gained for the UBT. The ICER was $477 079 per life year saved or $390 337 per QALY gained for the UBT compared to the serology screening. The cost-effectiveness of serology screening over the UBT was robust to most parameters in the model. CONCLUSION: The population-based serology screening for H pylori was more cost-effective than the UBT in prevention of gastric cancer in Singapore Chinese males. PMID:18494053
Power law distribution of wealth in population based on a modified Equíluz-Zimmermann model
NASA Astrophysics Data System (ADS)
Xie, Yan-Bo; Wang, Bing-Hong; Hu, Bo; Zhou, Tao
2005-04-01
We propose a money-based model for the power law distribution (PLD) of wealth in an economically interacting population. It is introduced as a modification of the Equíluz and Zimmermann (EZ) model for crowding and information transmission in financial markets. It must be stressed, however, that in the EZ model a PLD without exponential correction is obtained only for a particular parameter value, whereas our model yields the exact PLD over a wide parameter range. The PLD exponent depends on the model parameters in a nontrivial way and is exactly calculated in this paper. The numerical results are in excellent agreement with the analytic prediction, and are also comparable with empirical data on wealth distribution.
Smooth-Threshold Multivariate Genetic Prediction with Unbiased Model Selection.
Ueki, Masao; Tamiya, Gen
2016-04-01
We develop a new genetic prediction method, smooth-threshold multivariate genetic prediction, using single nucleotide polymorphisms (SNPs) data in genome-wide association studies (GWASs). Our method consists of two stages. At the first stage, unlike the usual discontinuous SNP screening as used in the gene score method, our method continuously screens SNPs based on the output from standard univariate analysis for marginal association of each SNP. At the second stage, the predictive model is built by a generalized ridge regression simultaneously using the screened SNPs with SNP weight determined by the strength of marginal association. Continuous SNP screening by the smooth thresholding not only makes prediction stable but also leads to a closed form expression of generalized degrees of freedom (GDF). The GDF leads to the Stein's unbiased risk estimation (SURE), which enables data-dependent choice of optimal SNP screening cutoff without using cross-validation. Our method is very rapid because computationally expensive genome-wide scan is required only once in contrast to the penalized regression methods including lasso and elastic net. Simulation studies that mimic real GWAS data with quantitative and binary traits demonstrate that the proposed method outperforms the gene score method and genomic best linear unbiased prediction (GBLUP), and also shows comparable or sometimes improved performance with the lasso and elastic net being known to have good predictive ability but with heavy computational cost. Application to whole-genome sequencing (WGS) data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) exhibits that the proposed method shows higher predictive power than the gene score and GBLUP methods. PMID:26947266
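The continuous screening idea can be sketched with a soft-threshold weight function: instead of a hard include/exclude cut, each SNP's weight shrinks smoothly to zero as its marginal association statistic falls toward a cutoff. The specific functional form below is illustrative, not the paper's exact estimator.

```python
# Smooth-threshold screening weight (hypothetical form): 0 at or below the
# cutoff, rising continuously toward 1 as the marginal statistic |z| grows.
# In the second stage, such weights would scale each SNP's ridge penalty.
def smooth_threshold_weight(z, cutoff, gamma=2.0):
    if abs(z) <= cutoff:
        return 0.0
    return 1.0 - (cutoff / abs(z)) ** gamma
```

Because the weight is continuous in both `z` and `cutoff`, small changes in the screening cutoff perturb the downstream predictive model smoothly, which is what makes the closed-form degrees-of-freedom (and hence SURE-based cutoff selection) tractable.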
NASA Astrophysics Data System (ADS)
Meijster, Tim; Warren, Nick; Heederik, Dick; Tielemans, Erik
2009-02-01
Recently a dynamic population model was developed that simulates a population of bakery workers longitudinally through time and tracks the development of work-related sensitisation and respiratory symptoms in each worker. Input for this model comes from cross-sectional and longitudinal epidemiological studies, which allowed estimation of exposure-response relationships and disease transition probabilities. This model allows us to study the development of diseases and transitions between disease states over time in relation to determinants of disease, including flour dust and/or allergen exposure. Furthermore, it enables more realistic modelling of the health impact of different intervention strategies at the workplace (e.g. changes in exposure may take several years to impact on ill health and often occur as a gradual trend). A large dataset of individual full-shift exposure measurements and real-time exposure measurements was used to obtain detailed insight into the effectiveness of control measures and other determinants of exposure. Given this information, a population-wide reduction of the median exposure by 50% was evaluated in this paper.
Hawley, Samuel; Javaid, M. Kassim; Prieto-Alhambra, Daniel; Lippett, Janet; Sheard, Sally; Arden, Nigel K.; Cooper, Cyrus; Judge, Andrew
2016-01-01
Objectives: to evaluate orthogeriatric and nurse-led fracture liaison service (FLS) models of post-hip fracture care in terms of impact on mortality (30 days and 1 year) and second hip fracture (2 years). Setting: Hospital Episode Statistics database linked to Office for National Statistics mortality records for 11 acute hospitals in a region of England. Population: patients aged over 60 years admitted for a primary hip fracture from 2003 to 2013. Methods: each hospital was analysed separately and acted as its own control in a before–after time-series design in which the appointment of an orthogeriatrician or set-up/expansion of an FLS was evaluated. Multivariable Cox regression (mortality) and competing risk survival models (second hip fracture) were used. Fixed effects meta-analysis was used to pool estimates of impact for interventions of the same type. Results: of 33,152 primary hip fracture patients, 1,288 sustained a second hip fracture within 2 years (age and sex standardised proportion of 4.2%). 3,033 primary hip fracture patients died within 30 days and 9,662 died within 1 year (age and sex standardised proportion of 9.5% and 29.8%, respectively). The estimated impact of introducing an orthogeriatrician on 30-day and 1-year mortality was hazard ratio (HR) = 0.73 (95% CI: 0.65–0.82) and HR = 0.81 (CI: 0.75–0.87), respectively. Following an FLS, these associations were as follows: HR = 0.80 (95% CI: 0.71–0.91) and HR = 0.84 (0.77–0.93). There was no significant impact on time to second hip fracture. Conclusions: the introduction and/or expansion of orthogeriatric and FLS models of post-hip fracture care has a beneficial effect on subsequent mortality. No evidence for a reduction in second hip fracture rate was found. PMID:26802076
2011-01-01
Background Given mounting evidence for adverse effects from excess manganese exposure, it is critical to understand host factors, such as genetics, that affect manganese metabolism. Methods Archived blood samples, collected from 332 Mexican women at delivery, were analyzed for manganese. We evaluated associations of manganese with functional variants in three candidate iron metabolism genes: HFE [hemochromatosis], TF [transferrin], and ALAD [δ-aminolevulinic acid dehydratase]. We used a knockout mouse model to parallel our significant results as a novel method of validating the observed associations between genotype and blood manganese in our epidemiologic data. Results Percentage of participants carrying at least one copy of HFE C282Y, HFE H63D, TF P570S, and ALAD K59N variant alleles was 2.4%, 17.7%, 20.1%, and 6.4%, respectively. Percentage carrying at least one copy of either C282Y or H63D allele in HFE gene was 19.6%. Geometric mean (geometric standard deviation) manganese concentrations were 17.0 (1.5) μg/l. Women with any HFE variant allele had 12% lower blood manganese concentrations than women with no variant alleles (β = -0.12 [95% CI = -0.23 to -0.01]). TF and ALAD variants were not significant predictors of blood manganese. In animal models, Hfe-/- mice displayed a significant reduction in blood manganese compared with Hfe+/+ mice, replicating the altered manganese metabolism found in our human research. Conclusions Our study suggests that genetic variants in iron metabolism genes may contribute to variability in manganese exposure by affecting manganese absorption, distribution, or excretion. Genetic background may be critical to consider in studies that rely on environmental manganese measurements. PMID:22074419
No-Impact Threshold Values for NRAP's Reduced Order Models
Last, George V.; Murray, Christopher J.; Brown, Christopher F.; Jordan, Preston D.; Sharma, Maneesh
2013-02-01
The purpose of this study was to develop methodologies for establishing baseline datasets and statistical protocols for determining statistically significant changes between background concentrations and predicted concentrations that would be used to represent a contamination plume in the Gen II models being developed by NRAP's Groundwater Protection team. The initial effort examined selected portions of two aquifer systems: the urban shallow unconfined aquifer system of the Edwards-Trinity Aquifer System (being used to develop the ROM for carbonate-rock aquifers) and a portion of the High Plains Aquifer (an unconsolidated and semi-consolidated sand and gravel aquifer, being used to develop the ROM for sandstone aquifers). Threshold values were determined for Cd, Pb, As, pH, and TDS that could be used to identify contamination due to predicted impacts from carbon sequestration storage reservoirs, based on recommendations found in the EPA's "Unified Guidance for Statistical Analysis of Groundwater Monitoring Data at RCRA Facilities" (US Environmental Protection Agency 2009). Results from this effort can be used to inform a "no change" scenario with respect to groundwater impacts, rather than the use of an MCL that could be significantly higher than existing concentrations in the aquifer.
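The background-screening logic described above can be sketched with a simple nonparametric prediction limit, one of the interwell approaches recommended in the EPA Unified Guidance; this is a minimal illustration, and the arsenic concentrations below are hypothetical, not values from the study.

```python
def nonparametric_prediction_limit(background):
    """Nonparametric upper prediction limit: the background maximum.

    With n independent background samples, the chance that one future
    independent measurement exceeds the maximum is 1/(n+1), so the
    achieved coverage of this limit is n/(n+1).
    """
    n = len(background)
    return max(background), n / (n + 1)

# Hypothetical background As concentrations (ug/L) in an aquifer
background_as = [1.2, 0.8, 1.5, 1.1, 0.9, 1.3, 1.0, 1.4]
limit, coverage = nonparametric_prediction_limit(background_as)
print(limit, round(coverage, 3))
```

A predicted plume concentration above `limit` would then flag a statistically observable change relative to background.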
Effects of mixing in threshold models of social behavior
NASA Astrophysics Data System (ADS)
Akhmetzhanov, Andrei R.; Worden, Lee; Dushoff, Jonathan
2013-07-01
We consider the dynamics of an extension of the influential Granovetter model of social behavior, where individuals are affected by their personal preferences and by observation of their neighbors' behavior. Individuals are arranged in a network (usually the square lattice), and each has a state and a fixed threshold for behavior changes. We simulate the system asynchronously by picking a random individual and either updating its state or exchanging it with another randomly chosen individual (mixing). We describe the dynamics analytically in the fast-mixing limit using the mean-field approximation, and investigate it mainly numerically in the case of finite mixing. We show that the dynamics converge to a manifold in state space, which determines the possible equilibria, and show how to estimate the projection of this manifold using simulated trajectories started from different initial points. We show that the effects of considering the network can be decomposed into finite-neighborhood effects and finite-mixing-rate effects, which are qualitatively similar. Both effects increase the tendency of the system to move from a less-desired equilibrium to the "ground state." Our findings can be used to probe shifts in behavioral norms and have implications for the role of information flow in determining when social norms that have become unpopular in particular communities (such as foot binding or female genital cutting) persist or vanish.
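A minimal sketch of asynchronous threshold dynamics with mixing in the spirit of the model above; for brevity it uses a ring neighborhood rather than the square lattice of the paper, and the population size, mixing probability, and threshold distribution are illustrative assumptions.

```python
import random

def simulate(n, thresholds, steps, p_mix, rng):
    """Asynchronous threshold dynamics on a ring with random mixing.

    Each step picks a random individual; with probability p_mix it swaps
    places with another random individual (mixing), otherwise it becomes
    active iff the fraction of active neighbors meets its threshold.
    Returns the final fraction of active individuals.
    """
    state = [0] * n
    state[: n // 10] = [1] * (n // 10)          # seed some initial adopters
    for _ in range(steps):
        i = rng.randrange(n)
        if rng.random() < p_mix:
            j = rng.randrange(n)                 # swap both state and threshold,
            state[i], state[j] = state[j], state[i]  # i.e. exchange individuals
            thresholds[i], thresholds[j] = thresholds[j], thresholds[i]
        else:
            frac = (state[i - 1] + state[(i + 1) % n]) / 2
            state[i] = 1 if frac >= thresholds[i] else 0
    return sum(state) / n

rng = random.Random(1)
th = [rng.random() for _ in range(200)]
adoption = simulate(200, th, 20000, 0.5, rng)
print(adoption)
```

Sweeping `p_mix` toward 1 approaches the fast-mixing (mean-field) limit discussed in the abstract.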
OEDGE Modeling of Detachment Threshold Experiments on DIII-D
NASA Astrophysics Data System (ADS)
Elder, J. D.; Stangeby, P. C.; McLean, A. G.; Leonard, A. W.; Watkins, J. G.
2015-11-01
A detachment threshold experiment was performed on DIII-D in which the divertor plasma transitioned from attached to weakly detached at the strike point with minimal changes in upstream parameters. The value of Te at the outer strike point measured by Thomson scattering decreased from ~ 10 eV (attached) to ~ 2 eV (weakly detached). Both the Langmuir probes and the divertor Thomson diagnostics recorded increases in the particle flux on the order of a factor of two between these divertor conditions. OEDGE is used to model both of these plasma regimes for both L-mode and H-mode discharges. The behaviour of molecular hydrogen is assessed using OEDGE and possible roles of hydrogen molecules in the detachment process are examined. Work supported by the US Department of Energy under DE-FC02-04ER54698, DE-FG02-04ER54578, DE-AC04-94AL85000, DE-AC05-00OR22725, and DE-AC52-07NA27344.
Modeling of Auditory Neuron Response Thresholds with Cochlear Implants.
Venail, Frederic; Mura, Thibault; Akkari, Mohamed; Mathiolon, Caroline; Menjot de Champfleur, Sophie; Piron, Jean Pierre; Sicard, Marielle; Sterkers-Artieres, Françoise; Mondain, Michel; Uziel, Alain
2015-01-01
The quality of the prosthetic-neural interface is a critical point for cochlear implant efficiency. It depends not only on technical and anatomical factors such as electrode position in the cochlea (depth and scalar placement), electrode impedance, and distance between the electrode and the stimulated auditory neurons, but also on the number of functional auditory neurons. The efficiency of electrical stimulation can be assessed by measurement of the e-CAP in cochlear implant users. In the present study, we modeled the activation of auditory neurons in cochlear implant recipients (Nucleus device). The electrical response, measured using the auto-NRT (neural response telemetry) algorithm, was analyzed using multivariate regression with cubic splines in order to take into account the variation in insertion depth of electrodes among subjects, as well as the other technical and anatomical factors listed above. NRT thresholds depend on the squared electrode impedance (β = -0.11 ± 0.02, P < 0.01), the scalar placement of the electrodes (β = -8.50 ± 1.97, P < 0.01), and the depth of insertion expressed as the characteristic frequency of auditory neurons (CNF). The distribution of NRT residuals according to CNF could provide a proxy for auditory neuron functioning in implanted cochleas. PMID:26236725
Phonation threshold power in ex vivo laryngeal models
Regner, Michael F.; Jiang, Jack J.
2011-01-01
This study hypothesized that phonation threshold power is measurable and sensitive to changes in the biomechanical properties of the vocal folds. Phonation threshold power was measured in three sample populations of ten excised canine larynges treated with variable posterior glottal gap, variable bilateral vocal fold elongation, and variable vocal fold lesioning. Posterior glottal gap was varied from 0 mm to 4 mm in 0.5 mm intervals. Bilateral vocal fold elongation was varied from 0% to 20% in 5% intervals. Vocal fold lesion treatments included unilateral and bilateral vocal fold lesion groups. Each treatment was investigated independently in a sample population of ten excised canine larynges. Linear regression analysis indicated that phonation threshold power was sensitive to posterior glottal gap (R2=0.298, p<0.001) and weakly sensitive to vocal fold elongation (R2=0.052, p=0.003). A one-way repeated measures ANOVA indicated that phonation threshold power was sensitive to the presence of lesions (p<0.001). Theoretical and experimental evidence presented here suggests that phonation threshold power could be used as a broad screening parameter, sensitive to certain changes in the biomechanical properties of the larynx. It has not yet been measured in humans, but because it has the potential to represent the airflow-tissue energy transfer more completely than phonation threshold pressure or flow alone, it may be a more useful parameter than these and could be used to indicate abnormal laryngeal health. PMID:20817475
Comas, Mercè; Arrospide, Arantzazu; Mar, Javier; Sala, Maria; Vilaprinyó, Ester; Hernández, Cristina; Cots, Francesc; Martínez, Juan; Castells, Xavier
2014-01-01
Objective To assess the budgetary impact of switching from screen-film mammography to full-field digital mammography in a population-based breast cancer screening program. Methods A discrete-event simulation model was built to reproduce the breast cancer screening process (biennial mammographic screening of women aged 50 to 69 years) combined with the natural history of breast cancer. The simulation started with 100,000 women and, during a 20-year simulation horizon, new women were dynamically entered according to the aging of the Spanish population. Data on screening were obtained from Spanish breast cancer screening programs. Data on the natural history of breast cancer were based on US data adapted to our population. A budget impact analysis comparing digital with screen-film screening mammography was performed in a sample of 2,000 simulation runs. A sensitivity analysis was performed for crucial screening-related parameters. Distinct scenarios for recall and detection rates were compared. Results Statistically significant savings were found for overall costs, treatment costs and the costs of additional tests in the long term. The overall cost saving was 1,115,857€ (95%CI from 932,147 to 1,299,567) in the 10th year and 2,866,124€ (95%CI from 2,492,610 to 3,239,638) in the 20th year, representing 4.5% and 8.1% of the overall cost associated with screen-film mammography. The sensitivity analysis showed net savings in the long term. Conclusions Switching to digital mammography in a population-based breast cancer screening program saves long-term budget expense, in addition to providing technical advantages. Our results were consistent across distinct scenarios representing the different results obtained in European breast cancer screening programs. PMID:24832200
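The discrete-event simulation approach described above can be sketched with a minimal event-queue model: screening events are processed in time order from a priority queue. This toy version models only the biennial screening schedule and a per-mammogram cost; all numbers are hypothetical, not the study's parameters.

```python
import heapq

def simulate_screening(n_women, horizon_years, interval=2.0,
                       cost_per_mammogram=40.0):
    """Minimal discrete-event sketch of a biennial screening program.

    Each woman has a screening event every `interval` years until the
    horizon; events are popped from a priority queue in time order.
    Costs are purely illustrative.
    """
    events = [(0.0, w) for w in range(n_women)]   # (time, woman id)
    heapq.heapify(events)
    mammograms = 0
    while events:
        t, w = heapq.heappop(events)
        if t >= horizon_years:
            continue                              # past the horizon: drop
        mammograms += 1
        heapq.heappush(events, (t + interval, w))  # schedule next round
    return mammograms, mammograms * cost_per_mammogram

n_scr, cost = simulate_screening(1000, 20.0)
print(n_scr, cost)
```

A budget-impact comparison would run two such simulations with different per-test costs and detection parameters (screen-film vs. digital) and difference the totals.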
Eulenburg, Christine; Schroeder, Jennifer; Obi, Nadia; Heinz, Judith; Seibold, Petra; Rudolph, Anja; Chang-Claude, Jenny; Flesch-Janys, Dieter
2016-02-15
We employed a semi-Markov multistate model for the simultaneous analysis of various endpoints describing the course of breast cancer. Results were compared with those from standard analyses using a Cox proportional hazards model. We included 3,012 patients with invasive breast cancer newly diagnosed between 2001 and 2005 who were recruited in Germany for a population-based study, the Mamma Carcinoma Risk Factor Investigation (MARIE Study), and prospectively followed up until the end of 2009. Locoregional recurrence and distant metastasis were included as intermediate states, and deaths from breast cancer, secondary cancer, and other causes were included as competing absorbing states. Tumor characteristics were significantly associated with all breast cancer-related endpoints. Nodal involvement was significantly related to local recurrence but more strongly related to distant metastases. Smoking was significantly associated with mortality from second cancers and other causes, whereas menopausal hormone use was significantly associated with reduced distant metastasis and death from causes other than cancer. The presence of cardiovascular disease at diagnosis was solely associated with mortality from other causes. Compared with separate Cox models, multistate models allow for dissection of prognostic factors and intermediate events in the analysis of cause-specific mortality and can yield new insights into disease progression and associated pathways. PMID:26823437
ERIC Educational Resources Information Center
Wang, Wen-Chung; Liu, Chen-Wei; Wu, Shiu-Lien
2013-01-01
The random-threshold generalized unfolding model (RTGUM) was developed by treating the thresholds in the generalized unfolding model as random effects rather than fixed effects to account for the subjective nature of the selection of categories in Likert items. The parameters of the new model can be estimated with the JAGS (Just Another Gibbs…
Fiber bundle model with highly disordered breaking thresholds.
Roy, Chandreyee; Kundu, Sumanta; Manna, S S
2015-03-01
We present a study of the fiber bundle model using equal load-sharing dynamics where the breaking thresholds of the fibers are drawn randomly from a power-law distribution of the form p(b) ∼ b^(-1) in the range 10^(-β) to 10^(β). Tuning the value of β continuously over a wide range, the critical behavior of the fiber bundle has been studied both analytically and numerically. Our results are: (i) the critical load σ_c(β,N) for the bundle of size N approaches its asymptotic value σ_c(β) as σ_c(β,N) = σ_c(β) + A·N^(-1/ν(β)), where σ_c(β) has been obtained analytically as σ_c(β) = 10^β/(2βe·ln10) for β ≥ β_u = 1/(2·ln10), while for β < β_u the weakest fiber failure leads to the catastrophic breakdown of the entire fiber bundle, similar to brittle materials, giving σ_c(β) = 10^(-β); (ii) the fraction of broken fibers right before the complete breakdown of the bundle has the form 1 - 1/(2β·ln10); (iii) the distribution D(Δ) of avalanches of size Δ follows a power law D(Δ) ∼ Δ^(-ξ) with ξ = 5/2 for Δ ≫ Δ_c(β) and ξ = 3/2 for Δ ≪ Δ_c(β), where the crossover avalanche size is Δ_c(β) = 2/(1 - e·10^(-2β))^2. PMID:25871050
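The analytic critical load σ_c(β) = 10^β/(2βe·ln10) can be checked against a direct equal-load-sharing simulation. The sketch below samples thresholds from p(b) ∝ 1/b on [10^(−β), 10^(β)] by inverse-transform sampling; the bundle size is an arbitrary choice.

```python
import math
import random

def critical_load(thresholds):
    """Equal-load-sharing critical stress per fiber.

    With thresholds sorted ascending, the bundle survives total force F
    iff F <= b[k] * (n - k) when fibers 0..k-1 have failed, so the
    critical stress is max_k b[k]*(n-k)/n.
    """
    b = sorted(thresholds)
    n = len(b)
    return max(b[k] * (n - k) for k in range(n)) / n

def powerlaw_threshold(beta, rng):
    # p(b) ~ 1/b on [10^-beta, 10^beta]  =>  b = 10^(beta*(2u-1)), u ~ U(0,1)
    return 10 ** (beta * (2 * rng.random() - 1))

rng = random.Random(7)
beta = 1.0
th = [powerlaw_threshold(beta, rng) for _ in range(100000)]
sigma_c = critical_load(th)
analytic = 10 ** beta / (2 * beta * math.e * math.log(10))
print(sigma_c, analytic)
```

For β = 1 (above β_u ≈ 0.217) the simulated critical load should agree with the analytic value to within sampling fluctuations.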
Laser thresholds in pulp exposure: a rat animal model
NASA Astrophysics Data System (ADS)
White, Joel M.; Goodis, Harold E.; Kudler, Joel J.
1995-05-01
Laser technology is now being clinically investigated for the removal of carious enamel and dentin. This study used an animal model to evaluate histological pulpal effects from laser exposure. The molars of 24 Sprague-Dawley rats (n equals 264) were exposed to either a pulsed 1.06 micrometers Nd:YAG laser (120 microseconds, 320 micrometer diameter fiber), air rotor drill preparation or left untreated as controls. The following treatment conditions were investigated: control group (n equals 54); high speed drill with carbide bur (n equals 39); laser exposure at 50 mJ/p at 10 Hz (n equals 27), 100 mJ/p at 10 Hz (n equals 66) and 100 mJ/p at 20 Hz (n equals 39). A sixth treatment condition was investigated: root surface hypersensitivity, which included incremental laser exposure from 30 to 100 mJ/p at 10 Hz (n equals 39). The animals were euthanized either immediately after treatment, at one week, or at one month. The jaws were fixed and bioprepared. Remaining dentin thickness was measured, and ranged from 0.17 +/- 0.04 mm to 0.35 +/- 0.09 mm. The pulp tissue was examined for histologic inflammatory response. No evidence of pulpal involvement or adverse pulpal effects was found at any time period in teeth receiving 50 mJ/p. When histologic samples were compared with controls, all observations were similar. Of the 210 exposed teeth, 2 teeth receiving 100 mJ/p demonstrated abscess formation and were exfoliated. Further, threshold pulpal effects occurred in rat molars exposed to 100 mJ/p when the remaining dentin thickness was less than 0.5 mm. The response of rat pulp to laser exposure indicated no histologically measurable response to pulsed laser energy at 50 mJ/p.
Technology Transfer Automated Retrieval System (TEKTRAN)
Threshold (or regime shift) models are useful for restoration because they match actions to conditions where benefits are likely to be maximized. The procedures by which threshold models should be applied, however, are in the early stages of development. Here, we describe ecological concepts and der...
McLernon, David J; Donnan, Peter T; Sullivan, Frank M; Roderick, Paul; Rosenberg, William M; Ryder, Steve D; Dillon, John F
2014-01-01
Objective To derive and validate a clinical prediction model to estimate the risk of liver disease diagnosis following liver function tests (LFTs) and to convert the model to a simplified scoring tool for use in primary care. Design Population-based observational cohort study of patients in Tayside Scotland identified as having their LFTs performed in primary care and followed for 2 years. Biochemistry data were linked to secondary care, prescriptions and mortality data to ascertain baseline characteristics of the derivation cohort. A separate validation cohort was obtained from 19 general practices across the rest of Scotland to externally validate the final model. Setting Primary care, Tayside, Scotland. Participants Derivation cohort: LFT results from 310 511 patients. After exclusions (including: patients under 16 years, patients having initial LFTs measured in secondary care, bilirubin >35 μmol/L, liver complications within 6 weeks and history of a liver condition), the derivation cohort contained 95 977 patients with no clinically apparent liver condition. Validation cohort: after exclusions, this cohort contained 11 653 patients. Primary and secondary outcome measures Diagnosis of a liver condition within 2 years. Results From the derivation cohort (n=95 977), 481 (0.5%) were diagnosed with a liver disease. The model showed good discrimination (C-statistic=0.78). Given the low prevalence of liver disease, the negative predictive values were high. Positive predictive values were low but rose to 20–30% for high-risk patients. Conclusions This study successfully developed and validated a clinical prediction model and subsequent scoring tool, the Algorithm for Liver Function Investigations (ALFI), which can predict liver disease risk in patients with no clinically obvious liver disease who had their initial LFTs taken in primary care. ALFI can help general practitioners focus referral on a small subset of patients with higher predicted risk.
Realo, Anu; Teras, Andero; Kööts-Ausmees, Liisi; Esko, Tõnu; Metspalu, Andres; Allik, Jüri
2015-12-01
The current study examined the relationship between the Five-Factor Model personality traits and physician-confirmed peptic ulcer disease (PUD) diagnosis in a large population-based adult sample, controlling for the relevant behavioral and sociodemographic factors. Personality traits were assessed by participants themselves and by knowledgeable informants using the NEO Personality Inventory-3 (NEO PI-3). When controlling for age, sex, education, and cigarette smoking, only one of the five NEO PI-3 domain scales - higher Neuroticism - and two facet scales - lower A1: Trust and higher C1: Competence - made a small, yet significant contribution (p < 0.01) to predicting PUD in logistic regression analyses. In the light of these relatively modest associations, our findings imply that it is certain behavior (such as smoking) and sociodemographic variables (such as age, gender, and education) rather than personality traits that are associated with the diagnosis of PUD at a particular point in time. Further prospective studies with a longitudinal design and multiple assessments would be needed to fully understand if the FFM personality traits serve as risk factors for the development of PUD. PMID:26437682
Edjolo, Arlette; Proust-Lima, Cécile; Delva, Fleur; Dartigues, Jean-François; Pérès, Karine
2016-02-15
We aimed to describe the hierarchical structure of Instrumental Activities of Daily Living (IADL) and basic Activities of Daily Living (ADL) and trajectories of dependency before death in an elderly population using item response theory methodology. Data were obtained from a population-based French cohort study, the Personnes Agées QUID (PAQUID) Study, of persons aged ≥65 years at baseline in 1988 who were recruited from 75 randomly selected areas in Gironde and Dordogne. We evaluated IADL and ADL data collected at home every 2-3 years over a 24-year period (1988-2012) for 3,238 deceased participants (43.9% men). We used a longitudinal item response theory model to investigate the item sequence of 11 IADL and ADL combined into a single scale and functional trajectories adjusted for education, sex, and age at death. The findings confirmed the earliest losses in IADL (shopping, transporting, finances) at the partial limitation level, and then an overlapping of concomitant IADL and ADL, with bathing and dressing being the earliest ADL losses, and finally total losses for toileting, continence, eating, and transferring. Functional trajectories were sex-specific, with a benefit of high education that persisted until death in men but was only transient in women. An in-depth understanding of this sequence provides an early warning of functional decline for better adaptation of medical and social care in the elderly. PMID:26825927
A Threshold Model of Social Support, Adjustment, and Distress after Breast Cancer Treatment
ERIC Educational Resources Information Center
Mallinckrodt, Brent; Armer, Jane M.; Heppner, P. Paul
2012-01-01
This study examined a threshold model that proposes that social support exhibits a curvilinear association with adjustment and distress, such that support in excess of a critical threshold level has decreasing incremental benefits. Women diagnosed with a first occurrence of breast cancer (N = 154) completed survey measures of perceived support…
Modelling the regulatory system for diabetes mellitus with a threshold window
NASA Astrophysics Data System (ADS)
Yang, Jin; Tang, Sanyi; Cheke, Robert A.
2015-05-01
Piecewise (or non-smooth) glucose-insulin models with threshold windows for type 1 and type 2 diabetes mellitus are proposed and analyzed with a view to improving understanding of the glucose-insulin regulatory system. For glucose-insulin models with a single threshold, the existence and stability of regular, virtual, pseudo-equilibria and tangent points are addressed. Then the relations between regular equilibria and a pseudo-equilibrium are studied. Furthermore, the sufficient and necessary conditions for the global stability of regular equilibria and the pseudo-equilibrium are provided by using qualitative analysis techniques of non-smooth Filippov dynamic systems. Sliding bifurcations related to boundary node bifurcations were investigated with theoretical and numerical techniques, and insulin clinical therapies are discussed. For glucose-insulin models with a threshold window, the effects of glucose thresholds or the widths of threshold windows on the durations of insulin therapy and glucose infusion were addressed. The duration of the effects of an insulin injection is sensitive to the variation of thresholds. Our results indicate that blood glucose level can be maintained within a normal range using piecewise glucose-insulin models with a single threshold or a threshold window. Moreover, our findings suggest that it is critical to individualise insulin therapy for each patient separately, based on initial blood glucose levels.
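A toy piecewise system in this spirit: exogenous insulin infusion switches on only while glucose exceeds a single threshold, and the trajectory settles at a pseudo-equilibrium on the switching boundary. The equations, parameters, and threshold value below are illustrative assumptions, not the paper's model.

```python
def simulate(G0, I0, G_th, T=200.0, dt=0.001):
    """Euler integration of a toy threshold-switched glucose-insulin system.

    dG/dt = a - b*G - c*G*I          (glucose production minus uptake)
    dI/dt = u - d*I,  u = sigma iff G > G_th   (threshold-triggered infusion)

    All parameter values are illustrative only.
    """
    a, b, c, d, sigma = 1.0, 0.05, 0.1, 0.5, 2.0
    G, I = G0, I0
    for _ in range(int(T / dt)):
        u = sigma if G > G_th else 0.0
        dG = a - b * G - c * G * I
        dI = u - d * I
        G += dG * dt
        I += dI * dt
    return G, I

G, I = simulate(G0=15.0, I0=0.0, G_th=8.0)
print(round(G, 2), round(I, 2))
```

With these parameters the untreated equilibrium (G = a/b = 20) lies above the threshold and the fully treated one below it, so the state is driven to the boundary G = G_th and held there by the switching, mimicking the pseudo-equilibrium behavior analyzed in the paper.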
Reversed thresholds in partial credit models: a reason for collapsing categories?
Wetzel, Eunike; Carstensen, Claus H
2014-12-01
When questionnaire data with an ordered polytomous response format are analyzed in the framework of item response theory using the partial credit model or the generalized partial credit model, reversed thresholds may occur. This led to the discussion of whether reversed thresholds violate model assumptions and indicate disordering of the response categories. Adams, Wu, and Wilson showed that reversed thresholds are merely a consequence of low frequencies in the categories concerned and that they do not affect the order of the rating scale. This article applies an empirical approach to elucidate the topic of reversed thresholds using data from the Revised NEO Personality Inventory as well as a simulation study. It is shown that categories differentiate between participants with different trait levels despite reversed thresholds and that category disordering can be analyzed independently of the ordering of the thresholds. Furthermore, we show that reversed thresholds often only occur in subgroups of participants. Thus, researchers should think more carefully about collapsing categories due to reversed thresholds. PMID:24789857
NASA Astrophysics Data System (ADS)
Si, Xia-Meng; Wang, Wen-Dong; Ma, Yan
2016-06-01
The degree of sentiment is the key factor for internet users in determining their propagating behaviors, i.e. whether to participate in a discussion and whether to withdraw from one. To this end, we introduce two sentiment-based propagation thresholds (an infected threshold and a refractory threshold) and propose an interacting model based on Bayesian updating rules. Our model describes the phenomena that few internet users change their decisions and that some have already dropped out of a discussion about a topic while others are only just becoming aware of it. Numerical simulations show that a large infected threshold restrains information diffusion but favors the lessening of extremism, while a large refractory threshold facilitates decision interaction but promotes extremism. Making netizens calm down and propagate information sanely can restrain the prevailing of extremism about rumors.
Librero, Julián; Sanfélix-Gimeno, Gabriel; Peiró, Salvador
2016-01-01
Objective To identify adherence patterns over time and their predictors for evidence-based medications used after hospitalization for coronary heart disease (CHD). Patients and Methods We built a population-based retrospective cohort of all patients discharged after hospitalization for CHD from public hospitals in the Valencia region (Spain) during 2008 (n = 7462). From this initial cohort, we created 4 subcohorts with at least one prescription (filled or not) from each therapeutic group (antiplatelet, beta-blockers, ACEI/ARB, statins) within the first 3 months after discharge. Monthly adherence was defined as having ≥24 days covered out of 30, leading to a repeated binary outcome measure. We assessed the membership to trajectory groups of adherence using group-based trajectory models. We also analyzed predictors of the different adherence patterns using multinomial logistic regression. Results We identified a maximum of 5 different adherence patterns: 1) Nearly-always adherent patients; 2) An early gap in adherence with a later recovery; 3) Brief gaps in medication use or occasional users; 4) A slow decline in adherence; and 5) A fast decline. These patterns represented variable proportions of patients, the descending trajectories being more frequent for the beta-blocker and ACEI/ARB cohorts (16% and 17%, respectively) than the antiplatelet and statin cohorts (10% and 8%, respectively). Predictors of poor or intermediate adherence patterns were having a main diagnosis of unstable angina or other forms of CHD vs. AMI in the index hospitalization, being born outside Spain, requiring copayment or being older. Conclusion Distinct adherence patterns over time and their predictors were identified. This may be a useful approach for targeting improvement interventions in patients with poor adherence patterns. PMID:27551748
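The monthly adherence outcome used above (≥24 of 30 days covered) can be computed from pharmacy fill records as below. The fill dates are hypothetical, and for simplicity overlapping supplies are merged rather than carried forward (a stockpiling rule would shift unused days to later months).

```python
def monthly_adherence(fills, horizon_months):
    """Binary monthly adherence series from dispensing records.

    `fills` is a list of (day, days_supplied) pairs, with day 0 the
    discharge date. A month (30-day block) is adherent (1) when at
    least 24 of its 30 days are covered by dispensed supply.
    """
    covered = set()
    for day, days_supplied in fills:
        covered.update(range(day, min(day + days_supplied,
                                      horizon_months * 30)))
    series = []
    for m in range(horizon_months):
        days = sum(1 for d in range(m * 30, (m + 1) * 30) if d in covered)
        series.append(1 if days >= 24 else 0)
    return series

# Hypothetical statin fills: 30-day supplies with a gap in month 2
fills = [(0, 30), (30, 30), (95, 30), (125, 30)]
series = monthly_adherence(fills, 5)
print(series)
```

The resulting repeated binary outcome per patient is what the group-based trajectory model clusters into adherence patterns.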
A phenomenological model on the kink mode threshold varying with the inclination of sheath boundary
Sun, X.; Intrator, T. P.; Sears, J.; Weber, T.; Liu, M.
2013-11-15
In nature and many laboratory plasmas, a magnetic flux tube threaded by current, or flux rope, has a footpoint at a boundary. The current-driven kink mode is one of the fundamental ideal magnetohydrodynamic instabilities in plasmas. It has an instability threshold that has been found to strongly depend on boundary conditions (BCs). We provide a theoretical model to explain the transition of this threshold dependence between non-line-tied (NLT) and line-tied (LT) boundary conditions. We evaluate model parameters using experimentally measured plasma data, explicitly verify several kink eigenfunctions, and validate the model predictions for BCs that span the range between NLT and LT. Based on this model, one could estimate the kink threshold given knowledge of the displacement of a flux rope end, or conversely estimate flux rope end motion based on knowledge of its kink stability threshold.
Two-threshold model for scaling laws of noninteracting snow avalanches
Faillettaz, J.; Louchet, F.; Grasso, J.-R.
2004-01-01
A two-threshold model was proposed for scaling laws of noninteracting snow avalanches. It was found that the sizes of the largest avalanches just preceding the failure of the lattice system were power-law distributed. The proposed model reproduced the range of power-law exponents observed for land, rock or snow avalanches by tuning the maximum value of the ratio of the two failure thresholds. A two-threshold 2D cellular automaton was introduced to study the scaling for gravity-driven systems.
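A toy automaton in the spirit of the two-threshold idea: a site fails when its load exceeds an upper threshold and sheds load until it falls to a lower one, so the ratio of the two thresholds controls how much excess each failure redistributes. This 1D open-boundary sketch and its parameters are illustrative only, not the paper's 2D model.

```python
import random

def drive(load, upper, lower, rng):
    """One driving step: add a unit of load at a random site, then relax.

    A site whose load exceeds `upper` is reset to `lower` and sheds the
    excess equally to its neighbors (lost at the open boundaries).
    Returns the avalanche size, i.e. the number of topplings.
    """
    n = len(load)
    load[rng.randrange(n)] += 1.0
    size = 0
    while size < 10 ** 6:            # guard against runaway cascades in this toy
        active = [i for i in range(n) if load[i] > upper]
        if not active:
            break
        for i in active:
            shed = load[i] - lower
            load[i] = lower
            if i > 0:
                load[i - 1] += shed / 2
            if i < n - 1:
                load[i + 1] += shed / 2
            size += 1
    return size

rng = random.Random(0)
load = [0.0] * 100
sizes = [drive(load, upper=4.0, lower=1.0, rng=rng) for _ in range(5000)]
print(len(sizes), max(sizes))
```

Histogramming `sizes` (after discarding the transient) gives the avalanche-size distribution whose power-law exponent the threshold ratio tunes.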
The threshold of a stochastic delayed SIR epidemic model with temporary immunity
NASA Astrophysics Data System (ADS)
Liu, Qun; Chen, Qingmei; Jiang, Daqing
2016-05-01
This paper is concerned with the asymptotic properties of a stochastic delayed SIR epidemic model with temporary immunity. Sufficient conditions for extinction and persistence in the mean of the epidemic are established. The threshold between persistence in the mean and extinction of the epidemic is obtained. Compared with the corresponding deterministic model, the threshold affected by the white noise is smaller than the basic reproduction number R0 of the deterministic system.
The Rasch Rating Model and the Disordered Threshold Controversy
ERIC Educational Resources Information Center
Adams, Raymond J.; Wu, Margaret L.; Wilson, Mark
2012-01-01
The Rasch rating (or partial credit) model is a widely applied item response model that is used to model ordinal observed variables that are assumed to collectively reflect a common latent variable. In the application of the model there is considerable controversy surrounding the assessment of fit. This controversy is most notable when the set of…
Jauregui, Cesar; Otto, Hans-Jürgen; Stutzki, F; Limpert, J; Tünnermann, A
2015-08-10
In this paper we present a simple model to predict the behavior of the transversal mode instability threshold when different parameters of a fiber amplifier system are changed. The simulation model includes an estimation of the photodarkening losses, which shows the strong influence that this effect has on the mode instability threshold and on its behavior. Comparison of the simulation results with experimental measurements reveals that the mode instability threshold in a fiber amplifier system is, to a good approximation, reached at a constant average heat load. Based on this model, the expected behavior of the mode instability threshold when changing the seed wavelength, the seed power and/or the fiber length will be presented and discussed. Additionally, guidelines for increasing the average power of fiber amplifier systems will be provided. PMID:26367877
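The constant-average-heat-load observation suggests a back-of-the-envelope scaling for the threshold power. The sketch below assumes quantum-defect heating only and ignores photodarkening; the critical heat load (30 W/m), fiber length (1.2 m), and pump wavelength (976 nm) are hypothetical numbers, not the paper's.

```python
def mti_threshold_power(heat_load_threshold_w_per_m, fiber_length_m,
                        pump_nm, seed_nm):
    """Signal power at which the average heat load reaches a critical value,
    assuming mode instability onsets at a constant average heat load and
    that quantum-defect heating dominates (photodarkening neglected)."""
    quantum_defect = seed_nm / pump_nm - 1.0   # heat per watt of signal
    total_heat_budget = heat_load_threshold_w_per_m * fiber_length_m
    return total_heat_budget / quantum_defect

# Hypothetical comparison of two seed wavelengths for a 976 nm pump
for seed in (1030.0, 1064.0):
    print(seed, round(mti_threshold_power(30.0, 1.2, 976.0, seed), 1))
```

Under these assumptions a shorter seed wavelength (smaller quantum defect) raises the threshold power, which is the kind of wavelength dependence the model is used to explore.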
The Threshold Bias Model: A Mathematical Model for the Nomothetic Approach of Suicide
Folly, Walter Sydney Dutra
2011-01-01
Background Comparative and predictive analyses of suicide data from different countries are difficult to perform due to varying approaches and the lack of comparative parameters. Methodology/Principal Findings A simple model (the Threshold Bias Model) was tested for comparative and predictive analyses of suicide rates by age. The model comprises a six-parameter distribution that was applied to US suicide rates by age for the years 2001 and 2002. Linear extrapolations of the parameter values obtained for these years were then performed to estimate the values corresponding to the year 2003. The calculated distributions agreed reasonably well with the aggregate data. The model was also used to determine the age above which suicide rates become statistically observable in the USA, Brazil and Sri Lanka. Conclusions/Significance The Threshold Bias Model has considerable potential applications in demographic studies of suicide. Moreover, since the model can be used to predict the evolution of suicide rates based on information extracted from past data, it will be of great interest to suicidologists and other researchers in the field of mental health. PMID:21909431
Immunization and epidemic threshold of an SIS model in complex networks
NASA Astrophysics Data System (ADS)
Wu, Qingchu; Fu, Xinchu
2016-02-01
We propose an improved mean-field model to investigate immunization strategies for an SIS model in complex networks. Unlike the traditional mean-field approach, the improved model utilizes degree information from before and after the immunization. The epidemic threshold of degree-correlated networks can be obtained by linear stability analysis. For degree-uncorrelated networks, the model reduces to the SIS epidemic model on the network remaining after the immunized nodes are removed. Compared with previous results for random and targeted immunization schemes on degree-uncorrelated networks, we find that the infectious disease has a lower epidemic threshold.
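For degree-uncorrelated networks the classical mean-field threshold is λ_c = ⟨k⟩/⟨k²⟩, so removing immunized nodes shifts the threshold through the residual degree sequence. The sketch below is a rough first-order comparison: it ignores the loss of edges running to removed nodes, uses a hypothetical heavy-tailed degree sequence, and quotes the standard 1/(1−g) correction for random immunization rather than the paper's improved model.

```python
import random

def sis_threshold(degrees):
    """Classical mean-field SIS threshold for an uncorrelated network:
    lambda_c = <k> / <k^2>."""
    k1 = sum(degrees) / len(degrees)
    k2 = sum(d * d for d in degrees) / len(degrees)
    return k1 / k2

rng = random.Random(42)
# Hypothetical heavy-tailed degree sequence (capped, Pareto-like)
degrees = [min(int(2 / (1 - rng.random()) ** 0.7), 500) for _ in range(20000)]

g = 0.05                                             # immunized fraction
targeted = sorted(degrees)[: int(len(degrees) * (1 - g))]  # drop top 5%

base = sis_threshold(degrees)
random_imm = base / (1 - g)   # standard mean-field result, random immunization
print(base < random_imm < sis_threshold(targeted))
```

Targeted immunization removes the hubs that dominate ⟨k²⟩, which is why it raises the threshold far more than random immunization in this first-order picture.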
Epidemic threshold of node-weighted susceptible-infected-susceptible models on networks
NASA Astrophysics Data System (ADS)
Wu, Qingchu; Zhang, Haifeng
2016-08-01
In this paper, we investigate epidemic spreading on random and regular networks through a pairwise-type model with a general transmission rate to evaluate the influence of the node-weight distribution. Using block matrix theory, an epidemic threshold index is formulated to predict epidemic outbreak. An upper bound on the epidemic threshold is obtained by analyzing the monotonicity of the spectral radius for nonnegative matrices. Theoretical results suggest that the epidemic threshold depends on two matrices, H(1) and H(2), with the first related to the mean-field model and the second reflecting the heterogeneous transmission rates. In particular, for a linear transmission rate, this study shows a negative correlation between the heterogeneity of the weight distribution and the epidemic threshold, which differs from existing results for edge-weighted networks.
Buzatto, Bruno A; Buoro, Mathieu; Hazel, Wade N; Tomkins, Joseph L
2015-12-22
The threshold expression of dichotomous phenotypes that are environmentally cued or induced comprises the vast majority of phenotypic dimorphisms in colour, morphology, behaviour and life history. Modelled as conditional strategies under the framework of evolutionary game theory, the quantitative genetic basis of these traits is a challenge to estimate. The challenge exists firstly because the phenotypic expression of the trait is dichotomous, and secondly because the apparent environmental cue is separate from the biological signal pathway that induces the switch between phenotypes. It is the cryptic variation underlying the translation of cue to phenotype that we address here. With a 'half-sib common environment' and a 'family-level split environment' experiment, we examine the environmental and genetic influences that underlie male dimorphism in the earwig Forficula auricularia. Within the conceptual framework of the latent environmental threshold (LET) model, we use pedigree information to dissect the genetic architecture of the threshold expression of forceps length. We investigate for the first time the strength of the correlation between observable and cryptic 'proximate' cues. Furthermore, in support of the environmental threshold model, we found no evidence for a genetic correlation between cue and the threshold between phenotypes. Our results show strong correlations between observable and proximate cues and less genetic variation for thresholds than previous studies have suggested. We discuss the importance of generating better estimates of the genetic variation for thresholds when investigating the genetic architecture and heritability of threshold traits. By investigating genetic architecture by means of the LET model, our study supports several key evolutionary ideas related to conditional strategies and improves our understanding of environmentally cued decisions. PMID:26674955
A two-step framework for over-threshold modelling of environmental extremes
NASA Astrophysics Data System (ADS)
Bernardara, P.; Mazas, F.; Kergadallan, X.; Hamm, L.
2014-03-01
The evaluation of the probability of occurrence of extreme natural events is important for the protection of urban areas, industrial facilities and others. Traditionally, extreme value theory (EVT) offers a valid theoretical framework on this topic. In an over-threshold modelling (OTM) approach, Pickands' theorem (Pickands, 1975) states that, for a sample composed of independent and identically distributed (i.i.d.) values, the distribution of the data exceeding a given threshold converges to a generalized Pareto distribution (GPD). Following this theoretical result, analyses of realizations of environmental variables exceeding a threshold have spread widely in the literature. However, applying this theorem to an auto-correlated time series logically involves two successive and complementary steps: the first is required to build a sample of i.i.d. values from the available information, as required by the EVT; the second to set the threshold for optimal convergence toward the GPD. In the past, the same threshold was often employed both for sampling observations and for meeting the hypothesis of extreme value convergence. This confusion can lead to an erroneous understanding of methodologies and tools available in the literature. This paper aims at clarifying the conceptual framework involved in threshold selection, reviewing the available methods for the application of both steps and illustrating them with a double-threshold approach.
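As a concrete illustration of the second step, a GPD can be fitted to the excesses over the statistical threshold. The sketch below uses a simple method-of-moments estimator on synthetic i.i.d. data; the estimator choice and all parameter values are illustrative assumptions, not the authors' procedure:

```python
import numpy as np

def gpd_fit_mom(excesses):
    """Method-of-moments GPD estimates of shape (xi) and scale (sigma)
    from threshold excesses; valid when xi < 1/2."""
    m, v = np.mean(excesses), np.var(excesses)
    xi = 0.5 * (1.0 - m * m / v)
    sigma = 0.5 * m * (m * m / v + 1.0)
    return xi, sigma

rng = np.random.default_rng(1)
series = rng.exponential(scale=2.0, size=50000)  # synthetic i.i.d. "storm peaks"

# Step 2: choose the statistical threshold and fit the GPD to the excesses.
u = np.quantile(series, 0.95)
xi, sigma = gpd_fit_mom(series[series > u] - u)
# By memorylessness, exponential excesses are again Exp(2): xi near 0, sigma near 2.
print(round(float(xi), 2), round(float(sigma), 2))
```

With real auto-correlated data, step 1 (declustering to approximately i.i.d. peaks) would have to precede this fit.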
Modeling the Threshold Wind Speed for Saltation Initiation over Heterogeneous Sand Beds
NASA Astrophysics Data System (ADS)
Turney, F. A.; Martin, R. L.; Kok, J. F.
2015-12-01
Initiation of aeolian sediment transport is key to understanding the formation of dunes, emission of dust into the atmosphere, and landscape erosion. Previous models of the threshold wind speed required for saltation initiation have assumed that the particle bed is monodisperse and homogeneous in arrangement, thereby ignoring what is in reality a distribution of particle lifting thresholds, influenced by variability in soil particle sizes and bed geometry. To help overcome this problem, we present a numerical model that determines the distribution of threshold wind speeds required for particle lifting for a given soil size distribution. The model results are evaluated against high frequency wind speed and saltation data from a recent field campaign in Oceano Dunes in Southern California. The results give us insight into the range of lifting thresholds present during incipient sediment transport and the simplifications that are often made to characterize the process. In addition, this study provides a framework for moving beyond the 'fluid threshold' paradigm, which is known to be inaccurate, especially for near-threshold conditions.
NASA Astrophysics Data System (ADS)
Kahl, M.; Morgan, D. J.; Viccaro, M.; Dingwell, D. B.
2015-12-01
The March-July eruption of Mt. Etna in 1669 is ranked as one of the most destructive and voluminous eruptions of Etna volcano in historical times. To assess threats from future eruptions, a better understanding of how and over what timescales magma moved underground prior to and during the 1669 eruption is required. We present a combined population based and kinetic modelling approach [1-2] applied to 185 olivine crystals that erupted during the 1669 eruption. By means of this approach we provide, for the first time, a dynamic picture of magma mixing and magma migration activity prior to and during the 1669 flank eruption of Etna volcano. Following the work of [3] we have studied 10 basaltic lava samples (five SET1 and five SET2 samples) that were erupted from different fissures that opened between 950 and 700 m a.s.l. Following previous work [1-2] we were able to classify different populations of olivine based on their overall core and rim compositional record and the prevalent zoning type (i.e. normal vs. reverse). The core plateau compositions of the SET1 and SET2 olivines range from Fo70 up to Fo83 with a single peak at Fo75-76. The rims differ significantly and can be distinguished into two different groups. Olivine rims from the SET1 samples are generally more evolved and range from Fo50 to Fo64 with a maximum at Fo55-57. SET2 olivine rims vary between Fo65-75 with a peak at Fo69. SET1 and SET2 olivines display normal zonation with cores at Fo75-76 and diverging rim records (Fo55-57 and Fo65-75). The diverging core and rim compositions recorded in the SET1 and SET2 olivines can be attributed to magma evolution possibly in three different magmatic environments (MEs): M1 (=Fo75-76), M2 (=Fo69) and M3 (=Fo55-57) with magma transfer and mixing amongst them. The MEs established in this study differ slightly from those identified in previous works [1-2]. We note the relative lack of olivines with Fo-rich core and rim compositions indicating a major mafic magma
Mørch, Carsten Dahl; Hennings, Kristian; Andersen, Ole Kæseler
2011-04-01
Electrical stimulation of cutaneous tissue through surface electrodes is an often-used method for evoking experimental pain. However, at painful intensities both non-nociceptive Aβ-fibers and nociceptive Aδ- and C-fibers may be activated by the electrical stimulation. This study proposes a finite element (FE) model of the extracellular potential and a stochastic branching fiber model of afferent fiber excitation thresholds. The FE model described four horizontal layers (stratum corneum, epidermis, dermis, and hypodermis) and was used to estimate the excitation thresholds of Aβ-fibers terminating in the dermis and Aδ-fibers terminating in the epidermis. The perception thresholds of 11 electrodes with diameters ranging from 0.2 to 20 mm were modeled and assessed on the volar forearm of healthy human volunteers by an adaptive two-alternative forced choice algorithm. The model showed that the magnitude of the current density was highest for smaller electrodes and decreased through the skin. The excitation thresholds of the Aδ-fibers were lower than the excitation thresholds of Aβ-fibers when current was applied through small, but not large, electrodes. The experimentally assessed perception threshold followed the lowest excitation threshold of the modeled fibers. The model confirms that preferential excitation of Aδ-fibers may be achieved by small-electrode stimulation due to higher current density at the dermoepidermal junction. PMID:21207174
Postscript: Parallel Distributed Processing in Localist Models without Thresholds
ERIC Educational Resources Information Center
Plaut, David C.; McClelland, James L.
2010-01-01
The current authors reply to a response by Bowers on a comment by the current authors on the original article. Bowers (2010) mischaracterizes the goals of parallel distributed processing (PDP) research--explaining performance on cognitive tasks is the primary motivation. More important, his claim that localist models, such as the interactive…
Threshold behaviour of a SIR epidemic model with age structure and immigration.
Franceschetti, Andrea; Pugliese, Andrea
2008-07-01
We consider an SIR age-structured model with immigration of infectives in all epidemiological compartments; the population is assumed to be in demographic equilibrium between below-replacement fertility and immigration; the spread of the infection occurs through a general age-dependent kernel. We analyse the equations for steady states; because of immigration of infectives, a steady state with a positive density of infectives always exists. However, a quasi-threshold theorem is proved, in the sense that below the threshold the density of infectives is close to 0, while it is bounded away from 0 above the threshold; furthermore, conditions that guarantee uniqueness of steady states are obtained. Finally, we present some numerical examples, inspired by the Italian demographic situation, that illustrate the threshold-like behaviour and other features of the stationary solutions and of the transient. PMID:17985131
Siegel, Jeffry A; Welsh, James S
2016-04-01
In the past several years, there has been a great deal of attention from the popular media focusing on the alleged carcinogenicity of low-dose radiation exposures received by patients undergoing medical imaging studies such as X-rays, computed tomography scans, and nuclear medicine scintigraphy. The media has based its reporting on the plethora of articles published in the scientific literature that claim that there is "no safe dose" of ionizing radiation, while essentially ignoring all the literature demonstrating the opposite point of view. But this reported "scientific" literature in turn bases its estimates of cancer induction on the linear no-threshold hypothesis of radiation carcinogenesis. The use of the linear no-threshold model has yielded hundreds of articles, all of which predict a definite carcinogenic effect of any dose of radiation, regardless of how small. Therefore, hospitals and professional societies have begun campaigns and policies aiming to reduce the use of certain medical imaging studies based on perceived risk:benefit ratio assumptions. However, as they are essentially all based on the linear no-threshold model of radiation carcinogenesis, the risk:benefit ratio models used to calculate the hazards of radiological imaging studies may be grossly inaccurate if the linear no-threshold hypothesis is wrong. Here, we review the myriad inadequacies of the linear no-threshold model and cast doubt on the various studies based on this overly simplistic model. PMID:25824269
Neural masking by sub-threshold electric stimuli: animal and computer model results.
Miller, Charles A; Woo, Jihwan; Abbas, Paul J; Hu, Ning; Robinson, Barbara K
2011-04-01
Electric stimuli can prosthetically excite auditory nerve fibers to partially restore sensory function to individuals impaired by profound or severe hearing loss. While basic response properties of electrically stimulated auditory nerve fibers (ANFs) are known, responses to the complex, time-changing stimuli used clinically are inadequately understood. We report that forward-masker pulse trains can enhance and reduce ANF responsiveness to subsequent stimuli, and the novel observation that sub-threshold (non-spike-evoking) electric trains can reduce responsiveness to subsequent pulse-train stimuli. The effect is observed in the responses of cat ANFs and shown by a computational biophysical ANF model that simulates rate adaptation through accumulation of external potassium cations (K+). Both low-threshold (Klt) and high-threshold (Kht) potassium channels were simulated at each node of Ranvier. Model versions without Klt channels did not produce the sub-threshold effect. These results suggest that some such accumulation mechanism, along with Klt channels, may underlie sub-threshold masking observed in cat ANF responses. As multichannel auditory prostheses typically present sub-threshold stimuli to various ANF subsets, these findings have clear relevance to clinical situations. PMID:21080206
Modeling soil quality thresholds to ecosystem recovery at Fort Benning, GA, USA
Garten Jr, Charles T; Ashwood, Tom L
2004-12-01
The objective of this research was to use a simple model of soil carbon (C) and nitrogen (N) dynamics to predict nutrient thresholds to ecosystem recovery on degraded soils at Fort Benning, Georgia, in the southeastern USA. Artillery, wheeled, and tracked vehicle training at military installations can produce soil disturbance and potentially create barren, degraded soils. Ecosystem reclamation is an important component of natural resource management at military installations. Four factors were important to the development of thresholds to recovery of aboveground biomass on degraded soils: (1) initial amounts of aboveground biomass, (2) initial soil C stocks (i.e., soil quality), (3) relative recovery rates of biomass, and (4) soil sand content. Forests and old fields on soils with varying sand content had different predicted thresholds for ecosystem recovery. Soil C stocks at barren sites on Fort Benning were generally below predicted thresholds to 100% recovery of desired future ecosystem conditions defined on the basis of aboveground biomass. Predicted thresholds to ecosystem recovery were less on soils with more than 70% sand content. The lower thresholds for old field and forest recovery on more sandy soils were apparently due to higher relative rates of net soil N mineralization. Calculations with the model indicated that a combination of desired future conditions, initial levels of soil quality (defined by soil C stocks), and the rate of biomass accumulation determine the predicted success of ecosystem recovery on disturbed soils.
Threshold conditions for SIS epidemic models on edge-weighted networks
NASA Astrophysics Data System (ADS)
Wu, Qingchu; Zhang, Fei
2016-07-01
We consider the disease dynamics of a susceptible-infected-susceptible model on weighted random and regular networks. Using the pairwise approximation, we build an edge-based compartment model, from which the condition for epidemic outbreak is obtained. Our results suggest that there is a remarkable difference between linear and nonlinear transmission rates. For a linear transmission rate, the epidemic threshold is completely determined by the mean weight, which differs from the susceptible-infected-recovered model framework. For a nonlinear transmission rate, by contrast, the epidemic threshold is related not only to the mean weight but also to the heterogeneity of the weight distribution.
A physical-based pMOSFETs threshold voltage model including the STI stress effect
NASA Astrophysics Data System (ADS)
Wei, Wu; Gang, Du; Xiaoyan, Liu; Lei, Sun; Jinfeng, Kang; Ruqi, Han
2011-05-01
A physics-based threshold voltage model of pMOSFETs under shallow trench isolation (STI) stress has been developed. The model is verified against layout-dependent measurement data from a 130 nm technology. Comparison of pMOSFET and nMOSFET model simulations shows that STI stress induces less threshold voltage shift but more mobility shift in the pMOSFET. Circuit simulations of a nine-stage ring oscillator with and without STI stress showed about an 11% improvement in average delay time. This indicates the importance of considering STI stress in circuit design.
Study on the threshold of a stochastic SIR epidemic model and its extensions
NASA Astrophysics Data System (ADS)
Zhao, Dianli
2016-09-01
This paper provides a simple but effective method for estimating the threshold of a class of stochastic epidemic models by use of the nonnegative semimartingale convergence theorem. First, the threshold R0SIR is obtained for the stochastic SIR model with a saturated incidence rate; whether its value is below or above 1 completely determines whether the disease goes extinct or prevails, for any intensity of the white noise. Moreover, when R0SIR > 1, the system is proved to converge in time mean. The thresholds of the stochastic SIVS models with and without a saturated incidence rate are then established by the same method. Compared with the previously known literature, the related results are improved and the method is simpler.
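As an illustration of this style of result, the extinction/persistence threshold of the related stochastic SIS model of Gray et al. (2011), dI = I(βN − μ − γ − βI)dt + σI(N − I)dB, has a closed form for small noise (σ² ≤ β/N). This is a neighboring model, not necessarily the paper's exact formulation, and the parameter values below are illustrative:

```python
def stochastic_sis_threshold(beta, mu, gamma, sigma, N):
    """Threshold R0^S = beta*N/(mu+gamma) - sigma^2*N^2/(2*(mu+gamma)) of the
    stochastic SIS model of Gray et al. (2011); extinction when R0^S < 1."""
    return (beta * N - 0.5 * sigma ** 2 * N ** 2) / (mu + gamma)

# Without noise this is the deterministic R0 = 1.5; transmission noise
# sigma = 0.5 drags the threshold to 0.875 < 1, so the disease dies out.
print(stochastic_sis_threshold(beta=0.3, mu=0.1, gamma=0.1, sigma=0.0, N=1.0))
print(stochastic_sis_threshold(beta=0.3, mu=0.1, gamma=0.1, sigma=0.5, N=1.0))
```

The key qualitative point matches the abstract: sufficiently large white noise can suppress an epidemic that the deterministic model predicts would prevail.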
Al-Asadi, H A; Al-Mansoori, M H; Ajiya, M; Hitam, S; Saripan, M I; Mahdi, M A
2010-10-11
We develop a theoretical model that can be used to predict the stimulated Brillouin scattering (SBS) threshold in optical fibers subject to the Brillouin pump recycling technique. Simulation results obtained from our model are in close agreement with our experimental results. The developed model utilizes single-mode optical fiber of different lengths as the Brillouin gain medium. For a 5-km-long single-mode fiber, the calculated SBS threshold power is about 16 mW with the conventional technique. This value is reduced to about 8 mW when the residual Brillouin pump is recycled at the end of the fiber. The reduction of the SBS threshold is due to the longer interaction length between the Brillouin pump and the Stokes wave. PMID:20941134
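The order of magnitude of the conventional threshold can be checked against the classic Smith estimate P_th ≈ 21·K·A_eff/(g_B·L_eff). The fiber parameters below are typical textbook values for standard single-mode fiber, assumed for illustration rather than taken from the paper:

```python
import math

def sbs_threshold_w(A_eff, g_B, alpha_db_per_km, L_km, K=2.0):
    """Classic Smith estimate of the SBS threshold, P_th ~ 21*K*A_eff/(g_B*L_eff),
    with effective length L_eff = (1 - exp(-alpha*L)) / alpha."""
    alpha = alpha_db_per_km * math.log(10) / 10.0 / 1000.0  # convert dB/km to 1/m
    L = L_km * 1000.0
    L_eff = (1.0 - math.exp(-alpha * L)) / alpha
    return 21.0 * K * A_eff / (g_B * L_eff)

# Assumed typical SMF values: A_eff = 80 um^2, g_B = 5e-11 m/W,
# attenuation 0.2 dB/km, scrambled polarization (K = 2).
P_th = sbs_threshold_w(A_eff=80e-12, g_B=5e-11, alpha_db_per_km=0.2, L_km=5.0)
print(round(P_th * 1e3, 1), "mW")  # on the order of the ~16 mW quoted for 5 km
```

Recycling the residual pump effectively lengthens the pump-Stokes interaction, which is consistent with the roughly halved threshold the abstract reports.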
Modeling Soil Quality Thresholds to Ecosystem Recovery at Fort Benning, Georgia, USA
Garten Jr., C.T.
2004-03-08
The objective of this research was to use a simple model of soil C and N dynamics to predict nutrient thresholds to ecosystem recovery on degraded soils at Fort Benning, Georgia, in the southeastern USA. The model calculates aboveground and belowground biomass, soil C inputs and dynamics, soil N stocks and availability, and plant N requirements. A threshold is crossed when predicted soil N supplies fall short of predicted N required to sustain biomass accrual at a specified recovery rate. Four factors were important to development of thresholds to recovery: (1) initial amounts of aboveground biomass, (2) initial soil C stocks (i.e., soil quality), (3) relative recovery rates of biomass, and (4) soil sand content. Thresholds to ecosystem recovery predicted by the model should not be interpreted independent of a specified recovery rate. Initial soil C stocks influenced the predicted patterns of recovery by both old field and forest ecosystems. Forests and old fields on soils with varying sand content had different predicted thresholds to recovery. Soil C stocks at barren sites on Fort Benning generally lie below predicted thresholds to 100% recovery of desired future ecosystem conditions defined on the basis of aboveground biomass (18000 versus 360 g m^-2 for forests and old fields, respectively). Calculations with the model indicated that reestablishment of vegetation on barren sites to a level below the desired future condition is possible at recovery rates used in the model, but the time to 100% recovery of desired future conditions, without crossing a nutrient threshold, is prolonged by a reduced rate of forest growth. Predicted thresholds to ecosystem recovery were less on soils with more than 70% sand content. The lower thresholds for old field and forest recovery on more sandy soils are apparently due to higher relative rates of net soil N mineralization in more sandy soils. Calculations with the model indicate that a combination of desired future
Modeling the effect of humidity on the threshold friction velocity of coal particles
NASA Astrophysics Data System (ADS)
Zhang, Xiaochun; Chen, Weiping; Ma, Chun; Zhan, Shuifen
2012-09-01
Coal particle emission can cause serious air pollution in coal production and transport regions. In the coal mining industry, large amounts of water are regularly sprayed onto coal piles to prevent dust emission from the coal particles. The mechanism behind this measure is to manage the threshold friction velocity, an important parameter controlling wind erosion and dust emission. Bagnold developed a threshold friction velocity model for soil particles. However, the Bagnold model cannot be applied directly to coal particles, as coal particles differ considerably from soils in physical and chemical properties. We studied and modeled the threshold friction velocity of coal particles under different humidities using a wind tunnel. Results showed that the effects of humidity on the threshold friction velocity of coal particles are related to a hydrophilic effect and an adhesive effect. The Bagnold model can be corrected with two new parameter terms that account for these effects. The new model agreed well with wind tunnel measurements for coal particles of different size categories. Although the new model was developed for coal particles, its physical basis may allow application to other wind-susceptible particles.
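For reference, the uncorrected Bagnold expression is u*_t = A·sqrt(((ρ_p − ρ_a)/ρ_a)·g·d). A dry-bed sketch follows; the densities and the coefficient A ≈ 0.1 are generic assumptions, and the paper's humidity correction terms are not included:

```python
import math

def bagnold_threshold(d, rho_p, rho_a=1.2, g=9.81, A=0.1):
    """Bagnold threshold friction velocity (m/s) for dry, loose particles:
    u*_t = A * sqrt((rho_p - rho_a) / rho_a * g * d)."""
    return A * math.sqrt((rho_p - rho_a) / rho_a * g * d)

# 250-um quartz sand (assumed density 2650 kg/m^3): u*_t is roughly 0.23 m/s.
print(round(bagnold_threshold(250e-6, rho_p=2650.0), 3))

# Lower-density coal particles of the same size lift at a lower friction velocity.
print(round(bagnold_threshold(250e-6, rho_p=1300.0), 3))
```

Humidity corrections of the kind the abstract describes would add cohesion terms that raise these dry-bed values.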
Complex Dynamic Thresholds and Generation of the Action Potentials in the Neural-Activity Model
NASA Astrophysics Data System (ADS)
Kirillov, S. Yu.; Nekorkin, V. I.
2016-06-01
This work is devoted to studying the processes of activation of the neurons whose excitation thresholds are not constant and vary in time (the so-called dynamic thresholds). The neuron dynamics is described by the FitzHugh-Nagumo model with nonlinear behavior of the recovery variable. The neuron response to the external pulsed activating action in the presence of a slowly varying synaptic current is studied within the framework of this model. The structure of the dynamic threshold is studied and its properties depending on the external-action parameters are established. It is found that the formation of the "folds" in the separatrix threshold manifold in the model phase space is a typical feature of the complex dynamic threshold. High neuron sensitivity to the action of the comparatively weak slow control signals is established. This explains the capability of the neurons to perform flexible tuning of their selective properties for detecting various external signals in sufficiently short times (of the order of duration of several spikes).
A new analytical threshold voltage model of cylindrical gate tunnel FET (CG-TFET)
NASA Astrophysics Data System (ADS)
Dash, S.; Mishra, G. P.
2015-10-01
The cylindrical gate tunnel FET (CG-TFET) is one of the potential candidates for future nanotechnology, as it exhibits greater scaling capability and lower subthreshold swing (SS) than the conventional MOSFET. In this paper, a new analytical approach is proposed to extract the gate-dependent threshold voltage of the CG-TFET. The potential and electric field distributions in the cylindrical channel are obtained using the 2-D Poisson's equation, which in turn yields the shortest tunneling distance and the tunneling current. The threshold voltage is extracted using the peak-transconductance-change method, based on the saturation of the tunneling barrier width. The impact of scaling the effective oxide thickness, cylindrical pillar diameter and gate length on the threshold voltage is investigated. The consistency of the proposed model is validated against TCAD simulation results. The present model can be helpful for studying the switching behavior of TFETs.
Thresholds in vegetation responses to drought: Implications for rainfall-runoff modeling
NASA Astrophysics Data System (ADS)
Tague, C.; Dugger, A. L.
2011-12-01
While threshold behavior is often associated with soil and subsurface runoff generation, dynamic vegetation responses to water stress may be an important contributor to threshold-type behavior in rainfall-runoff models. Vegetation water loss varies with vegetation type and biomass, and transpiration dynamics in many settings are regulated by stomatal function. In water-limited environments the timing and frequency of stomatal closure vary from year to year as a function of water stress. Stomatal closure and the associated fine-time-scale (hourly to weekly) plant transpiration may appear as threshold (on/off) behavior. Total seasonal to annual plant water use, however, typically shows a continuous relationship with atmospheric conditions and soil moisture. Thus, while short-time-scale behavior may demonstrate nonlinear, threshold-type behavior, continuous relationships at slightly longer time scales can be used to capture the role of vegetation-mediated water loss and its associated impact on storage and runoff. Many rainfall-runoff models rely on these types of relationships. However, these relationships may change if water stress influences vegetation structure, as it does in drought conditions. Forest dieback under drought is a dramatic example of a threshold event, and one that is expected to occur with increasing frequency under a warmer climate. Less dramatic but still important are changes in leaf and root biomass in response to drought. We demonstrate these effects using a coupled ecosystem carbon cycling and hydrology model and show that by accounting for drought-driven changes in vegetation dynamics we improve our ability to capture inter-annual variation in streamflow for a semi-arid watershed in New Mexico. We also use the model to predict spatial patterns of more catastrophic vegetation dieback with moisture stress and show that we can accurately capture the spatial pattern of ponderosa pine dieback during an early 2000s drought in New Mexico. We use these
On the physical meaning of hillslope thresholds: A combined field-modeling analysis
NASA Astrophysics Data System (ADS)
Graham, C. B.; McDonnell, J. J.
2008-12-01
Near surface lateral subsurface flow has been shown to be a major component of streamflow in many upland humid areas. Nevertheless, efforts to derive macroscale understanding have proven difficult, often due to the baffling degree of heterogeneity of hillslope scale soil, geologic and hydraulic characteristics. One common finding on gauged hillslopes is a threshold response of subsurface stormflow to total storm precipitation. These thresholds have been attributed to several mechanisms, but increasingly, it appears that such threshold response in areas with strong permeability contrast between soil and bedrock relates to the filling and spilling of connected subsurface saturated patches. Additionally, antecedent moisture conditions appear to have an impact linked to the general soil moisture deficit in the soil profile. Here, we describe a combined field-modeling study at the Maimai, NZ, experimental hillslope. We present a simple reservoir-based model based on hillslope excavations aimed at quantifying subsurface flow paths and processes, and bedrock microtopography. We perform a multiple-objective calibration incorporating hydrograph, tracer and internal state response to demonstrate the model is capturing observed behavior at the site. We then present a series of virtual experiments using the calibrated model to examine the relative influence of fill and spill (bedrock leakage and subsurface storage) and soil moisture deficit (PET and storm spacing) factors on threshold development and magnitude. Overall, our work suggests that whole-hillslope thresholds are balanced by fill and spill and soil moisture deficit: at Maimai, where rainfall is high and evenly distributed annually, fill and spill factors dominate. In climate regimes where storm spacing is variable and potential evaporation rates are high, our virtual experiments suggest that soil moisture deficit factors would dominate the threshold response.
A probabilistic model of absolute auditory thresholds and its possible physiological basis.
Heil, Peter; Neubauer, Heinrich; Tetschke, Manuel; Irvine, Dexter R F
2013-01-01
Detection thresholds for auditory stimuli, specified in terms of their amplitude or level, depend on the stimulus temporal envelope and decrease with increasing stimulus duration. The neural mechanisms underlying these fundamental across-species observations are not fully understood. Here, we present a "continuous look" model, according to which the stimulus gives rise to stochastic neural detection events whose probability of occurrence is proportional to the 3rd power of the low-pass filtered, time-varying stimulus amplitude. Threshold is reached when a criterion number of events have occurred (probability summation). No long-term integration is required. We apply the model to an extensive set of thresholds measured in humans for tones of different envelopes and durations and find it to fit well. Subtle differences at long durations may be due to limited attention resources. We confirm the probabilistic nature of the detection events by analyses of simple reaction times and verify the exponent of 3 by validating model predictions for binaural thresholds from monaural thresholds. The exponent originates in the auditory periphery, possibly in the intrinsic Ca(2+) cooperativity of the Ca(2+) sensor involved in exocytosis from inner hair cells. It results in growth of the spike rate of auditory-nerve fibers (ANFs) with the 3rd power of the stimulus amplitude before saturating (Heil et al., J Neurosci 31:15424-15437, 2011), rather than with its square (i.e., with stimulus intensity), as is commonly assumed. Our work therefore suggests a link between detection thresholds and a key biochemical reaction in the receptor cells. PMID:23716205
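For a flat-envelope tone, the model's expected event count is c·A³·T, so the threshold amplitude scales as T^(−1/3) and the threshold level falls by 20/3 ≈ 6.7 dB per decade of duration. A sketch under these simplifying assumptions (flat envelope, unit constants chosen for illustration):

```python
import numpy as np

def threshold_amplitude(duration_s, criterion=1.0, c=1.0):
    """Amplitude at which the expected number of detection events,
    c * A**3 * T for a flat envelope, reaches the criterion count."""
    return (criterion / (c * duration_s)) ** (1.0 / 3.0)

# Threshold level in dB falls by 20/3 ~ 6.7 dB per tenfold increase in duration.
for T in (0.01, 0.1, 1.0):
    print(T, round(20.0 * np.log10(threshold_amplitude(T)), 2))
```

Note the contrast with an intensity-based (squared-amplitude) integrator, which would predict a 5 dB-per-decade slope instead.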
The threshold of a stochastic SIVS epidemic model with nonlinear saturated incidence
NASA Astrophysics Data System (ADS)
Zhao, Dianli; Zhang, Tiansi; Yuan, Sanling
2016-02-01
A stochastic version of the SIS epidemic model with vaccination (SIVS) is studied. When the noise is small, the threshold parameter is identified, which determines the extinction and persistence of the epidemic. Besides, the results show that large noise will suppress the epidemic from prevailing regardless of the saturated incidence. The results are illustrated by computer simulations.
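The suppression effect described above can be illustrated with a minimal stochastic SIS sketch (omitting the paper's vaccination compartment and saturated incidence; all parameters are hypothetical). Euler-Maruyama integration shows persistence under small noise and extinction under large noise.

```python
import random

def sis_tail_mean(sigma, beta=0.8, gamma=0.3, i0=0.5,
                  dt=0.01, steps=20000, seed=1):
    """Euler-Maruyama for dI = I*(beta*(1 - I) - gamma) dt + sigma*I dW,
    clamped to [0, 1]; returns the mean infected fraction over the
    second half of the run."""
    rng = random.Random(seed)
    i, total, count = i0, 0.0, 0
    for step in range(steps):
        dw = rng.gauss(0.0, dt ** 0.5)
        i += i * (beta * (1.0 - i) - gamma) * dt + sigma * i * dw
        i = min(max(i, 0.0), 1.0)
        if step >= steps // 2:
            total += i
            count += 1
    return total / count

# Small noise: the epidemic persists near its endemic level.
persistent = sis_tail_mean(sigma=0.1)
# Large noise: the multiplicative term drives the epidemic extinct
# (heuristically, persistence needs beta - gamma - sigma**2 / 2 > 0).
extinct = sis_tail_mean(sigma=3.0)
```

The sign of the noise-corrected growth rate, beta − gamma − sigma²/2, plays the role of the threshold parameter in this simplified setting.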
NESTED THRESHOLD SIRE MODELS FOR ESTIMATING GENETIC PARAMETERS FOR STAYABILITY IN BEEF COWS
Technology Transfer Automated Retrieval System (TEKTRAN)
Stayability is the ability of a beef cow to remain in production to a specified age. In this study, the interest was in determining the genetic relationship between stayability to an early age with the stayability to a later age. A nested threshold sire model for stayability was used to estimate t...
Technology Transfer Automated Retrieval System (TEKTRAN)
A sire-maternal grandsire threshold model was used for genetic evaluation of stillbirth in U.S. Holsteins. Calving ease and stillbirth records for herds reporting at least 10 dead calves were extracted from the AIPL database. About half of the 14 million calving ease records in the database have a k...
Threshold Values for Identification of Contamination Predicted by Reduced-Order Models
Last, George V.; Murray, Christopher J.; Bott, Yi-Ju; Brown, Christopher F.
2014-12-31
The U.S. Department of Energy’s (DOE’s) National Risk Assessment Partnership (NRAP) Project is developing reduced-order models to evaluate potential impacts on underground sources of drinking water (USDWs) if CO2 or brine leaks from deep CO2 storage reservoirs. Threshold values, below which there would be no predicted impacts, were determined for portions of two aquifer systems. These threshold values were calculated using an interwell approach for determining background groundwater concentrations that is an adaptation of methods described in the U.S. Environmental Protection Agency’s Unified Guidance for Statistical Analysis of Groundwater Monitoring Data at RCRA Facilities.
Codimension-1 Sliding Bifurcations of a Filippov Pest Growth Model with Threshold Policy
NASA Astrophysics Data System (ADS)
Tang, Sanyi; Tang, Guangyao; Qin, Wenjie
A Filippov system is proposed to describe stage-structured, nonsmooth pest growth with threshold policy control (TPC). The TPC measure uses the total density of juveniles and adults as the index for deciding when to implement chemical control strategies. The proposed Filippov system can have three pieces of sliding segments and three pseudo-equilibria, which result in rich sliding mode bifurcations and local sliding bifurcations, including boundary node (boundary focus, or boundary saddle) and tangency bifurcations. As the threshold density varies, the model exhibits interesting global sliding bifurcations sequentially: touching → buckling → crossing → sliding homoclinic orbit to a pseudo-saddle → crossing → touching bifurcations. In particular, bifurcation of a homoclinic orbit to a pseudo-saddle with a figure-of-eight shape, to a pseudo-saddle-node, or to a standard saddle-node has been observed for some parameter sets. This implies that control outcomes are sensitive to the threshold level, and hence it is crucial to choose the threshold level at which to initiate the control strategy. One more sliding segment (or pseudo-equilibrium) is induced when switching is guided by the total population density rather than by the juvenile density alone, implying that this control policy is more effective in preventing multiple pest outbreaks or stabilizing the pest density at a desired level such as an economic threshold.
Predicting the epidemic threshold of the susceptible-infected-recovered model
Wang, Wei; Liu, Quan-Hui; Zhong, Lin-Feng; Tang, Ming; Gao, Hui; Stanley, H. Eugene
2016-01-01
Researchers have developed several theoretical methods for predicting epidemic thresholds, including the mean-field like (MFL) method, the quenched mean-field (QMF) method, and the dynamical message passing (DMP) method. When these methods are applied to predict the epidemic threshold, they often produce differing results, and their relative levels of accuracy are still unknown. We systematically analyze these two issues—relationships among differing results and levels of accuracy—by studying the susceptible-infected-recovered (SIR) model on uncorrelated configuration networks and a group of 56 real-world networks. In uncorrelated configuration networks, the MFL and DMP methods yield identical predictions that are larger and more accurate than the prediction generated by the QMF method. For the 56 real-world networks, the epidemic threshold obtained by the DMP method is more likely to be accurate because it incorporates full network topology information and some dynamical correlations. We find that in most of the networks with positive degree-degree correlations, an eigenvector localized on the high k-core nodes, or a high level of clustering, the MFL method, which uses the degree distribution as the only input, predicts the epidemic threshold better than the other two methods. PMID:27091705
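For uncorrelated networks, the MFL prediction referenced above is commonly written as λc = ⟨k⟩ / (⟨k²⟩ − ⟨k⟩). A minimal sketch computing it from a degree sequence (the degree sequences below are illustrative, not from the paper's 56 networks):

```python
def mfl_sir_threshold(degrees):
    """Mean-field-like SIR epidemic threshold on an uncorrelated network,
    lambda_c = <k> / (<k^2> - <k>), computed from a degree sequence."""
    n = len(degrees)
    k1 = sum(degrees) / n
    k2 = sum(d * d for d in degrees) / n
    return k1 / (k2 - k1)

# Degree heterogeneity lowers the threshold at fixed mean degree.
regular = mfl_sir_threshold([4] * 1000)                   # <k> = 4, <k^2> = 16
heterogeneous = mfl_sir_threshold([2] * 500 + [6] * 500)  # <k> = 4, <k^2> = 20
```

This shows why the degree distribution alone can be informative: broadening it raises ⟨k²⟩ and pushes the predicted threshold down.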
Predicting the epidemic threshold of the susceptible-infected-recovered model
NASA Astrophysics Data System (ADS)
Wang, Wei; Liu, Quan-Hui; Zhong, Lin-Feng; Tang, Ming; Gao, Hui; Stanley, H. Eugene
2016-04-01
Researchers have developed several theoretical methods for predicting epidemic thresholds, including the mean-field like (MFL) method, the quenched mean-field (QMF) method, and the dynamical message passing (DMP) method. When these methods are applied to predict the epidemic threshold, they often produce differing results, and their relative levels of accuracy are still unknown. We systematically analyze these two issues—relationships among differing results and levels of accuracy—by studying the susceptible-infected-recovered (SIR) model on uncorrelated configuration networks and a group of 56 real-world networks. In uncorrelated configuration networks, the MFL and DMP methods yield identical predictions that are larger and more accurate than the prediction generated by the QMF method. For the 56 real-world networks, the epidemic threshold obtained by the DMP method is more likely to be accurate because it incorporates full network topology information and some dynamical correlations. We find that in most of the networks with positive degree-degree correlations, an eigenvector localized on the high k-core nodes, or a high level of clustering, the MFL method, which uses the degree distribution as the only input, predicts the epidemic threshold better than the other two methods.
Lucero, Jorge C; Van Hirtum, Annemie; Ruty, Nicolas; Cisonni, Julien; Pelorson, Xavier
2009-02-01
This paper analyzes the capability of a mucosal wave model of the vocal fold to predict values of phonation threshold lung pressure. Equations derived from the model are fitted to pressure data collected from a mechanical replica of the vocal folds. The results show that a recent extension of the model to include an arbitrary delay of the mucosal wave in its travel along the glottal channel provides a better approximation to the data than the original version of the model, which assumed a small delay. They also show that modeling the vocal tract as a simple inertive load, as has been proposed in recent analytical studies of phonation, fails to capture the effect of the vocal tract on the phonation threshold pressure with reasonable accuracy. PMID:19206840
Research of adaptive threshold model and its application in iris tracking
NASA Astrophysics Data System (ADS)
Zhao, Qijie; Tu, Dawei; Wang, Rensan; Gao, Daming
2005-02-01
The relationship between the gray values of pixels and macro-scale information in the image was analyzed using methods from statistical mechanics. After simulation and curve fitting of the experimental data with statistical regression methods, an adaptive threshold model relating average gray value to image threshold is proposed in terms of Boltzmann statistics. The image characteristics around the eye region and the states of the eyeball were also analyzed, and an algorithm was proposed to extract eye features and locate their positions in the image; a further algorithm was proposed to find the iris characteristic line and then locate the iris center. Finally, considering different head gestures, head positions, and eye-opening states, experiments were conducted using the adaptive threshold model and the designed algorithms in an eye-gaze input human-computer interaction (HCI) system. The experimental results show that the algorithms can be applied widely across these cases and that real-time iris tracking can be performed with the adaptive threshold model and algorithms.
Modeling jointly low, moderate, and heavy rainfall intensities without a threshold selection
NASA Astrophysics Data System (ADS)
Naveau, Philippe; Huser, Raphael; Ribereau, Pierre; Hannart, Alexis
2016-04-01
In statistics, extreme events are often defined as excesses above a given large threshold. This definition allows hydrologists and flood planners to apply Extreme-Value Theory (EVT) to their time series of interest. Even in the stationary univariate context, this approach has at least two main drawbacks. First, working with excesses implies that many observations (those below the chosen threshold) are completely disregarded: the range of precipitation is artificially chopped into two pieces, namely large intensities and the rest, which necessarily imposes different statistical models for each piece. Second, this strategy raises a nontrivial and very practical difficulty: how to choose the optimal threshold that correctly discriminates between low and heavy rainfall intensities. To address these issues, we propose a statistical model in which EVT results apply not only to heavy, but also to low precipitation amounts (zeros excluded). Our model complies with EVT on both ends of the spectrum and allows a smooth transition between the two tails, while keeping a low number of parameters. In terms of inference, we have implemented and tested two classical estimation methods: likelihood maximization and probability weighted moments. Last but not least, there is no need to choose a threshold to separate low and high excesses. The performance and flexibility of this approach are illustrated on simulated data and on hourly precipitation recorded in Lyon, France.
Near-threshold boson pair production in the model of smeared-mass unstable particles
Kuksa, V. I.; Pasechnik, R. S.
2010-09-15
Near-threshold production of boson pairs is considered within the framework of the model of unstable particles with smeared mass. We describe the principal aspects of the model and consider the calculation strategy, including radiative corrections. The results of the calculations are in good agreement with LEP II data and Monte Carlo simulations. The suggested approach significantly simplifies calculations with respect to the standard perturbative one.
NASA Astrophysics Data System (ADS)
O'Grady, E. A.; Culloty, S. C.; Kelly, T. C.; O'Callaghan, M. J. A.; Rachinskii, D.
2015-02-01
Thresholds occur, and play an important role, in the dynamics of many biological communities. In this paper, we model a persistence type threshold which has been shown experimentally to exist in hyperparasitised flukes in the cockle, a shellfish. Our model consists of a periodically driven slow-fast host-parasite system of equations for a slow flukes population (host) and a fast Unikaryon hyperparasite population (parasite). The model exhibits two branches of the critical curve crossing in a transcritical bifurcation scenario. We discuss two thresholds due to immediate and delayed exchange of stability effects; and we derive algebraic relationships for parameters of the periodic solution in the limit of the infinite ratio of the time scales. Flukes, which are the host species in our model, parasitise cockles and in turn are hyperparasitised by the microsporidian Unikaryon legeri; the life cycle of flukes includes several life stages and a number of different hosts. That is, the flukes-hyperparasite system in a cockle is, naturally, part of a larger estuarine ecosystem of interacting species involving parasites, shellfish and birds which prey on shellfish. A population dynamics model which accounts for one system of such multi-species interactions and includes the fluke-hyperparasite model in a cockle as a subsystem is presented. We provide evidence that the threshold effect we observed in the flukes-hyperparasite subsystem remains apparent in the multi-species system. Assuming that flukes damage cockles, and taking into account that the hyperparasite is detrimental to flukes, it is natural to suggest that the hyperparasitism may support the abundance of cockles and, thereby, the persistence of the estuarine ecosystem, including shellfish and birds. We confirm the possibility of the existence of this scenario in our model, at least partially, by removing the hyperparasite and demonstrating that this may result in a substantial drop in cockle numbers. The result
Evaluation of a threshold-based model of fatigue in gamma titanium aluminide following impact damage
NASA Astrophysics Data System (ADS)
Harding, Trevor Scott
2000-10-01
Recent interest in gamma titanium aluminide (gamma-TiAl) for use in gas turbine engine applications has centered on the low density and good elevated temperature strength retention of gamma-TiAl compared to current materials. However, the relatively low ductility and fracture toughness of gamma-TiAl leads to serious concerns regarding its ability to resist impact damage. Furthermore, the limited fatigue crack growth resistance of gamma-TiAl means that the potential for fatigue failures resulting from impact damage is real if a damage tolerant design approach is used. A threshold-based design approach may be required if fatigue crack growth from potential impact sites is to be avoided. The objective of the present research is to examine the feasibility of a threshold-based approach for the design of a gamma-TiAl low-pressure turbine blade subjected to both assembly-related impact damage and foreign object damage. Specimens of three different gamma-TiAl alloys were damaged in such a way as to simulate anticipated impact damage for a turbine blade. Step-loading fatigue tests were conducted at both room temperature and 600°C. In terms of the assembly-related impact damage, the results indicate that there is reasonably good agreement between the threshold-based predictions of the fatigue strength of damaged specimens and the measured data. However, some discrepancies do exist. In the case of very lightly damaged specimens, prediction of the resulting fatigue strength requires that a very conservative small-crack fatigue threshold be used. Consequently, the allowable design conditions are significantly reduced. For severely damaged specimens, an analytical approach found that the potential effects of residual stresses may be related to the discrepancies observed between the threshold-based model and measured fatigue strength data. In the case of foreign object damage, a good correlation was observed between impacts resulting in large cracks and a long-crack threshold
O'Brien, Gabrielle E; Imennov, Nikita S; Rubinstein, Jay T
2016-05-01
Modulation detection thresholds (MDTs) assess listeners' sensitivity to changes in the temporal envelope of a signal and have been shown to strongly correlate with speech perception in cochlear implant users. MDTs are simulated with a stochastic model of a population of auditory nerve fibers that has been verified to accurately simulate a number of physiologically important temporal response properties. The procedure to estimate detection thresholds has previously been applied to stimulus discrimination tasks. The population model simulates the MDT-stimulus intensity relationship measured in cochlear implant users. The model also recreates the shape of the modulation transfer function and the relationship between MDTs and carrier rate. Discrimination based on fluctuations in synchronous firing activity predicts better performance at low carrier rates, but quantitative measures of modulation coding predict better neural representation of high carrier rate stimuli. Manipulating the number of fibers and a temporal integration parameter, the width of a sliding temporal integration window, varies properties of the MDTs, such as cutoff frequency and peak threshold. These results demonstrate the importance of using a multi-diameter fiber population in modeling the MDTs and demonstrate a wider applicability of this model to simulating behavioral performance in cochlear implant listeners. PMID:27250141
NASA Astrophysics Data System (ADS)
Gupta, Santosh Kumar
2015-12-01
A 2D analytical model of the body center potential (BCP) in short-channel junctionless cylindrical surrounding-gate (JLCSG) MOSFETs is developed using evanescent mode analysis (EMA). The model also incorporates the gate-bias-dependent inner and outer fringing capacitances due to the gate-source/drain fringing fields. The developed model agrees well with simulation results for variations of different physical parameters of the JLCSG MOSFET, viz. gate length, channel radius, doping concentration, and oxide thickness. Using the BCP, an analytical model for the threshold voltage has been derived and validated against results obtained from a 3D device simulator.
NASA Astrophysics Data System (ADS)
Szczygieł, Bartłomiej; Dudyński, Marek; Kwiatkowski, Kamil; Lewenstein, Maciej; Lapeyre, Gerald John; Wehr, Jan
2016-02-01
We introduce a class of discrete-continuous percolation models and an efficient Monte Carlo algorithm for computing their properties. The class is general enough to include well-known discrete and continuous models as special cases. We focus on a particular example of such a model, a nanotube model of disintegration of activated carbon. We calculate its exact critical threshold in two dimensions and obtain a Monte Carlo estimate in three dimensions. Furthermore, we use this example to analyze and characterize the efficiency of our algorithm, by computing critical exponents and properties, finding that it compares favorably to well-known algorithms for simpler systems.
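As context for the Monte Carlo threshold estimates mentioned above, a standard 2D site-percolation sketch (not the authors' discrete-continuous or nanotube model) estimates the spanning probability with union-find; the sharp rise of this probability near p ≈ 0.593 locates the critical threshold.

```python
import random

def spans(p, n, rng):
    """One realization of n-by-n site percolation: does an open cluster
    connect the top row to the bottom row?"""
    open_site = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    parent = list(range(n * n + 2))       # two virtual nodes: top, bottom
    TOP, BOTTOM = n * n, n * n + 1

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for r in range(n):
        for c in range(n):
            if not open_site[r][c]:
                continue
            idx = r * n + c
            if r == 0:
                union(idx, TOP)
            if r == n - 1:
                union(idx, BOTTOM)
            if r > 0 and open_site[r - 1][c]:
                union(idx, (r - 1) * n + c)
            if c > 0 and open_site[r][c - 1]:
                union(idx, r * n + c - 1)
    return find(TOP) == find(BOTTOM)

def spanning_prob(p, n=32, trials=200, seed=0):
    rng = random.Random(seed)
    return sum(spans(p, n, rng) for _ in range(trials)) / trials

# Well below / well above the known square-lattice threshold (~0.593).
low, high = spanning_prob(0.40), spanning_prob(0.80)
```

Repeating the estimate over a grid of p values and increasing n gives the usual finite-size-scaling route to the critical threshold.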
ERIC Educational Resources Information Center
Broder, Arndt; Schutz, Julia
2009-01-01
Recent reviews of recognition receiver operating characteristics (ROCs) claim that their curvilinear shape rules out threshold models of recognition. However, the shape of ROCs based on confidence ratings is not diagnostic to refute threshold models, whereas ROCs based on experimental bias manipulations are. Also, fitting predicted frequencies to…
Computational model of collective nest selection by ants with heterogeneous acceptance thresholds.
Masuda, Naoki; O'shea-Wheller, Thomas A; Doran, Carolina; Franks, Nigel R
2015-06-01
Collective decision-making is a characteristic of societies ranging from ants to humans. The ant Temnothorax albipennis is known to use quorum sensing to collectively decide on a new home; emigration to a new nest site occurs when the number of ants favouring the new site becomes quorate. There are several possible mechanisms by which ant colonies can select the best nest site among alternatives based on a quorum mechanism. In this study, we use computational models to examine the implications of heterogeneous acceptance thresholds across individual ants in collective nest choice behaviour. We take a minimalist approach to develop a differential equation model and a corresponding non-spatial agent-based model. We show, consistent with existing empirical evidence, that heterogeneity in acceptance thresholds is a viable mechanism for efficient nest choice behaviour. In particular, we show that the proposed models show speed-accuracy trade-offs and speed-cohesion trade-offs when we vary the number of scouts or the quorum threshold. PMID:26543578
Computational model of collective nest selection by ants with heterogeneous acceptance thresholds
Masuda, Naoki; O'shea-Wheller, Thomas A.; Doran, Carolina; Franks, Nigel R.
2015-01-01
Collective decision-making is a characteristic of societies ranging from ants to humans. The ant Temnothorax albipennis is known to use quorum sensing to collectively decide on a new home; emigration to a new nest site occurs when the number of ants favouring the new site becomes quorate. There are several possible mechanisms by which ant colonies can select the best nest site among alternatives based on a quorum mechanism. In this study, we use computational models to examine the implications of heterogeneous acceptance thresholds across individual ants in collective nest choice behaviour. We take a minimalist approach to develop a differential equation model and a corresponding non-spatial agent-based model. We show, consistent with existing empirical evidence, that heterogeneity in acceptance thresholds is a viable mechanism for efficient nest choice behaviour. In particular, we show that the proposed models show speed–accuracy trade-offs and speed–cohesion trade-offs when we vary the number of scouts or the quorum threshold. PMID:26543578
Gauge threshold corrections for {N}=2 heterotic local models with flux, and mock modular forms
NASA Astrophysics Data System (ADS)
Carlevaro, Luca; Israël, Dan
2013-03-01
We determine threshold corrections to the gauge couplings in local models of {N}=2 smooth heterotic compactifications with torsion, given by the direct product of a warped Eguchi-Hanson space and a two-torus, together with a line bundle. Using the worldsheet CFT description previously found, and by suitably regularising the infinite target-space volume divergence, we show that threshold corrections to the various gauge factors are governed by the non-holomorphic completion of the Appell-Lerch sum. While its holomorphic mock-modular component captures the contribution of states that localise on the blown-up two-cycle, the non-holomorphic correction originates from non-localised bulk states. We infer from this analysis universality properties for {N}=2 heterotic local models with flux, based on target-space modular invariance and the presence of such non-localised states. We finally determine the explicit dependence of these one-loop gauge threshold corrections on the moduli of the two-torus, and by S-duality we extract the corresponding string-loop and E1-instanton corrections to the Kähler potential and gauge kinetic functions of the dual type I model. In both cases, the presence of non-localised bulk states brings about novel perturbative and non-perturbative corrections, some features of which can be interpreted in the light of analogous corrections to the effective theory in compact models.
Jahangiri, Anila F.; Gerling, Gregory J.
2011-01-01
The Leaky Integrate and Fire (LIF) model is one of the best-known models of a spiking neuron. A current limitation of the LIF model is that it may not accurately reproduce the dynamics of an action potential. Recent studies suggest that a LIF model coupled with a multi-timescale adaptive threshold (MAT) may increase the LIF model's accuracy in predicting spikes in cortical neurons. We propose a mechanotransduction process coupled with a LIF model with multi-timescale adaptive threshold to model the slowly adapting type I (SAI) mechanoreceptor in the monkey's glabrous skin. To test the performance of the model, the spike timings predicted by this MAT model are compared with neural data. We also test a fixed-threshold variant of the model by comparing its outcome with the neural data. Initial results indicate that the MAT model predicts spike timings better than the fixed-threshold LIF model. PMID:21814636
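The MAT mechanism described above can be sketched as follows: a leaky integrator whose firing threshold is a baseline plus one exponentially decaying component per timescale, each kicked up at every spike, with no voltage reset (MAT-style). The time constants, jump sizes, and refractory period below are illustrative, not the paper's fitted values.

```python
import math

def lif_mat_spikes(current, t_end=1.0, dt=0.001, tau_m=0.02, t_ref=0.002,
                   omega=1.0, alphas=((0.4, 0.01), (0.2, 0.2))):
    """Leaky integrator driven by a constant current; a spike is emitted
    when V crosses an adaptive threshold built from a baseline `omega`
    plus one decaying component per (jump, tau) pair in `alphas`.
    The membrane is not reset at spikes (MAT-style)."""
    v = 0.0
    comps = [0.0] * len(alphas)
    spikes, last = [], -1e9
    for k in range(int(t_end / dt)):
        t = k * dt
        v += (-v + current) * dt / tau_m
        comps = [c * math.exp(-dt / tau) for c, (_, tau) in zip(comps, alphas)]
        if v >= omega + sum(comps) and t - last >= t_ref:
            spikes.append(t)
            last = t
            comps = [c + jump for c, (jump, _) in zip(comps, alphas)]
    return spikes

adaptive = lif_mat_spikes(2.0)
fixed = lif_mat_spikes(2.0, alphas=())   # no adaptation: fixed threshold
```

For a sustained suprathreshold input, the adaptive threshold throttles the firing rate relative to the fixed-threshold variant, which is the qualitative signature of slow adaptation.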
Two-threshold model for scaling laws of noninteracting snow avalanches.
Faillettaz, Jerome; Louchet, Francois; Grasso, Jean-Robert
2004-11-12
The sizes of snow slab failures that trigger snow avalanches are power-law distributed. Such a power-law probability distribution function has also been proposed to characterize different landslide types. In order to understand this scaling for gravity-driven systems, we introduce a two-threshold 2D cellular automaton in which failure occurs irreversibly. Taking snow slab avalanches as a model system, we find that the sizes of the largest avalanches just preceding the lattice system breakdown are power-law distributed. By tuning the maximum value of the ratio of the two failure thresholds, our model reproduces the range of power-law exponents observed for land, rock, or snow avalanches. We suggest this control parameter represents the material cohesion anisotropy. PMID:15600971
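A much-simplified load-redistribution automaton in the same spirit (one random threshold per cell drawn from an interval whose width is set by a threshold ratio; not the authors' two-threshold rule) illustrates how irreversible local failures cascade into avalanches of varying size. All parameter values are hypothetical.

```python
import random

def avalanche_sizes(n=30, ratio=1.5, t_min=1.0, drive=0.002, seed=0):
    """Uniformly load a lattice of cells whose failure thresholds are
    drawn from [t_min, ratio * t_min]; a failing cell irreversibly breaks
    and sheds its stress onto intact neighbours, which can cascade.
    Returns the avalanche sizes recorded until half the lattice is broken."""
    rng = random.Random(seed)
    thresh = [[rng.uniform(t_min, ratio * t_min) for _ in range(n)]
              for _ in range(n)]
    stress = [[0.0] * n for _ in range(n)]
    broken = [[False] * n for _ in range(n)]
    sizes, intact = [], n * n
    while intact > n * n // 2:
        for r in range(n):
            for c in range(n):
                stress[r][c] += drive          # uniform driving
        count, changed = 0, True
        while changed:                         # relax; failures are
            changed = False                    # irreversible, so this ends
            for r in range(n):
                for c in range(n):
                    if broken[r][c] or stress[r][c] <= thresh[r][c]:
                        continue
                    broken[r][c] = True
                    count += 1
                    changed = True
                    nbrs = [(r + dr, c + dc)
                            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                            if 0 <= r + dr < n and 0 <= c + dc < n
                            and not broken[r + dr][c + dc]]
                    if nbrs:                   # isolated cells shed nothing
                        shed = stress[r][c] / len(nbrs)
                        for rr, cc in nbrs:
                            stress[rr][cc] += shed
        intact -= count
        if count:
            sizes.append(count)
    return sizes

sizes = avalanche_sizes()
```

In the paper, tuning the maximum threshold ratio changes the exponent of the avalanche-size distribution; here `ratio` plays the analogous role of disorder strength.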
NASA Astrophysics Data System (ADS)
Nejadmalayeri, Alireza
The current work develops a wavelet-based adaptive variable fidelity approach that integrates Wavelet-based Direct Numerical Simulation (WDNS), Coherent Vortex Simulations (CVS), and Stochastic Coherent Adaptive Large Eddy Simulations (SCALES). The proposed methodology employs the notion of spatially and temporally varying wavelet thresholding combined with hierarchical wavelet-based turbulence modeling. The transition between WDNS, CVS, and SCALES regimes is achieved through two-way physics-based feedback between the modeled SGS dissipation (or another dynamically important physical quantity) and the spatial resolution. The feedback is based on spatio-temporal variation of the wavelet threshold, where the thresholding level is adjusted on the fly depending on the deviation of the local significant SGS dissipation from the user-prescribed level. This strategy overcomes a major limitation of all previously existing wavelet-based multi-resolution schemes: a global thresholding criterion, which does not fully utilize the spatial/temporal intermittency of the turbulent flow. Hence, the aforementioned concept of physics-based spatially variable thresholding in the context of wavelet-based numerical techniques for solving PDEs is established. The procedure consists of tracking the wavelet thresholding factor within a Lagrangian frame by exploiting a Lagrangian path-line diffusive averaging approach based on either linear averaging along characteristics or direct solution of the evolution equation. This innovative technique represents a framework of continuously variable fidelity wavelet-based space/time/model-form adaptive multiscale methodology. The methodology has been tested and has provided very promising results on a benchmark with a time-varying user-prescribed level of SGS dissipation. In addition, a longtime effort to develop a novel parallel adaptive wavelet collocation method for the numerical solution of PDEs has been completed during the course of the current work.
Inohara, Ken; Sumita, Yuka I; Ohbayashi, Naoto; Ino, Shuichi; Kurabayashi, Tohru; Ifukube, Tohru; Taniguchi, Hisashi
2010-07-01
Postoperative head and neck cancer patients suffer from speech disorders, which are the result of changes in their vocal tracts. Making a solid vocal tract model and measuring its transmission characteristics will provide one of the most useful tools to resolve the problem. In binary conversion of X-ray computed tomographic (CT) images for vocal tract reconstruction, nonobjective methods have been used by many researchers. We hypothesized that a standardized vocal tract model could be reconstructed by adopting the Hounsfield number of fat tissue as a criterion for thresholding of binary conversion, because its Hounsfield number is the nearest to air in the human body. The purpose of this study was to establish a new standardized method for binary conversion in reconstructing three-dimensional (3-D) vocal tract models. CT images for postoperative diagnosis were secondarily obtained from a CT scanner. Each patient's minimum settings of Hounsfield number for the buccal fat-pad regions were measured. Thresholds were set every 50 Hounsfield units (HU) from the bottom line of the buccal fat-pad region to -1024 HU, the images were converted into binary values, and were evaluated according to the three-grade system based on anatomically defined criteria. The optimal threshold between tissue and air was determined by nonlinear multiple regression analyses. Each patient's minimum settings of the buccal fat-pad regions were obtained. The optimal threshold was determined to be -165 HU from each patient's minimum settings of the Hounsfield number for the buccal fat-pad regions. To conclude, a method of 3-D standardized vocal tract modeling was established. PMID:19766442
Cavanaugh, Kyle C; Parker, John D; Cook-Patton, Susan C; Feller, Ilka C; Williams, A Park; Kellner, James R
2015-05-01
Predictions of climate-related shifts in species ranges have largely been based on correlative models. Due to limitations of these models, there is a need for more integration of experimental approaches when studying impacts of climate change on species distributions. Here, we used controlled experiments to identify physiological thresholds that control poleward range limits of three species of mangroves found in North America. We found that all three species exhibited a threshold response to extreme cold, but freeze tolerance thresholds varied among species. From these experiments, we developed a climate metric, freeze degree days (FDD), which incorporates both the intensity and the frequency of freezes. When included in distribution models, FDD accurately predicted mangrove presence/absence. Using 28 years of satellite imagery, we linked FDD to observed changes in mangrove abundance in Florida, further exemplifying the importance of extreme cold. We then used downscaled climate projections of FDD to project that these range limits will move northward by 2.2-3.2 km yr(-1) over the next 50 years. PMID:25558057
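The abstract states that FDD combines the intensity and frequency of freezes but does not give the formula; one plausible degree-day construction (illustrative only, not necessarily the authors' metric) accumulates degrees below freezing over all freeze days.

```python
def freeze_degree_days(daily_min_temps_c, base=0.0):
    """Sum of degrees (Celsius) below `base` across days whose minimum
    temperature drops below it, so both more freezes and colder freezes
    raise the total."""
    return sum(base - t for t in daily_min_temps_c if t < base)

# Two freezes (-1 and -3 deg C) accumulate 4 freeze degree days.
fdd = freeze_degree_days([2.0, -1.0, -3.0, 5.0])
```

A presence/absence threshold on such a metric is then straightforward: mangroves would be predicted absent wherever the climatological FDD exceeds the species' freeze tolerance.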
Generalizing a complex model for gully threshold identification in the Mediterranean environment
NASA Astrophysics Data System (ADS)
Torri, D.; Borselli, L.; Iaquinta, P.; Iovine, G.; Poesen, J.; Terranova, O.
2012-04-01
Among the physical processes leading to land degradation, soil erosion by water is the most important, and gully erosion may contribute, at places, up to 70% of the total soil loss. Nevertheless, gully erosion has often been neglected in water soil erosion modeling, whilst more prominence has been given to rill and interrill erosion. Both to facilitate processing by agricultural machinery and to take advantage of all the arable land, gullies are commonly removed at each crop cycle, with significant soil losses due to the repeated re-excavation of the channel by successive rainstorms. When the erosive forces of overland flow exceed the resistance of the soil particles to detachment and displacement, water erosion occurs and usually a channel is formed. As runoff is proportional to the local catchment area, a relationship between local slope, S, and contributing area, A, is supposed to exist. A "geomorphologic threshold" scheme is therefore suitable for interpreting the physical process of gully initiation: accordingly, a gully is formed when the hydraulic threshold for incision exceeds the resistance of the soil particles to detachment and transport. Similarly, it appears reasonable that a gully ends when there is a reduction of slope, or the concentrated flow meets more resistant soil-vegetation complexes. This study aims to predict the location of the beginning of gullies in the Mediterranean environment, based on an evaluation of S and A by means of a mathematical model. For the identification of areas prone to gully erosion, the model employs two empirical thresholds relevant to the head (Thead) and to the end (Tend) of the gullies (of the type S·A^b > Thead and S·A^b < Tend).
Forutan, M; Ansari Mahyari, S; Sargolzaei, M
2015-02-01
Calf and heifer survival are important traits in dairy cattle affecting profitability. This study was carried out to estimate genetic parameters of survival traits in female calves at different age periods, until nearly the first calving. Records of 49,583 female calves born between 1998 and 2009 were considered in five age periods: days 1-30, 31-180, 181-365, 366-760 and the full period (days 1-760). Genetic components were estimated based on linear and threshold sire models and linear animal models. The models included both fixed effects (month of birth, dam's parity number, calving ease and twin/single) and random effects (herd-year, genetic effect of sire or animal, and residual). Death rates were 2.21, 3.37, 1.97, 4.14 and 12.4% for the above periods, respectively. Heritability estimates were very low, ranging from 0.48 to 3.04, 0.62 to 3.51 and 0.50 to 4.24% for the linear sire model, animal model and threshold sire model, respectively. Rank correlations between random effects of sires obtained with linear and threshold sire models and with linear animal and sire models were 0.82-0.95 and 0.61-0.83, respectively. The estimated genetic correlations between the five different periods were moderate and only significant for 31-180 and 181-365 (r(g) = 0.59), 31-180 and 366-760 (r(g) = 0.52), and 181-365 and 366-760 (r(g) = 0.42). The low genetic correlations in the current study suggest that survival at different periods may be affected by the same genes with different expression or by different genes. Even though the additive genetic variations of survival traits were small, it might be possible to improve these traits by traditional or genomic selection. PMID:25100295
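Under a sire model like the one fitted above, the sire variance captures one quarter of the additive genetic variance, so heritability is computed as four times the sire variance over the phenotypic variance. A minimal sketch (variance values are hypothetical, not the study's estimates):

```python
def sire_model_h2(var_sire, var_residual):
    """Heritability under a sire model: the sire component is 1/4 of the
    additive genetic variance, so h2 = 4*var_sire / (var_sire + var_residual)."""
    return 4.0 * var_sire / (var_sire + var_residual)
```

With a sire variance of 1 and a residual variance of 99, h2 = 0.04, i.e. 4%, on the same order as the low heritabilities reported in the abstract.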
Threshold models for genome-enabled prediction of ordinal categorical traits in plant breeding.
Montesinos-López, Osval A; Montesinos-López, Abelardo; Pérez-Rodríguez, Paulino; de Los Campos, Gustavo; Eskridge, Kent; Crossa, José
2015-02-01
Categorical scores for disease susceptibility or resistance often are recorded in plant breeding. The aim of this study was to introduce genomic models for analyzing ordinal characters and to assess the predictive ability of genomic predictions for ordered categorical phenotypes using a threshold model counterpart of the Genomic Best Linear Unbiased Predictor (i.e., TGBLUP). The threshold model was used to relate a hypothetical underlying scale to the outward categorical response. We present an empirical application where a total of nine models, five without interaction and four with genomic × environment interaction (G×E) and genomic additive × additive × environment interaction (G×G×E), were used. We assessed the proposed models using data consisting of 278 maize lines genotyped with 46,347 single-nucleotide polymorphisms and evaluated for disease resistance [with ordinal scores from 1 (no disease) to 5 (complete infection)] in three environments (Colombia, Zimbabwe, and Mexico). Models with G×E captured a sizeable proportion of the total variability, which indicates the importance of introducing interaction to improve prediction accuracy. Relative to models based on main effects only, the models that included G×E achieved 9-14% gains in prediction accuracy; adding additive × additive interactions did not increase prediction accuracy consistently across locations. PMID:25538102
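The threshold-model idea used by TGBLUP, relating a latent continuous liability to ordered categories via cutpoints, can be sketched with a probit link. The cutpoint values below are hypothetical, chosen only to illustrate how a latent score maps to probabilities over, say, five disease classes.

```python
import math

def ordinal_category_probs(eta, cutpoints):
    """Threshold (probit) model: latent liability eta plus standard normal
    noise is sliced by ordered cutpoints into observed ordinal categories.
    Returns P(Y = k) for each of the len(cutpoints)+1 categories."""
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    cuts = [float("-inf")] + list(cutpoints) + [float("inf")]
    return [Phi(cuts[k + 1] - eta) - Phi(cuts[k] - eta)
            for k in range(len(cuts) - 1)]
```

In a genomic model the liability `eta` would be a sum of marker effects (plus G×E terms); here it is just a number.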
NASA Astrophysics Data System (ADS)
Eastoe, Emma F.; Tawn, Jonathan A.
2010-02-01
In a peaks over threshold analysis of a series of river flows, a sufficiently high threshold is used to extract the peaks of independent flood events. This paper reviews existing, and proposes new, statistical models for both the annual counts of such events and the process of event peak times. The most common existing model for the process of event times is a homogeneous Poisson process. This model is motivated by asymptotic theory. However, empirical evidence suggests that it is not the most appropriate model, since it implies that the mean and variance of the annual counts are the same, whereas the counts appear to be overdispersed, i.e., have a larger variance than mean. This paper describes how the homogeneous Poisson process can be extended to incorporate time variation in the rate at which events occur, and so help to account for overdispersion in annual counts, through the use of regression and mixed models. The implications of these new models for the probability distribution of the annual maxima are also discussed. The models are illustrated using a historical flow series from the River Thames at Kingston.
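The overdispersion mechanism described above, a Poisson process whose rate varies from year to year, can be demonstrated with a gamma-mixed Poisson (negative binomial) sampler. This is a generic sketch of rate mixing, not the paper's specific regression or mixed model; parameter values are arbitrary.

```python
import math
import random

def poisson_knuth(lam, rng):
    # Knuth's inversion-by-multiplication Poisson sampler
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def overdispersed_counts(n_years, shape, mean_rate, seed=1):
    """Gamma-mixed Poisson annual event counts: year-to-year variation in
    the rate inflates the variance above the mean (overdispersion), unlike
    a homogeneous Poisson process where mean equals variance."""
    rng = random.Random(seed)
    return [poisson_knuth(rng.gammavariate(shape, mean_rate / shape), rng)
            for _ in range(n_years)]
```

For a gamma mixing distribution with shape `a`, the count variance is mean + mean^2/a, so small `a` gives strong overdispersion.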
An Active Contour Model Based on Adaptive Threshold for Extraction of Cerebral Vascular Structures
Wang, Jiaxin; Zhao, Shifeng; Liu, Zifeng; Tian, Yun; Duan, Fuqing; Pan, Yutong
2016-01-01
Cerebral vessel segmentation is essential for clinical diagnosis and related research. However, automatic segmentation of brain vessels remains challenging because of the variable vessel shape and the high complexity of vessel geometry. This study proposes a new active contour model (ACM) implemented by the level-set method for segmenting vessels from TOF-MRA data. The energy function of the new model, combining both region intensity and boundary information, is composed of two region terms, one boundary term and one penalty term. The global threshold representing the lower gray boundary of the target object by maximum intensity projection (MIP) is defined in the first region term, and it is used to guide the segmentation of the thick vessels. In the second term, a dynamic intensity threshold is employed to extract the tiny vessels. The boundary term is used to drive the contours to evolve towards the boundaries with high gradients. The penalty term is used to avoid reinitialization of the level-set function. Experimental results on 10 clinical brain data sets demonstrate that our method is not only able to achieve a better Dice similarity coefficient than the global-threshold-based method and the localized hybrid level-set method, but also able to extract whole cerebral vessel trees, including the thin vessels. PMID:27597878
Detection and Modeling of High-Dimensional Thresholds for Fault Detection and Diagnosis
NASA Technical Reports Server (NTRS)
He, Yuning
2015-01-01
Many Fault Detection and Diagnosis (FDD) systems use discrete models for detection and reasoning. To obtain categorical values like "oil pressure too high", analog sensor values need to be discretized using a suitable threshold. Time series of analog and discrete sensor readings are processed and discretized as they come in. This task is usually performed by the "wrapper code" of the FDD system, together with signal preprocessing and filtering. In practice, selecting the right threshold is very difficult, because it heavily influences the quality of diagnosis. If a threshold causes the alarm to trigger even in nominal situations, false alarms will be the consequence. On the other hand, if the threshold setting does not trigger in case of an off-nominal condition, important alarms might be missed, potentially causing hazardous situations. In this paper, we describe in detail the underlying statistical modeling techniques and algorithm, as well as the Bayesian method for selecting the most likely shape and its parameters. Our approach is illustrated by several examples from the aerospace domain.
Reentry Near the Percolation Threshold in a Heterogeneous Discrete Model for Cardiac Tissue
NASA Astrophysics Data System (ADS)
Alonso, Sergio; Bär, Markus
2013-04-01
Arrhythmias in cardiac tissue are related to irregular electrical wave propagation in the heart. Cardiac tissue is formed by a discrete cell network, which is often heterogeneous. A localized region with a fraction of nonconducting links surrounded by homogeneous conducting tissue can become a source of reentry and ectopic beats. Extensive simulations in a discrete model of cardiac tissue show that a wave crossing a heterogeneous region of cardiac tissue can disintegrate into irregular patterns, provided the fraction of nonconducting links is close to the percolation threshold of the cell network. The dependence of the reentry probability on this fraction, the system size, and the degree of excitability can be inferred from the size distribution of nonconducting clusters near the percolation threshold.
Critical Thresholds and the Limit Distribution in the Bak-Sneppen Model
NASA Astrophysics Data System (ADS)
Meester, Ronald; Znamenski, Dmitri
One of the key problems related to the Bak-Sneppen evolution model is to compute the limit distribution of the fitnesses in the stationary regime, as the size of the system tends to infinity. Simulations in [3, 1, 4] suggest that the one-dimensional limit marginal distribution is uniform on (pc, 1), for some pc ≈ 0.667. In this paper we define three critical thresholds related to avalanche characteristics. We prove that if these critical thresholds are the same and equal to some pc (we can only prove that two of them are the same), then the limit distribution is the product of uniform distributions on (pc, 1), and moreover pc < 0.75. Our proofs are based on a self-similar graphical representation of the avalanches.
A piecewise model of virus-immune system with two thresholds.
Tang, Biao; Xiao, Yanni; Wu, Jianhong
2016-08-01
Combined antiretroviral therapy with interleukin (IL)-2 treatment may not be enough to preclude exceptionally high growth of the HIV virus, nor to rebuild the HIV-specific CD4 or CD8 T-cell proliferative immune response, in the management of HIV-infected patients. Whether the additional inclusion of immune therapy can induce the HIV-specific immune response and control HIV replication remains challenging. Here a piecewise virus-immune model with two thresholds is proposed to represent HIV-1 RNA and effector cell-guided therapy strategies. We first analyze the dynamics of the virus-immune system with effector cell-guided immune therapy only, and prove that there exists a critical level of the intensity of immune therapy that determines whether the HIV-1 RNA viral load can be controlled below a relatively low level. Our analysis of the global dynamics of the proposed model shows that the pseudo-equilibrium can be globally stable, or locally bistable with an order-1 periodic solution, or bistable with the virus-free periodic solution, under various appropriate conditions. This indicates that HIV viral loads can either be eradicated, or stabilize at a previously given level, or go to infinity (corresponding to the effector cells oscillating), depending on the threshold levels and the initial HIV viral load and effector cell counts. Compared with the single-threshold therapy strategy, we find that with a two-threshold strategy either the virus can be eradicated or the controllable region, where HIV viral loads can be maintained below a certain value, can be enlarged. PMID:27321193
On the thresholds, probability densities, and critical exponents of Bak-Sneppen-like models
NASA Astrophysics Data System (ADS)
Garcia, Guilherme J. M.; Dickman, Ronald
2004-10-01
We report a simple method to accurately determine the threshold and the exponent ν of the Bak-Sneppen (BS) model and also investigate the BS universality class. For the random-neighbor version of the BS model, we find the threshold x* = 0.33332(3), in agreement with the exact result x* = 1/3 given by mean-field theory. For the one-dimensional original model, we find x* = 0.6672(2), in good agreement with the results reported in the literature; for the anisotropic BS model we obtain x* = 0.7240(1). We study the finite size effect x*(L) - x*(L→∞) ∝ L^(-ν), observed in a system with L sites, and find ν = 1.00(1) for the random-neighbor version, ν = 1.40(1) for the original model, and ν = 1.58(1) for the anisotropic case. Finally, we discuss the effect of defining the extremal site as the one which minimizes a general function f(x), instead of simply f(x) = x as in the original updating rule. We emphasize that models with extremal dynamics have singular stationary probability distributions p(x). Our simulations indicate the existence of two symmetry-based universality classes.
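A rough numerical estimate of the one-dimensional Bak-Sneppen threshold can be obtained by simulating the extremal dynamics directly: at each step the least-fit site and its two neighbours receive fresh uniform fitnesses, and after a burn-in the minimal fitness hovers just below x* ≈ 0.667. The estimator below (max of observed minima) is a crude illustration, not the accurate method of the paper.

```python
import random

def bak_sneppen_threshold(n=100, steps=60000, burn=30000, seed=2):
    """Minimal 1-D Bak-Sneppen run: repeatedly replace the least-fit site
    and its two neighbours with fresh uniform fitnesses; after burn-in,
    the largest minimal fitness observed roughly approximates the
    threshold x* ~ 0.667 (illustrative estimator only)."""
    rng = random.Random(seed)
    f = [rng.random() for _ in range(n)]
    estimate = 0.0
    for t in range(steps):
        i = min(range(n), key=f.__getitem__)   # extremal (least-fit) site
        if t >= burn:
            estimate = max(estimate, f[i])
        for j in (i - 1, i, i + 1):            # periodic boundary
            f[j % n] = rng.random()
    return estimate
```

With these modest sizes the estimate lands in the vicinity of 0.67; the paper's method corrects for finite-size effects via x*(L) - x*(∞) ∝ L^(-ν).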
Kuramoto model with uniformly spaced frequencies: Finite-N asymptotics of the locking threshold.
Ottino-Löffler, Bertrand; Strogatz, Steven H
2016-06-01
We study phase locking in the Kuramoto model of coupled oscillators in the special case where the number of oscillators, N, is large but finite, and the oscillators' natural frequencies are evenly spaced on a given interval. In this case, stable phase-locked solutions are known to exist if and only if the frequency interval is narrower than a certain critical width, called the locking threshold. For infinite N, the exact value of the locking threshold was calculated 30 years ago; however, the leading corrections to it for finite N have remained unsolved analytically. Here we derive an asymptotic formula for the locking threshold when N≫1. The leading correction to the infinite-N result scales like either N^{-3/2} or N^{-1}, depending on whether the frequencies are evenly spaced according to a midpoint rule or an end-point rule. These scaling laws agree with numerical results obtained by Pazó [D. Pazó, Phys. Rev. E 72, 046211 (2005), 10.1103/PhysRevE.72.046211]. Moreover, our analysis yields the exact prefactors in the scaling laws, which also match the numerics. PMID:27415267
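Whether a locked state exists for a given frequency interval can be checked numerically with the standard Kuramoto self-consistency condition: in a phase-locked state sin(θ_i) = ω_i/(K r), which requires |ω_i| ≤ K r for all oscillators, and r must satisfy r = mean(cos θ_i). The fixed-point iteration below is a generic sketch of that condition, not the paper's asymptotic analysis.

```python
import math

def locked_order_parameter(n, width, K=1.0, iters=1000):
    """Self-consistency check for a phase-locked Kuramoto state with n
    evenly spaced natural frequencies (midpoint rule) on (-width/2, width/2):
    iterate r = mean(sqrt(1 - (w/(K r))**2)); returns 0.0 when no locked
    solution exists (frequency interval wider than the locking threshold)."""
    half = width / 2.0
    freqs = [-half + (2 * i + 1) * half / n for i in range(n)]
    r = 1.0
    for _ in range(iters):
        if any(abs(w) > K * r for w in freqs):
            return 0.0           # an oscillator drifts: no locked state
        r = sum(math.sqrt(1.0 - (w / (K * r)) ** 2) for w in freqs) / n
    return r
```

For K = 1 and a uniform frequency density, the infinite-N locking threshold corresponds to an interval width of π/2 ≈ 1.57, so a width of 1.0 locks while a width of 2.0 does not.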
Impact of slow K(+) currents on spike generation can be described by an adaptive threshold model.
Kobayashi, Ryota; Kitano, Katsunori
2016-06-01
A neuron that is stimulated by rectangular current injections initially responds with a high firing rate, followed by a decrease in the firing rate. This phenomenon is called spike-frequency adaptation and is usually mediated by slow K(+) currents, such as the M-type K(+) current (I_M) or the Ca(2+)-activated K(+) current (I_AHP). It is not clear how the detailed biophysical mechanisms regulate spike generation in a cortical neuron. In this study, we investigated the impact of slow K(+) currents on the spike generation mechanism by reducing a detailed conductance-based neuron model. We showed that the detailed model can be reduced to a multi-timescale adaptive threshold model, and derived formulae that describe the relationship between slow K(+) current parameters and reduced model parameters. Our analysis of the reduced model suggests that slow K(+) currents have a differential effect on the noise tolerance in neural coding. PMID:27085337
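The adaptive-threshold idea can be sketched with a leaky integrator whose firing threshold jumps after each spike and relaxes back: the same spike-frequency adaptation that slow K(+) currents produce, without modelling the currents explicitly. All parameter values below are hypothetical and chosen only to make adaptation visible; this is not the paper's fitted multi-timescale model.

```python
def adaptive_threshold_spikes(current, dt=0.1, tau_m=10.0, R=1.0,
                              theta0=1.0, alpha=0.2, tau_theta=100.0):
    """Leaky integrator with an adaptive threshold: each spike raises the
    threshold by alpha, and the increment decays back to theta0 with time
    constant tau_theta, yielding progressively longer interspike intervals
    under constant input (spike-frequency adaptation)."""
    v, theta, spikes = 0.0, theta0, []
    for k, I in enumerate(current):
        v += dt * (-v + R * I) / tau_m          # membrane integration
        theta += dt * (theta0 - theta) / tau_theta  # threshold relaxation
        if v >= theta:
            spikes.append(k * dt)
            theta += alpha                       # adaptation jump
            v = 0.0                              # reset
    return spikes
```

Driving it with a constant current reproduces the phenomenon in the abstract: a high initial firing rate that slows down as the threshold accumulates.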
Cumulative t-link threshold models for the genetic analysis of calving ease scores
Kizilkaya, Kadir; Carnier, Paolo; Albera, Andrea; Bittante, Giovanni; Tempelman, Robert J
2003-01-01
In this study, a hierarchical threshold mixed model based on a cumulative t-link specification for the analysis of ordinal data, or, more specifically, calving ease scores, was developed. The validation of this model and of the Markov chain Monte Carlo (MCMC) algorithm was carried out on simulated data from normally and t4 (i.e. a t-distribution with four degrees of freedom) distributed populations, using the deviance information criterion (DIC) and a pseudo Bayes factor (PBF) measure to validate recently proposed model choice criteria. The simulation study indicated that although inference on the degrees of freedom parameter is possible, MCMC mixing was problematic. Nevertheless, the DIC and PBF were validated to be satisfactory measures of model fit to data. A sire and maternal grandsire cumulative t-link model was applied to a calving ease dataset from 8847 Italian Piemontese first parity dams. The cumulative t-link model was shown to lead to posterior means of direct and maternal heritabilities (0.40 ± 0.06, 0.11 ± 0.04) and a direct-maternal genetic correlation (-0.58 ± 0.15) that were not different from the corresponding posterior means of the heritabilities (0.42 ± 0.07, 0.14 ± 0.04) and the genetic correlation (-0.55 ± 0.14) inferred under the conventional cumulative probit link threshold model. Furthermore, the correlation (> 0.99) between posterior means of sire progeny merit from the two models suggested no meaningful rerankings. Nevertheless, the cumulative t-link model was decisively chosen as the better fitting model for this calving ease data using DIC and PBF. PMID:12939202
Effects of temporal correlations on cascades: Threshold models on temporal networks
NASA Astrophysics Data System (ADS)
Backlund, Ville-Pekka; Saramäki, Jari; Pan, Raj Kumar
2014-06-01
A person's decision to adopt an idea or product is often driven by the decisions of peers, mediated through a network of social ties. A common way of modeling adoption dynamics is to use threshold models, where a node may become an adopter given a high enough rate of contacts with adopted neighbors. We study the dynamics of threshold models that take both the network topology and the timings of contacts into account, using empirical contact sequences as substrates. The models are designed such that adoption is driven by the number of contacts with different adopted neighbors within a chosen time. We find that while some networks support cascades leading to network-level adoption, some do not: the propagation of adoption depends on several factors from the frequency of contacts to burstiness and timing correlations of contact sequences. More specifically, burstiness is seen to suppress cascade sizes when compared to randomized contact timings, while timing correlations between contacts on adjacent links facilitate cascades.
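The class of models described above can be sketched as a threshold rule on a time-stamped contact list: a node adopts once it has had contacts with at least k distinct adopted neighbours within a sliding time window. This is a generic sketch of such temporal threshold models, not the specific variant or empirical substrates used in the paper.

```python
def temporal_threshold_cascade(contacts, n_nodes, seeds, k=2, window=10.0):
    """Threshold adoption on a temporal network: a node adopts once it has
    had contacts with >= k *distinct* adopted neighbours within the last
    `window` time units. `contacts` is a list of (time, u, v) events."""
    adopted = set(seeds)
    recent = {u: [] for u in range(n_nodes)}   # (time, adopted neighbour)
    for t, u, v in sorted(contacts):
        for node, other in ((u, v), (v, u)):
            if other in adopted and node not in adopted:
                recent[node] = [(s, nb) for s, nb in recent[node]
                                if t - s <= window]
                recent[node].append((t, other))
                if len({nb for _, nb in recent[node]}) >= k:
                    adopted.add(node)
    return adopted
```

Shrinking the window mimics the suppressing effect of burstiness discussed in the abstract: contacts that no longer overlap in time cannot jointly push a node over its threshold, and the cascade stalls.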
Albert, Carlo; Vogel, Sören
2016-01-01
The General Unified Threshold model of Survival (GUTS) provides a consistent mathematical framework for survival analysis. However, the calibration of GUTS models is computationally challenging. We present a novel algorithm and its fast implementation in our R package, GUTS, that help to overcome these challenges. We show a step-by-step application example consisting of model calibration and uncertainty estimation as well as making probabilistic predictions and validating the model with new data. Using self-defined wrapper functions, we show how to produce informative text printouts and plots without effort, for the inexperienced as well as the advanced user. The complete ready-to-run script is available as supplemental material. We expect that our software facilitates novel re-analysis of existing survival data as well as asking new research questions in a wide range of sciences. In particular the ability to quickly quantify stressor thresholds in conjunction with dynamic compensating processes, and their uncertainty, is an improvement that complements current survival analysis methods. PMID:27340823
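The core GUTS idea of a stressor threshold with dynamic compensation can be sketched in a few lines for the stochastic-death (GUTS-SD) variant: scaled damage tracks the external concentration, and hazard accrues only while damage exceeds the threshold. Parameter names follow common GUTS usage (dominant rate constant ke, killing rate kk, threshold z), but this is an illustrative sketch, not the calibration machinery of the R package.

```python
import math

def guts_sd_survival(conc, dt, ke, kk, z):
    """GUTS-SD sketch: scaled damage D follows dD/dt = ke*(C - D); the
    hazard rate is kk*max(D - z, 0) above the threshold z; survival is
    exp(-cumulative hazard). Euler stepping for simplicity."""
    D, H, surv = 0.0, 0.0, []
    for C in conc:
        D += dt * ke * (C - D)
        H += dt * kk * max(D - z, 0.0)
        surv.append(math.exp(-H))
    return surv
```

Below the threshold nothing dies; once damage crosses z, survival declines at a rate proportional to the exceedance.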
A Probabilistic Model for Estimating the Depth and Threshold Temperature of C-fiber Nociceptors.
Dezhdar, Tara; Moshourab, Rabih A; Fründ, Ingo; Lewin, Gary R; Schmuker, Michael
2015-01-01
The subjective experience of thermal pain follows the detection and encoding of noxious stimuli by primary afferent neurons called nociceptors. However, nociceptor morphology has been hard to access and the mechanisms of signal transduction remain unresolved. In order to understand how heat transducers in nociceptors are activated in vivo, it is important to estimate the temperatures that directly activate the skin-embedded nociceptor membrane. Hence, the nociceptor's temperature threshold must be estimated, which in turn will depend on the depth at which transduction happens in the skin. Since the temperature at the receptor cannot be accessed experimentally, such an estimation can currently only be achieved through modeling. However, the current state-of-the-art model to estimate temperature at the receptor suffers from the fact that it cannot account for the natural stochastic variability of neuronal responses. We improve this model using a probabilistic approach which accounts for uncertainties and potential noise in the system. Using a data set of 24 C-fibers recorded in vitro, we show that, even without detailed knowledge of the bio-thermal properties of the system, the probabilistic model that we propose here is capable of providing estimates of threshold and depth in cases where the classical method fails. PMID:26638830
Spinodals, scaling, and ergodicity in a threshold model with long-range stress transfer
Ferguson, C.D.; Klein, W.; Rundle, J.B.
1999-08-01
We present both theoretical and numerical analyses of a cellular automaton version of a slider-block model or threshold model that includes long-range interactions. Theoretically we develop a coarse-grained description in the mean-field (infinite range) limit and discuss the relevance of the metastable state, limit of stability (spinodal), and nucleation to the phenomenology of the model. We also simulate the model and confirm the relevance of the theory for systems with long- but finite-range interactions. Results of particular interest include the existence of Gutenberg-Richter-like scaling consistent with that found on real earthquake fault systems, the association of large events with nucleation near the spinodal, and the result that such systems can be described, in the mean-field limit, with techniques appropriate to systems in equilibrium.
Bus mathematical model of acceleration threshold limit estimation in lateral rollover test
NASA Astrophysics Data System (ADS)
Gauchía, A.; Olmeda, E.; Aparicio, F.; Díaz, V.
2011-10-01
Vehicle safety is a major concern for researchers, governments and vehicle manufacturers, and therefore special attention is paid to it. In particular, rollover is one of the types of accidents on which researchers have focused, owing to the severity of the injuries and the social impact it generates. One of the parameters that define bus lateral behaviour is the acceleration threshold limit, which is defined as the lateral acceleration at which the rollover process begins to take place. This parameter can be obtained by means of a lateral rollover platform test or estimated by means of mathematical models. In this paper, the differences between these methods are analysed in depth, and a new mathematical model is proposed to estimate the acceleration threshold limit in the lateral rollover test. The proposed model simulates the lateral rollover test and, for the first time, includes the effect of a variable position of the centre of gravity. Finally, the maximum speed at which the bus can travel in a bend without rolling over is computed.
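A first-order estimate of the acceleration threshold limit comes from the quasi-static rigid-vehicle balance of overturning and restoring moments, a_lim = g·(track/2)/h_cg. This is a textbook sketch for orientation only; the paper's model refines it with, among other things, a variable centre-of-gravity position, which lowers the estimate.

```python
def static_rollover_threshold(track_width, cg_height, g=9.81):
    """Quasi-static lateral acceleration threshold for a rigid vehicle:
    rollover begins when the overturning moment m*a*h_cg equals the
    restoring moment m*g*(track/2), i.e. a_lim = g*(track/2)/h_cg.
    Suspension compliance and CG shift (modelled in the paper) reduce this."""
    return g * (track_width / 2.0) / cg_height
```

For a bus with a 2.5 m track and a 1.5 m centre-of-gravity height, a_lim ≈ 8.18 m/s²; raising the centre of gravity lowers the threshold.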
Tang, Sanyi; Liang, Juhua; Tan, Yuanshun; Cheke, Robert A
2013-01-01
Impulsive differential equations (hybrid dynamical systems) can provide a natural description of pulse-like actions such as when a pesticide kills a pest instantly. However, pesticides may have long-term residual effects, with some remaining active against pests for several weeks, months or years. Therefore, a more realistic method for modelling chemical control in such cases is to use continuous or piecewise-continuous periodic functions which affect growth rates. How to evaluate the effects of the duration of pesticide residual effectiveness on successful pest control is key to the implementation of integrated pest management (IPM) in practice. To address these questions in detail, we have modelled IPM including residual effects of pesticides in terms of fixed pulse-type actions. The stability threshold conditions for pest eradication are given. Moreover, the effects on the threshold conditions of the killing efficiency rate and the decay rate of the pesticide on the pest and on its natural enemies, the duration of residual effectiveness, the number of pesticide applications and the number of natural enemy releases are investigated with regard to the extent of depression or resurgence resulting from pulses of pesticide applications and predator releases. Latin Hypercube Sampling/Partial Rank Correlation uncertainty and sensitivity analysis techniques are employed to investigate the key control parameters which are most significantly related to threshold values. The findings, combined with Volterra's principle, confirm that when the pesticide has a strong effect on the natural enemies, repeated use of the same pesticide can result in target pest resurgence. The results also indicate that there exists an optimal number of pesticide applications which can suppress the pest most effectively, and this may help in the design of an optimal control strategy. PMID:22205243
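For the simplest linearized version of such a model, a pest population x obeying dx/dt = (r - m(t))x with a periodically pulsed kill rate m(t) = m0·exp(-decay·(t mod T)), the per-period multiplier is exp(rT - m0(1 - e^(-decay·T))/decay), so eradication requires the per-period mortality integral to exceed the per-period growth. This is a sketch of a stability threshold of that reduced form, not the full IPM model with natural-enemy releases analysed in the paper.

```python
import math

def pest_eradicated(r, m0, decay, period):
    """Eradication threshold for dx/dt = (r - m(t))*x with kill rate
    m(t) = m0*exp(-decay*(t mod period)): the pest dies out iff the
    per-period mortality m0*(1 - exp(-decay*period))/decay exceeds the
    per-period growth r*period (linearized sketch)."""
    mortality = m0 * (1.0 - math.exp(-decay * period)) / decay
    return mortality > r * period
```

The formula makes the abstract's point about residual effectiveness quantitative: for a fixed initial kill rate m0, faster decay (shorter residual action) shrinks the mortality integral and makes eradication harder.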
Low dimensional model of heart rhythm dynamics as a tool for diagnosing the anaerobic threshold
Anosov, O.L.; Butkovskii, O.Y.; Kadtke, J.; Kravtsov, Y.A.
1997-05-01
We report preliminary results on describing the dependence of heart rhythm variability on stress level using qualitative, low dimensional models. The reconstruction of macroscopic heart models yielding cardio-cycle (RR-interval) durations was based on actual clinical data. Our results show that the coefficients of the low dimensional models are sensitive to metabolic changes. In particular, at the transition between aerobic and aerobic-anaerobic metabolism, there are pronounced extrema in the functional dependence of the coefficients on the stress level. This strong sensitivity can be used to design a simple indirect method for determining the anaerobic threshold. Such a method could replace costly and invasive traditional methods such as gas analysis and blood tests. © 1997 American Institute of Physics.
NASA Astrophysics Data System (ADS)
Chitu, Zenaida; Busuioc, Aristita; Burcea, Sorin; Sandric, Ionut
2016-04-01
This work focuses on the hydro-meteorological analysis for estimating landslide-triggering rainfall thresholds in the Ialomita Subcarpathians. This area is a complex geological and geomorphic unit in Romania, affected by landslides that cause numerous damages to infrastructure every few years (1997, 1998, 2005, 2006, 2010, 2012 and 2014). The semi-distributed ModClark hydrological model, implemented in the HEC-HMS software and integrating radar rainfall data, was used to investigate the hydrological conditions within the catchment responsible for the occurrence of landslides during the main rainfall events. Statistical analysis of the main hydro-meteorological variables during the landslide events that occurred between 2005 and 2014 was carried out in order to identify preliminary rainfall thresholds for landslides in the Ialomita Subcarpathians. Moreover, according to the environmental catchment characteristics, different hydrological behaviors could be identified based on the spatially distributed rainfall estimates from weather radar data. Two hydrological regimes were distinguished in the catchments: one dominated by direct flow, which explains the landslides that occurred due to slope undercutting, and one characterized by high soil water storage during prolonged rainfall, where subsurface runoff is therefore significant. The hydrological precipitation-discharge modelling of the catchment in the Ialomita Subcarpathians in which landslides occurred helped in understanding landslide triggering and as such can be of added value for landslide research.
Yang, Xiaowei; Nie, Kun
2008-03-15
Longitudinal data sets in biomedical research often consist of large numbers of repeated measures. In many cases, the trajectories do not look globally linear or polynomial, making it difficult to summarize the data or test hypotheses using standard longitudinal data analysis based on various linear models. An alternative approach is to apply the approaches of functional data analysis, which directly target the continuous nonlinear curves underlying discretely sampled repeated measures. For the purposes of data exploration, many functional data analysis strategies have been developed based on various schemes of smoothing, but fewer options are available for making causal inferences regarding predictor-outcome relationships, a common task seen in hypothesis-driven medical studies. To compare groups of curves, two testing strategies with good power have been proposed for high-dimensional analysis of variance: the Fourier-based adaptive Neyman test and the wavelet-based thresholding test. Using a smoking cessation clinical trial data set, this paper demonstrates how to extend the strategies for hypothesis testing into the framework of functional linear regression models (FLRMs) with continuous functional responses and categorical or continuous scalar predictors. The analysis procedure consists of three steps: first, apply the Fourier or wavelet transform to the original repeated measures; then fit a multivariate linear model in the transformed domain; and finally, test the regression coefficients using either adaptive Neyman or thresholding statistics. Since a FLRM can be viewed as a natural extension of the traditional multiple linear regression model, the development of this model and computational tools should enhance the capacity of medical statistics for longitudinal data. PMID:17610294
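As a rough illustration of the three-step procedure described above (transform, fit in the transformed domain, test the coefficients), the sketch below Fourier-transforms simulated repeated measures, regresses each retained coefficient on a binary predictor, and aggregates with a simple max-type statistic. The max statistic, the use of coefficient magnitudes, and all names are simplifying assumptions standing in for the adaptive Neyman and thresholding statistics of the paper.

```python
import numpy as np

def flrm_fourier_test(curves, group, n_coef=8):
    """Toy FLRM test: (1) real FFT of each subject's repeated measures,
    (2) per-coefficient linear model on a binary predictor,
    (3) max over coefficient-wise t-like statistics."""
    # Step 1: Fourier transform, keeping the first n_coef coefficients
    coefs = np.fft.rfft(curves, axis=1)[:, :n_coef]
    X = np.column_stack([np.ones(len(group)), np.asarray(group, dtype=float)])
    stats = []
    for j in range(n_coef):
        y = np.abs(coefs[:, j])  # coefficient magnitudes, for simplicity
        # Step 2: least-squares fit of intercept + group effect
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        dof = len(y) - X.shape[1]
        se = np.sqrt(resid @ resid / dof * np.linalg.inv(X.T @ X)[1, 1])
        stats.append(abs(beta[1]) / se if se > 0 else 0.0)
    # Step 3: aggregate with a crude max-type statistic
    return max(stats)
```

With a genuine group difference in the curves, the statistic is much larger than under pure noise, which is all this sketch is meant to show.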
NN → NNπ reaction near threshold in a covariant one-boson-exchange model
NASA Astrophysics Data System (ADS)
Shyam, R.; Mosel, U.
1998-04-01
We calculate the cross sections for the p(p,nπ+)p and p(p,pπ0)p reactions for proton beam energies near threshold in a covariant one-boson-exchange model which incorporates the exchange of π, ρ, σ and ω mesons and treats both the nucleon and the delta isobar as intermediate states. Final-state interaction effects are included within Watson's theory. Within this model the ω and σ meson exchange terms contribute significantly at these energies and, along with the other meson exchanges, make it possible to reproduce the available experimental data for the total as well as differential cross sections for both reactions. The cross sections at beam energies ≤300 MeV are found to be almost free from contributions of the Δ isobar excitation.
NASA Astrophysics Data System (ADS)
Sánchez, R.; van Milligen, B. Ph.; Carreras, B. A.
2005-05-01
It is argued that the modeling of plasma transport in tokamaks may benefit greatly from extending the usual local paradigm to accommodate scale-free transport mechanisms. This can be done by combining Lévy distributions and a nonlinear threshold condition within the continuous-time random walk concept. The advantages of this nonlocal, nonlinear extension are illustrated by constructing a simple particle density transport model that, as a result of these ideas, spontaneously exhibits much of the nondiffusive phenomenology routinely observed in tokamaks. The fluid limit of the system shows that the kind of equations appropriate to capture these dynamics are based on fractional differential operators. In them, effective diffusivities and pinch velocities are found that are dynamically set by the system in response to the specific characteristics of the fueling source and external perturbations. This fact suggests some dramatic consequences for the extrapolation of these transport properties to larger-size systems.
Turk, Bela R; Gschwandtner, Michael E; Mauerhofer, Michaela; Löffler-Stastka, Henriette
2015-05-01
The vascular depression (VD) hypothesis postulates that cerebrovascular disease may "predispose, precipitate, or perpetuate" a depressive syndrome in elderly patients. The clinical presentation of VD has been shown to differ from that of major depression in quantitative disability; however, as little research has addressed qualitative phenomenological differences in the personality aspects of the symptom profile, clinical diagnosis remains a challenge. We attempted to identify differences in clinical presentation between depression patients (n = 50) with (n = 25) and without (n = 25) vascular disease, using questionnaires to assess depression, affect regulation, object relations, aggressiveness, alexithymia, personality functioning, personality traits, and countertransference. We were able to show that patients with vascular dysfunction and depression exhibit significantly higher aggressive and auto-aggressive tendencies due to a lower tolerance threshold. These data indicate, first, that VD is a separate clinical entity and, second, that personality itself may be a component of the disease process. We propose an expanded threshold disease model incorporating personality functioning and mood changes. Such findings might also aid the development of a screening program by serving as differential criteria, improving the diagnostic procedure. PMID:25950684
Nassiri-Asl, Marjan; Hajiali, Farid; Taghiloo, Mina; Abbasi, Esmail; Mohseni, Fatemeh; Yousefi, Farbod
2016-05-01
Flavonoids are important constituents of food and beverages, and several studies have shown that they have neuroactive properties. Many of these compounds are ligands for γ-aminobutyric acid type A receptors in the central nervous system. This study aimed to investigate the anticonvulsant effects of quercetin (3,3',4',5,7-pentahydroxyflavone), a flavonoid found in plants, in rats treated with pentylenetetrazole in acute and chronic seizure models. Single intraperitoneal administration of quercetin did not show anticonvulsive effects against acute seizure. Similarly, multiple oral pretreatment with quercetin did not have protective effects against acute seizure. Multiple intraperitoneal administration of quercetin (25 and 50 mg/kg), however, significantly increased time to death compared with the control (p < 0.001). Quercetin pretreatment had no significant effects on the pattern of convulsion development during any period of kindling, but on the test day quercetin (100 mg/kg) significantly increased generalized tonic-clonic seizure (GTCS) onset latency and decreased GTCS duration compared with the control (p < 0.01, p < 0.05). We conclude that quercetin has a narrow therapeutic dose range for anticonvulsant activities in vivo, and that it has differing effects on the seizure threshold, which may occur through several mechanisms. PMID:24442347
A model for calculating the threshold for shock initiation of pyrotechnics and explosives
Maiden, D.E.
1987-03-01
A model is proposed for predicting the shock pressure P and pulse width π required to ignite porous reactive mixtures. Essentially, the shock wave collapses the voids, forming high-temperature hot spots that ignite the mixture. The pore temperature is determined by numerical solution of the equations of motion, viscoplastic heating, and heat conduction. The pore radius is determined as a function of the pore size, viscosity, yield stress, and pressure. Temperature-dependent material properties and melting are considered. Ignition occurs when the surface temperature of the pore reaches the critical hot-spot temperature for thermal runaway. Data from flyer-plate impact experiments were analyzed and the pressure pulse at the ignition threshold was determined for 2Al/Fe2O3 (thermite) and the high explosives TATB, PBX 9404, and PETN. Mercury intrusion porosimetry was performed on the samples and the pore size distribution determined. Theoretical and numerical predictions of the ignition threshold are compared with experiment. Results show that P²π appears to be an initiation property of the material.
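The closing observation, that P²π (shock pressure squared times pulse width) is approximately constant at the ignition threshold, can be read as a simple go/no-go criterion. The helpers below are a hedged sketch of that reading; the constant is material-specific, and the values used in the comments are placeholders rather than data from the paper.

```python
def threshold_pulse_width(pressure, p2pi_const):
    """Minimum pulse width needed at shock pressure P, assuming the
    empirical criterion P**2 * width = const at the ignition threshold.
    Units are those in which the (material-specific) constant was fitted."""
    return p2pi_const / pressure ** 2


def ignites(pressure, width, p2pi_const):
    """True when the pulse (P, width) lies at or above the ignition threshold."""
    return pressure ** 2 * width >= p2pi_const
```

For example, with a hypothetical constant of 4.0, doubling the pressure cuts the required pulse width by a factor of four.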
The minimal SUSY B - L model: simultaneous Wilson lines and string thresholds
NASA Astrophysics Data System (ADS)
Deen, Rehan; Ovrut, Burt A.; Purves, Austin
2016-07-01
In previous work, we presented a statistical scan over the soft supersymmetry breaking parameters of the minimal SUSY B - L model. For specificity of calculation, unification of the gauge parameters was enforced by allowing the two Z_3 × Z_3 Wilson lines to have mass scales separated by approximately an order of magnitude. This introduced an additional "left-right" sector below the unification scale. In this paper, for three important reasons, we modify our previous analysis by demanding that the mass scales of the two Wilson lines be simultaneous and equal to an "average unification" mass ⟨M_U⟩. The present analysis is 1) more "natural" than the previous calculations, which were only valid in a very specific region of the Calabi-Yau moduli space; 2) conceptually simpler, in that the left-right sector has been removed; and 3) such that the lack of gauge unification is due to threshold effects, particularly heavy string thresholds, which we calculate statistically in detail. As in our previous work, the theory is renormalization group evolved from ⟨M_U⟩ to the electroweak scale, being subjected, sequentially, to the requirement of radiative B - L and electroweak symmetry breaking, the present experimental lower bounds on the B - L vector boson and sparticle masses, as well as the lightest neutral Higgs mass of ~125 GeV. The subspace of soft supersymmetry breaking masses that satisfies all such constraints is presented and shown to be substantial.
Thresholds in Atmosphere-Soil Moisture Interactions: Results from Climate Model Studies
NASA Technical Reports Server (NTRS)
Oglesby, Robert J.; Marshall, Susan; Erickson, David J., III; Roads, John O.; Robertson, Franklin R.; Arnold, James E. (Technical Monitor)
2001-01-01
The potential predictability of the effects of warm season soil moisture anomalies over the central U.S. has been investigated using a series of GCM (Global Climate Model) experiments with the NCAR (National Center for Atmospheric Research) CCM3 (Community Climate Model version 3)/LSM (Land Surface Model). Three different types of experiments have been made, all starting in either March (representing precursor conditions) or June (conditions at the onset of the warm season): (1) 'anomaly' runs with large, exaggerated initial soil moisture reductions, aimed at evaluating the physical mechanisms by which soil moisture can affect the atmosphere; (2) 'predictability' runs aimed at evaluating whether typical soil moisture initial anomalies (indicative of year-to-year variability) can have a significant effect, and if so, for how long; (3) 'threshold' runs aimed at evaluating if a soil moisture anomaly must be of a specific size (i.e., a threshold crossed) before a significant impact on the atmosphere is seen. The 'anomaly' runs show a large, long-lasting response in soil moisture and also quantities such as surface temperature, sea level pressure, and precipitation; effects persist for at least a year. The 'predictability' runs, on the other hand, show very little impact of the initial soil moisture anomalies on the subsequent evolution of soil moisture and other atmospheric parameters; internal variability is most important, with the initial state of the atmosphere (representing remote effects such as SST anomalies) playing a more minor role. The 'threshold' runs, devised to help resolve the dichotomy in 'anomaly' and 'predictability' results, suggest that, at least in CCM3/LSM, the vertical profile of soil moisture is the most important factor, and that deep soil zone anomalies exert a more powerful, long-lasting effect than do anomalies in the near surface soil zone. We therefore suggest that soil moisture feedbacks may be more important in explaining prolonged
A population-based Habitable Zone perspective
NASA Astrophysics Data System (ADS)
Zsom, Andras
2015-08-01
What can we tell about exoplanet habitability if currently only the stellar properties, planet radius, and the incoming stellar flux are known? The Habitable Zone (HZ) is the region around stars where planets can harbor liquid water on their surfaces. The HZ is traditionally conceived as a sharp region around the star because it is calculated for one planet with specific properties, e.g., Earth-like planets, desert planets, or rocky planets with H2 atmospheres. Such a planet-specific approach is limiting because the planets' atmospheric and geophysical properties, which influence the surface climate and the presence of liquid water, are currently unknown but expected to be diverse. A statistical HZ description is outlined which does not select one specific planet type. Instead, the atmospheric and surface properties of exoplanets are treated as random variables and a continuous range of planet scenarios is considered. Various probability density functions are assigned to each observationally unconstrained random variable, and a combination of Monte Carlo sampling and climate modeling is used to generate synthetic exoplanet populations with known surface climates. Then, the properties of the liquid-water-bearing subpopulation are analyzed. Given our current observational knowledge of small exoplanets, the HZ takes the form of a weakly constrained but smooth probability function. The model shows that the HZ has an inner edge: it is unlikely that planets receiving two to three times more stellar radiation than Earth can harbor liquid water. But a clear outer edge is not seen: a planet that receives a fraction of Earth's stellar radiation (1-10%) can be habitable if the greenhouse effect of the atmosphere is strong enough. The main benefit of the population-based approach is that it will be refined over time as new data on exoplanets and their atmospheres become available.
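A toy version of the population-based calculation can convey the idea: sample unknown atmospheric properties from assumed priors, run a minimal energy-balance climate, and report the fraction of the synthetic population with surface temperatures compatible with liquid water. The albedo and greenhouse-warming ranges below are illustrative assumptions, not the distributions used in the work.

```python
import numpy as np

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S_EARTH = 1361.0   # solar constant at Earth, W m^-2

def habitable_fraction(rel_flux, n=10000, seed=1):
    """Toy population-based estimate: fraction of randomly drawn
    atmospheres that keep the surface temperature in 273-373 K at a
    given stellar flux (relative to Earth's)."""
    rng = np.random.default_rng(seed)
    albedo = rng.uniform(0.1, 0.7, n)        # Bond albedo (assumed prior)
    greenhouse = rng.uniform(0.0, 120.0, n)  # greenhouse warming in K (assumed prior)
    s = rel_flux * S_EARTH
    # Zero-dimensional energy balance: equilibrium temperature plus warming
    t_eq = (s * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25
    t_surf = t_eq + greenhouse
    return float(np.mean((t_surf >= 273.0) & (t_surf <= 373.0)))
```

Even this crude sketch reproduces the qualitative point of the abstract: the habitable fraction is a smooth function of flux that falls off toward high irradiation rather than dropping sharply at a single boundary.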
Kellen, David; Klauer, Karl Christoph
2015-07-01
An ongoing discussion in the recognition-memory literature concerns the question of whether recognition judgments reflect a direct mapping of graded memory representations (a notion instantiated by signal detection theory) or whether they are mediated by a discrete-state representation with the possibility of complete information loss (a notion instantiated by threshold models). These 2 accounts are usually evaluated by comparing their (penalized) fits to receiver operating characteristic data, a procedure that is predicated on substantial auxiliary assumptions which, if violated, can invalidate results. We show that the 2 accounts can be compared on the basis of critical tests that invoke only minimal assumptions. Using previously published receiver operating characteristic data, we show that confidence-rating judgments are consistent with a discrete-state account. (PsycINFO Database Record) PMID:26120910
Almási, Róbert; Pethö, Gábor; Bölcskei, Kata; Szolcsányi, János
2003-01-01
An increasing-temperature hot plate (ITHP) was introduced to measure the noxious heat threshold (45.3±0.3°C) of unrestrained rats, which was reproducible upon repeated determinations at intervals of 5 or 30 min or 1 day. Morphine, diclofenac and paracetamol caused an elevation of the noxious heat threshold following i.p. pretreatment, the minimum effective doses being 3, 10 and 200 mg kg−1, respectively. Unilateral intraplantar injection of the VR1 receptor agonist resiniferatoxin (RTX, 0.048 nmol) induced a profound drop of heat threshold to the innocuous range with a maximal effect (8–10°C drop) 5 min after RTX administration. This heat allodynia was inhibited by pretreatment with morphine, diclofenac and paracetamol, the minimum effective doses being 1, 1 and 100 mg kg−1 i.p., respectively. The long-term sensory desensitizing effect of RTX was examined by bilateral intraplantar injection (0.048 nmol per paw) which produced, after an initial threshold drop, an elevation (up to 2.9±0.5°C) of heat threshold lasting for 5 days. The VR1 receptor antagonist iodo-resiniferatoxin (I-RTX, 0.05 nmol intraplantarly) inhibited by 51% the heat threshold-lowering effect of intraplantar RTX but not α,β-methylene-ATP (0.3 μmol per paw). I-RTX (0.1 or 1 nmol per paw) failed to alter the heat threshold either acutely (5–60 min) or on the long-term (5 days). The heat threshold of VR1 receptor knockout mice was not different from that of wild-type animals (45.6±0.5 vs 45.2±0.4°C). In conclusion, the RTX-induced drop of heat threshold measured by the ITHP is a novel heat allodynia model exhibiting a high sensitivity to analgesics. PMID:12746222
Dobrovolsky, V.
2014-10-21
This work develops an electrodynamic model of field-effect transistor (FET) detection of THz/sub-THz radiation. It is based on the solution of Maxwell's equations in the gate dielectric, an expression for the current in the channel that takes into account both the drift and diffusion current components, and the equation of current continuity. For the regimes under and above threshold at strong inversion, the response voltage, responsivity, wave impedance, and power of ohmic loss in the gate and channel have been found, and the electrical noise equivalent power (ENEP) has been estimated. The responsivity is orders of magnitude higher, and the ENEP orders of magnitude lower, under threshold than above threshold. Under threshold, the electromagnetic field in the gate oxide is identical to the field of plane waves in free space. At the same time, for strong inversion the charging of the gate capacitance through the resistance of the channel determines the electric field in the oxide.
Linear No-Threshold model and standards for protection against radiation.
Shamoun, Dima Yazji
2016-06-01
In response to the three petitions by Carol S. Marcus, Mark L. Miller, and Mohan Doss, dated February 9, February 13, and February 24, 2015, respectively, the Nuclear Regulatory Commission (NRC or the Commission) has announced that it is considering assessing its choice of dose-response model, the Linear No-Threshold (LNT) model, for exposure to ionizing radiation. This comment is designed to assist the Commission in evaluating the merits of a review of the default dose-response model it uses as the basis for the Standards for Protection against Radiation regulations. It extends the petitioners' argument in favor of reexamining the default hypothesis (LNT) and taking consideration of low-dose hormesis for two main reasons: 1) Failure to review the LNT hypothesis may jeopardize the NRC's mission to protect public health and safety; and 2) The National Research Council's guidelines for choosing adequate defaults indicate that the choice of low-dose default model is due for a reevaluation. PMID:26924276
2013-01-01
Background Small pneumothoraxes (PTXs) may not impart an immediate threat to trauma patients after chest injuries. However, the amount of pleural air may increase and become a concern for patients who require positive pressure ventilation or air ambulance transport. Lung ultrasonography (US) is a reliable tool in finding intrapleural air, but the performance characteristics regarding the detection of small PTXs need to be defined. The study aimed to define the volume threshold of intrapleural air when PTXs are accurately diagnosed with US and compare this volume with that for chest x-ray (CXR). Methods Air was insufflated into a unilateral pleural catheter in seven incremental steps (10, 25, 50, 100, 200, 350 and 500 mL) in 20 intubated porcine models, followed by a diagnostic evaluation with US and a supine anteroposterior CXR. The sonographers continued the US scanning until the PTXs could be ruled in, based on the pathognomonic US “lung point” sign. The corresponding threshold volume was noted. A senior radiologist interpreted the CXR images. Results The mean threshold volume to confirm the diagnosis of PTX using US was 18 mL (standard deviation of 13 mL). Sixty-five percent of the PTXs were already diagnosed at 10 mL of intrapleural air; 25%, at 25 mL; and the last 10%, at 50 mL. At an air volume of 50 mL, the radiologist only identified four out of 20 PTXs in the CXR pictures; i.e., a sensitivity of 20% (95% CI: 7%, 44%). The sensitivity of CXR increased as a function of volume but leveled off at 67%, leaving one-third (1/3) of the PTXs unidentified after 500 mL of insufflated air. Conclusion Lung US is very accurate in diagnosing even small amounts of intrapleural air and should be performed by clinicians treating chest trauma patients when PTX is among the differential diagnoses. PMID:23453044
Global and local threshold in a metapopulational SEIR model with quarantine
NASA Astrophysics Data System (ADS)
Gomes, Marcelo F. C.; Rossi, Luca; Pastore Y Piontti, Ana; Vespignani, Alessandro
2013-03-01
Diseases which can be transmitted before the onset of symptoms pose a challenging threat to healthcare, since it is hard to track spreaders and implement quarantine measures. More precisely, one of the main concerns regarding pandemic spreading of diseases is the prediction, and eventually the control, of local outbreaks that would trigger a global invasion of a particular disease. We present a metapopulation disease spreading model with transmission from both symptomatic and asymptomatic agents and analyze the role of quarantine measures and mobility processes between subpopulations. We show that, depending on the disease parameters, it is possible to separate the local and global thresholds in the parameter space and to study the system behavior as a function of the fraction of asymptomatic transmissions. This means that there is a range of parameter values where, although local control of the outbreak is not achieved, it is still possible to control the global spread of the disease. We validate the analytic picture in a data-driven model that integrates commuting, air traffic flows and detailed information about population size and structure worldwide. Laboratory for the Modeling of Biological and Socio-Technical Systems (MoBS)
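The role of asymptomatic transmission in setting a control threshold can be illustrated with a far simpler calculation than the paper's metapopulation model: assume a fraction of transmission is asymptomatic and hence untouched by quarantine, and check whether the effective reproduction number drops below one. The formula and parameter names below are a textbook-style assumption for illustration only, not the model analyzed in the abstract.

```python
def effective_r(r0, asymptomatic_frac, quarantine_eff):
    """Effective reproduction number when a fraction of transmission is
    asymptomatic (unaffected by quarantine) and symptomatic transmission
    is reduced by the quarantine efficacy."""
    p = asymptomatic_frac
    q = quarantine_eff
    return r0 * (p + (1.0 - p) * (1.0 - q))


def local_outbreak_controlled(r0, asymptomatic_frac, quarantine_eff):
    """Local control requires the effective reproduction number below 1."""
    return effective_r(r0, asymptomatic_frac, quarantine_eff) < 1.0
```

Sweeping the asymptomatic fraction in this toy model shows the same qualitative behavior the abstract describes: above some fraction, even perfect quarantine of symptomatic cases cannot bring the local outbreak under control.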
Centrifuge model study of thresholds for rainfall-induced landslides in sandy slopes
NASA Astrophysics Data System (ADS)
Matziaris, V.; Marshall, A. M.; Heron, C. M.; Yu, H.-S.
2015-09-01
Rainfall-induced landslides are very common natural disasters which cause damage to properties and infrastructure and may result in the loss of human life. These phenomena often take place in unsaturated soil slopes and are triggered by the saturation of the soil profile due to rain infiltration which leads to the decrease of effective stresses and loss of shear strength. The aim of this study is to determine rainfall thresholds for the initiation of landslides under different initial conditions. Model tests of rainfall-induced landslides were conducted on the Nottingham Centre for Geomechanics geotechnical centrifuge. Initially unsaturated plane-strain slope models made with fine silica sand were prepared at varying densities at 1g and accommodated within a centrifuge container with rainfall simulator. During the centrifuge flight at 60g, rainfall events of varying intensity and duration, as well as variation of groundwater conditions, were applied to the slope models with the aim of initiating slope failure. This paper presents a discussion on the impact of soil state properties, rainfall characteristics, and groundwater conditions on slope behaviour and the initiation of slope instability.
Marker-based monitoring of seated spinal posture using a calibrated single-variable threshold model.
Walsh, Pauline; Dunne, Lucy E; Caulfield, Brian; Smyth, Barry
2006-01-01
This work, as part of a larger project developing wearable posture monitors for the work environment, seeks to monitor and model seated posture during computer use. A non-wearable marker-based optoelectronic motion capture system was used to monitor seated posture for ten healthy subjects during a calibration exercise and a typing task. Machine learning techniques were used to select overall spinal sagittal flexion as the best indicator of posture from a set of marker and vector variables. Overall flexion data from the calibration exercise were used to define a threshold model designed to classify posture for each subject, which was then applied to the typing task data. Results of the model were analysed visually by qualified physiotherapists with experience in ergonomics and posture analysis to confirm the accuracy of the calibration. The calibration formula was found to be accurate for 100% of subjects. This process will be used as a comparative measure in the evaluation of several wearable posture sensors, and to inform the design of the wearable system. PMID:17946301
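A minimal sketch of such a calibrated single-variable threshold classifier, assuming the cut-off is placed midway between the mean sagittal flexion of the upright and slumped calibration postures (the midpoint rule and all names are hypothetical, not the study's actual calibration formula):

```python
import numpy as np

def calibrate_threshold(upright_flexion, slumped_flexion):
    """Place the cut-off midway between the mean flexion angles recorded
    during the upright and slumped calibration postures (assumed rule)."""
    return (np.mean(upright_flexion) + np.mean(slumped_flexion)) / 2.0


def classify_posture(flexion_samples, threshold):
    """Label each flexion sample against the calibrated threshold."""
    return ["slumped" if f > threshold else "upright" for f in flexion_samples]
```

Calibrating per subject, as the study does, amounts to running `calibrate_threshold` once per subject's calibration data and reusing that subject's threshold on the task data.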
Bremer, P. -T.
2014-08-26
ADAPT is a topological analysis code that computes local thresholds, in particular relevance-based thresholds, for features defined in scalar fields. The initial target application is vortex detection, but the software is applicable more generally to all threshold-based feature definitions.
Luszczki, Jarogniew J; Glowniak, Kazimierz; Czuczwar, Stanislaw J
2007-09-01
This study was designed to evaluate the anticonvulsant effects of imperatorin (a furanocoumarin isolated from fruits of Angelica archangelica) in the mouse maximal electroshock seizure threshold model. The threshold for electroconvulsions in mice was determined at several times: 15, 30, 60 and 120 min after i.p. administration of imperatorin at increasing doses of 10, 20, 30, 40, 50 and 100 mg/kg. The evaluation of time-course relationship for imperatorin in the maximal electroshock seizure threshold test revealed that the agent produced its maximum antielectroshock action at 30 min after its i.p. administration. In this case, imperatorin at doses of 50 and 100 mg/kg significantly raised the threshold for electroconvulsions in mice by 38 and 68% (P<0.05 and P<0.001), respectively. The antiseizure effects produced by imperatorin at 15, 60 and 120 min after its systemic (i.p.) administration were less expressed than those observed for imperatorin injected 30 min before the maximal electroshock seizure threshold test. Based on this study, one can conclude that imperatorin produces the anticonvulsant effect in the maximal electroshock seizure threshold test in a dose-dependent manner. PMID:17602770
Luo, Guanzhong
2005-01-01
There is a perception in the literature that the Rating Scale Model (RSM) and the Partial Credit Model (PCM) are two different types of Rasch models. This paper clarifies the relationship between the RSM and the PCM from the perspectives of the literature's history and mathematical logic. It is shown that not only are the RSM and the PCM identical, but the two approaches used to introduce them are statistically equivalent. The implication of disordered thresholds is then discussed, and the difference between structural thresholds and Thurstone thresholds is clarified. PMID:16192666
Solving Cordelia's Dilemma: Threshold Concepts within a Punctuated Model of Learning
ERIC Educational Resources Information Center
Kinchin, Ian M.
2010-01-01
The consideration of threshold concepts is offered in the context of biological education as a theoretical framework that may have utility in the teaching and learning of biology at all levels. Threshold concepts may provide a mechanism to explain the observed punctuated nature of conceptual change. This perspective raises the profile of periods…
A study of jet fuel sooting tendency using the threshold sooting index (TSI) model
Yang, Yi; Boehman, Andre L.; Santoro, Robert J.
2007-04-15
Fuel composition can have a significant effect on soot formation during gas turbine combustion. Consequently, this paper contains a comprehensive review of the relationship between fuel hydrocarbon composition and soot formation in gas turbine combustors. Two levels of correlation are identified. First, lumped fuel composition parameters such as hydrogen content and smoke point, which are conventionally used to represent fuel sooting tendency, are correlated with soot formation in practical combustors. Second, detailed fuel hydrocarbon composition is correlated with these lumped parameters. The two-level correlation makes it possible to predict soot formation in practical combustors from basic fuel composition data. Threshold sooting index (TSI), which correlates linearly with the ratio of fuel molecular weight and smoke point in a diffusion flame, is proposed as a new lumped parameter for sooting tendency correlation. It is found that the TSI model correlates excellently with hydrocarbon compositions over a wide range of fuel samples. Also, in predicting soot formation in actual combustors, the TSI model produces the best results overall in comparison with other previously reported correlating parameters, including hydrogen content, smoke point, and composite predictors containing more than one parameter.
Burgener, Sabrina S; Baumann, Mathias; Basilico, Paola; Remold-O'Donnell, Eileen; Touw, Ivo P; Benarafa, Charaf
2016-09-01
Serpinb1 is an inhibitor of neutrophil granule serine proteases cathepsin G, proteinase-3 and elastase. One of its core physiological functions is to protect neutrophils from granule protease-mediated cell death. Mice lacking Serpinb1a (Sb1a-/-), its mouse ortholog, have reduced bone marrow neutrophil numbers due to cell death mediated by cathepsin G and the mice show increased susceptibility to lung infections. Here, we show that conditional deletion of Serpinb1a using the Lyz2-cre and Cebpa-cre knock-in mice effectively leads to recombination-mediated deletion in neutrophils but protein-null neutrophils were only obtained using the latter recombinase-expressing strain. Absence of Serpinb1a protein in neutrophils caused neutropenia and increased granule permeabilization-induced cell death. We then generated transgenic mice expressing human Serpinb1 in neutrophils under the human MRP8 (S100A8) promoter. Serpinb1a expression levels in founder lines correlated positively with increased neutrophil survival when crossed with Sb1a-/- mice, which had their defective neutrophil phenotype rescued in the higher expressing transgenic line. Using new conditional and transgenic mouse models, our study demonstrates the presence of a relatively low Serpinb1a protein threshold in neutrophils that is required for sustained survival. These models will also be helpful in delineating recently described functions of Serpinb1 in metabolism and cancer. PMID:27107834
NASA Astrophysics Data System (ADS)
Mazas, Franck; Hamm, Luc; Kergadallan, Xavier
2013-04-01
In France, the storm Xynthia of February 27-28th, 2010 reminded engineers and stakeholders of the necessity for an accurate estimation of extreme sea levels for the risk assessment in coastal areas. Traditionally, two main approaches exist for the statistical extrapolation of extreme sea levels: the direct approach performs a direct extrapolation on the sea level data, while the indirect approach carries out a separate analysis of the deterministic component (astronomical tide) and stochastic component (meteorological residual, or surge). When the tidal component is large compared with the surge component, the latter approach is known to perform better. In this approach, the statistical extrapolation is performed on the surge component, then the distribution of extreme sea levels is obtained by convolution of the tide and surge distributions. This model is often referred to as the Joint Probability Method. Different models from univariate extreme value theory have been applied in the past for extrapolating extreme surges, in particular the Annual Maxima Method (AMM) and the r-largest method. In this presentation, we apply the Peaks-Over-Threshold (POT) approach for declustering extreme surge events, coupled with the Poisson-GPD model for fitting extreme surge peaks. This methodology allows a sound estimation of both the lower and upper tails of the stochastic distribution, including estimation of the uncertainties associated with the fit by computing confidence intervals. After convolution with the tide signal, the model yields the distribution for the whole range of possible sea level values. Particular attention is paid to the necessary distinction between sea level values observed at a regular time step, such as hourly, and sea level events, such as those occurring during a storm. Extremal indices for both surges and levels are thus introduced. This methodology will be illustrated with a case study at Brest, France.
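The core of the Poisson-GPD model can be sketched compactly. This is a minimal method-of-moments illustration of the GPD fit and the return-level formula; the study itself would use a full maximum-likelihood fit with confidence intervals:

```python
import math

def gpd_mom(excesses):
    """Method-of-moments fit of a Generalized Pareto Distribution to
    threshold excesses; a common first guess before maximum likelihood."""
    n = len(excesses)
    m = sum(excesses) / n
    v = sum((y - m) ** 2 for y in excesses) / (n - 1)
    xi = 0.5 * (1.0 - m * m / v)        # shape parameter
    sigma = m * (1.0 - xi)              # scale parameter
    return xi, sigma

def return_level(u, xi, sigma, lam, t_years):
    """T-year return level of the Poisson-GPD (POT) model: u is the
    declustering threshold, lam the mean number of peak events per year."""
    if abs(xi) < 1e-9:                  # exponential-tail limit
        return u + sigma * math.log(lam * t_years)
    return u + (sigma / xi) * ((lam * t_years) ** xi - 1.0)
```

Convolving the resulting surge distribution with the tidal density (not shown) then gives the full sea-level distribution.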
T Lymphocyte Activation Threshold and Membrane Reorganization Perturbations in Unique Culture Model
NASA Technical Reports Server (NTRS)
Adams, C. L.; Sams, C. F.
2000-01-01
Quantitative activation thresholds and cellular membrane reorganization are mechanisms by which resting T cells modulate their response to activating stimuli. Here we demonstrate perturbations of these cellular processes in a unique culture system that non-invasively inhibits T lymphocyte activation. During clinorotation, the T cell activation threshold is increased 5-fold. This increased threshold involves a mechanism independent of TCR triggering. Recruitment of lipid rafts to the activation site is impaired during clinorotation but does occur with increased stimulation. This study describes a situation in which an individual cell senses a change in its physical environment and alters its cell biological behavior.
NASA Astrophysics Data System (ADS)
Tsai, F.; Lai, J. S.; Chiang, S. H.
2015-12-01
Landslides are frequently triggered by typhoons and earthquakes in Taiwan, causing serious economic losses and human casualties. Remotely sensed images and geo-spatial data consisting of land-cover and environmental information have been widely used for producing landslide inventories and causative factors for slope stability analysis. Landslide susceptibility, on the other hand, can represent the spatial likelihood of landslide occurrence and is an important basis for landslide risk assessment. As multi-temporal satellite images become popular and affordable, they are commonly used to generate landslide inventories for subsequent analysis. However, it is usually difficult to distinguish different landslide sub-regions (scarp, debris flow, deposition, etc.) directly from remote sensing imagery. Consequently, the extracted landslide extents using image-based visual interpretation and automatic detections may contain many depositions that may reduce the fidelity of the landslide susceptibility model. This study developed an empirical thresholding scheme based on terrain characteristics for eliminating depositions from detected landslide areas to improve landslide susceptibility modeling. In this study, a Bayesian network classifier is utilized to build a landslide susceptibility model and to predict subsequent rainfall-induced shallow landslides in the Shimen reservoir watershed located in northern Taiwan. Eleven causative factors are considered: terrain slope, aspect, curvature, elevation, geology, land use, NDVI, soil, and distance to faults, rivers, and roads. Landslide areas detected using satellite images acquired before and after eight typhoons between 2004 and 2008 are collected as the main inventory for training and verification. In the analysis, previous landslide events are used as training data to predict the samples of the next event. The results are then compared with recorded landslide areas in the inventory to evaluate the accuracy. Experimental results
Analytic Model for Description of Above-Threshold Ionization by an Intense, Short Laser Pulse
NASA Astrophysics Data System (ADS)
Starace, Anthony F.; Frolov, M. V.; Knyazeva, D. V.; Manakov, N. L.; Geng, J.-W.; Peng, L.-Y.
2015-05-01
We present an analytic model for above-threshold ionization (ATI) of an atom by an intense, linearly-polarized short laser pulse. Our quantum analysis provides closed-form formulas for the differential probability of ATI, with amplitudes given by a coherent sum of partial amplitudes describing ionization by neighboring optical cycles near the peak of the intensity envelope of a short laser pulse. These analytic results explain key features of short-pulse ATI spectra, such as the left-right asymmetry in the ionized electron angular distribution, the multi-plateau structures, and both large-scale and fine-scale oscillation patterns resulting from quantum interferences of electron trajectories. The ATI spectrum in the middle part of the ATI plateau is shown to be sensitive to the spatial symmetry of the initial bound state of the active electron owing to contributions from multiple-return electron trajectories. An extension of our analytic formulas to real atoms provides results that are in good agreement with results of numerical solutions of the time-dependent Schrödinger equation for He and Ar atoms. Research supported in part by NSF Grant No. PHY-1208059, by RFBR Grant No. 13-02-00420, by Ministry of Ed. & Sci. of the Russian Fed. Proj. No. 1019, by NNSFC Grant Nos. 11322437, 11174016, and 11121091, and by the Dynasty Fdn. (MVF & DVK).
High-precision percolation thresholds and Potts-model critical manifolds from graph polynomials
NASA Astrophysics Data System (ADS)
Jacobsen, Jesper Lykke
2014-04-01
The critical curves of the q-state Potts model can be determined exactly for regular two-dimensional lattices G that are of the three-terminal type. This comprises the square, triangular, hexagonal and bow-tie lattices. Jacobsen and Scullard have defined a graph polynomial PB(q, v) that gives access to the critical manifold for general lattices. It depends on a finite repeating part of the lattice, called the basis B, and its real roots in the temperature variable v = e^K - 1 provide increasingly accurate approximations to the critical manifolds upon increasing the size of B. Using transfer matrix techniques, these authors computed PB(q, v) for large bases (up to 243 edges), obtaining determinations of the ferromagnetic critical point vc > 0 for the (4, 8^2), kagome, and (3, 12^2) lattices to a precision (of the order 10^-8) slightly superior to that of the best available Monte Carlo simulations. In this paper we describe a more efficient transfer matrix approach to the computation of PB(q, v) that relies on a formulation within the periodic Temperley-Lieb algebra. This makes possible computations for substantially larger bases (up to 882 edges), and the precision on vc is hence taken to the range 10^-13. We further show that a large variety of regular lattices can be cast in a form suitable for this approach. This includes all Archimedean lattices, their duals and their medials. For all these lattices we tabulate high-precision estimates of the bond percolation thresholds pc and Potts critical points vc. We also trace and discuss the full Potts critical manifold in the (q, v) plane, paying special attention to the antiferromagnetic region v < 0. Finally, we adapt the technique to site percolation as well, and compute the polynomials PB(p) for certain Archimedean and dual lattices (those having only cubic and quartic vertices), using very large bases (up to 243 vertices). This produces the site percolation thresholds pc to a precision of the order of 10^-9.
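For the square lattice the smallest basis already gives the exact answer, which makes a compact sanity check on the general scheme (the mapping v = p/(1 - p) connects the Potts temperature variable to a bond probability in the q -> 1 percolation limit):

```python
def square_lattice_vc(q):
    """Critical point of the q-state Potts model on the square lattice:
    the smallest basis polynomial is P_B(q, v) = v**2 - q, whose positive
    root v_c = sqrt(q) is exact for this lattice."""
    return q ** 0.5

def pc_from_vc(vc):
    """Map the temperature variable v = p/(1 - p) back to a bond
    probability; valid in the q -> 1 bond-percolation limit."""
    return vc / (1.0 + vc)
```

For q = 1 this recovers the exact square-lattice bond percolation threshold p_c = 1/2; for larger bases and other lattices the roots must be found numerically, as in the paper.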
Soyka, Florian; Giordano, Paolo Robuffo; Barnett-Cowan, Michael; Bülthoff, Heinrich H
2012-07-01
Understanding the dynamics of vestibular perception is important, for example, for improving the realism of motion simulation and virtual reality environments or for diagnosing patients suffering from vestibular problems. Previous research has found a dependence of direction discrimination thresholds for rotational motions on the period length (inverse frequency) of a transient (single cycle) sinusoidal acceleration stimulus. However, self-motion is seldom purely sinusoidal, and up to now, no models have been proposed that take into account non-sinusoidal stimuli for rotational motions. In this work, the influence of both the period length and the specific time course of an inertial stimulus is investigated. Thresholds for three acceleration profile shapes (triangular, sinusoidal, and trapezoidal) were measured for three period lengths (0.3, 1.4, and 6.7 s) in ten participants. A two-alternative forced-choice discrimination task was used where participants had to judge if a yaw rotation around an earth-vertical axis was leftward or rightward. The peak velocity of the stimulus was varied, and the threshold was defined as the stimulus yielding 75 % correct answers. In accordance with previous research, thresholds decreased with shortening period length (from ~2 deg/s for 6.7 s to ~0.8 deg/s for 0.3 s). The peak velocity was the determining factor for discrimination: Different profiles with the same period length have similar velocity thresholds. These measurements were used to fit a novel model based on a description of the firing rate of semi-circular canal neurons. In accordance with previous research, the estimates of the model parameters suggest that velocity storage does not influence perceptual thresholds. PMID:22623095
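The 75%-correct threshold from a 2AFC task of this kind is typically estimated by fitting a psychometric function. A minimal sketch with a logistic form and a coarse grid-search maximum-likelihood fit (the grid ranges and the logistic shape are illustrative assumptions, not the authors' method):

```python
import math

def p_correct(v, mu, s):
    """2AFC psychometric function: 0.5 guessing floor with a logistic
    rise; the 75%-correct threshold sits exactly at v = mu."""
    return 0.5 + 0.5 / (1.0 + math.exp(-(v - mu) / s))

def fit_threshold(velocities, n_correct, n_trials):
    """Maximum-likelihood grid search for threshold mu and spread s
    (coarse grids here; a real fit would refine or use an optimizer)."""
    best, best_ll = None, -float("inf")
    for mu in (i * 0.01 for i in range(1, 300)):
        for s in (j * 0.01 for j in range(1, 100)):
            ll = 0.0
            for v, k, n in zip(velocities, n_correct, n_trials):
                p = min(max(p_correct(v, mu, s), 1e-9), 1.0 - 1e-9)
                ll += k * math.log(p) + (n - k) * math.log(1.0 - p)
            if ll > best_ll:
                best_ll, best = ll, (mu, s)
    return best
```

With peak velocity as the stimulus variable, `mu` is the discrimination threshold reported in the study.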
Transfer model of lead in soil-carrot (Daucus carota L.) system and food safety thresholds in soil.
Ding, Changfeng; Li, Xiaogang; Zhang, Taolin; Wang, Xingxiang
2015-09-01
Reliable empirical models describing lead (Pb) transfer in soil-plant systems are needed to improve soil environmental quality standards. A greenhouse experiment was conducted to develop soil-plant transfer models to predict Pb concentrations in carrot (Daucus carota L.). Soil thresholds for food safety were then derived inversely using the prediction model in view of the maximum allowable limit for Pb in food. The 2 most important soil properties that influenced carrot Pb uptake factor (ratio of Pb concentration in carrot to that in soil) were soil pH and cation exchange capacity (CEC), as revealed by path analysis. Stepwise multiple linear regression models were based on soil properties and the pseudo total (aqua regia) or extractable (0.01 M CaCl2 and 0.005 M diethylenetriamine pentaacetic acid) soil Pb concentrations. Carrot Pb contents were best explained by the pseudo total soil Pb concentrations in combination with soil pH and CEC, with the percentage of variation explained being up to 93%. The derived soil thresholds based on added Pb (total soil Pb with the geogenic background part subtracted) have the advantage of better applicability to soils with high natural background Pb levels. Validation of the thresholds against data from field trials and literature studies indicated that the proposed thresholds are reasonable and reliable. PMID:25904232
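The inverse derivation of soil thresholds from a soil-plant transfer regression can be sketched as follows. The coefficients below are hypothetical placeholders with the same functional form as such models, not the fitted values from this study:

```python
import math

# Hypothetical coefficients for a regression of the form used in such
# studies: log10(Pb_carrot) = A + B*pH + C*log10(CEC) + D*log10(Pb_soil)
A, B, C, D = 0.5, -0.25, -0.30, 0.95

def predict_carrot_pb(ph, cec, soil_pb):
    """Predicted carrot Pb (mg/kg) from soil pH, CEC, and total soil Pb."""
    return 10.0 ** (A + B * ph + C * math.log10(cec) + D * math.log10(soil_pb))

def soil_threshold(ph, cec, food_limit=0.1):
    """Invert the regression: the largest soil Pb concentration that
    keeps predicted carrot Pb at or below the food-safety limit."""
    return 10.0 ** ((math.log10(food_limit) - A - B * ph
                     - C * math.log10(cec)) / D)
```

The inversion guarantees that a soil at exactly the threshold predicts carrot Pb at exactly the food limit, and the threshold rises with pH because uptake falls.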
Technology Transfer Automated Retrieval System (TEKTRAN)
(Co)variance components for calving ease and stillbirth in US Holsteins were estimated using a single-trait threshold animal model and two different sets of data edits. Six sets of approximately 250,000 records each were created by randomly selecting herd codes without replacement from the data used...
ERIC Educational Resources Information Center
Popp, Sharon Osborn; Behrens, John T.; Ryan, Joseph M.; Hess, Robert K.
The Rasch model for ordered categories was applied to responses on a science attitude survey that uses a combined semantic differential and Likert-type scale format. Data were drawn from the Views about Science Survey for 1,300 high school students. Examination of category response function graphs and threshold estimates allowed classification of…
ERIC Educational Resources Information Center
Chen, Tina; Starns, Jeffrey J.; Rotello, Caren M.
2015-01-01
The 2-high-threshold (2HT) model of recognition memory assumes that test items result in distinct internal states: they are either detected or not, and the probability of responding at a particular confidence level that an item is "old" or "new" depends on the state-response mapping parameters. The mapping parameters are…
Predicting Bed Grain Size in Threshold Channels Using Lidar Digital Elevation Models
NASA Astrophysics Data System (ADS)
Snyder, N. P.; Nesheim, A. O.; Wilkins, B. C.; Edmonds, D. A.
2011-12-01
Over the past 20 years, researchers have developed GIS-based algorithms to extract channel networks and measure longitudinal profiles from digital elevation models (DEMs), and have used these to study stream morphology in relation to tectonics, climate and ecology. The accuracy of stream elevations from traditional DEMs (10-50 m pixels) is typically limited by the contour interval (3-20 m) of the rasterized topographic map source. This is a particularly severe limitation in low-relief watersheds, where 3 m of channel elevation change may occur over several km. Lidar DEMs (~1 m pixels) allow researchers to resolve channel elevation changes of ~0.5 m, enabling reach-scale calculations of gradient, which is the most important parameter for understanding channel processes at that scale. Lidar DEMs have the additional advantage of allowing users to make estimates of channel width. We present a process-based model that predicts median bed grain size in threshold gravel-bed channels from lidar slope and width measurements using the Shields and Manning equations. We compare these predictions to field grain size measurements in segments of three Maine rivers. Like many paraglacial rivers, these have longitudinal profiles characterized by relatively steep (gradient >0.002) and flat (gradient <0.0005) segments, with length scales of several km. This heterogeneity corresponds to strong variations in channel form, sediment supply, bed grain size, and aquatic habitat characteristics. The model correctly predicts bed sediment size within a factor of two in ~70% of the study sites. The model works best in single-thread channels with relatively low sediment supply, and poorly in depositional, multi-thread and/or fine (median grain size <20 mm) reaches. We evaluate the river morphology (using field and lidar measurements) in the context of the Parker et al. (2007) hydraulic geometry relations for single-thread gravel-bed rivers, and find correspondence in the locations where both
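The Shields-plus-Manning prediction described in this record can be sketched directly. Default values for Manning's n and the critical Shields number are illustrative assumptions:

```python
def bankfull_depth(q, w, s, n=0.035):
    """Flow depth from Manning's equation for a wide rectangular channel,
    q = (1/n) * w * h**(5/3) * s**(1/2), solved for h (SI units)."""
    return (n * q / (w * s ** 0.5)) ** 0.6

def threshold_d50(h, s, shields_crit=0.045, submerged_ratio=1.65):
    """Median grain size of a threshold channel from the Shields
    criterion: rho*g*h*s = shields_crit * (rho_s - rho) * g * D50."""
    return h * s / (shields_crit * submerged_ratio)
```

For example, a 1 m deep reach at gradient 0.002 yields D50 of roughly 27 mm, i.e. coarse gravel, with grain size scaling linearly in both depth and lidar-derived slope.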
Töllner, Kathrin; Twele, Friederike; Löscher, Wolfgang
2016-04-01
Resistance to antiepileptic drugs (AEDs) is a major problem in epilepsy therapy, so that development of more effective AEDs is an unmet clinical need. Several rat and mouse models of epilepsy with spontaneous difficult-to-treat seizures exist, but because testing of antiseizure drug efficacy is extremely laborious in such models, they are only rarely used in the development of novel AEDs. Recently, the use of acute seizure tests in epileptic rats or mice has been proposed as a novel strategy for evaluating novel AEDs for increased antiseizure efficacy. In the present study, we compared the effects of five AEDs (valproate, phenobarbital, diazepam, lamotrigine, levetiracetam) on the pentylenetetrazole (PTZ) seizure threshold in mice that were made epileptic by pilocarpine. Experiments were started 6 weeks after a pilocarpine-induced status epilepticus. At this time, control seizure threshold was significantly lower in epileptic than in nonepileptic animals. Unexpectedly, only one AED (valproate) was less effective in increasing seizure threshold in epileptic vs. nonepileptic mice, and this difference was restricted to doses of 200 and 300 mg/kg, whereas the difference disappeared at 400 mg/kg. All other AEDs exerted similar seizure threshold increases in epileptic and nonepileptic mice. Thus, induction of acute seizures with PTZ in mice pretreated with pilocarpine does not provide an effective and valuable surrogate method to screen drugs for antiseizure efficacy in a model of difficult-to-treat chronic epilepsy, as previously suggested from experiments with this approach in rats. PMID:26930359
Maudlin, P.J.; Davidson, R.F.; Henninger, R.J.
1990-09-01
A flow-stress constitutive model based on dislocation mechanics has been implemented in the EPIC2 and PINON continuum mechanics codes. This model provides a better understanding of the plastic deformation process for ductile materials by using an internal state variable called the mechanical threshold stress. This kinematic quantity tracks the evolution of the material's microstructure along some arbitrary strain, strain-rate, and temperature-dependent path using a differential form that balances dislocation generation and recovery processes. Given a value for the mechanical threshold stress, the flow stress is determined using either a thermal-activation-controlled or a drag-controlled kinetics relationship. We evaluated the performance of the Mechanical Threshold Stress (MTS) model in terms of accuracy and computational resources through a series of assessment problems chosen to exercise the model over a large range of strain rates and strains. Our calculations indicate that the more complicated MTS model is reasonable in terms of computational resources when compared with other models in common hydrocode use. In terms of accuracy, these simulations show that the MTS model is superior for problems containing mostly normal strain with shear strains less than 0.2 but perhaps not as accurate for problems that contain large amounts of shear strain. 29 refs., 33 figs., 9 tabs.
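The thermal-activation-controlled kinetics step mentioned here maps the mechanical threshold stress to a flow stress via an Arrhenius-type scaling factor. A sketch in the standard Follansbee-Kocks form, where every material constant below is an illustrative placeholder rather than a fitted value:

```python
import math

BOLTZMANN = 1.380649e-23  # J/K

def mts_flow_stress(sigma_hat, temp, strain_rate, sigma_a=40e6, mu=45e9,
                    b=2.5e-10, g0=1.6, rate0=1e7, p=2.0 / 3.0, q=1.0):
    """Flow stress from the mechanical threshold stress sigma_hat via a
    thermal-activation (Follansbee-Kocks type) scaling law: sigma_a is
    an athermal component, mu the shear modulus, b the Burgers vector,
    g0 a normalized activation energy.  All constants here are
    illustrative placeholders, not fitted material values."""
    arg = (BOLTZMANN * temp / (g0 * mu * b ** 3)) * math.log(rate0 / strain_rate)
    s = (1.0 - arg ** (1.0 / q)) ** (1.0 / p)   # 0 < s <= 1 scaling factor
    return sigma_a + s * sigma_hat
```

The scaling factor falls with temperature and rises with strain rate, so the flow stress always lies between the athermal stress and the full threshold stress.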
Guerra, J L L; Franke, D E; Blouin, D C
2006-12-01
Generalized mixed linear, threshold, and logistic sire models and Markov chain, Monte Carlo simulation procedures were used to estimate genetic parameters for calving rate and calf survival in a multibreed beef cattle population. Data were obtained from a 5-generation rotational crossbreeding study involving Angus, Brahman, Charolais, and Hereford (1969 to 1995). Gelbvieh and Simmental bulls sired terminal-cross calves from a sample of generation 5 cows. A total of 1,458 cows sired by 158 bulls had a mean calving rate of 78% based on 4,808 calving records. Ninety-one percent of 5,015 calves sired by 260 bulls survived to weaning. Mean heritability estimates and standard deviations for daughter calving rate from posterior distributions were 0.063 +/- 0.024, 0.150 +/- 0.049, and 0.130 +/- 0.047 for linear, threshold, and logistic models, respectively. For calf survival, mean heritability estimates and standard deviations from posterior distributions were 0.049 +/- 0.022, 0.160 +/- 0.058, and 0.190 +/- 0.078 from linear, threshold, and logistic models, respectively. When transformed to an underlying normal scale, linear sire, mixed model, heritability estimates were similar to threshold and logistic sire mixed model estimates. Posterior density distributions of estimated heritabilities from all models were normal. Spearman rank correlations between sire EPD across statistical models were greater than 0.97 for daughter calving rate and for calf survival. Sire EPD had similar ranges across statistical models for daughter calving rate and for calf survival. PMID:17093211
NASA Technical Reports Server (NTRS)
King, James; Nickling, William G.; Gillies, John A.
2005-01-01
The presence of nonerodible elements is well understood to reduce soil erosion by wind, but predicting the limits of their protection of the surface, and hence the erosion threshold, is complicated by the varying geometry, spatial organization, and density of the elements. The predictive capabilities of the most recent models for estimating wind driven particle fluxes are reduced because of the poor representation of the effectiveness of vegetation in reducing wind erosion. Two approaches have been taken to account for roughness effects on sediment transport thresholds. Marticorena and Bergametti (1995) in their dust emission model parameterize the effect of roughness on threshold with the assumption that there is a relationship between roughness density and the aerodynamic roughness length of a surface. Raupach et al. (1993) offer a different approach based on physical modeling of wake development behind individual roughness elements and the partition of the surface stress and the total stress over a roughened surface. A comparison between the models shows the partitioning approach to be a good framework to explain the effect of roughness on entrainment of sediment by wind. Both models provided very good agreement for wind tunnel experiments using solid objects on a nonerodible surface. However, the Marticorena and Bergametti (1995) approach displays a scaling dependency when the difference between the roughness length of the surface and the overall roughness length is too great, while the Raupach et al. (1993) model's predictions perform better owing to the incorporation of the roughness geometry and the alterations to the flow they can cause.
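The Raupach et al. (1993) partition reduces to a closed-form expression for how much roughness raises the threshold friction velocity. A sketch with typical literature parameter values (the defaults below are illustrative, not from this study):

```python
def threshold_ratio(lam, beta=90.0, sigma=1.0, m=0.5):
    """Raupach et al. (1993) drag partition: factor by which roughness
    elements of frontal-area density lam raise the threshold friction
    velocity of the surface.  beta is the element-to-surface drag
    coefficient ratio, sigma the basal-to-frontal area ratio, and m an
    empirical constant accounting for stress heterogeneity."""
    return ((1.0 - m * sigma * lam) * (1.0 + m * beta * lam)) ** 0.5
```

A bare surface (lam = 0) gives a ratio of 1, and sparse roughness already raises the threshold substantially, which is the sheltering effect discussed above.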
Shinomori, Keizo; Panorgias, Athanasios; Werner, John S
2016-03-01
Age-related changes in chromatic discrimination along dichromatic confusion lines were measured with the Cambridge Colour Test (CCT). One hundred and sixty-two individuals (16 to 88 years old) with normal Rayleigh matches were the major focus of this paper. An additional 32 anomalous trichromats classified by their Rayleigh matches were also tested. All subjects were screened to rule out abnormalities of the anterior and posterior segments. Thresholds on all three chromatic vectors measured with the CCT showed age-related increases. Protan and deutan vector thresholds increased linearly with age while the tritan vector threshold was described with a bilinear model. Analysis and modeling demonstrated that the nominal vectors of the CCT are shifted by senescent changes in ocular media density, and a method for correcting the CCT vectors is demonstrated. A correction for these shifts indicates that classification among individuals of different ages is unaffected. New vector thresholds for elderly observers and for all age groups are suggested based on calculated tolerance limits. PMID:26974943
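The bilinear (broken-stick) description of the tritan thresholds can be sketched as a grid search over candidate breakpoint ages with two independent line fits; this is a generic illustration, not the authors' fitting code:

```python
def ols(xs, ys):
    """Ordinary least-squares intercept and slope."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def bilinear_fit(xs, ys, candidate_breaks):
    """Broken-stick regression by grid search: fit separate lines below
    and above each candidate breakpoint, keep the split with the
    smallest total squared error."""
    def sse(a, b, pts):
        return sum((y - (a + b * x)) ** 2 for x, y in pts)
    best = None
    for c in candidate_breaks:
        lo = [(x, y) for x, y in zip(xs, ys) if x < c]
        hi = [(x, y) for x, y in zip(xs, ys) if x >= c]
        if len(lo) < 3 or len(hi) < 3:
            continue
        a1, b1 = ols([x for x, _ in lo], [y for _, y in lo])
        a2, b2 = ols([x for x, _ in hi], [y for _, y in hi])
        total = sse(a1, b1, lo) + sse(a2, b2, hi)
        if best is None or total < best[0]:
            best = (total, c, (a1, b1), (a2, b2))
    return best[1], best[2], best[3]
```

For tritan-like data that are flat until some age and rise thereafter, the recovered breakpoint marks the onset of the accelerated threshold increase.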
NASA Astrophysics Data System (ADS)
Bartlett, M. S.; McDonnell, J. J.; Porporato, A. M.
2013-12-01
Several components of ecohydrological systems are characterized by an interplay of stochastic inputs, finite capacity storage, and nonlinear, threshold-like losses, resulting in a complex partitioning of the rainfall input between the different basin scales. With the goal of more accurate predictions of rainfall partitioning and threshold effects in ecohydrology, we examine ecohydrological processes at the various scales, including canopy interception, soil storage with runoff/percolation, hillslope filling-spilling mechanisms, and the related groundwater recharge and baseflow contribution to streamflow. We apply a probabilistic approach to a hierarchical arrangement of cascading reservoirs that are representative of the components of the basin system. The analytical results of this framework help single out the key parameters controlling the partitioning of rainfall within the storage compartments of river basins. This theoretical framework is a useful learning tool for exploring the physical meaning of known thresholds in ecohydrology.
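The filling-spilling idea behind the cascading-reservoir framework can be sketched deterministically (a toy illustration: store names, the fixed loss fraction, and the routing order are assumptions, not the paper's analytical model):

```python
def cascade(rain, capacities, et_frac=0.1):
    """Route a rainfall series through cascading finite-capacity stores
    (e.g. canopy -> soil -> hillslope).  Each store spills to the next
    only once its capacity threshold is exceeded; a fixed fraction of
    storage is lost (ET/drainage) each step.  Returns basin outflow."""
    stores = [0.0] * len(capacities)
    outflow = []
    for r in rain:
        flux = r
        for i, cap in enumerate(capacities):
            stores[i] += flux
            flux = max(0.0, stores[i] - cap)    # threshold spill
            stores[i] = min(stores[i], cap)
            stores[i] *= 1.0 - et_frac          # losses from storage
        outflow.append(flux)
    return outflow
```

Small events are absorbed entirely by the upper stores while large events cascade through, which is exactly the threshold-like partitioning behavior the probabilistic treatment formalizes.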
Zemek, Allison; Garg, Rohit; Wong, Brian J. F.
2014-01-01
Objectives/Hypothesis: Characterizing the mechanical properties of structural cartilage grafts used in rhinoplasty is valuable because softer engineered tissues are more time- and cost-efficient to manufacture. The aim of this study is to quantitatively identify the threshold mechanical stability (e.g., Young’s modulus) of columellar, L-strut, and alar cartilage replacement grafts. Study Design: Descriptive, focus group survey. Methods: Ten mechanical phantoms of identical size (5 × 20 × 2.3 mm) and varying stiffness (0.36 to 0.85 MPa in 0.05 MPa increments) were made from urethane. A focus group of experienced rhinoplasty surgeons (n = 25, 5 to 30 years in practice) were asked to arrange the phantoms in order of increasing stiffness. Then, they were asked to identify the minimum acceptable stiffness that would still result in favorable surgical outcomes for three clinical applications: columellar, L-strut, and lateral crural replacement grafts. Available surgeons were tested again after 1 week to evaluate intra-rater consistency. Results: For each surgeon, the threshold stiffness for each clinical application differed from the threshold values derived by logistic regression by no more than 0.05 MPa (accuracy to within 10%). Specific thresholds were 0.56, 0.59, and 0.49 MPa for columellar, L-strut, and alar grafts, respectively. For comparison, human nasal septal cartilage is approximately 0.8 MPa. Conclusions: There was little inter- and intra-rater variation of the identified threshold values for adequate graft stiffness. The identified threshold values will be useful for the design of tissue-engineered or semisynthetic cartilage grafts for use in structural nasal surgery. PMID:20513022
ERIC Educational Resources Information Center
Gustafson, S. C.; Costello, C. S.; Like, E. C.; Pierce, S. J.; Shenoy, K. N.
2009-01-01
Bayesian estimation of a threshold time (hereafter simply threshold) for the receipt of impulse signals is accomplished given the following: 1) data, consisting of the number of impulses received in a time interval from zero to one and the time of the largest time impulse; 2) a model, consisting of a uniform probability density of impulse time…
NASA Astrophysics Data System (ADS)
Harter, Andrew K.; Lee, Tony E.; Joglekar, Yogesh N.
2016-06-01
Aubry-André-Harper lattice models, characterized by a reflection-asymmetric sinusoidally varying nearest-neighbor tunneling profile, are well known for their topological properties. We consider the fate of such models in the presence of balanced gain and loss potentials ±iγ located at reflection-symmetric sites. We predict that these models have a finite PT-breaking threshold only for specific locations of the gain-loss potential and uncover a hidden symmetry that is instrumental to the finite threshold strength. We also show that the topological edge states remain robust in the PT-symmetry-broken phase. Our predictions substantially broaden the possible experimental realizations of a PT-symmetric system.
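The notion of a PT-breaking threshold is easiest to see in the minimal two-site case rather than the full lattice model studied here: for a dimer with tunneling j and balanced gain-loss ±iγ, the spectrum is real up to γ = j and becomes a complex-conjugate pair beyond it.

```python
import cmath

def dimer_eigenvalues(j, gamma):
    """Eigenvalues +/-sqrt(j**2 - gamma**2) of the minimal PT-symmetric
    dimer H = [[1j*gamma, j], [j, -1j*gamma]]: real for gamma < j
    (PT-unbroken), a complex-conjugate pair past the threshold gamma = j."""
    d = cmath.sqrt(j * j - gamma * gamma)
    return d, -d

def pt_broken(j, gamma, tol=1e-12):
    """True once the gain-loss strength exceeds the PT-breaking threshold."""
    e1, _ = dimer_eigenvalues(j, gamma)
    return abs(e1.imag) > tol
```

In the lattice models of this paper the same diagnostic (an eigenvalue acquiring an imaginary part) is applied to the full non-Hermitian Hamiltonian, where the threshold depends on where the gain-loss pair sits.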
Wang, Chi-Jen; Liu, Da-Jiang; Evans, James W.
2015-04-28
Threshold versions of Schloegl’s model on a lattice, which involve autocatalytic creation and spontaneous annihilation of particles, can provide a simple prototype for discontinuous non-equilibrium phase transitions. These models are equivalent to so-called threshold contact processes. A discontinuous transition between populated and vacuum states can occur when a threshold of N ≥ 2 is selected for the minimum number, N, of neighboring particles enabling autocatalytic creation at an empty site. Fundamental open questions remain given the lack of a thermodynamic framework for analysis. For a square lattice with N = 2, we show that phase coexistence occurs not at a unique value but for a finite range of particle annihilation rate (the natural control parameter). This generic two-phase coexistence also persists when perturbing the model to allow spontaneous particle creation. Such behavior contrasts both the Gibbs phase rule for thermodynamic systems and also previous analysis for this model. We find metastability near the transition corresponding to a non-zero effective line tension, also contrasting previously suggested critical behavior. Mean-field-type analysis, extended to treat spatially heterogeneous states, further elucidates model behavior.
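The N = 2 threshold contact process described here is straightforward to simulate. A minimal kinetic Monte Carlo sketch (the specific update rates are illustrative simplifications, not the paper's parameterization):

```python
import random

def simulate(L=16, p_ann=0.1, steps=50000, threshold=2, seed=1):
    """Kinetic Monte Carlo sketch of a threshold (N = 2) contact process
    on an L x L periodic square lattice: an occupied site empties with
    probability p_ann per visit; an empty site is refilled only if at
    least `threshold` of its four neighbors are occupied.  Returns the
    final particle density (update rates here are illustrative)."""
    random.seed(seed)
    occ = [[1] * L for _ in range(L)]            # start fully populated
    for _ in range(steps):
        x, y = random.randrange(L), random.randrange(L)
        if occ[x][y]:
            if random.random() < p_ann:
                occ[x][y] = 0                    # spontaneous annihilation
        else:
            nn = (occ[(x + 1) % L][y] + occ[(x - 1) % L][y]
                  + occ[x][(y + 1) % L] + occ[x][(y - 1) % L])
            if nn >= threshold:
                occ[x][y] = 1                    # autocatalytic creation
    return sum(map(sum, occ)) / (L * L)
```

Sweeping `p_ann` exhibits the populated-versus-vacuum behavior; locating the coexistence window precisely requires the much more careful interface analysis the paper performs.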
Analytic model for the description of above-threshold ionization by an intense short laser pulse
NASA Astrophysics Data System (ADS)
Frolov, M. V.; Knyazeva, D. V.; Manakov, N. L.; Geng, Ji-Wei; Peng, Liang-You; Starace, Anthony F.
2014-06-01
We present an analytic model for the description of above-threshold ionization (ATI) of an atom by an intense, linearly polarized short laser pulse. Our treatment is based upon a description of ATI by an infinitely long train of short laser pulses whereupon we take the limit that the time interval between pulses becomes infinite. In the quasiclassical approximation, we provide detailed quantum-mechanical derivations, within the time-dependent effective range (TDER) model, of the closed-form formulas for the differential probability P (p) of ATI by an intense, short laser pulse that were presented briefly by Frolov et al. [Phys. Rev. Lett. 108, 213002 (2012), 10.1103/PhysRevLett.108.213002] and that were used to describe key features of the high-energy part of ATI spectra for H and He atoms in an intense, few-cycle laser pulse, using a phenomenological generalization of the physically transparent TDER results to the case of real atoms. Moreover, we extend these results here to the case of an electron bound initially in a p state; we also take into account multiple-return electron trajectories. The ATI amplitude in our approach is given by a coherent sum of partial amplitudes describing ionization by neighboring optical cycles near the peak of the intensity envelope of a short laser pulse. These results provide an analytical explanation of key features in short-pulse ATI spectra, such as the left-right asymmetry in the ionized electron angular distribution, the multiplateau structures, and both large-scale and fine-scale oscillation patterns resulting from quantum interferences of electron trajectories. Our results show that the shape of the ATI spectrum in the middle part of the ATI plateau is sensitive to the spatial symmetry of the initial bound state of the active electron. This sensitivity originates from the contributions of multiple-return electron trajectories. Our analytic results are shown to be in good agreement with results of numerical solutions of the
A Multinomial Model for Identifying Significant Pure-Tone Threshold Shifts
ERIC Educational Resources Information Center
Schlauch, Robert S.; Carney, Edward
2007-01-01
Purpose: Significant threshold differences on retest for pure-tone audiometry are often evaluated by application of ad hoc rules, such as a shift in a pure-tone average or in 2 adjacent frequencies that exceeds a predefined amount. Rules that are so derived do not consider the probability of observing a particular audiogram. Methods: A general…
This study will provide a general methodology for integrating threshold information from multiple species ecological metrics, allow for prediction of changes of alternative stable states, and provide a risk assessment tool that can be applied to adaptive management. The integr...
"Getting Stuck" in Analogue Electronics: Threshold Concepts as an Explanatory Model
ERIC Educational Resources Information Center
Harlow, A.; Scott, J.; Peter, M.; Cowie, B.
2011-01-01
Could the challenge of mastering threshold concepts be a potential factor that influences a student's decision to continue in electronics engineering? This was the question that led to a collaborative research project between educational researchers and the Faculty of Engineering in a New Zealand university. This paper deals exclusively with the…
NASA Technical Reports Server (NTRS)
Zoutendyk, J. A.; Smith, L. S.; Soli, G. A.; Thieberger, P.; Wegner, H. E.
1985-01-01
Single-Event Upset (SEU) response of a bipolar low-power Schottky-diode-clamped TTL static RAM has been observed using Br ions in the 100-240 MeV energy range and O ions in the 20-100 MeV range. These data complete the experimental verification of circuit-simulation SEU modeling for this device. The threshold for onset of SEU has been observed by the variation of energy, ion species and angle of incidence. The results obtained from the computer circuit-simulation modeling and experimental model verification demonstrate a viable methodology for modeling SEU in bipolar integrated circuits.
Evolution of the Stokes Wave Side-Band Instability, threshold modification of Tulin NLS Model
NASA Astrophysics Data System (ADS)
Shugan, Igor; Hwung, Hwung-Hweng; Yang, Ray-Yeng
2010-05-01
of the wave train evolution: wave instability, side-band asymmetry, and wave breaking effects. On the other hand, the continuous wave-breaking dissipation presumed in the model gives significantly overestimated values of wave attenuation in the later stages of wave propagation and cannot describe the wave modulation and restabilization at sufficiently long propagation distances. An adjusted dissipative model based on the Nonlinear Schrödinger Equation is suggested for an adequate description of the experimental data obtained. The sink/source terms on its right-hand side, which account for wave-breaking processes, correspond to the well-known Tulin (1996) model. The wave dissipation function includes a wave-steepness threshold function and is applied only in regions with active wave breaking. The permanent frequency downshift resulting from wave breaking and the post-breaking wave modulations described by the model show satisfactory quantitative correspondence with the results of experiments conducted in a super tank.
A Threshold Shear Force for Calcium Influx in an Astrocyte Model of Traumatic Brain Injury
Maneshi, Mohammad Mehdi; Sachs, Frederick
2015-01-01
Abstract Traumatic brain injury (TBI) refers to brain damage resulting from external mechanical force, such as a blast or crash. Our current understanding of TBI is derived mainly from in vivo studies that show measurable biological effects on neurons sampled after TBI. Little is known about the early responses of brain cells during stimuli and which features of the stimulus are most critical to cell injury. We generated defined shear stress in a microfluidic chamber using a fast pressure servo and examined the intracellular Ca2+ levels in cultured adult astrocytes. Shear stress increased intracellular Ca2+ depending on the magnitude, duration, and rise time of the stimulus. Square pulses with a fast rise time (∼2 ms) caused transient increases in intracellular Ca2+, but when the rise time was extended to 20 ms, the response was much less. The threshold for a response is a matrix of multiple parameters. Cells can integrate the effect of shear force from repeated challenges: A pulse train of 10 narrow pulses (11.5 dyn/cm2 and 10 ms wide) resulted in a 4-fold increase in Ca2+ relative to a single pulse of the same amplitude 100 ms wide. The Ca2+ increase was eliminated in Ca2+-free media, but was observed after depleting the intracellular Ca2+ stores with thapsigargin suggesting the need for a Ca2+ influx. The Ca2+ influx was inhibited by extracellular Gd3+, a nonspecific inhibitor of mechanosensitive ion channels, but it was not affected by the more specific inhibitor, GsMTx4. The voltage-gated channel blockers, nifedipine, diltiazem, and verapamil, were also ineffective. The data show that the mechanically induced Ca2+ influx commonly associated with neuron models for TBI is also present in astrocytes, and there is a viscoelastic/plastic coupling of shear stress to the Ca2+ influx. The site of Ca2+ influx has yet to be determined. PMID:25442327
NASA Astrophysics Data System (ADS)
Chiang, T. K.; Chen, M. L.
2007-03-01
Based on the fully two-dimensional (2D) Poisson's solution in both the silicon film and the insulator layer, a compact analytical threshold-voltage model that accounts for the fringing-field effect in short-channel symmetrical double-gate (SDG) MOSFETs has been developed. Exploiting the new model, a combined analysis of FIBL-enhanced short-channel effects and high-k gate dielectrics assesses their overall impact on SDG MOSFET scaling. It is found that, for the same equivalent oxide thickness, a gate insulator with a high-k dielectric constant, which maintains a larger characteristic length, allows less design space than SiO2 to sustain the same FIBL-induced threshold-voltage degradation.
Bajaj, Sanyam; Hung, Ting-Hsiang; Akyol, Fatih; Nath, Digbijoy; Rajan, Siddharth
2014-12-29
We report on the potential of high electron mobility transistors (HEMTs) consisting of high composition AlGaN channel and barrier layers for power switching applications. Detailed two-dimensional (2D) simulations show that threshold voltages in excess of 3 V can be achieved through the use of AlGaN channel layers. We also calculate the 2D electron gas mobility in AlGaN channel HEMTs and evaluate their power figures of merit as a function of device operating temperature and Al mole fraction in the channel. Our models show that power switching transistors with AlGaN channels would have comparable on-resistance to GaN-channel based transistors for the same operation voltage. The modeling in this paper shows the potential of high composition AlGaN as a channel material for future high threshold enhancement mode transistors.
Martínez, Brezo; Arenas, Francisco; Trilla, Alba; Viejo, Rosa M; Carreño, Francisco
2015-04-01
Species distribution models (SDM) are a useful tool for predicting species range shifts in response to global warming. However, they do not explore the mechanisms underlying biological processes, making it difficult to predict shifts outside the environmental gradient where the model was trained. In this study, we combine correlative SDMs and knowledge on physiological limits to provide more robust predictions. The thermal thresholds obtained in growth and survival experiments were used as proxies of the fundamental niches of two foundational marine macrophytes. The geographic projections of these species' distributions obtained using these thresholds and existing SDMs were similar in areas where the species are either absent-rare or frequent and where their potential and realized niches match, reaching consensus predictions. The cold-temperate foundational seaweed Himanthalia elongata was predicted to become extinct at its southern limit in northern Spain in response to global warming, whereas the occupancy of southern-lusitanic Bifurcaria bifurcata was expected to increase. Combined approaches such as this one may also highlight geographic areas where models disagree potentially due to biotic factors. Physiological thresholds alone tended to over-predict species prevalence, as they cannot identify absences in climatic conditions within the species' range of physiological tolerance or at the optima. Although SDMs tended to have higher sensitivity than threshold models, they may include regressions that do not reflect causal mechanisms, constraining their predictive power. We present a simple example of how combining correlative and mechanistic knowledge provides a rapid way to gain insight into a species' niche resulting in consistent predictions and highlighting potential sources of uncertainty in forecasted responses to climate change. PMID:24917488
Threshold analysis of the susceptible-infected-susceptible model on overlay networks
NASA Astrophysics Data System (ADS)
Wu, Qingchu; Zhang, Haifeng; Small, Michael; Fu, Xinchu
2014-07-01
In this paper, we study epidemic spreading on overlay networks in which n sets of links interconnect the same set of nodes. Using the microscopic Markov-chain approximation (MMA) approach, we establish the conditions for epidemic outbreak under two kinds of spreading mechanisms on such an overlay network: the concatenation case and the switching case. When a uniform infection rate is set in all the subnetworks, we find the epidemic threshold for the switching case is just n times as large as that of the concatenation case. We also find that an overlay network with a uniform infection rate can be considered as an equivalent (in the sense of epidemic dynamics and epidemic threshold) weighted network. To be specific, the concatenation case corresponds to an integer-weighted network, while the switching case corresponds to a fractional-weighted network. Interestingly, the time-varying unweighted network can be mapped into a static weighted network. Our analytic results exhibit good agreement with numerical simulations.
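Under the quenched mean-field picture behind the MMA approach, the SIS threshold of a static weighted network is beta_c = mu / lambda_max(W), where lambda_max is the largest eigenvalue of the weighted adjacency matrix W. A minimal numerical sketch of the n-fold relation between the two cases, using toy two-layer matrices rather than any network from the paper:

```python
import numpy as np

def sis_threshold(W, mu=1.0):
    # Quenched mean-field SIS epidemic threshold: beta_c = mu / lambda_max(W).
    lam_max = np.max(np.real(np.linalg.eigvals(W)))
    return mu / lam_max

# Two toy symmetric layers over the same four nodes (illustrative only).
A1 = np.array([[0, 1, 1, 0],
               [1, 0, 1, 0],
               [1, 1, 0, 1],
               [0, 0, 1, 0]], dtype=float)
A2 = np.array([[0, 0, 1, 1],
               [0, 0, 1, 1],
               [1, 1, 0, 0],
               [1, 1, 0, 0]], dtype=float)
n = 2  # number of subnetworks

W_concat = A1 + A2        # concatenation: integer-weighted equivalent network
W_switch = (A1 + A2) / n  # switching: fractional-weighted equivalent network

bc_concat = sis_threshold(W_concat)
bc_switch = sis_threshold(W_switch)  # exactly n times bc_concat
```

Because the switching weights are the concatenation weights divided by n, the leading eigenvalue scales by 1/n and the threshold by n, matching the relation stated in the abstract.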
NASA Astrophysics Data System (ADS)
Kirk-Lawlor, N. E.; Edwards, E. C.
2012-12-01
In many groundwater systems, the height of the water table must be above certain thresholds for some types of surface flow to exist. Examples of flows that depend on water table elevation include groundwater baseflow to river systems, groundwater flow to wetland systems, and flow to springs. Meeting many of the goals of sustainable water resource management requires maintaining these flows at certain rates. Water resource management decisions invariably involve weighing tradeoffs between different possible usage regimes and the economic consequences of potential management choices are an important factor in these tradeoffs. Policies based on sustainability may have a social cost from forgoing present income. This loss of income may be worth bearing, but should be well understood and carefully considered. Traditionally, the economic theory of groundwater exploitation has relied on the assumption of a single-cell or "bathtub" aquifer model, which offers a simple means to examine complex interactions between water user and hydrologic system behavior. However, such a model assumes a closed system and does not allow for the simulation of groundwater outflows that depend on water table elevation (e.g. baseflow, springs, wetlands), even though those outflows have value. We modify the traditional single-cell aquifer model by allowing for outflows when the water table is above certain threshold elevations. These thresholds behave similarly to holes in a bathtub, where the outflow is a positive function of the height of the water table above the threshold and the outflow is lost when the water table drops below the threshold. We find important economic consequences to this representation of the groundwater system. The economic value of services provided by threshold-dependent outflows (including non-market value), such as ecosystem services, can be incorporated. The value of services provided by these flows may warrant maintaining the water table at higher levels than would
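The "holes in a bathtub" picture can be sketched as a single-cell mass balance in which the outflow switches on only above a threshold head. All numbers below (recharge, pumping rates, threshold elevation, outflow coefficient) are hypothetical illustrations, not values from the study:

```python
def simulate_aquifer(pumping, steps=500, dt=1.0,
                     recharge=2.0, area=100.0, h0=10.0,
                     h_thresh=8.0, k_out=5.0):
    """Single-cell ('bathtub') aquifer with a threshold-dependent outflow:
    the outflow is proportional to the head above the threshold (a 'hole
    in the bathtub') and vanishes when the water table drops below it."""
    h = h0
    outflow = 0.0
    for _ in range(steps):
        outflow = k_out * max(0.0, h - h_thresh)  # spring/baseflow term
        h += dt * (recharge - pumping - outflow) / area
    return h, outflow

h_low, q_low = simulate_aquifer(pumping=1.0)    # light pumping: flow persists
h_high, q_high = simulate_aquifer(pumping=4.0)  # heavy pumping: flow is lost
```

Light pumping leaves the water table pinned just above the threshold, with the outflow balancing the surplus recharge; heavy pumping drives the head below the threshold and the spring/baseflow term, and any value it provides, is lost entirely.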
Variable-threshold optical proximity correction (OPC) models for high-performance 0.18-μm process
NASA Astrophysics Data System (ADS)
Liao, Hongmei; Palmer, Shane R.; Sadra, Kayvan
2000-07-01
The recent development of the lithographic resolution enhancement techniques of optical proximity correction (OPC) and phase-shift masks (PSM) enables printing critical dimension (CD) features that are significantly smaller than the exposure wavelength. In this paper, we present a variable-threshold OPC model that describes how a pattern configuration transfers to the wafer after resist and etch processes. This 0.18-micrometer CMOS technology utilizes isolation with pitches of active device regions below 0.5 micrometers. The effective gate length on silicon is in the range of 0.11 to 0.18 micrometers. The OPC model begins with Hopkins' formula for aerial image calculation and is tuned to fit the measured CD data using commercially available software. The OPC models are anchored at a set of selected CD data including linearity, line-end pullback, and linewidth as a function of pitch. It is found that the threshold values inferred from measured CD data vary approximately linearly with the slope of the aerial image. The accuracy of the model is illustrated by comparing the simulated contour from the OPC model with the measured SEM image. The implementation of OPC models at both the active and gate layers is achieved using two approaches: (1) optimizing the mask bias and the sizes of hammerheads and serifs via a rule-based approach; and (2) correcting the SRAM cell layouts with the OPC model. The OPC models developed have been successfully applied to 0.18-micrometer technology in a prototyping environment.
NASA Technical Reports Server (NTRS)
Walker, B. K.; Gai, E.
1978-01-01
A method for determining time-varying Failure Detection and Identification (FDI) thresholds for single sample decision functions is described in the context of a triplex system of inertial platforms. A cost function consisting of the probability of vehicle loss due to FDI decision errors is minimized. A discrete Markov model is constructed from which this cost can be determined as a function of the decision thresholds employed to detect and identify the first and second failures. Optimal thresholds are determined through the use of parameter optimization techniques. The application of this approach to threshold determination is illustrated for the Space Shuttle's inertial measurement instruments.
NASA Astrophysics Data System (ADS)
Cain, Clarence P.; Polhamus, Garrett D.; Roach, William P.; Stolarski, David J.; Schuster, Kurt J.; Stockton, Kevin; Rockwell, Benjamin A.; Chen, Bo; Welch, Ashley J.
2006-07-01
With the advent of such systems as the airborne laser and advanced tactical laser, high-energy lasers that use 1315-nm wavelengths in the near-infrared band will soon present a new laser safety challenge to armed forces and civilian populations. Experiments in nonhuman primates using this wavelength have demonstrated a range of ocular injuries, including corneal, lenticular, and retinal lesions as a function of pulse duration. American National Standards Institute (ANSI) laser safety standards have traditionally been based on experimental data, and there is scant data for this wavelength. We are reporting minimum visible lesion (MVL) threshold measurements using a porcine skin model for two different pulse durations and spot sizes for this wavelength. We also compare our measurements to results from our model based on the heat transfer equation and rate process equation, together with actual temperature measurements on the skin surface using a high-speed infrared camera. Our MVL-ED50 thresholds for long pulses (350 µs) at 24-h postexposure are measured to be 99 and 83 J/cm² for spot sizes of 0.7 and 1.3 mm diam, respectively. Q-switched laser pulses of 50 ns have a lower threshold of 11 J/cm² for a 5-mm-diam top-hat laser pulse.
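The "rate process equation" referenced here is conventionally an Arrhenius damage integral, Omega = ∫ A·exp(-Ea/(R·T(t))) dt, with Omega ≥ 1 taken as the thermal-damage threshold. A sketch using the classic Henriques-Moritz skin constants for A and Ea; the study's own fitted parameters may differ:

```python
import math

def damage_integral(temps_K, dt, A=3.1e98, Ea=6.28e5, R=8.314):
    """Arrhenius rate-process damage integral:
    Omega = sum of A * exp(-Ea / (R * T)) * dt over the exposure.
    Omega >= 1 is the conventional threshold for thermal damage.
    A (1/s) and Ea (J/mol) default to the Henriques-Moritz skin values."""
    return sum(A * math.exp(-Ea / (R * T)) * dt for T in temps_K)

# A hypothetical 350-us rectangular temperature history, sampled at 1 us.
dt = 1e-6
baseline = damage_integral([310.15] * 350, dt)           # body temperature
heated = damage_integral([273.15 + 75.0] * 350, dt)      # 75 C transient
```

At body temperature the integral is negligible, while a brief 75 C excursion pushes Omega past 1, which is why short pulses with modest temperature rises sit near the lesion threshold.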
Li, Kai; Chen, Wenyuan; Zhang, Weiping
2011-01-01
A beam's multiple-contact mode, characterized by multiple discrete contact regions, non-uniform stopper heights, an irregular contact sequence, a seesaw-like effect, indirect interaction between different stoppers, and a complex coupling relationship between loads and deformation, is studied. A novel analysis method and a novel high-speed calculation model are developed for the multiple-contact mode under mechanical and electrostatic loads, without limitations on stopper height and distribution, provided the beam has a stepped or curved shape. Accurate values of deflection, contact load, contact region, and so on are obtained directly, with subsequent validation by CoventorWare. A new concept design for a high-g threshold microaccelerometer based on the multiple-contact mode is presented, featuring multiple acceleration thresholds in one sensitive component and, consequently, a small sensor size. PMID:22163897
A threshold-voltage model for small-scaled GaAs nMOSFET with stacked high-k gate dielectric
NASA Astrophysics Data System (ADS)
Chaowen, Liu; Jingping, Xu; Lu, Liu; Hanhan, Lu; Yuan, Huang
2016-02-01
A threshold-voltage model for a stacked high-k gate dielectric GaAs MOSFET is established by solving the two-dimensional Poisson's equation in the channel while considering short-channel, DIBL, and quantum effects. The simulated results are in good agreement with Silvaco TCAD data, confirming the correctness and validity of the model. Using the model, the impacts of the structural and physical parameters of the stacked high-k gate dielectric on the threshold-voltage shift and the temperature characteristics of the threshold voltage are investigated. The results show that the stacked gate dielectric structure can effectively suppress the fringing-field and DIBL effects and improve the threshold and temperature characteristics; on the other hand, the influence of temperature on the threshold voltage is overestimated if the quantum effect is ignored. Project supported by the National Natural Science Foundation of China (No. 61176100).
Modeling on oxide dependent 2DEG sheet charge density and threshold voltage in AlGaN/GaN MOSHEMT
NASA Astrophysics Data System (ADS)
Panda, J.; Jena, K.; Swain, R.; Lenka, T. R.
2016-04-01
We have developed a physics-based analytical model for the calculation of the threshold voltage, two-dimensional electron gas (2DEG) density, and surface potential of AlGaN/GaN metal oxide semiconductor high electron mobility transistors (MOSHEMTs). The developed model includes important parameters such as the polarization charge densities at the oxide/AlGaN and AlGaN/GaN interfaces, interfacial oxide defect charges, and donor charges at the surface of the AlGaN barrier. The effects of two different gate oxides (Al2O3 and HfO2) are compared for the performance evaluation of the proposed MOSHEMT. MOSHEMTs with an Al2O3 dielectric have the advantage of a significant increase in 2DEG density, up to 1.2 × 10¹³ cm⁻² as the oxide thickness increases to 10 nm, compared with HfO2-dielectric MOSHEMTs. The surface potential of the HfO2-based device decreases from 2 to −1.6 eV within 10 nm of oxide thickness, whereas for the Al2O3-based device a sharp transition of the surface potential occurs from 2.8 to −8.3 eV. Varying the oxide thickness and gate metal work function of the proposed MOSHEMT shifts the threshold voltage from negative to positive, realizing enhancement-mode operation. To further validate the model, the device is simulated in Silvaco Technology Computer Aided Design (TCAD), showing good agreement with the proposed model results. The developed calculations can serve as the basis for a complete physics-based 2DEG sheet-charge-density and threshold-voltage model for GaN MOSHEMT devices for performance analysis.
Klein Entink, Rinke H; Remington, Benjamin C; Blom, W Marty; Rubingh, Carina M; Kruizinga, Astrid G; Baumert, Joseph L; Taylor, Steve L; Houben, Geert F
2014-08-01
For most allergenic foods, limited availability of threshold dose information within the population restricts the advice on action levels of unintended allergenic foods which should trigger advisory labeling on packaged foods. The objective of this paper is to provide guidance for selecting an optimal sample size for threshold dosing studies for major allergenic foods and to identify factors influencing the accuracy of estimation. A simulation study was performed to evaluate the effects of sample size and dosing schemes on the accuracy of the threshold distribution curve. The relationships between sample size, dosing scheme and the employed statistical distribution on the one hand and accuracy of estimation on the other hand were obtained. It showed that the largest relative gains in accuracy are obtained when sample size increases from N=20 to N=60. Moreover, it showed that the EuroPrevall dosing scheme is a useful start, but that it may need revision for a specific allergen as more data become available, because a proper allocation of the dosing steps is important. The results may guide risk assessors in minimum sample sizes for new studies and in the allocation of proper dosing schemes for allergens in provocation studies. PMID:24815821
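A Monte-Carlo sketch of the kind of simulation described: individual threshold doses are drawn from an assumed log-normal distribution (the mu and sigma below are hypothetical, not fitted allergen parameters), the 10% eliciting dose (ED10) is re-estimated from each simulated sample, and the mean relative error is compared across sample sizes:

```python
import numpy as np

rng = np.random.default_rng(42)

def ed10_error(n_subjects, n_sim=400, mu=1.0, sigma=1.2):
    """Mean relative error of the estimated ED10 (10th percentile of a
    log-normal threshold-dose distribution) as a function of sample size."""
    z10 = -1.2816  # standard-normal 10th-percentile quantile
    true_ed10 = np.exp(mu + sigma * z10)
    errs = []
    for _ in range(n_sim):
        log_doses = rng.normal(mu, sigma, n_subjects)
        est = np.exp(log_doses.mean() + log_doses.std(ddof=1) * z10)
        errs.append(abs(est - true_ed10) / true_ed10)
    return float(np.mean(errs))

err20 = ed10_error(20)  # accuracy at N=20 subjects
err60 = ed10_error(60)  # accuracy at N=60 subjects
```

Comparing N=20 with N=60 reproduces the qualitative finding that the largest relative gains in accuracy come in that range; further increases in N yield diminishing returns.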
Deviation from threshold model in ultrafast laser ablation of graphene at sub-micron scale
Gil-Villalba, A.; Xie, C.; Salut, R.; Furfaro, L.; Giust, R.; Jacquot, M.; Lacourt, P. A.; Dudley, J. M.; Courvoisier, F.
2015-08-10
We investigate a method to measure ultrafast laser ablation threshold with respect to spot size. We use structured complex beams to generate a pattern of craters in CVD graphene with a single laser pulse. A direct comparison between beam profile and SEM characterization allows us to determine the dependence of ablation probability on spot-size, for crater diameters ranging between 700 nm and 2.5 μm. We report a drastic decrease of ablation probability when the crater diameter is below 1 μm which we interpret in terms of free-carrier diffusion.
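The threshold model that the measured sub-micron craters deviate from is conventionally the Gaussian-beam relation D² = 2·w0²·ln(F0/Fth) (Liu's method): ablation occurs wherever local fluence exceeds the threshold, and no crater forms below it. A sketch of that ideal model, with illustrative numbers rather than the measured graphene values:

```python
import math

def crater_diameter(F_peak, F_th, w0):
    """Ideal threshold model for a Gaussian beam (Liu's method):
    D^2 = 2 * w0^2 * ln(F_peak / F_th); no ablation below threshold."""
    if F_peak <= F_th:
        return 0.0
    return math.sqrt(2.0 * w0**2 * math.log(F_peak / F_th))

w0 = 1.0e-6   # 1-um beam waist (hypothetical)
F_th = 0.2    # threshold fluence in J/cm^2 (illustrative only)

d_hi = crater_diameter(2.0, F_th, w0)   # well above threshold
d_lo = crater_diameter(0.1, F_th, w0)   # below threshold: no crater
```

The abstract's result, a drastic drop in ablation probability below about 1 um, is precisely a departure from this deterministic picture, attributed to free-carrier diffusion out of the small heated spot.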
Gallop, Robert J; Mode, Charles J; Sleeman, Candace K
2002-01-01
When comparing the performance of a stochastic model of an epidemic at two points in a parameter space, a threshold is said to have been crossed when at one point an epidemic develops with positive probability; while at the other there is a tendency for an epidemic to become extinct. The approach used to find thresholds in this paper was to embed a system of ordinary non-linear differential equations in a stochastic process, accommodating the formation and dissolution of marital partnerships in a heterosexual population, extra-marital sexual contacts, and diseases such as HIV/AIDS with stages. A symbolic representation of the Jacobian matrix of this system was derived. To determine whether this matrix was stable or non-stable at a particular parameter point, the Jacobian was evaluated at a disease-free equilibrium and its eigenvalues were computed. The stability or non-stability of the matrix was then determined by checking if all real parts of the eigenvalues were negative. By writing software to repeat this process for a selected set of points in the parameter space, it was possible to develop search engines for finding points in the parameter space where thresholds were crossed. The results of a set of Monte Carlo simulation experiments were reported which suggest that, by combining the stochastic and deterministic paradigms within a single formulation, it was possible to obtain more informative interpretations of simulation experiments than if attention were confined solely to either paradigm. PMID:11965260
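The stability test described, evaluating the Jacobian at the disease-free equilibrium and checking the signs of the real parts of its eigenvalues, can be sketched with a much simpler stand-in model. The SIR-with-demography system below replaces the paper's partnership-formation system but applies the same criterion:

```python
import numpy as np

def jacobian_dfe(beta, gamma, mu):
    """Jacobian of a simple SIR model with demography, evaluated at the
    disease-free equilibrium (S=1, I=0); a stand-in for the paper's
    partnership model, which uses the same stability criterion.
    dS/dt = mu - beta*S*I - mu*S ; dI/dt = beta*S*I - (gamma + mu)*I."""
    return np.array([[-mu, -beta],
                     [0.0, beta - (gamma + mu)]])

def epidemic_possible(beta, gamma, mu):
    """Threshold test: an outbreak is possible iff some eigenvalue of the
    disease-free-equilibrium Jacobian has a positive real part."""
    eig = np.linalg.eigvals(jacobian_dfe(beta, gamma, mu))
    return bool(np.max(eig.real) > 0)

# Sweeping beta locates the threshold crossing at R0 = beta/(gamma+mu) = 1.
below = epidemic_possible(beta=0.2, gamma=0.3, mu=0.01)  # R0 < 1: extinction
above = epidemic_possible(beta=0.5, gamma=0.3, mu=0.01)  # R0 > 1: outbreak
```

A search engine like the one in the paper would repeat this evaluation over a grid of parameter points and record where the boolean flips, tracing out the threshold surface.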
Stein, C; Millan, M J; Herz, A
1988-10-01
Unilateral intraplantar injection of Freund's complete adjuvant (FCA) into one hindpaw of rats led to a localized inflammation that became apparent within 12 hours and reached its peak between 2 and 3 weeks. FCA-treated rats displayed a diminished rate of body weight gain, a reduction of food and water intake and a disruption of circadian temperature regulation, as well as decreased locomotor activity and pronounced scratching behavior in the open field. Paw pressure thresholds were reduced only in inflamed paws. Contralateral, noninflamed paws showed comparable thresholds to those of control animals. Tail-flick and tail-pressure responses were not different from controls. These data suggest that FCA-treated animals experience increased noxious input from the inflamed limb and that changes in thresholds to acutely applied nociceptive stimuli are due to a peripheral hypersensitivity of inflamed tissue. The present condition resembles most closely a state of acute inflammatory pain. The term "chronic pain" in its strict sense is not appropriate in this model. PMID:3244721
Gruber, Matthew J.; Bader, Kenneth B.; Holland, Christy K.
2014-01-01
Ultrasound contrast agents (UCAs) can be employed to nucleate cavitation to achieve desired bioeffects, such as thrombolysis, in therapeutic ultrasound applications. Effective methods of enhancing thrombolysis with ultrasound have been examined at low frequencies (<1 MHz) and low amplitudes (<0.5 MPa). The objective of this study was to determine cavitation thresholds for two UCAs exposed to 120-kHz ultrasound. A commercial ultrasound contrast agent (Definity®) and echogenic liposomes were investigated to determine the acoustic pressure threshold for ultraharmonic (UH) and broadband (BB) generation using an in vitro flow model perfused with human plasma. Cavitation emissions were detected using two passive receivers over a narrow frequency bandwidth (540–900 kHz) and a broad frequency bandwidth (0.54–1.74 MHz). UH and BB cavitation thresholds occurred at the same acoustic pressure (0.3 ± 0.1 MPa, peak to peak) and were found to depend on the sensitivity of the cavitation detector but not on the nucleating contrast agent or ultrasound duty cycle. PMID:25234874
Model selection based on FDR-thresholding optimizing the area under the ROC-curve.
Graf, Alexandra C; Bauer, Peter
2009-01-01
We evaluate variable selection by multiple tests controlling the false discovery rate (FDR) to build a linear score for prediction of clinical outcome in high-dimensional data. Quality of prediction is assessed by the receiver operating characteristic curve (ROC) for prediction in independent patients. Thus we try to combine both goals: prediction and controlled structure estimation. We show that the FDR-threshold which provides the ROC-curve with the largest area under the curve (AUC) varies largely over the different parameter constellations not known in advance. Hence, we investigated a new cross validation procedure based on the maximum rank correlation estimator to determine the optimal selection threshold. This procedure (i) allows choosing an appropriate selection criterion, (ii) provides an estimate of the FDR close to the true FDR and (iii) is simple and computationally feasible for rather moderate to small sample sizes. Low estimates of the cross validated AUC (the estimates generally being positively biased) and large estimates of the cross validated FDR may indicate a lack of sufficiently prognostic variables and/or too small sample sizes. The method is applied to an oncology dataset. PMID:19572830
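A sketch of the two building blocks combined in the paper: Benjamini-Hochberg selection of variables at an FDR level q, and the AUC of a resulting score via the rank-sum identity. The p-values and scores below are toy inputs, not the oncology data:

```python
import numpy as np

def bh_select(pvals, q):
    """Benjamini-Hochberg: indices of variables kept at FDR level q."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    m = len(p)
    thresh = q * np.arange(1, m + 1) / m
    passed = p[order] <= thresh
    k = int(np.max(np.nonzero(passed)[0])) + 1 if passed.any() else 0
    return order[:k]

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank-sum identity
    (assumes no tied scores)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    ranks = scores.argsort().argsort() + 1.0
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

kept = bh_select([0.001, 0.004, 0.02, 0.2, 0.5, 0.9], q=0.05)
score_auc = auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0])
```

The paper's cross-validation then varies q, rebuilding the score from the kept variables each time, and picks the q whose cross-validated AUC is largest.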
Greene, Earl A.; LaMotte, Andrew E.; Cullinan, Kerri-Ann
2005-01-01
The U.S. Geological Survey, in cooperation with the U.S. Environmental Protection Agency's Regional Vulnerability Assessment Program, has developed a set of statistical tools to support regional-scale, ground-water quality and vulnerability assessments. The Regional Vulnerability Assessment Program's goals are to develop and demonstrate approaches to comprehensive, regional-scale assessments that effectively inform managers and decision-makers as to the magnitude, extent, distribution, and uncertainty of current and anticipated environmental risks. The U.S. Geological Survey is developing and exploring the use of statistical probability models to characterize the relation between ground-water quality and geographic factors in the Mid-Atlantic Region. Available water-quality data obtained from U.S. Geological Survey National Water-Quality Assessment Program studies conducted in the Mid-Atlantic Region were used in association with geographic data (land cover, geology, soils, and others) to develop logistic-regression equations that use explanatory variables to predict the presence of a selected water-quality parameter exceeding a specified management concentration threshold. The resulting logistic-regression equations were transformed to determine the probability, P(X), of a water-quality parameter exceeding a specified management threshold. Additional statistical procedures modified by the U.S. Geological Survey were used to compare the observed values to model-predicted values at each sample point. In addition, procedures to evaluate the confidence of the model predictions and estimate the uncertainty of the probability value were developed and applied. The resulting logistic-regression models were applied to the Mid-Atlantic Region to predict the spatial probability of nitrate concentrations exceeding specified management thresholds. These thresholds are usually set or established by regulators or managers at National or local levels. At management thresholds of
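The transformation from a fitted logistic-regression equation to an exceedance probability P(X) is the standard inverse-logit. The intercept, coefficients, and predictor names below are hypothetical illustrations, not the study's fitted values:

```python
import math

def exceedance_probability(intercept, coefs, x):
    """Logistic-regression probability that a water-quality parameter
    exceeds a management threshold: P(X) = 1 / (1 + exp(-(b0 + b.x)))."""
    z = intercept + sum(b * xi for b, xi in zip(coefs, x))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical predictors: fraction of agricultural land cover, scaled well depth.
b0, b = -2.0, [3.5, -1.0]
p_farmland = exceedance_probability(b0, b, [0.8, 0.2])  # farmland-heavy site
p_forest = exceedance_probability(b0, b, [0.1, 0.2])    # forested site
```

With coefficients of this sign, a farmland-heavy site yields a much higher predicted probability of nitrate exceeding the threshold than a forested one, the kind of spatial contrast the regional maps display.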
Sutou, Shizuyo
2015-01-01
The linear no-threshold model (LNT) was recommended in 1956, with abandonment of the traditional threshold dose-response for genetic risk assessment. Adoption of LNT by the International Commission on Radiological Protection (ICRP) became the standard for radiation regulation worldwide. The ICRP recommends a dose limit of 1 mSv/year for the public, which is too low and which terrorizes innocent people. Indeed, LNT arose mainly from the Life Span Study (LSS) of atomic bomb survivors. The LSS, which asserts linear dose-response and no threshold, is challenged mainly on three points. 1) Radiation doses were underestimated by half because of disregard for major residual radiation, resulting in cancer risk overestimation. 2) The dose and dose-rate effectiveness factor (DDREF) of 2 is used, but the actual DDREF is estimated as 16, resulting in cancer risk overestimation by several times. 3) Adaptive response (hormesis) is observed in leukemia and solid cancer cases, consistently contradicting the linearity of LNT. Drastic reduction of cancer risk moves the dose-response curve close to the control line, allowing the setting of a threshold. Living organisms have been evolving for 3.8 billion years under radiation exposure, naturally acquiring various defense mechanisms such as DNA repair mechanisms, apoptosis, and immune response. The failure of LNT lies in the neglect of carcinogenesis and these biological mechanisms. Obstinate application of LNT continues to cause tremendous human, social, and economic losses. The 60-year-old LNT must be rejected to establish a new scientific knowledge-based system. PMID:26521869
Endometrial cancer and antidepressants: A nationwide population-based study.
Lin, Chiao-Fan; Chan, Hsiang-Lin; Hsieh, Yi-Hsuan; Liang, Hsin-Yi; Chiu, Wei-Che; Huang, Kuo-You; Lee, Yena; McIntyre, Roger S; Chen, Vincent Chin-Hung
2016-07-01
To our knowledge, the association between antidepressant exposure and endometrial cancer has not been previously explored. Herein, we aim to investigate the association between antidepressant prescription, including novel antidepressants, and the risk for endometrial cancer in a population-based study. Data for the analysis were derived from the National Health Insurance Research Database. We identified 8392 cases with a diagnosis of endometrial cancer and 82,432 matched controls. A conditional logistic regression model was used, adjusting for potentially confounding variables (e.g., comorbid psychiatric diseases, comorbid physical diseases, and other medications). Risk for endometrial cancer in the population-based study sample was categorized by, and assessed as a function of, antidepressant prescription and cumulative dosage. We report no association between endometrial cancer incidence and antidepressant prescription, including selective serotonin reuptake inhibitors (adjusted odds ratio [OR] = 0.98; 95% confidence interval [CI], 0.84-1.15) and serotonin norepinephrine reuptake inhibitors (adjusted OR = 1.14; 95% CI, 0.76-1.71). We also did not identify an association between higher cumulative doses of antidepressant prescription and endometrial cancer. There was no association between antidepressant prescription and endometrial cancer. PMID:27442640
Applications of threshold models and the weighted bootstrap for Hungarian precipitation data
NASA Astrophysics Data System (ADS)
Varga, László; Rakonczai, Pál; Zempléni, András
2016-05-01
This paper presents applications of the peaks-over-threshold methodology for both the univariate and the recently introduced bivariate case, combined with a novel bootstrap approach. We compare the proposed bootstrap methods to the more traditional profile likelihood. We have investigated 63 years of European Climate Assessment daily precipitation data for five Hungarian grid points, first separately for the summer and winter months, then aiming to detect possible changes by investigating 20-year moving windows. We show that significant changes can be observed in both the univariate and the bivariate cases, with the most recent period being the most dangerous in several cases, as some return values have increased substantially. We illustrate these effects by bivariate coverage regions.
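A minimal sketch of the peaks-over-threshold step combined with a percentile bootstrap, on a tiny invented precipitation series (the paper fits generalized Pareto models and uses a weighted bootstrap, neither of which is reproduced here):

```python
import random
import statistics

def peaks_over_threshold(series, u):
    """Return the excesses over threshold u (the 'peaks')."""
    return [x - u for x in series if x > u]

def bootstrap_ci(excesses, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap CI for the mean excess."""
    rng = random.Random(seed)
    means = sorted(
        statistics.fmean(rng.choices(excesses, k=len(excesses)))
        for _ in range(n_boot)
    )
    return means[int(alpha / 2 * n_boot)], means[int((1 - alpha / 2) * n_boot) - 1]

# Hypothetical daily precipitation (mm); values and threshold are illustrative.
precip = [0.0, 3.2, 12.5, 0.8, 25.1, 7.4, 31.0, 2.2, 18.9, 44.3, 5.5, 27.8]
excesses = peaks_over_threshold(precip, u=15.0)
lo, hi = bootstrap_ci(excesses)
```

In practice the excesses would be fitted with a generalized Pareto distribution and the bootstrap applied to its parameters or return levels.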
Wang, Yang; Weng, George J.; Meguid, Shaker A.; Hamouda, Abdel Magid
2014-05-21
A continuum model that possesses several desirable features of the electrical conduction process in carbon-nanotube (CNT) based nanocomposites is developed. Three basic elements are included: (i) a percolation threshold, (ii) interface effects, and (iii) tunneling-assisted interfacial conductivity. We approach the first through the selection of an effective medium theory. We approach the second by introducing a diminishing layer of interface with an interfacial conductivity to build a 'thinly coated' CNT. The third is introduced through the observation that interface conductivity can be enhanced by electron tunneling, which in turn is facilitated by the formation of CNT networks. We treat this last issue in a continuum fashion by taking network formation as a statistical process that can be represented by Cauchy's probability density function. The outcome is a simple yet widely useful model that can simultaneously capture all these fundamental characteristics. It is demonstrated that, without considering the interface effect, the predicted conductivity would be too high, and that, without accounting for the additional contribution from tunneling-assisted interfacial conductivity, the predicted conductivity beyond the percolation threshold would be too low. It is with the consideration of all three elements that the theory can fully account for the experimentally measured data. We further use the developed model to demonstrate that, despite the anisotropy of the intrinsic CNT conductivity, it is the axial component along the CNT direction that dominates the overall conductivity. The theory is also shown to be capable, even with a totally insulating matrix, of delivering non-zero conductivity beyond the percolation threshold.
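A toy numerical sketch of the three elements, assuming an illustrative power-law percolation scaling modulated by a Cauchy CDF for the tunneling-assisted contribution (all parameter values and the functional form are hypothetical, not the paper's effective-medium equations):

```python
import math

def cauchy_cdf(x, x0, gamma):
    """Cauchy CDF, standing in for the statistical description of CNT
    network formation used in the paper."""
    return 0.5 + math.atan((x - x0) / gamma) / math.pi

def composite_conductivity(phi, phi_c=0.01, t=2.0, sigma0=1e4,
                           x0=0.02, gamma=0.005):
    """Illustrative conductivity vs. CNT volume fraction phi: zero below
    the percolation threshold phi_c, power-law scaling above it, scaled
    by a Cauchy-CDF tunneling factor. All values are hypothetical."""
    if phi <= phi_c:
        return 0.0
    return sigma0 * (phi - phi_c) ** t * cauchy_cdf(phi, x0, gamma)
```

The Cauchy factor suppresses conductivity just above the threshold (few complete networks) and saturates once networks are well formed.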
NASA Astrophysics Data System (ADS)
Galant, Grzegorz; Zaleśny, Jarosław; Lisak, Mietek; Berczyński, Paweł; Berczyński, Stefan
2011-05-01
An analytical model, based on purely differential equations, of the nonlinear dynamics of two plasma modes driven resonantly by high-energy ions near the instability threshold is presented here. The well-known integro-differential model of Berk and Breizman (BB), extended to the case of two plasma modes, is simplified here to a system of two coupled nonlinear differential equations of fifth order. The effects of the Krook, diffusion and dynamical friction (drag) relaxation processes are considered, whereas shifts in frequency and wavenumber between the modes are neglected. In spite of these simplifications, the main features of the dynamics of the two plasma modes are retained. The numerical solutions of the model equations show competition between the two modes for survival, oscillations, chaotic regimes and 'blow-up' behavior, similar to the BB model.
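The abstract does not reproduce the fifth-order equations; as a generic toy (not the BB-derived system), a two-mode competition model with invented coefficients illustrates the "competition for survival" behavior, where the initially stronger mode suppresses the other:

```python
# Toy sketch only: a Lotka-Volterra-style competition between two mode
# amplitudes, with cross-coupling stronger than self-coupling so that
# only one mode survives. Coefficients are invented for illustration.
def step(a1, a2, dt=0.001, gamma=1.0, c11=1.0, c12=1.5, c21=1.5, c22=1.0):
    """One Euler step for two competing mode amplitudes a1, a2."""
    da1 = a1 * (gamma - c11 * a1 - c12 * a2)
    da2 = a2 * (gamma - c21 * a1 - c22 * a2)
    return a1 + dt * da1, a2 + dt * da2

a1, a2 = 0.10, 0.08  # slightly asymmetric initial amplitudes
for _ in range(50_000):  # integrate to t = 50
    a1, a2 = step(a1, a2)
```

Here the mode with the larger initial amplitude grows to saturation while the other decays, one of the qualitative outcomes the paper reports.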
NASA Astrophysics Data System (ADS)
Anisimov, Oleg; Kokorev, Vasiliy; Reneva, Svetlana; Shiklomanov, Nikolai
2010-05-01
Numerous efforts have been made to assess the environmental impacts of changing climate in permafrost regions using mathematical models. Despite significant improvements in the representation of individual sub-systems, such as permafrost, vegetation, snow and hydrology, even the most comprehensive models do not replicate the coupled non-linear interactions between them that lead to threshold-driven changes. Observations indicate that ecosystems may change dramatically, rapidly, and often irreversibly, reaching a fundamentally different state once they pass a critical threshold. The key to understanding permafrost threshold phenomena is interaction with other environmental factors that are very likely to change in response to climate warming. One such factor is vegetation. Vegetation control over the thermal state of the underlying ground is two-fold. Firstly, canopies differ in albedo, which affects the radiation balance at the soil surface. Secondly, depending on biome composition, the vegetation canopy may have a different thermal conductivity, which governs the heat fluxes between soil and atmosphere. There are clear indications, based on ground observations and remote sensing, that vegetation has already changed in response to climatic warming, in agreement with the results of manipulations at experimental plots involving artificial warming and CO2 fertilization. Under sustained warming, lower vegetation (mosses, lichens) is gradually replaced by shrubs. Mosses have a strong thermally insulating effect in summer, which is why their retreat enhances permafrost warming. Taller shrubs accumulate snow that further warms permafrost in winter. Permafrost remains unchanged as long as the responding vegetation intercepts and mitigates the climate change signal. Beyond a certain threshold, enhanced abundance and growth of taller vegetation leads to abrupt permafrost changes. Changes in hydrology, i.e. soil wetting or drying, may have a similar effect on permafrost. Wetting increases soil
Zhou, Yi-Biao; Chen, Yue; Liang, Song; Song, Xiu-Xia; Chen, Geng-Xin; He, Zhong; Cai, Bin; Yihuo, Wu-Li; He, Zong-Gui; Jiang, Qing-Wu
2016-01-01
Schistosomiasis remains a serious public health issue in many tropical countries, with more than 700 million people at risk of infection. In China, a national integrated control strategy, aiming at blocking its transmission, has been carried out throughout endemic areas since 2005. A longitudinal study was conducted to determine the effects of different intervention measures on the transmission dynamics of S. japonicum in three study areas and the data were analyzed using a multi-host model. The multi-host model was also used to estimate the threshold of Oncomelania snail density for interrupting schistosomiasis transmission based on the longitudinal data as well as data from the national surveillance system for schistosomiasis. The data showed a continuous decline in the risk of human infection and the multi-host model fit the data well. The 25th, 50th and 75th percentiles, and the mean of estimated thresholds of Oncomelania snail density below which schistosomiasis transmission cannot be sustained were 0.006, 0.009, 0.028 and 0.020 snails/0.11 m², respectively. The study results could help develop specific strategies of schistosomiasis control and elimination tailored to the local situation for each endemic area. PMID:27535177
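The reported quartiles and mean are plain summaries over the set of estimated thresholds; a sketch with invented per-site estimates (not the study's data):

```python
import statistics

# Invented per-site threshold estimates (snails / 0.11 m^2); these are
# illustrative values, not the study's data.
thresholds = [0.004, 0.005, 0.007, 0.009, 0.011, 0.025, 0.031, 0.060]

q1, q2, q3 = statistics.quantiles(thresholds, n=4)  # 25th, 50th, 75th
mean = statistics.fmean(thresholds)
```

As in the study, a right-skewed set of estimates makes the mean fall between the median and the 75th percentile.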
Chua, Yansong; Morrison, Abigail; Helias, Moritz
2015-01-01
Modeling the layer 5 pyramidal neuron as a system of three connected isopotential compartments, the soma, proximal, and distal compartment, with calcium spike dynamics in the distal compartment following first-order kinetics, we are able to reproduce in-vitro experimental results which demonstrate the involvement of calcium spikes in action potential generation. To explore how calcium spikes affect the neuronal output in vivo, we emulate in-vivo-like conditions by embedding the neuron model in a regime of low background fluctuations with occasional large synchronous inputs. In such a regime, a full calcium spike is triggered only by the synchronous events, in a threshold-like manner, and has a stereotypical waveform. Hence, in this regime, we are able to replace the calcium dynamics with a simpler threshold-triggered current of fixed waveform, which is amenable to analytical treatment. We obtain analytically the mean somatic membrane potential excursion due to a calcium spike being triggered in the fluctuating regime. Our analytical form, which accounts for the covariance between conductances and the membrane potential, agrees better with simulation results than a naive first-order approximation. PMID:26283954
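A minimal sketch of the simplification described above, replacing calcium-spike dynamics with a threshold-triggered current of fixed waveform injected into a single leaky compartment (all parameter values and the alpha-function shape are hypothetical, not the paper's model):

```python
import math

# Hypothetical parameters for a single leaky somatic compartment.
DT = 0.1            # time step (ms)
TAU_M = 10.0        # somatic membrane time constant (ms)
E_L = -70.0         # resting potential (mV)
CA_THRESHOLD = 2.0  # distal drive needed to trigger the waveform

def ca_waveform(t_since, amp=5.0, tau=20.0):
    """Fixed alpha-function waveform standing in for a calcium spike."""
    return amp * (t_since / tau) * math.exp(1.0 - t_since / tau)

def simulate(distal_drive, t_end=100.0):
    """Euler-integrate the somatic potential; trigger the stereotyped
    waveform once, when the distal drive first crosses CA_THRESHOLD."""
    v, t, trigger_t = E_L, 0.0, None
    trace = []
    while t < t_end:
        if trigger_t is None and distal_drive(t) > CA_THRESHOLD:
            trigger_t = t
        i_ca = ca_waveform(t - trigger_t) if trigger_t is not None else 0.0
        v += DT / TAU_M * (E_L - v + i_ca)
        trace.append(v)
        t += DT
    return trace

trace_quiet = simulate(lambda t: 0.0)                               # no input
trace_burst = simulate(lambda t: 3.0 if 20.0 <= t <= 25.0 else 0.0)  # synchronous event
```

Only the suprathreshold event produces the stereotyped depolarization; the quiet trace stays at rest, mirroring the threshold-like triggering in the abstract.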
The future of population-based postmarket drug risk assessment: a regulator's perspective.
Hammad, T A; Neyarapally, G A; Iyasu, S; Staffa, J A; Dal Pan, G
2013-09-01
The US Food and Drug Administration emphasizes the role of regulatory science in the fulfillment of its mission to promote and protect public health and foster innovation. With respect to the evaluation of drug effects in the real world, regulatory science plays an important role in drug risk assessment and management. This article discusses opportunities and challenges with population-based drug risk assessment as well as related regulatory science knowledge gaps in the following areas: (i) population-based data sources and methods to evaluate drug safety issues; (ii) evidence-based thresholds to account for uncertainty in postmarket data; (iii) approaches to optimize the integration and interpretation of evidence from different sources; and (iv) approaches to evaluate the real-world impact of regulatory decisions. Regulators should continue the ongoing dialogue with multiple stakeholders to strengthen regulatory safety science and address these and other critical knowledge gaps. PMID:23739537
Population-based absolute risk estimation with survey data.
Kovalchik, Stephanie A; Pfeiffer, Ruth M
2014-04-01
Absolute risk is the probability that a cause-specific event occurs in a given time interval in the presence of competing events. We present methods to estimate population-based absolute risk from a complex survey cohort that can accommodate multiple exposure-specific competing risks. The hazard function for each event type consists of an individualized relative risk multiplied by a baseline hazard function, which is modeled nonparametrically or parametrically with a piecewise exponential model. An influence method is used to derive a Taylor-linearized variance estimate for the absolute risk estimates. We introduce novel measures of the cause-specific influences that can guide modeling choices for the competing event components of the model. To illustrate our methodology, we build and validate cause-specific absolute risk models for cardiovascular and cancer deaths using data from the National Health and Nutrition Examination Survey. Our applications demonstrate the usefulness of survey-based risk prediction models for predicting health outcomes and quantifying the potential impact of disease prevention programs at the population level. PMID:23686614
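For the special case of constant cause-specific hazards, absolute risk in the presence of one competing event has a closed form; a sketch (this is the textbook competing-risks formula, not the paper's survey-weighted estimator with influence-function variances):

```python
import math

def absolute_risk(h_event, h_competing, t):
    """Absolute risk of the event of interest by time t when both the
    event and the competing event have constant (exponential) hazards:
    P(event first, by t) = h1/(h1+h2) * (1 - exp(-(h1+h2)*t))."""
    h_total = h_event + h_competing
    return (h_event / h_total) * (1.0 - math.exp(-h_total * t))

# Hypothetical hazards (per year): cancer death 0.01, cardiovascular 0.02.
risk_cancer = absolute_risk(0.01, 0.02, 10.0)
risk_cvd = absolute_risk(0.02, 0.01, 10.0)
```

The two cause-specific risks sum to the overall probability of any event by time t, which is the defining constraint of a competing-risks decomposition.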
NASA Astrophysics Data System (ADS)
Andrist, Ruben S.; Wootton, James R.; Katzgraber, Helmut G.
2015-04-01
Current approaches for building quantum computing devices focus on two-level quantum systems which nicely mimic the concept of a classical bit, albeit enhanced with additional quantum properties. However, rather than artificially limiting the number of states to two, the use of d-level quantum systems (qudits) could provide advantages for quantum information processing. Among other merits, it has recently been shown that multilevel quantum systems can offer increased stability to external disturbances. In this study we demonstrate that topological quantum memories built from qudits, also known as Abelian quantum double models, exhibit a substantially increased resilience to noise. That is, even when taking into account the multitude of errors possible for multilevel quantum systems, topological quantum error-correction codes employing qudits can sustain a larger error rate than their two-level counterparts. In particular, we find strong numerical evidence that the thresholds of these error-correction codes are given by the hashing bound. Considering the significantly increased error thresholds attained, this might well outweigh the added complexity of engineering and controlling higher-dimensional quantum systems.
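The hashing bound mentioned above can be evaluated numerically. Assuming depolarizing noise in which each of the d²-1 nontrivial errors occurs with probability p/(d²-1) (a standard formulation; the paper's noise models may differ), the bound is the p at which the entropy of the error distribution reaches log2(d):

```python
import math

def hashing_threshold(d, iters=100):
    """Hashing-bound error threshold for depolarizing noise on a d-level
    system: the p solving h(p) + p*log2(d^2 - 1) = log2(d), where each
    of the d^2 - 1 nontrivial errors has probability p/(d^2 - 1)."""
    target = math.log2(d)

    def error_entropy(p):
        return (-p * math.log2(p) - (1 - p) * math.log2(1 - p)
                + p * math.log2(d * d - 1))

    # error_entropy is increasing on (0, 1 - 1/d^2), so bisection works.
    lo, hi = 1e-9, 1.0 - 1.0 / (d * d)
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if error_entropy(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

For d = 2 this recovers the familiar ~18.9% depolarizing hashing threshold for qubits, and the threshold grows with d, consistent with the increased resilience the study reports.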