Sample records for negative exponential model

  1. A new cellular automata model of traffic flow with negative exponential weighted look-ahead potential

    NASA Astrophysics Data System (ADS)

    Ma, Xiao; Zheng, Wei-Fan; Jiang, Bao-Shan; Zhang, Ji-Ye

    2016-10-01

    With the development of traffic systems, some issues such as traffic jams become more and more serious. Efficient traffic flow theory is needed to guide the overall controlling, organizing and management of traffic systems. On the basis of the cellular automata model and the traffic flow model with look-ahead potential, a new cellular automata traffic flow model with negative exponential weighted look-ahead potential is presented in this paper. By introducing the negative exponential weighting coefficient into the look-ahead potential and endowing the potential of vehicles closer to the driver with a greater coefficient, the modeling process is more suitable for the driver’s random decision-making process which is based on the traffic environment that the driver is facing. The fundamental diagrams for different weighting parameters are obtained by using numerical simulations which show that the negative exponential weighting coefficient has an obvious effect on high density traffic flux. The complex high density non-linear traffic behavior is also reproduced by numerical simulations. Project supported by the National Natural Science Foundation of China (Grant Nos. 11572264, 11172247, 11402214, and 61373009).
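    A toy sketch of the weighting idea: cells ahead of the driver are weighted by a negative exponential factor, so that vehicles closer to the driver dominate the look-ahead potential. The weighting parameter and cell layout below are illustrative, not the paper's calibration.

```python
import numpy as np

def lookahead_potential(occupancy, lam=0.5):
    """Negative-exponential-weighted look-ahead potential (toy version):
    occupancy[i] is 1 if the i-th cell ahead is occupied, and nearer
    cells receive exponentially larger weights exp(-lam * i)."""
    i = np.arange(len(occupancy))
    return float(np.sum(np.exp(-lam * i) * np.asarray(occupancy, dtype=float)))

near = lookahead_potential([1, 0, 0, 0])   # vehicle in the nearest cell
far = lookahead_potential([0, 0, 0, 1])    # same vehicle three cells farther
```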

  2. Square Root Graphical Models: Multivariate Generalizations of Univariate Exponential Families that Permit Positive Dependencies

    PubMed Central

    Inouye, David I.; Ravikumar, Pradeep; Dhillon, Inderjit S.

    2016-01-01

    We develop Square Root Graphical Models (SQR), a novel class of parametric graphical models that provides multivariate generalizations of univariate exponential family distributions. Previous multivariate graphical models (Yang et al., 2015) did not allow positive dependencies for the exponential and Poisson generalizations. However, in many real-world datasets, variables clearly have positive dependencies. For example, the airport delay time in New York—modeled as an exponential distribution—is positively related to the delay time in Boston. With this motivation, we give an example of our model class derived from the univariate exponential distribution that allows for almost arbitrary positive and negative dependencies with only a mild condition on the parameter matrix—a condition akin to the positive definiteness of the Gaussian covariance matrix. Our Poisson generalization allows for both positive and negative dependencies without any constraints on the parameter values. We also develop parameter estimation methods using node-wise regressions with ℓ1 regularization and likelihood approximation methods using sampling. Finally, we demonstrate our exponential generalization on a synthetic dataset and a real-world dataset of airport delay times. PMID:27563373

  3. Shift-Invariant Image Reconstruction of Speckle-Degraded Images Using Bispectrum Estimation

    DTIC Science & Technology

    1990-05-01

    process with the requisite negative exponential pdf. I call this model the Negative Exponential Model (NEM). The NEM flowchart is seen in Figure 6. [OCR fragments of figure captions: statistical histograms and phase; truth object speckled via the NEM; histogram of speckle.]

  4. Adult Age Differences and the Role of Cognitive Resources in Perceptual–Motor Skill Acquisition: Application of a Multilevel Negative Exponential Model

    PubMed Central

    Kennedy, Kristen M.; Rodrigue, Karen M.; Lindenberger, Ulman; Raz, Naftali

    2010-01-01

    The effects of advanced age and cognitive resources on the course of skill acquisition are unclear, and discrepancies among studies may reflect limitations of data analytic approaches. We applied a multilevel negative exponential model to skill acquisition data from 80 trials (four 20-trial blocks) of a pursuit rotor task administered to healthy adults (19–80 years old). The analyses conducted at the single-trial level indicated that the negative exponential function described performance well. Learning parameters correlated with measures of task-relevant cognitive resources on all blocks except the last and with age on all blocks after the second. Thus, age differences in motor skill acquisition may evolve in 2 phases: In the first, age differences are collinear with individual differences in task-relevant cognitive resources; in the second, age differences orthogonal to these resources emerge. PMID:20047985
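    As an illustration of the model family used here, a negative exponential learning curve can be fitted to trial-level data with standard nonlinear least squares. This is a minimal single-subject sketch with made-up parameter values, not the multilevel model of the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def neg_exp_learning(t, asym, start, rate):
    """Negative exponential learning curve: performance rises from
    `start` toward the asymptote `asym` at exponential `rate`."""
    return asym + (start - asym) * np.exp(-rate * t)

t = np.arange(80)                                        # 80 trials, as in the study
y = neg_exp_learning(t, asym=0.9, start=0.2, rate=0.08)  # noise-free synthetic data

params, _ = curve_fit(neg_exp_learning, t, y, p0=[1.0, 0.0, 0.1])
asym_hat, start_hat, rate_hat = params
```

In a multilevel setting the three parameters would additionally be allowed to vary across subjects.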

  5. Power law versus exponential state transition dynamics: application to sleep-wake architecture.

    PubMed

    Chu-Shore, Jesse; Westover, M Brandon; Bianchi, Matt T

    2010-12-02

    Despite the common experience that interrupted sleep has a negative impact on waking function, the features of human sleep-wake architecture that best distinguish sleep continuity versus fragmentation remain elusive. In this regard, there is growing interest in characterizing sleep architecture using models of the temporal dynamics of sleep-wake stage transitions. In humans and other mammals, the state transitions defining sleep and wake bout durations have been described with exponential and power law models, respectively. However, sleep-wake stage distributions are often complex, and distinguishing between exponential and power law processes is not always straightforward. Although mono-exponential distributions are distinct from power law distributions, multi-exponential distributions may in fact resemble power laws by appearing linear on a log-log plot. To characterize the parameters that may allow these distributions to mimic one another, we systematically fitted multi-exponential-generated distributions with a power law model, and power law-generated distributions with multi-exponential models. We used the Kolmogorov-Smirnov method to investigate goodness of fit for the "incorrect" model over a range of parameters. The "zone of mimicry" of parameters that increased the risk of mistakenly accepting power law fitting resembled empiric time constants obtained in human sleep and wake bout distributions. Recognizing this uncertainty in model distinction impacts interpretation of transition dynamics (self-organizing versus probabilistic), and the generation of predictive models for clinical classification of normal and pathological sleep architecture.
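    The mimicry can be illustrated deterministically: on a log-log grid, the survival function of a multi-exponential mixture with well-spread time constants is far closer to a straight line (i.e., to a power law) than a mono-exponential is. The scales and weights below are illustrative.

```python
import numpy as np

x = np.logspace(0, 2, 50)                 # bout durations from 1 to 100

def r2_loglog(surv):
    """R^2 of a straight-line fit to log(survival) versus log(x):
    near 1 means the curve looks power-law-like on a log-log plot."""
    u, v = np.log(x), np.log(surv)
    coef = np.polyfit(u, v, 1)
    ss_res = np.sum((v - np.polyval(coef, u)) ** 2)
    ss_tot = np.sum((v - v.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Equal-weight mixture of three exponentials with geometrically spread scales
mix = (np.exp(-x / 1.0) + np.exp(-x / 10.0) + np.exp(-x / 100.0)) / 3.0
mono = np.exp(-x / 10.0)                  # a single exponential

r2_mix, r2_mono = r2_loglog(mix), r2_loglog(mono)
```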

  6. A demographic study of the exponential distribution applied to uneven-aged forests

    Treesearch

    Jeffrey H. Gove

    2016-01-01

    A demographic approach based on a size-structured version of the McKendrick-Von Foerster equation is used to demonstrate a theoretical link between the population size distribution and the underlying vital rates (recruitment, mortality and diameter growth) for the population of individuals whose diameter distribution is negative exponential. This model supports the...
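    The negative exponential diameter distribution implies de Liocourt's classic result for uneven-aged stands: tree counts in successive diameter classes decline by a constant quotient q = exp(alpha * h). A sketch with illustrative parameter values:

```python
import numpy as np

# Negative exponential ("reverse-J") diameter distribution n(d) = k*exp(-alpha*d)
alpha, k, h = 0.1, 200.0, 5.0             # decay rate, scale, class width (cm)
edges = np.arange(0.0, 60.0, h)

# Trees per diameter class: the integral of n(d) over each class
counts = (k / alpha) * (np.exp(-alpha * edges) - np.exp(-alpha * (edges + h)))

# Ratio of counts in adjacent classes is the constant de Liocourt quotient
q = counts[:-1] / counts[1:]
```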

  7. Bidirectional-Compounding Effects of Rumination and Negative Emotion in Predicting Impulsive Behavior: Implications for Emotional Cascades.

    PubMed

    Selby, Edward A; Kranzler, Amy; Panza, Emily; Fehling, Kara B

    2016-04-01

    Influenced by chaos theory, the emotional cascade model proposes that rumination and negative emotion may promote each other in a self-amplifying cycle that increases over time. Accordingly, exponential-compounding effects may better describe the relationship between rumination and negative emotion when they occur in impulsive persons, and predict impulsive behavior. Forty-seven community and undergraduate participants who reported frequent engagement in impulsive behaviors monitored their ruminative thoughts and negative emotion multiple times daily for two weeks using digital recording devices. Hypotheses were tested using cross-lagged mixed model analyses. Findings indicated that rumination predicted subsequent elevations in rumination that lasted over extended periods of time. Rumination and negative emotion predicted increased levels of each other at subsequent assessments, and exponential functions for these associations were supported. Results also supported a synergistic effect between rumination and negative emotion, predicting larger elevations in subsequent rumination and negative emotion than when one variable alone was elevated. Finally, there were synergistic effects of rumination and negative emotion in predicting number of impulsive behaviors subsequently reported. These findings are consistent with the emotional cascade model in suggesting that momentary rumination and negative emotion progressively propagate and magnify each other over time in impulsive people, promoting impulsive behavior. © 2014 Wiley Periodicals, Inc.

  8. Adjusting for overdispersion in piecewise exponential regression models to estimate excess mortality rate in population-based research.

    PubMed

    Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard

    2016-10-01

    In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including quasi-likelihood, robust standard error estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed the presence of significant inherent overdispersion (p-value <0.001). However, the flexible piecewise exponential model showed the smallest overdispersion parameter (3.2, versus 21.3 for the non-flexible piecewise exponential models). We showed that there were no major differences between methods. However, using flexible piecewise regression modelling, with either a quasi-likelihood or robust standard errors, was the best approach, as it deals with both overdispersion due to model misspecification and true or inherent overdispersion.
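    A minimal sketch of an overdispersion check on count data: the Pearson chi-square statistic divided by its degrees of freedom is near 1 for Poisson counts and exceeds 1 under overdispersion, here simulated with a negative binomial of the same mean. This is a simplified diagnostic with the mean treated as known, not the regression-based score test of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu = 5000, 4.0

def pearson_dispersion(y, mu):
    """Pearson chi-square / degrees of freedom: ~1 for Poisson counts,
    >1 when the variance exceeds the mean (overdispersion)."""
    return float(np.sum((y - mu) ** 2 / mu) / (len(y) - 1))

y_pois = rng.poisson(mu, n)
# Negative binomial with the same mean 4 but variance mu + mu^2/size = 12
size = 2.0
y_nb = rng.negative_binomial(size, size / (size + mu), n)

d_pois = pearson_dispersion(y_pois, mu)   # close to 1
d_nb = pearson_dispersion(y_nb, mu)       # clearly above 1
```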

  9. Simple, accurate formula for the average bit error probability of multiple-input multiple-output free-space optical links over negative exponential turbulence channels.

    PubMed

    Peppas, Kostas P; Lazarakis, Fotis; Alexandridis, Antonis; Dangakis, Kostas

    2012-08-01

    In this Letter we investigate the error performance of multiple-input multiple-output free-space optical communication systems employing intensity modulation/direct detection and operating over strong atmospheric turbulence channels. Atmospheric-induced strong turbulence fading is modeled using the negative exponential distribution. For the considered system, an approximate yet accurate analytical expression for the average bit error probability is derived and an efficient method for its numerical evaluation is proposed. Numerically evaluated and computer simulation results are further provided to demonstrate the validity of the proposed mathematical analysis.
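    A toy Monte Carlo sketch of averaging over negative exponential turbulence: irradiance is drawn from a unit-mean exponential distribution and a conditional bit error probability is averaged over it. The Gaussian-Q conditional form used below is an assumption for illustration only; the paper derives an analytical expression for the actual IM/DD system.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Negative exponential turbulence: irradiance I ~ Exp with unit mean
I = rng.exponential(1.0, 200_000)

def avg_ber(snr, I):
    """Monte Carlo average BER, assuming (for illustration) a conditional
    Gaussian-Q error probability Pb(I) = Q(sqrt(snr) * I)."""
    return float(np.mean(norm.sf(np.sqrt(snr) * I)))

ber_low, ber_high = avg_ber(1.0, I), avg_ber(10.0, I)
```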

  10. Flow of 3D Eyring-Powell fluid by utilizing Cattaneo-Christov heat flux model and chemical processes over an exponentially stretching surface

    NASA Astrophysics Data System (ADS)

    Hayat, Tanzila; Nadeem, S.

    2018-03-01

    This paper examines three dimensional Eyring-Powell fluid flow over an exponentially stretching surface with heterogeneous-homogeneous chemical reactions. A new model of heat flux suggested by Cattaneo and Christov is employed to study the properties of the thermal relaxation time. From the present analysis we observe that there is an inverse relationship between temperature and thermal relaxation time. The temperature in the Cattaneo-Christov heat flux model is lower than in the classical Fourier model. In this paper the three dimensional Cattaneo-Christov heat flux model over an exponentially stretching surface is calculated for the first time in the literature. For negative values of the temperature exponent, the temperature profile first rises to its maximum value and then gradually declines to zero, which shows the occurrence of the Sparrow-Gregg hill (SGH) phenomenon. Also, for higher values of the strength of the reaction parameters, the concentration profile decreases.

  11. Exponential Models of Legislative Turnover. [and] The Dynamics of Political Mobilization, I: A Model of the Mobilization Process, II: Deductive Consequences and Empirical Application of the Model. Applications of Calculus to American Politics. [and] Public Support for Presidents. Applications of Algebra to American Politics. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Units 296-300.

    ERIC Educational Resources Information Center

    Casstevens, Thomas W.; And Others

    This document consists of five units, all dealing with applications of mathematics to American politics. The first three involve applications of calculus; the last two, applications of algebra. The first module is geared to teach a student how to: 1) compute estimates of the value of the parameters in negative exponential models; and draw…

  12. Human population and atmospheric carbon dioxide growth dynamics: Diagnostics for the future

    NASA Astrophysics Data System (ADS)

    Hüsler, A. D.; Sornette, D.

    2014-10-01

    We analyze the growth rates of human population and of atmospheric carbon dioxide by comparing the relative merits of two benchmark models, the exponential law and the finite-time-singular (FTS) power law. The latter results from positive feedbacks, either direct or mediated by other dynamical variables, as shown in our presentation of a simple endogenous macroeconomic dynamical growth model describing the growth dynamics of coupled processes involving human population (labor in economic terms), capital and technology (proxied by CO2 emissions). Human population in the context of our energy intensive economies constitutes arguably the most important underlying driving variable of the content of carbon dioxide in the atmosphere. Using some of the best databases available, we perform empirical analyses confirming that the human population on Earth grew super-exponentially until the mid-1960s, followed by a decelerated sub-exponential growth, with a tendency to plateau at just an exponential growth in the last decade with an average growth rate of 1.0% per year. In contrast, we find that the content of carbon dioxide in the atmosphere accelerated super-exponentially until 1990, with a transition to a progressive deceleration since then, with an average growth rate of approximately 2% per year in the last decade. To go back to CO2 atmosphere contents equal to or smaller than the level of 1990, as has been the broadly advertised goal of international treaties since 1990, requires herculean changes: from a dynamical point of view, the approximately exponential growth must not only turn to negative acceleration but also to negative velocity to reverse the trend.
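    The two benchmark models can be distinguished on clean data: under an exponential law, log A is linear in t, while under a finite-time-singular power law A(t) = B (tc - t)^(-m), log A is linear in log(tc - t). A deterministic sketch with illustrative parameters:

```python
import numpy as np

t = np.linspace(0.0, 0.9, 50)
tc, m = 1.0, 1.0
A = (tc - t) ** (-m)            # synthetic finite-time-singular growth

def sse_linear(x, y):
    """Residual sum of squares of a straight-line fit of y on x."""
    coef = np.polyfit(x, y, 1)
    return float(np.sum((y - np.polyval(coef, x)) ** 2))

sse_exp = sse_linear(t, np.log(A))               # exponential hypothesis
sse_fts = sse_linear(np.log(tc - t), np.log(A))  # FTS power-law hypothesis
```

Here the FTS fit is essentially exact while the exponential fit leaves clear curvature in the residuals.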

  13. The Application of Various Nonlinear Models to Describe Academic Growth Trajectories: An Empirical Analysis Using Four-Wave Longitudinal Achievement Data from a Large Urban School District

    ERIC Educational Resources Information Center

    Shin, Tacksoo

    2012-01-01

    This study introduced various nonlinear growth models, including the quadratic conventional polynomial model, the fractional polynomial model, the Sigmoid model, the growth model with negative exponential functions, the multidimensional scaling technique, and the unstructured growth curve model. It investigated which growth models effectively…

  14. Mean-Variance portfolio optimization by using non constant mean and volatility based on the negative exponential utility function

    NASA Astrophysics Data System (ADS)

    Soeryana, Endang; Halim, Nurfadhlina Bt Abdul; Sukono, Rusyaman, Endang; Supian, Sudradjat

    2017-03-01

    Investors in stocks also face the issue of risk, because daily stock prices fluctuate. To minimize the level of risk, investors usually form an investment portfolio. Establishing a portfolio consisting of several stocks is intended to obtain the optimal composition of the investment portfolio. This paper discusses Mean-Variance optimization of a stock portfolio using a non-constant mean and volatility, based on the negative exponential utility function. The non-constant mean is analyzed using Autoregressive Moving Average (ARMA) models, while the non-constant volatility is analyzed using Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models. The optimization process is performed using the Lagrangian multiplier technique. As a numerical illustration, the method is used to analyze some stocks in Indonesia. The expected result is to obtain the proportion of investment in each stock analyzed.
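    Under the negative exponential utility U(W) = -exp(-aW) with normally distributed returns, the problem reduces to maximizing mu'w - (a/2) w' Sigma w subject to the budget constraint 1'w = 1, which the Lagrangian multiplier technique solves in closed form. The risk aversion, means and covariances below are illustrative:

```python
import numpy as np

a = 3.0                                  # risk-aversion coefficient
mu = np.array([0.10, 0.07, 0.05])        # expected returns (illustrative)
Sigma = np.array([[0.09, 0.01, 0.00],    # return covariance matrix
                  [0.01, 0.04, 0.01],
                  [0.00, 0.01, 0.02]])

ones = np.ones_like(mu)
Sinv = np.linalg.inv(Sigma)
# First-order condition mu - a*Sigma*w - lam*1 = 0 with 1'w = 1 gives:
lam = (ones @ Sinv @ mu - a) / (ones @ Sinv @ ones)
w = Sinv @ (mu - lam * ones) / a         # optimal portfolio weights
```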

  15. Intravoxel water diffusion heterogeneity MR imaging of nasopharyngeal carcinoma using stretched exponential diffusion model.

    PubMed

    Lai, Vincent; Lee, Victor Ho Fun; Lam, Ka On; Sze, Henry Chun Kin; Chan, Queenie; Khong, Pek Lan

    2015-06-01

    To determine the utility of stretched exponential diffusion model in characterisation of the water diffusion heterogeneity in different tumour stages of nasopharyngeal carcinoma (NPC). Fifty patients with newly diagnosed NPC were prospectively recruited. Diffusion-weighted MR imaging was performed using five b values (0-2,500 s/mm(2)). Respective stretched exponential parameters (DDC, distributed diffusion coefficient; and alpha (α), water heterogeneity) were calculated. Patients were stratified into low and high tumour stage groups based on the American Joint Committee on Cancer (AJCC) staging for determination of the predictive powers of DDC and α using t test and ROC curve analyses. The mean ± standard deviation values were DDC = 0.692 ± 0.199 (×10(-3) mm(2)/s) for low stage group vs 0.794 ± 0.253 (×10(-3) mm(2)/s) for high stage group; α = 0.792 ± 0.145 for low stage group vs 0.698 ± 0.155 for high stage group. α was significantly lower in the high stage group while DDC was negatively correlated. DDC and α were both reliable independent predictors (p < 0.001), with α being more powerful. Optimal cut-off values were (sensitivity, specificity, positive likelihood ratio, negative likelihood ratio) DDC = 0.692 × 10(-3) mm(2)/s (94.4 %, 64.3 %, 2.64, 0.09), α = 0.720 (72.2 %, 100 %, -, 0.28). The heterogeneity index α is robust and can potentially help in staging and grading prediction in NPC. • Stretched exponential diffusion models can help in tissue characterisation in nasopharyngeal carcinoma • α and distributed diffusion coefficient (DDC) are negatively correlated • α is a robust heterogeneity index marker • α can potentially help in staging and grading prediction.
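    The stretched exponential signal model S(b) = S0 * exp(-(b * DDC)^alpha) can be fitted per voxel by nonlinear least squares. A sketch on noise-free synthetic data with illustrative parameter values:

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(b, s0, ddc, alpha):
    """Stretched exponential DWI model: S(b) = S0 * exp(-(b*DDC)^alpha),
    where alpha in (0, 1] indexes intravoxel diffusion heterogeneity."""
    return s0 * np.exp(-(b * ddc) ** alpha)

b = np.array([0.0, 500.0, 1000.0, 1500.0, 2500.0])  # b-values in s/mm^2
s = stretched_exp(b, 1.0, 0.8e-3, 0.75)             # synthetic signal

params, _ = curve_fit(stretched_exp, b, s,
                      p0=[1.0, 1.0e-3, 0.9],
                      bounds=(0.0, [2.0, 0.01, 1.0]))
s0_hat, ddc_hat, alpha_hat = params
```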

  16. Numerical renormalization group method for entanglement negativity at finite temperature

    NASA Astrophysics Data System (ADS)

    Shim, Jeongmin; Sim, H.-S.; Lee, Seung-Sup B.

    2018-04-01

    We develop a numerical method to compute the negativity, an entanglement measure for mixed states, between the impurity and the bath in quantum impurity systems at finite temperature. We construct a thermal density matrix by using the numerical renormalization group (NRG), and evaluate the negativity by implementing the NRG approximation that reduces computational cost exponentially. We apply the method to the single-impurity Kondo model and the single-impurity Anderson model. In the Kondo model, the negativity exhibits a power-law scaling at temperature much lower than the Kondo temperature and a sudden death at high temperature. In the Anderson model, the charge fluctuation of the impurity contributes to the negativity even at zero temperature when the on-site Coulomb repulsion of the impurity is finite, while at low temperature the negativity between the impurity spin and the bath exhibits the same power-law scaling behavior as in the Kondo model.
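    For intuition, the negativity of a small mixed state can be computed directly as the absolute sum of the negative eigenvalues of the partial transpose; the NRG machinery of the paper is what makes this tractable for impurity models. A two-qubit Werner-state sketch:

```python
import numpy as np

def negativity(rho, dims=(2, 2)):
    """Entanglement negativity: sum of |negative eigenvalues| of the
    partial transpose of rho over the second subsystem."""
    da, db = dims
    r = rho.reshape(da, db, da, db)
    rho_pt = r.transpose(0, 3, 2, 1).reshape(da * db, da * db)
    evals = np.linalg.eigvalsh(rho_pt)
    return float(np.sum(np.abs(evals[evals < 0])))

# Werner state: p*|Psi-><Psi-| + (1-p)*I/4, entangled (NPT) for p > 1/3
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)

def werner(p):
    return p * np.outer(psi, psi) + (1.0 - p) * np.eye(4) / 4.0
```

For the singlet Werner state the negativity is max(0, (3p - 1)/4), vanishing below p = 1/3.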

  17. Generalization of the normal-exponential model: exploration of a more accurate parametrisation for the signal distribution on Illumina BeadArrays.

    PubMed

    Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv

    2012-12-11

    Illumina BeadArray technology includes non specific negative control features that allow a precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio which leads to an important loss of information by generating negative values, a background correction method modeling the observed intensities as the sum of the exponentially distributed signal and normally distributed noise has been developed. Nevertheless, Wang and Ye (2012) display a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggest that a gamma distribution would represent a better modeling of the signal density. Hence, the normal-exponential modeling may not be appropriate for Illumina data and background corrections derived from this model may lead to wrong estimation. We propose a more flexible modeling based on a gamma distributed signal and a normal distributed background noise and develop the associated background correction, implemented in the R-package NormalGamma. Our model proves to be markedly more accurate to model Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a more correct fit of the observed intensities. On the other hand, the comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarities, reinforcing the validation of the normal-gamma modeling. The performance of the background corrections based on the normal-gamma and normal-exponential models are compared on two dilution data sets, through testing procedures which represent various experimental designs. Surprisingly, we observe that the implementation of a more accurate parametrisation in the model-based background correction does not increase the sensitivity. 
These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement in terms of bias, but at the cost of a loss in precision. This paper addresses the lack of fit of the usual normal-exponential model by proposing a more flexible parametrisation of the signal distribution as well as the associated background correction. This new model proves to be considerably more accurate for Illumina microarrays, but the improvement in terms of modeling does not lead to a higher sensitivity in differential analysis. Nevertheless, this realistic modeling makes way for future investigations, in particular to examine the characteristics of pre-processing strategies.
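    The normal-exponential correction itself has a closed form: with observed intensity X = S + B, S exponential with mean theta and B normal background noise, the corrected value is the posterior mean E[S | X = x], the mean of a normal distribution truncated to positive values. A sketch with illustrative parameters:

```python
import numpy as np
from scipy.stats import norm

def normexp_correct(x, mu, sigma, theta):
    """Posterior mean E[S | X = x] for X = S + B, S ~ Exp(mean theta),
    B ~ N(mu, sigma^2): the model-based background correction.  The
    posterior of S is N(a, sigma^2) truncated to S > 0."""
    a = x - mu - sigma ** 2 / theta
    z = a / sigma
    return a + sigma * norm.pdf(z) / norm.cdf(z)

x = np.array([80.0, 120.0, 500.0, 5000.0])      # observed intensities
s_hat = normexp_correct(x, mu=100.0, sigma=20.0, theta=300.0)
```

Unlike plain background subtraction, the corrected values are always positive, so no information is lost to negative intensities.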

  18. Econometrics of exhaustible resource supply: a theory and an application. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Epple, D.; Hansen, L.P.

    1981-12-01

    An econometric model of US oil and natural gas discoveries is developed in this study. The econometric model is explicitly derived as the solution to the problem of maximizing the expected discounted after tax present value of revenues net of exploration, development, and production costs. The model contains equations representing producers' formation of price expectations and separate equations giving producers' optimal exploration decisions contingent on expected prices. A procedure is developed for imposing resource base constraints (e.g., ultimate recovery estimates based on geological analysis) when estimating the econometric model. The model is estimated using aggregate post-war data for the United States. Production from a given addition to proved reserves is assumed to follow a negative exponential path, and additions of proved reserves from a given discovery are assumed to follow a negative exponential path. Annual discoveries of oil and natural gas are estimated as latent variables. These latent variables are the endogenous variables in the econometric model of oil and natural gas discoveries. The model is estimated without resource base constraints. The model is also estimated imposing the mean oil and natural gas ultimate recovery estimates of the US Geological Survey. Simulations through the year 2020 are reported for various future price regimes.
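    The assumed negative exponential production path has a simple closed form: q(t) = k * R * exp(-k t), so cumulative production approaches the reserve addition R. A sketch with illustrative decline rate and reserves:

```python
import numpy as np

k, R = 0.15, 1000.0              # decline rate (1/yr), reserve addition
t = np.linspace(0.0, 40.0, 401)
q = k * R * np.exp(-k * t)       # production rate along the exponential path
cum = R * (1.0 - np.exp(-k * t)) # closed-form cumulative production

# Numerical integration of q should match the closed form
cum_numeric = float(np.sum((q[:-1] + q[1:]) / 2.0 * np.diff(t)))
```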

  19. Using Exponential Random Graph Models to Analyze the Character of Peer Relationship Networks and Their Effects on the Subjective Well-being of Adolescents.

    PubMed

    Jiao, Can; Wang, Ting; Liu, Jianxin; Wu, Huanjie; Cui, Fang; Peng, Xiaozhe

    2017-01-01

    The influences of peer relationships on adolescent subjective well-being were investigated within the framework of social network analysis, using exponential random graph models as a methodological tool. The participants in the study were 1,279 students (678 boys and 601 girls) from nine junior middle schools in Shenzhen, China. The initial stage of the research used a peer nomination questionnaire and a subjective well-being scale (used in previous studies) to collect data on the peer relationship networks and the subjective well-being of the students. Exponential random graph models were then used to explore the relationships between students with the aim of clarifying the character of the peer relationship networks and the influence of peer relationships on subjective well-being. The results showed that all the adolescent peer relationship networks in our investigation had positive reciprocal effects, positive transitivity effects and negative expansiveness effects. However, none of the relationship networks had obvious receiver effects or leaders. The adolescents in partial peer relationship networks presented similar levels of subjective well-being on three dimensions (satisfaction with life, positive affects and negative affects) though not all network friends presented these similarities. The study shows that peer networks can affect an individual's subjective well-being. However, whether similarities among adolescents are the result of social influences or social choices needs further exploration, including longitudinal studies that investigate the potential processes of subjective well-being similarities among adolescents.

  20. Using Exponential Random Graph Models to Analyze the Character of Peer Relationship Networks and Their Effects on the Subjective Well-being of Adolescents

    PubMed Central

    Jiao, Can; Wang, Ting; Liu, Jianxin; Wu, Huanjie; Cui, Fang; Peng, Xiaozhe

    2017-01-01

    The influences of peer relationships on adolescent subjective well-being were investigated within the framework of social network analysis, using exponential random graph models as a methodological tool. The participants in the study were 1,279 students (678 boys and 601 girls) from nine junior middle schools in Shenzhen, China. The initial stage of the research used a peer nomination questionnaire and a subjective well-being scale (used in previous studies) to collect data on the peer relationship networks and the subjective well-being of the students. Exponential random graph models were then used to explore the relationships between students with the aim of clarifying the character of the peer relationship networks and the influence of peer relationships on subjective well-being. The results showed that all the adolescent peer relationship networks in our investigation had positive reciprocal effects, positive transitivity effects and negative expansiveness effects. However, none of the relationship networks had obvious receiver effects or leaders. The adolescents in partial peer relationship networks presented similar levels of subjective well-being on three dimensions (satisfaction with life, positive affects and negative affects) though not all network friends presented these similarities. The study shows that peer networks can affect an individual’s subjective well-being. However, whether similarities among adolescents are the result of social influences or social choices needs further exploration, including longitudinal studies that investigate the potential processes of subjective well-being similarities among adolescents. PMID:28450845
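    The structural effects reported (reciprocity, transitivity) correspond to countable statistics of the directed friendship network that enter an ERGM as sufficient statistics. A sketch counting them on a tiny illustrative adjacency matrix:

```python
import numpy as np

# Directed adjacency matrix: A[i, j] = 1 if student i nominates student j
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [0, 0, 0]])

# Mutual (reciprocated) dyads: pairs with both i -> j and j -> i
mutual = int(np.sum(A * A.T) // 2)

# Transitive triplets: two-paths i -> j -> k that are closed by i -> k
transitive = int(np.sum((A @ A) * A))
```

Here students 0 and 1 nominate each other (one mutual dyad), and the paths 0->1->2 and 1->0->2 are both closed (two transitive triplets).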

  1. A Hybrid Algorithm for Non-negative Matrix Factorization Based on Symmetric Information Divergence

    PubMed Central

    Devarajan, Karthik; Ebrahimi, Nader; Soofi, Ehsan

    2017-01-01

    The objective of this paper is to provide a hybrid algorithm for non-negative matrix factorization based on a symmetric version of Kullback-Leibler divergence, known as intrinsic information. The convergence of the proposed algorithm is shown for several members of the exponential family such as the Gaussian, Poisson, gamma and inverse Gaussian models. The speed of this algorithm is examined and its usefulness is illustrated through some applied problems. PMID:28868206
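    For context, the classical multiplicative updates for KL-divergence NMF (Lee and Seung) monotonically decrease the divergence; the paper's hybrid algorithm for the symmetric divergence builds on updates of this kind. A minimal sketch on random data, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_div(V, WH, eps=1e-12):
    """Generalized Kullback-Leibler divergence D(V || WH)."""
    return float(np.sum(V * np.log((V + eps) / (WH + eps)) - V + WH))

V = rng.random((20, 15)) + 0.1           # non-negative data matrix
r = 4                                    # factorization rank
W = rng.random((20, r)) + 0.1
H = rng.random((r, 15)) + 0.1

d0 = kl_div(V, W @ H)
for _ in range(50):
    # Multiplicative updates that keep W, H non-negative
    H *= (W.T @ (V / (W @ H))) / W.sum(axis=0)[:, None]
    W *= ((V / (W @ H)) @ H.T) / H.sum(axis=1)[None, :]
d1 = kl_div(V, W @ H)
```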

  2. Radar correlated imaging for extended target by the combination of negative exponential restraint and total variation

    NASA Astrophysics Data System (ADS)

    Qian, Tingting; Wang, Lianlian; Lu, Guanghua

    2017-07-01

    Radar correlated imaging (RCI) introduces optical correlated imaging techniques into traditional microwave imaging and has attracted widespread attention recently. Conventional RCI methods neglect the structural information of complex extended targets, which limits the quality of the recovered image; thus a novel combination of negative exponential restraint and total variation (NER-TV) algorithm for extended target imaging is proposed in this paper. Sparsity is measured by a sequential order one negative exponential function, and the 2D total variation technique is then introduced to design a novel optimization problem for extended target imaging. The alternating direction method of multipliers, whose convergence is proven, is applied to solve the new problem. Experimental results show that the proposed algorithm can efficiently achieve high-resolution imaging of extended targets.
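    A sketch of the two ingredients of such an objective: a sequential order one negative exponential function as a smooth sparsity surrogate (approaching the l0 norm as its parameter shrinks) and anisotropic 2D total variation, which favors piecewise-smooth extended targets. The combination below is illustrative, not the paper's exact formulation or solver:

```python
import numpy as np

def ner(x, p=0.5):
    """Order-one negative exponential sparsity surrogate:
    sum(1 - exp(-|x|/p)) approaches the l0 norm as p -> 0."""
    return float(np.sum(1.0 - np.exp(-np.abs(x) / p)))

def tv2d(img):
    """Anisotropic 2D total variation of an image."""
    return float(np.sum(np.abs(np.diff(img, axis=0))) +
                 np.sum(np.abs(np.diff(img, axis=1))))

def ner_tv_objective(img, lam=0.1):
    return ner(img) + lam * tv2d(img)

sparse = np.zeros((8, 8))
sparse[3:5, 3:5] = 1.0                  # compact extended target
dense = np.ones((8, 8))                 # non-sparse scene
```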

  3. Erosion over time on severely disturbed granitic soils: a model

    Treesearch

    W. F. Megahan

    1974-01-01

    A negative exponential equation containing three parameters was derived to describe time trends in surface erosion on severely disturbed soils. Data from four different studies of surface erosion on roads constructed from the granitic materials found in the Idaho Batholith were used to develop equation parameters. The evidence suggests that surface "armoring...

  4. Comparing Exponential and Exponentiated Models of Drug Demand in Cocaine Users

    PubMed Central

    Strickland, Justin C.; Lile, Joshua A.; Rush, Craig R.; Stoops, William W.

    2016-01-01

    Drug purchase tasks provide rapid and efficient measurement of drug demand. Zero values (i.e., prices with zero consumption) present a quantitative challenge when using exponential demand models that exponentiated models may resolve. We aimed to replicate and advance the utility of using an exponentiated model by demonstrating construct validity (i.e., association with real-world drug use) and generalizability across drug commodities. Participants (N = 40 cocaine-using adults) completed Cocaine, Alcohol, and Cigarette Purchase Tasks evaluating hypothetical consumption across changes in price. Exponentiated and exponential models were fit to these data using different treatments of zero consumption values, including retaining zeros or replacing them with 0.1, 0.01, 0.001. Excellent model fits were observed with the exponentiated model. Means and precision fluctuated with different replacement values when using the exponential model, but were consistent for the exponentiated model. The exponentiated model provided the strongest correlation between derived demand intensity (Q0) and self-reported free consumption in all instances (Cocaine r = .88; Alcohol r = .97; Cigarette r = .91). Cocaine demand elasticity was positively correlated with alcohol and cigarette elasticity. Exponentiated parameters were associated with real-world drug use (e.g., weekly cocaine use), whereas these correlations were less consistent for exponential parameters. Our findings show that selection of zero replacement values impact demand parameters and their association with drug-use outcomes when using the exponential model, but not the exponentiated model. This work supports the adoption of the exponentiated demand model by replicating improved fit and consistency, in addition to demonstrating construct validity and generalizability. PMID:27929347
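    The exponentiated demand equation referred to here (Koffarnus et al., 2015) models consumption itself rather than its logarithm, so it remains defined at zero consumption: Q = Q0 * 10^(k * (exp(-alpha * Q0 * C) - 1)). A sketch with illustrative parameter values:

```python
import numpy as np

def exponentiated_demand(C, Q0, alpha, k=2.0):
    """Exponentiated demand equation (Koffarnus et al., 2015):
    Q0 is demand intensity (consumption at zero price), alpha indexes
    elasticity, and k sets the span of consumption in log10 units."""
    return Q0 * 10.0 ** (k * (np.exp(-alpha * Q0 * np.asarray(C)) - 1.0))

prices = np.array([0.0, 1.0, 5.0, 10.0, 50.0])
q = exponentiated_demand(prices, Q0=10.0, alpha=0.01)
```

At zero price the predicted consumption equals Q0, and consumption declines monotonically as price rises.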

  5. Comparing exponential and exponentiated models of drug demand in cocaine users.

    PubMed

    Strickland, Justin C; Lile, Joshua A; Rush, Craig R; Stoops, William W

    2016-12-01

    Drug purchase tasks provide rapid and efficient measurement of drug demand. Zero values (i.e., prices with zero consumption) present a quantitative challenge when using exponential demand models that exponentiated models may resolve. We aimed to replicate and advance the utility of using an exponentiated model by demonstrating construct validity (i.e., association with real-world drug use) and generalizability across drug commodities. Participants (N = 40 cocaine-using adults) completed Cocaine, Alcohol, and Cigarette Purchase Tasks evaluating hypothetical consumption across changes in price. Exponentiated and exponential models were fit to these data using different treatments of zero consumption values, including retaining zeros or replacing them with 0.1, 0.01, or 0.001. Excellent model fits were observed with the exponentiated model. Means and precision fluctuated with different replacement values when using the exponential model but were consistent for the exponentiated model. The exponentiated model provided the strongest correlation between derived demand intensity (Q0) and self-reported free consumption in all instances (Cocaine r = .88; Alcohol r = .97; Cigarette r = .91). Cocaine demand elasticity was positively correlated with alcohol and cigarette elasticity. Exponentiated parameters were associated with real-world drug use (e.g., weekly cocaine use) whereas these correlations were less consistent for exponential parameters. Our findings show that selection of zero replacement values affects demand parameters and their association with drug-use outcomes when using the exponential model but not the exponentiated model. This work supports the adoption of the exponentiated demand model by replicating improved fit and consistency and demonstrating construct validity and generalizability. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
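    The practical difference the two abstracts describe can be sketched numerically. The exponentiated demand model keeps consumption in raw units (commonly written Q = Q0 · 10^(k(e^(−α·Q0·C) − 1))), so zero-consumption prices need no replacement value. The sketch below fits that form to invented purchase-task data; the price points, consumption values, and the fixed k are all illustrative assumptions, not data from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def exponentiated_demand(price, q0, alpha, k=2.0):
    """Exponentiated demand: consumption in raw units, so zero
    consumption values can be retained without replacement."""
    return q0 * 10 ** (k * (np.exp(-alpha * q0 * price) - 1))

# Hypothetical purchase-task data: consumption at increasing prices,
# including a zero at the highest price (the case that breaks a
# log-scale exponential model, since log(0) is undefined).
price = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
consumption = np.array([10.0, 9.0, 8.0, 6.0, 3.0, 1.0, 0.0])

# p0 has length 2, so curve_fit estimates only q0 and alpha; k keeps
# its default value of 2.0.
popt, _ = curve_fit(exponentiated_demand, price, consumption,
                    p0=[10.0, 0.01], maxfev=10000)
q0_hat, alpha_hat = popt
print(f"Q0 = {q0_hat:.2f}, alpha (elasticity) = {alpha_hat:.4f}")
```

    Here Q0 (demand intensity) is read off directly as the fitted consumption at zero price, which is the quantity the abstracts correlate with self-reported free consumption.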

  6. Boundary curves of individual items in the distribution of total depressive symptom scores approximate an exponential pattern in a general population.

    PubMed

    Tomitaka, Shinichiro; Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A; Ono, Yutaka

    2016-01-01

    Previously, we proposed a model for ordinal scale scoring in which the individual thresholds for each item constitute an item-specific distribution. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern.

  7. Boundary curves of individual items in the distribution of total depressive symptom scores approximate an exponential pattern in a general population

    PubMed Central

    Kawasaki, Yohei; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A.; Ono, Yutaka

    2016-01-01

    Background Previously, we proposed a model for ordinal scale scoring in which the individual thresholds for each item constitute an item-specific distribution. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Methods Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. Results The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. Discussion The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern. PMID:27761346

  8. 2D motility tracking of Pseudomonas putida KT2440 in growth phases using video microscopy

    PubMed Central

    Davis, Michael L.; Mounteer, Leslie C.; Stevens, Lindsey K.; Miller, Charles D.; Zhou, Anhong

    2011-01-01

    Pseudomonas putida KT2440 is a Gram-negative motile soil bacterium important in bioremediation and biotechnology. Thus, it is important to understand its motility characteristics as individuals and in populations. Population characteristics were determined using a modified Gompertz model. Video microscopy and imaging software were utilized to analyze two-dimensional (2D) bacteria movement tracks to quantify individual bacteria behavior. It was determined that inoculum density increased the lag time as seeding densities decreased, and that the maximum specific growth rate decreased as seeding densities increased. Average bacterial velocity remained relatively similar throughout the exponential growth phase (~20.9 µm/sec), while maximum velocities peak early in the exponential growth phase at a velocity of 51.2 µm/sec. Pseudomonas putida KT2440 also favors smaller turn angles, indicating that cells often continue in the same direction after a change in flagella rotation throughout the exponential growth phase. PMID:21334971

  9. Strong feedback limit of the Goodwin circadian oscillator

    NASA Astrophysics Data System (ADS)

    Woller, Aurore; Gonze, Didier; Erneux, Thomas

    2013-03-01

    The three-variable Goodwin model constitutes a prototypical oscillator based on a negative feedback loop. It was used as a minimal model for circadian oscillations. Other core models for circadian clocks are variants of the Goodwin model. The Goodwin oscillator also appears in many studies of coupled oscillator networks because of its relative simplicity compared to other biophysical models involving a large number of variables and parameters. Because the synchronization properties of Goodwin oscillators still remain difficult to explore mathematically, further simplifications of the Goodwin model have been sought. In this paper, we investigate the strong negative feedback limit of the Goodwin equations by using asymptotic techniques. We find that Goodwin oscillations approach a sequence of decaying exponentials that can be described in terms of a single-variable leaky integrate-and-fire model.
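    The three-variable negative feedback loop named above is easy to simulate directly. The sketch below uses a generic Goodwin system (production repressed by the third variable through a Hill term, linear degradation); all rate constants and the Hill exponent are illustrative assumptions, not values from the paper. With equal degradation rates the classical condition for sustained oscillations is a Hill exponent above 8, so a large exponent plays the role of strong feedback here.

```python
import numpy as np
from scipy.integrate import solve_ivp

def goodwin(t, s, n):
    """Generic three-variable Goodwin oscillator: z represses the
    production of x through a Hill term (negative feedback loop)."""
    x, y, z = s
    dx = 1.0 / (1.0 + z**n) - 0.1 * x   # synthesis repressed by z
    dy = x - 0.1 * y
    dz = y - 0.1 * z
    return [dx, dy, dz]

# Strong feedback: a large Hill exponent n (with equal linear
# degradation rates, sustained oscillations require n > 8).
n = 20
sol = solve_ivp(goodwin, (0, 500), [0.1, 0.1, 0.1], args=(n,),
                dense_output=True, max_step=0.5)

# Measure the late-time oscillation amplitude of z, after transients.
z_late = sol.sol(np.linspace(300, 500, 2000))[2]
amplitude = z_late.max() - z_late.min()
print(f"late-time oscillation amplitude of z: {amplitude:.3f}")
```

    Lowering n below 8 in this sketch makes the amplitude collapse to zero (a stable fixed point), which is one way to see why the strong feedback limit is the interesting regime for oscillations.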

  10. A quasi-likelihood approach to non-negative matrix factorization

    PubMed Central

    Devarajan, Karthik; Cheung, Vincent C.K.

    2017-01-01

    A unified approach to non-negative matrix factorization based on the theory of generalized linear models is proposed. This approach embeds a variety of statistical models, including the exponential family, within a single theoretical framework and provides a unified view of such factorizations from the perspective of quasi-likelihood. Using this framework, a family of algorithms for handling signal-dependent noise is developed and its convergence proven using the Expectation-Maximization algorithm. In addition, a measure to evaluate the goodness-of-fit of the resulting factorization is described. The proposed methods allow modeling of non-linear effects via appropriate link functions and are illustrated using an application in biomedical signal processing. PMID:27348511
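    One concrete member of the exponential-family view described above is NMF under the generalized Kullback-Leibler divergence, the loss implied by a Poisson likelihood. The sketch below uses the classic multiplicative updates for that loss as an illustration of non-negative factorization under a non-Gaussian noise model; it is not the quasi-likelihood algorithm of the paper, and the test matrix is synthetic.

```python
import numpy as np

def nmf_kl(V, rank, iters=500, seed=0):
    """NMF minimizing the generalized KL divergence (Poisson-type
    likelihood). Multiplicative updates keep W and H non-negative."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 1e-3
    H = rng.random((rank, n)) + 1e-3
    eps = 1e-12
    for _ in range(iters):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.T @ np.ones_like(V) + eps)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (np.ones_like(V) @ H.T + eps)
    return W, H

# Synthetic exactly rank-3 non-negative matrix: the factorization
# should reconstruct it closely.
rng = np.random.default_rng(1)
V = rng.random((20, 3)) @ rng.random((3, 15))
W, H = nmf_kl(V, rank=3)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative reconstruction error: {err:.4f}")
```

    Swapping the update rules changes the implicit noise model (e.g. the Frobenius-loss updates correspond to Gaussian noise), which is the choice the quasi-likelihood framework makes systematic via link functions.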

  11. A new probability distribution model of turbulent irradiance based on Born perturbation theory

    NASA Astrophysics Data System (ADS)

    Wang, Hongxing; Liu, Min; Hu, Hao; Wang, Qian; Liu, Xiguo

    2010-10-01

    The subject of the PDF (Probability Density Function) of the irradiance fluctuations in a turbulent atmosphere is still unsettled. Theory reliably describes the behavior in the weak turbulence regime, but theoretical descriptions of the strong and whole turbulence regimes remain controversial. Based on Born perturbation theory, the physical manifestations and correlations of three typical PDF models (Rice-Nakagami, exponential-Bessel and negative-exponential distribution) were theoretically analyzed. It is shown that these models can be derived by separately making circular-Gaussian, strong-turbulence and strong-turbulence-circular-Gaussian approximations in Born perturbation theory, which refutes the view that the Rice-Nakagami model is only applicable in the extremely weak turbulence regime and provides theoretical arguments for choosing rational models in practical applications. In addition, a common shortcoming of the three models is that they are all approximations. A new model, called the Maclaurin-spread distribution, is proposed without any approximation except for assuming the correlation coefficient to be zero. The new model is therefore considered to reflect Born perturbation theory exactly. Simulation results confirm the accuracy of this new model.
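    The negative-exponential distribution named above is what the irradiance of a circular complex Gaussian field follows in the strong-turbulence limit: p(I) = (1/⟨I⟩) exp(−I/⟨I⟩), with scintillation index σ_I² = 1. A quick Monte Carlo check of that standard fact (illustrative, independent of the paper's derivation):

```python
import numpy as np

# Draw a circular complex Gaussian field E (independent real and
# imaginary parts, total variance 1) and form the irradiance I = |E|^2.
rng = np.random.default_rng(0)
N = 1_000_000
E = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
I = np.abs(E) ** 2            # mean irradiance <I> = 1 by construction

mean_I = I.mean()
scint_index = I.var() / mean_I**2   # equals 1 for the negative-exponential PDF
print(f"<I> = {mean_I:.3f}, scintillation index = {scint_index:.3f}")
```

    A scintillation index saturating at 1 is the usual signature of the negative-exponential (fully developed speckle) regime.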

  12. Relationship between Item Responses of Negative Affect Items and the Distribution of the Sum of the Item Scores in the General Population

    PubMed Central

    Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A.; Ono, Yutaka

    2016-01-01

    Background Several studies have shown that total depressive symptom scores in the general population approximate an exponential pattern, except for the lower end of the distribution. The Center for Epidemiologic Studies Depression Scale (CES-D) consists of 20 items, each of which may take on four scores: “rarely,” “some,” “occasionally,” and “most of the time.” Recently, we reported that the item responses for 16 negative affect items commonly exhibit exponential patterns, except for the level of “rarely,” leading us to hypothesize that the item responses at the level of “rarely” may be related to the non-exponential pattern typical of the lower end of the distribution. To verify this hypothesis, we investigated how the item responses contribute to the distribution of the sum of the item scores. Methods Data collected from 21,040 subjects who had completed the CES-D questionnaire as part of a Japanese national survey were analyzed. To assess the item responses of negative affect items, we used a parameter r, which denotes the ratio of “rarely” to “some” in each item response. The distributions of the sum of negative affect items in various combinations were analyzed using log-normal scales and curve fitting. Results The sum of the item scores approximated an exponential pattern regardless of the combination of items, whereas, at the lower end of the distributions, there was a clear divergence between the actual data and the predicted exponential pattern. At the lower end of the distributions, the sum of the item scores with high values of r exhibited higher scores compared to those predicted from the exponential pattern, whereas the sum of the item scores with low values of r exhibited lower scores compared to those predicted. Conclusions The distributional pattern of the sum of the item scores could be predicted from the item responses of such items. PMID:27806132

  13. Macromolecular Rate Theory (MMRT) Provides a Thermodynamics Rationale to Underpin the Convergent Temperature Response in Plant Leaf Respiration

    NASA Astrophysics Data System (ADS)

    Liang, L. L.; Arcus, V. L.; Heskel, M.; O'Sullivan, O. S.; Weerasinghe, L. K.; Creek, D.; Egerton, J. J. G.; Tjoelker, M. G.; Atkin, O. K.; Schipper, L. A.

    2017-12-01

    Temperature is a crucial factor in determining the rates of ecosystem processes such as leaf respiration (R) - the flux of plant-respired carbon dioxide (CO2) from leaves to the atmosphere. Generally, respiration rate increases exponentially with temperature as modelled by the Arrhenius equation, but a recent study (Heskel et al., 2016) showed a universally convergent temperature response of R using an empirical exponential/polynomial model whereby the exponent in the Arrhenius model is replaced by a quadratic function of temperature. The exponential/polynomial model has been used elsewhere to describe shoot respiration and plant respiration. What are the principles that underlie these empirical observations? Here, we demonstrate that macromolecular rate theory (MMRT), based on transition state theory for chemical kinetics, is equivalent to the exponential/polynomial model. We re-analyse the data from Heskel et al. 2016 using MMRT to show this equivalence and thus provide an explanation, based on thermodynamics, for the convergent temperature response of R. Using statistical tools, we also show the equivalent explanatory power of MMRT when compared to the exponential/polynomial model and the superiority of both of these models over the Arrhenius function. Three meaningful parameters emerge from MMRT analysis: the temperature at which the rate of respiration is maximum (the so-called optimum temperature, Topt), the temperature at which the respiration rate is most sensitive to changes in temperature (the inflection temperature, Tinf) and the overall curvature of the log(rate) versus temperature plot (the so-called change in heat capacity for the system, ΔC_P^‡). The latter term originates from the change in heat capacity between an enzyme-substrate complex and an enzyme transition-state complex in enzyme-catalysed metabolic reactions. From MMRT, we find the average Topt and Tinf of R are 67.0±1.2 °C and 41.4±0.7 °C across global sites. The average curvature (ΔC_P^‡, which is negative) is -1.2±0.1 kJ mol⁻¹ K⁻¹. MMRT extends the classic transition state theory to enzyme-catalysed reactions and scales up to more complex processes including micro-organism growth rates and ecosystem processes.
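    A sketch of how Topt emerges from MMRT-style curvature: the functional form below is the standard MMRT rate expression (transition-state theory with an activation heat capacity), but apart from the curvature of −1.2 kJ mol⁻¹ K⁻¹ quoted in the abstract, the parameter values (ΔH‡, ΔS‡, reference temperature) are invented for illustration, not fitted values from the study.

```python
import numpy as np

R  = 8.314          # gas constant, J mol^-1 K^-1
kB = 1.381e-23      # Boltzmann constant, J K^-1
h  = 6.626e-34      # Planck constant, J s

def mmrt_lnk(T, dH, dS, dCp, T0=298.0):
    """MMRT: ln k(T) with activation enthalpy and entropy made
    temperature-dependent via the activation heat capacity dCp."""
    return (np.log(kB * T / h)
            - (dH + dCp * (T - T0)) / (R * T)
            + (dS + dCp * np.log(T / T0)) / R)

# Illustrative parameters; dCp = -1.2 kJ/mol/K matches the average
# curvature quoted above, dH and dS are hypothetical.
T = np.linspace(273.0, 360.0, 2000)
lnk = mmrt_lnk(T, dH=47.6e3, dS=-50.0, dCp=-1.2e3)

T_opt = T[np.argmax(lnk)]          # temperature of maximum rate
print(f"T_opt = {T_opt - 273.15:.1f} degC")
```

    Because ΔC_P^‡ is negative, ln k is a downward-curved function of temperature, so a rate maximum (Topt) exists without invoking enzyme denaturation; with an Arrhenius exponent (ΔC_P^‡ = 0) no such maximum appears.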

  14. Galileon bounce after ekpyrotic contraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osipov, M.; Rubakov, V., E-mail: osipov@ms2.inr.ac.ru, E-mail: rubakov@ms2.inr.ac.ru

    We consider a simple cosmological model that includes a long ekpyrotic contraction stage and a smooth bounce after it. The ekpyrotic behavior is due to a scalar field with a negative exponential potential, whereas a Galileon field produces the bounce. We give an analytical picture of how the bounce occurs within the weak gravity regime, and then perform a numerical analysis to extend our results to the non-perturbative regime.

  15. How rapidly does the excess risk of lung cancer decline following quitting smoking? A quantitative review using the negative exponential model.

    PubMed

    Fry, John S; Lee, Peter N; Forey, Barbara A; Coombs, Katharine J

    2013-10-01

    The excess lung cancer risk from smoking declines with time quit, but the shape of the decline has never been precisely modelled, or meta-analyzed. From a database of studies of at least 100 cases, we extracted 106 blocks of RRs (from 85 studies) comparing current smokers, former smokers (by time quit) and never smokers. Corresponding pseudo-numbers of cases and controls (or at-risk) formed the data for fitting the negative exponential model. We estimated the half-life (H, time in years when the excess risk becomes half that for a continuing smoker) for each block, investigated model fit, and studied heterogeneity in H. We also conducted sensitivity analyses allowing for reverse causation, either ignoring short-term quitters (S1) or considering them smokers (S2). Model fit was poor ignoring reverse causation, but much improved for both sensitivity analyses. Estimates of H were similar for all three analyses. For the best-fitting analysis (S1), H was 9.93 (95% CI 9.31-10.60), but varied by sex (females 7.92, males 10.71), and age (<50 years 6.98, 70+ years 12.99). Given that reverse causation is taken account of, the model adequately describes the decline in excess risk. However, estimates of H may be biased by factors including misclassification of smoking status. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
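    The negative exponential model defined above can be sketched numerically. Under that model the former-smoker relative risk is RR(t) = 1 + (RRcurrent − 1)·exp(−ln 2 · t/H), so the excess risk halves every H years. The data below are invented for illustration (chosen to be roughly consistent with H ≈ 10 years) and are not the meta-analyzed blocks of the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def rr_former(t, H, rr_current=15.0):
    """Negative exponential model: the excess risk (RR - 1) of a former
    smoker decays with half-life H years after quitting."""
    return 1.0 + (rr_current - 1.0) * np.exp(-np.log(2) * t / H)

# Hypothetical RRs by years since quitting (never smoker = 1.0,
# continuing smoker = 15.0).
t_quit = np.array([2.5, 7.5, 12.5, 20.0, 30.0])
rr_obs = np.array([12.9, 9.3, 6.9, 4.5, 2.7])

# p0 has length 1, so only H is fitted; rr_current keeps its default.
(H_hat,), _ = curve_fit(rr_former, t_quit, rr_obs, p0=[10.0])
print(f"estimated half-life H = {H_hat:.1f} years")
```

    The same fit applied to the COPD, stroke, and IHD records below would simply yield different H values (13.32, 4.78, and 4.40 years respectively, per those abstracts).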

  16. 2D motility tracking of Pseudomonas putida KT2440 in growth phases using video microscopy.

    PubMed

    Davis, Michael L; Mounteer, Leslie C; Stevens, Lindsey K; Miller, Charles D; Zhou, Anhong

    2011-05-01

    Pseudomonas putida KT2440 is a Gram-negative motile soil bacterium important in bioremediation and biotechnology. Thus, it is important to understand its motility characteristics as individuals and in populations. Population characteristics were determined using a modified Gompertz model. Video microscopy and imaging software were utilized to analyze two-dimensional (2D) bacteria movement tracks to quantify individual bacteria behavior. It was determined that inoculum density increased the lag time as seeding densities decreased, and that the maximum specific growth rate decreased as seeding densities increased. Average bacterial velocity remained relatively similar throughout the exponential growth phase (~20.9 μm/s), while maximum velocities peak early in the exponential growth phase at a velocity of 51.2 μm/s. P. putida KT2440 also favors smaller turn angles, indicating that cells often continue in the same direction after a change in flagella rotation throughout the exponential growth phase. Copyright © 2011 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.

  17. Plasmids as stochastic model systems

    NASA Astrophysics Data System (ADS)

    Paulsson, Johan

    2003-05-01

    Plasmids are self-replicating gene clusters present in on average 2-100 copies per bacterial cell. To reduce random fluctuations and thereby avoid extinction, they ubiquitously autoregulate their own synthesis using negative feedback loops. Here I use van Kampen's Ω-expansion for a two-dimensional model of negative feedback including plasmids and their replication inhibitors. This analytically summarizes the standard perspective on replication control -- including the effects of sensitivity amplification, exponential time-delays and noisy signaling. I further review the two most common molecular sensitivity mechanisms: multistep control and cooperativity. Finally, I discuss more controversial sensitivity schemes, such as noise-enhanced sensitivity, the exploitation of small-number combinatorics and double-layered feedback loops to suppress noise in disordered environments.

  18. NiTi Alloy Negator Springs for Long-Stroke Constant-Force Shape Memory Actuators: Modeling, Simulation and Testing

    NASA Astrophysics Data System (ADS)

    Spaggiari, Andrea; Dragoni, Eugenio; Tuissi, Ausonio

    2014-07-01

    This work aims at the experimental characterization and modeling validation of shape memory alloy (SMA) Negator springs. According to the classic engineering books on springs, a Negator spring is a spiral spring made of a strip of metal wound on the flat with an inherent curvature such that, in repose, each coil wraps tightly on its inner neighbor. The main feature of a Negator spring is its nearly constant force-displacement behavior in the unwinding of the strip. Moreover, the stroke is very long, theoretically infinite, as it depends only on the length of the initial strip. A Negator spring made of SMA is built and experimentally tested to demonstrate the feasibility of this actuator. The shape memory Negator spring behavior can be modeled with an analytical procedure, which is in good agreement with the experimental test and can be used for design purposes. In both cases, the material is modeled as elastic in the austenitic range, while an exponential continuum law is used to describe the martensitic behavior. The experimental results confirm the applicability of this kind of geometry to shape memory alloy actuators, and the analytical model is confirmed to be a powerful design tool to dimension and predict the spring behavior in both the martensitic and austenitic ranges.

  19. Estimating the decline in excess risk of chronic obstructive pulmonary disease following quitting smoking - a systematic review based on the negative exponential model.

    PubMed

    Lee, Peter N; Fry, John S; Forey, Barbara A

    2014-03-01

    We quantified the decline in COPD risk following quitting using the negative exponential model, as previously carried out for other smoking-related diseases. We identified 14 blocks of RRs (from 11 studies) comparing current smokers, former smokers (by time quit) and never smokers, some studies providing sex-specific blocks. Corresponding pseudo-numbers of cases and controls/at risk formed the data for model-fitting. We estimated the half-life (H, time since quit when the excess risk becomes half that for a continuing smoker) for each block, except for one where no decline with quitting was evident, and H was not estimable. For the remaining 13 blocks, goodness-of-fit to the model was generally adequate, the combined estimate of H being 13.32 (95% CI 11.86-14.96) years. There was no heterogeneity in H, overall or by various studied sources. Sensitivity analyses allowing for reverse causation or different assumed times for the final quitting period little affected the results. The model summarizes quitting data well. The estimate of 13.32 years is substantially larger than recent estimates of 4.40 years for ischaemic heart disease and 4.78 years for stroke, and also larger than the 9.93 years for lung cancer. Heterogeneity was unimportant for COPD, unlike for the other three diseases. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.

  20. Estimating the decline in excess risk of cerebrovascular disease following quitting smoking--a systematic review based on the negative exponential model.

    PubMed

    Lee, Peter N; Fry, John S; Thornton, Alison J

    2014-02-01

    We attempted to quantify the decline in stroke risk following quitting using the negative exponential model, with methodology previously employed for IHD. We identified 22 blocks of RRs (from 13 studies) comparing current smokers, former smokers (by time quit) and never smokers. Corresponding pseudo-numbers of cases and controls/at risk formed the data for model-fitting. We tried to estimate the half-life (H, time since quit when the excess risk becomes half that for a continuing smoker) for each block. The method failed to converge or produced very variable estimates of H in nine blocks with a current smoker RR <1.40. Rejecting these, and combining blocks by amount smoked in one study where problems arose in model-fitting, the final analyses used 11 blocks. Goodness-of-fit was adequate for each block, the combined estimate of H being 4.78 (95% CI 2.17-10.50) years. However, considerable heterogeneity existed, unexplained by any factor studied, with the random-effects estimate 3.08 (1.32-7.16). Sensitivity analyses allowing for reverse causation or differing assumed times for the final quitting period gave similar results. The estimates of H are similar for stroke and IHD, and the individual estimates similarly heterogeneous. Fitting the model is harder for stroke, due to its weaker association with smoking. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.

  1. Exponential model normalization for electrical capacitance tomography with external electrodes under gap permittivity conditions

    NASA Astrophysics Data System (ADS)

    Baidillah, Marlin R.; Takei, Masahiro

    2017-06-01

    A nonlinear normalization model, called the exponential model, has been developed for electrical capacitance tomography (ECT) with external electrodes under gap permittivity conditions. The exponential model normalization is proposed based on the inherently nonlinear relationship between the mixture permittivity and the measured capacitance due to the gap permittivity of the inner wall. The parameters of the exponential equation are derived from an exponential curve fit to simulation data, and a scaling function is added to adjust for the conditions of the experimental system. The exponential model normalization was applied to two-dimensional low- and high-contrast dielectric distribution phantoms in both simulation and experimental studies. The proposed normalization model has been compared with other normalization models, i.e., the Parallel, Series, Maxwell and Böttcher models. Based on the comparison of image reconstruction results, the exponential model reliably predicts the nonlinear normalization of measured capacitance for both low- and high-contrast dielectric distributions.
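    The contrast between the conventional linear (parallel) normalization and a nonlinear one can be sketched as follows. This is a generic exponential-style normalization written for illustration only; it is not the specific calibrated model of the paper, and the capacitance values are hypothetical.

```python
import numpy as np

def normalize_parallel(C, C_low, C_high):
    """Conventional linear (parallel-model) normalization."""
    return (C - C_low) / (C_high - C_low)

def normalize_exponential(C, C_low, C_high):
    """Exponential normalization sketch: assume capacitance grows
    exponentially between the low- and high-permittivity calibration
    values, so normalized permittivity is recovered with a logarithm."""
    return np.log(C / C_low) / np.log(C_high / C_low)

# Hypothetical calibration capacitances (arbitrary units): empty-pipe
# (low permittivity) and full-pipe (high permittivity), plus a measurement.
C_low, C_high = 1.0, 4.0
C_meas = 2.0

lam_lin = normalize_parallel(C_meas, C_low, C_high)
lam_exp = normalize_exponential(C_meas, C_low, C_high)
print(f"linear: {lam_lin:.3f}, exponential: {lam_exp:.3f}")
```

    The same measured capacitance maps to different normalized values under the two models (1/3 versus 1/2 here), which is exactly why the choice of normalization model changes the reconstructed images.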

  2. The Negative Sign and Exponential Expressions: Unveiling Students' Persistent Errors and Misconceptions

    ERIC Educational Resources Information Center

    Cangelosi, Richard; Madrid, Silvia; Cooper, Sandra; Olson, Jo; Hartter, Beverly

    2013-01-01

    The purpose of this study was to determine whether or not certain errors made when simplifying exponential expressions persist as students progress through their mathematical studies. College students enrolled in college algebra, pre-calculus, and first- and second-semester calculus mathematics courses were asked to simplify exponential…

  3. Using the negative exponential distribution to quantitatively review the evidence on how rapidly the excess risk of ischaemic heart disease declines following quitting smoking.

    PubMed

    Lee, Peter N; Fry, John S; Hamling, Jan S

    2012-10-01

    No previous review has formally modelled the decline in IHD risk following quitting smoking. From PubMed searches and other sources we identified 15 prospective and eight case-control studies that compared IHD risk in current smokers, never smokers, and quitters by time period of quit, some studies providing separate blocks of results by sex, age or amount smoked. For each of 41 independent blocks, we estimated, using the negative exponential model, the time, H, when the excess risk reduced to half that caused by smoking. Goodness-of-fit to the model was adequate for 35 blocks, others showing a non-monotonic pattern of decline following quitting, with a variable pattern of misfit. After omitting one block with a current smoker RR 1.0, the combined H estimate was 4.40 (95% CI 3.26-5.95) years. There was considerable heterogeneity, H being <2 years for 10 blocks and >10 years for 12. H increased (p<0.001) with mean age at study start, but not clearly with other factors. Sensitivity analyses allowing for reverse causation, or varying the assumed midpoint of the final open-ended quitting period, had little effect on goodness-of-fit or the combined estimate. The US Surgeon-General's view that excess risk approximately halves after a year's abstinence seems over-optimistic. Copyright © 2012 Elsevier Inc. All rights reserved.

  4. Tl+-induced μs gating of current indicates instability of the MaxiK selectivity filter as caused by ion/pore interaction.

    PubMed

    Schroeder, Indra; Hansen, Ulf-Peter

    2008-04-01

    Patch clamp experiments on single MaxiK channels expressed in HEK293 cells were performed at high temporal resolution (50-kHz filter) in asymmetrical solutions containing 0, 25, 50, or 150 mM Tl+ on the luminal or cytosolic side with [K+] + [Tl+] = 150 mM and 150 mM K+ on the other side. Outward current in the presence of cytosolic Tl+ did not show fast gating behavior that was significantly different from that in the absence of Tl+. With luminal Tl+ and at membrane potentials more negative than -40 mV, the single-channel current showed a negative slope resistance concomitantly with a flickery block, resulting in an artificially reduced apparent single-channel current I(app). The analysis of the amplitude histograms by beta distributions enabled the estimation of the true single-channel current and the determination of the rate constants of a simple two-state O-C Markov model for the gating in the bursts. The voltage dependence of the gating ratio R = I(true)/I(app) = (k(CO) + k(OC))/k(CO) could be described by exponential functions with different characteristic voltages above or below 50 mM Tl(+). The true single-channel current I(true) decreased with Tl+ concentrations up to 50 mM and stayed constant thereafter. Different models were considered. The most likely ones related the exponential increase of the gating ratio to ion depletion at the luminal side of the selectivity filter, whereas the influence of [Tl+] on the characteristic voltage of these exponential functions and of the value of I(true) were determined by [Tl+] at the inner side of the selectivity filter or in the cavity.

  5. Interpreting anomalies observed in oxide semiconductor TFTs under negative and positive bias stress

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Jong Woo; Nathan, Arokia, E-mail: an299@cam.ac.uk; Barquinha, Pedro

    2016-08-15

    Oxide semiconductor thin-film transistors can show anomalous behavior under bias stress. Two types of anomalies are discussed in this paper. The first is a shift in threshold voltage (V_TH) in a direction opposite to the applied bias stress, highly dependent on the gate dielectric material. We attribute this to charge trapping/detrapping and charge migration within the gate dielectric. We emphasize the fundamental difference between trapping/detrapping events occurring at the semiconductor/dielectric interface and those occurring at the gate/dielectric interface, and show that charge migration is essential to explain the first anomaly. We model charge migration in terms of the non-instantaneous polarization density. The second type of anomaly is a negative V_TH shift under high positive bias stress, with logarithmic evolution in time. This can be attributed to electron-donating reactions involving H2O molecules or derived species, with a reaction rate exponentially accelerated by positive gate bias and exponentially decreased by the number of reactions that have already occurred.

  6. Environmental influences on the {sup 137}Cs kinetics of the yellow-bellied turtle (Trachemys scripta)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peters, E.L.; Brisbin, L.I. Jr.

    1996-02-01

    Assessments of ecological risk require accurate predictions of contaminant dynamics in natural populations. However, simple deterministic models that assume constant uptake rates and elimination fractions may compromise both their ecological realism and their general application to animals with variable metabolism or diets. In particular, the temperature-dependent metabolic rates characteristic of ectotherms may lead to significant differences between observed and predicted contaminant kinetics. We examined the influence of a seasonally variable thermal environment on predicting the uptake and annual cycling of contaminants by ectotherms, using a temperature-dependent model of {sup 137}Cs kinetics in free-living yellow-bellied turtles, Trachemys scripta. We compared predictions from this model with those of deterministic negative exponential and flexibly shaped Richards sigmoidal models. Concentrations of {sup 137}Cs in a population of this species in Pond B, a radionuclide-contaminated nuclear reactor cooling reservoir, and {sup 137}Cs uptake by uncontaminated turtles held captive in Pond B for 4 yr confirmed both the pattern of uptake and the equilibrium concentrations predicted by the temperature-dependent model. Almost 90% of the variance in the predicted time-integrated {sup 137}Cs concentration was explainable by linear relationships with model parameters. The model was also relatively insensitive to uncertainties in the estimates of ambient temperature, suggesting that adequate estimates of temperature-dependent ingestion and elimination may require relatively few measurements of ambient conditions at sites of interest. Analyses of Richards sigmoidal models of {sup 137}Cs uptake indicated significant differences from a negative exponential trajectory in the 1st yr after the turtles' release into Pond B. 76 refs., 7 figs., 5 tabs.

  7. Analytical Description of Degradation-Relaxation Transformations in Nanoinhomogeneous Spinel Ceramics.

    PubMed

    Shpotyuk, O; Brunner, M; Hadzaman, I; Balitska, V; Klym, H

    2016-12-01

    Mathematical models of degradation-relaxation kinetics are considered for jammed thick-film systems composed of screen-printed spinel Cu0.1Ni0.1Co1.6Mn1.2O4 and conductive Ag or Ag-Pd alloys. Structurally intrinsic nanoinhomogeneities, due to Ag and Ag-Pd diffusing agents embedded in a spinel phase environment, are shown to govern the kinetics of thermally induced degradation at 170 °C, which obeys a clearly non-exponential behavior in the negative relative resistance drift. The characteristic stretched-to-compressed exponential crossover is detected for degradation-relaxation kinetics in thick-film systems with conductive contacts made of Ag-Pd and Ag alloys. Under substantial migration of the conductive phase, Ag penetrates the thick-film spinel ceramics via a considerable two-step diffusion process.

  8. Effect of heteroscedasticity treatment in residual error models on model calibration and prediction uncertainty estimation

    NASA Astrophysics Data System (ADS)

    Sun, Ruochen; Yuan, Huiling; Liu, Xiaoli

    2017-11-01

    The heteroscedasticity treatment in residual error models directly impacts model calibration and prediction uncertainty estimation. This study compares three methods of dealing with heteroscedasticity: the explicit linear modeling (LM) method, the nonlinear modeling (NL) method using a hyperbolic tangent function, and the implicit Box-Cox transformation (BC). A combined approach (CA) is then proposed that unites the advantages of the LM and BC methods. In conjunction with the first-order autoregressive model and the skew exponential power (SEP) distribution, four residual error models are generated, namely LM-SEP, NL-SEP, BC-SEP and CA-SEP, and their corresponding likelihood functions are applied to the Variable Infiltration Capacity (VIC) hydrologic model over the Huaihe River basin, China. Results show that LM-SEP yields the poorest streamflow predictions, with the widest uncertainty band and unrealistic negative flows. The NL and BC methods deal with the heteroscedasticity better and hence their corresponding predictive performances are improved, yet the negative flows cannot be avoided. CA-SEP produces the most accurate predictions with the highest reliability and effectively avoids negative flows, because the CA approach is capable of addressing the complicated heteroscedasticity over the study basin.
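The implicit Box-Cox (BC) treatment can be illustrated in a few lines. The snippet below is a hedged sketch on synthetic data, not the paper's method or the VIC output: "flows" carry errors whose spread grows with magnitude, and the log transform (the λ = 0 case of Box-Cox) roughly equalizes the residual spread across the flow range.

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox transform; lam = 0 gives the log transform."""
    y = np.asarray(y, dtype=float)
    return np.log(y) if lam == 0.0 else (y ** lam - 1.0) / lam

# Synthetic heteroscedastic "flows" (illustrative only): error spread
# grows linearly with flow magnitude -- the situation the LM method
# models explicitly and the BC transform handles implicitly.
rng = np.random.default_rng(0)
flow = np.linspace(1.0, 100.0, 2000)
obs = np.clip(flow * (1.0 + 0.1 * rng.standard_normal(flow.size)), 1e-6, None)

raw_resid = obs - flow
bc_resid = box_cox(obs, 0.0) - box_cox(flow, 0.0)

# Residual spread in the highest vs lowest flow quartile:
lo, hi = flow < 25.0, flow > 75.0
raw_ratio = raw_resid[hi].std() / raw_resid[lo].std()  # strongly > 1
bc_ratio = bc_resid[hi].std() / bc_resid[lo].std()     # close to 1
```

After the transform the residual spread is nearly constant across the flow range, which is what lets a single SEP error distribution fit both low and high flows.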

  9. Universal behavior of the interoccurrence times between losses in financial markets: independence of the time resolution.

    PubMed

    Ludescher, Josef; Bunde, Armin

    2014-12-01

    We consider representative financial records (stocks and indices) on time scales between one minute and one day, as well as historical monthly data sets, and show that the distribution P(Q)(r) of the interoccurrence times r between losses below a negative threshold -Q, for fixed mean interoccurrence times R(Q) in multiples of the corresponding time resolutions, can be described on all time scales by the same q-exponentials, P(Q)(r) ∝ 1/[1+(q-1)βr]^(1/(q-1)). We propose that the asset- and time-scale-independent analytic form of P(Q)(r) can be regarded as an additional stylized fact of the financial markets and represents a nontrivial test for market models. We analyze the distribution P(Q)(r) as well as the autocorrelation C(Q)(s) of the interoccurrence times for three market models: (i) multiplicative random cascades, (ii) multifractal random walks, and (iii) the generalized autoregressive conditional heteroskedasticity [GARCH(1,1)] model. We find that only one of the considered models, the multifractal random walk model, approximately reproduces the q-exponential form of P(Q)(r) and the power-law decay of C(Q)(s).
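For reference, the q-exponential has the closed form P(Q)(r) ∝ [1+(q-1)βr]^(-1/(q-1)). The generic sketch below (parameter values are illustrative, not fitted to market data) shows that it reduces to the ordinary exponential as q → 1 and has a heavier, power-law-like tail for q > 1.

```python
import math

def q_exponential(r, q, beta):
    """Tsallis q-exponential decay [1 + (q-1)*beta*r]**(-1/(q-1));
    reduces to exp(-beta*r) in the limit q -> 1."""
    if abs(q - 1.0) < 1e-9:
        return math.exp(-beta * r)
    return (1.0 + (q - 1.0) * beta * r) ** (-1.0 / (q - 1.0))

# Illustrative values (not fitted to any asset):
tail_q = q_exponential(10.0, q=1.5, beta=1.0)  # power-law-like tail
tail_e = q_exponential(10.0, q=1.0, beta=1.0)  # pure exponential tail
# q -> 1 recovers the ordinary exponential:
approx = q_exponential(2.0, q=1.0001, beta=1.0)
exact = math.exp(-2.0)
```

The heavier tail (tail_q ≫ tail_e at r = 10) is what lets a single functional form fit interoccurrence times across very different time resolutions.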

  11. Comparison of Traditional and Open-Access Appointment Scheduling for Exponentially Distributed Service Time.

    PubMed

    Yan, Chongjun; Tang, Jiafu; Jiang, Bowen; Fung, Richard Y K

    2015-01-01

    This paper compares the performance measures of traditional appointment scheduling (AS) with those of an open-access appointment scheduling (OA-AS) system with exponentially distributed service time. A queueing model is formulated for the traditional AS system with no-show probability. The OA-AS models assume that all patients who call before the session begins will show up for the appointment on time. Two types of OA-AS systems are considered: one with a same-session policy and one with a same-or-next-session policy. Numerical results indicate that the superiority of OA-AS systems is not as obvious as it is under deterministic scenarios. The same-session system has a threshold of relative waiting cost, beyond which the traditional system always has higher total costs, and the same-or-next-session system is always preferable, except when the no-show probability or the weight of patients' waiting is low. It is concluded that open-access policies can be viewed as alternative approaches to mitigating the negative effects of no-show patients.

  12. Liver fibrosis: stretched exponential model outperforms mono-exponential and bi-exponential models of diffusion-weighted MRI.

    PubMed

    Seo, Nieun; Chung, Yong Eun; Park, Yung Nyun; Kim, Eunju; Hwang, Jinwoo; Kim, Myeong-Jin

    2018-07-01

    To compare the ability of diffusion-weighted imaging (DWI) parameters acquired from three different models for the diagnosis of hepatic fibrosis (HF). Ninety-five patients underwent DWI using nine b values at 3 T magnetic resonance. The hepatic apparent diffusion coefficient (ADC) from a mono-exponential model, the true diffusion coefficient (Dt), pseudo-diffusion coefficient (Dp) and perfusion fraction (f) from a bi-exponential model, and the distributed diffusion coefficient (DDC) and intravoxel heterogeneity index (α) from a stretched exponential model were compared with the pathological HF stage. For the stretched exponential model, parameters were also obtained using a dataset of six b values (DDC#, α#). The diagnostic performances of the parameters for HF staging were evaluated with Obuchowski measures and receiver operating characteristic (ROC) analysis. The measurement variability of DWI parameters was evaluated using the coefficient of variation (CoV). Diagnostic accuracy for HF staging was highest for DDC# (Obuchowski measures, 0.770 ± 0.03), and it was significantly higher than that of ADC (0.597 ± 0.05, p < 0.001), Dt (0.575 ± 0.05, p < 0.001) and f (0.669 ± 0.04, p = 0.035). The parameters from stretched exponential DWI and Dp showed higher areas under the ROC curve (AUCs) for determining significant fibrosis (≥F2) and cirrhosis (F = 4) than other parameters. However, Dp showed significantly higher measurement variability (CoV, 74.6%) than DDC# (16.1%, p < 0.001) and α# (15.1%, p < 0.001). Stretched exponential DWI is a promising method for HF staging with good diagnostic performance and fewer b-value acquisitions, allowing shorter acquisition time. • Stretched exponential DWI provides a precise and accurate model for HF staging. • Stretched exponential DWI parameters are more reliable than Dp from the bi-exponential DWI model. • Acquisition of six b values is sufficient to obtain accurate DDC and α.
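For orientation, the stretched exponential signal model is S(b) = S0·exp(-(b·DDC)^α). The noise-free sketch below (the b-values and parameter values are illustrative, not those of the study) shows that with S0 known, a straight-line fit in log-log space recovers both DDC and α.

```python
import numpy as np

# Stretched exponential DWI signal model: S(b) = S0 * exp(-(b*DDC)**alpha).
def stretched_signal(b, s0, ddc, alpha):
    return s0 * np.exp(-(b * ddc) ** alpha)

b = np.array([0.0, 200.0, 400.0, 800.0, 1500.0, 3000.0])  # s/mm^2, illustrative
true_ddc, true_alpha = 1.2e-3, 0.75
s = stretched_signal(b, 1.0, true_ddc, true_alpha)

# With S0 = 1, ln(-ln S) = alpha*ln(b) + alpha*ln(DDC), so a linear fit
# in log-log space recovers both parameters (noise-free case).
mask = b > 0
slope, intercept = np.polyfit(np.log(b[mask]), np.log(-np.log(s[mask])), 1)
fit_alpha = float(slope)
fit_ddc = float(np.exp(intercept / slope))
```

In practice noise breaks the exact linearization and nonlinear least squares is used instead, but the sketch makes clear why six well-spread b-values can already pin down DDC and α.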

  13. Proton transfer to charged platinum electrodes. A molecular dynamics trajectory study.

    PubMed

    Wilhelm, Florian; Schmickler, Wolfgang; Spohr, Eckhard

    2010-05-05

    A recently developed empirical valence bond (EVB) model for proton transfer on Pt(111) electrodes (Wilhelm et al 2008 J. Phys. Chem. C 112 10814) has been applied in molecular dynamics (MD) simulations of a water film in contact with a charged Pt surface. A total of seven negative surface charge densities σ between -7.5 and -18.9 µC cm(-2) were investigated. For each value of σ, between 30 and 84 initial conditions of a solvated proton within a water slab were sampled, and the trajectories were integrated until discharge of a proton occurred on the charged surfaces. We have calculated the mean rates for discharge and for adsorption of solvated protons within the adsorbed water layer in contact with the metal electrode as a function of surface charge density. For the less negative values of σ we observe a Tafel-like exponential increase of discharge rate with decreasing σ. At the more negative values this exponential increase levels off and the discharge process is apparently transport limited. Mechanistically, the Tafel regime corresponds to a stepwise proton transfer: first, a proton is transferred from the bulk into the contact water layer, which is followed by transfer of a proton to the charged surface and concomitant discharge. At the more negative surface charge densities the proton transfer into the contact water layer and the transfer of another proton to the surface and its discharge occur almost simultaneously.

  14. In field damage of high and low cyanogenic cassava due to a generalist insect herbivore Cyrtomenus bergi (Hemiptera: Cydnidae).

    PubMed

    Riis, Lisbeth; Bellotti, Anthony Charles; Castaño, Oscar

    2003-12-01

    The hypothesis that cyanogenic potential in cassava roots deters polyphagous insects in the field is relevant to current efforts to reduce or eliminate the cyanogenic potential in cassava. To test this hypothesis, experiments were conducted in the field under natural selection pressure of the polyphagous root feeder Cyrtomenus bergi Froeschner (Hemiptera: Cydnidae). A number of cassava varieties (33) as well as 13 cassava siblings and their parental clone, each representing a determined level of cyanogenic potential (CNP), were scored for damage caused by C. bergi and related to CNP and nonglycosidic cyanogens, measured as hydrogen cyanide. Additionally, 161 low-CNP varieties (< 50 ppm hydrogen cyanide, fresh weight) from the cassava germplasm core collection at Centro Internacional de Agricultura Tropical (CIAT) were screened for resistance/tolerance to C. bergi. Low root damage scores were registered at all levels of CNP. Nevertheless, CNP and yield (or root size) partly explained the damage in cassava siblings (r2 = 0.82) and different cassava varieties (r2 = 0.42), but only when mean values of damage scores were used. This relation was only significant in one of two crop cycles. A logistic model describes the underlying negative relation between CNP and damage. An exponential model describes the underlying negative relation between root size and damage. Damage, caused by C. bergi feeding, released nonglycosidic cyanogens, and an exponential model fits the underlying positive relation. Fifteen low-CNP clones were selected for potential resistance/tolerance against C. bergi.

  15. Multi-model Analysis of Diffusion-weighted Imaging of Normal Testes at 3.0 T: Preliminary Findings.

    PubMed

    Min, Xiangde; Feng, Zhaoyan; Wang, Liang; Cai, Jie; Li, Basen; Ke, Zan; Zhang, Peipei; You, Huijuan; Yan, Xu

    2018-04-01

    This study aimed to establish diffusion quantitative parameters (apparent diffusion coefficient [ADC], DDC, α, Dapp, and Kapp) in normal testes at 3.0 T. Sixty-four healthy volunteers in two age groups (A: 10-39 years; B: ≥40 years) underwent diffusion-weighted imaging scanning at 3.0 T. ADC1000, ADC2000, ADC3000, DDC, α, Dapp, and Kapp were calculated using the mono-exponential, stretched-exponential, and kurtosis models. The correlations between the parameters and age were analyzed. The parameters were compared between the age groups and between the right and the left testes. The average ADC1000, ADC2000, ADC3000, DDC, α, Dapp, and Kapp values did not significantly differ between the right and the left testes (P > .05 for all). The following significant correlations were found: positive correlations between age and testicular ADC1000, ADC2000, ADC3000, DDC, and Dapp (r = 0.516, 0.518, 0.518, 0.521, and 0.516, respectively; P < .01 for all) and negative correlations between age and testicular α and Kapp (r = -0.363, -0.427, respectively; P < .01 for both). Compared to group B, in group A, ADC1000, ADC2000, ADC3000, DDC, and Dapp were significantly lower (P < .05 for all), but α and Kapp were significantly higher (P < .05 for both). Our study demonstrated the applicability of the testicular mono-exponential, stretched-exponential, and kurtosis models. Our results can help establish a baseline for the normal testicular parameters in these diffusion models. The contralateral normal testis can serve as a suitable reference for evaluating abnormalities of the other side. The effect of age on these parameters requires further attention. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  16. On the Prony series representation of stretched exponential relaxation

    NASA Astrophysics Data System (ADS)

    Mauro, John C.; Mauro, Yihong Z.

    2018-09-01

    Stretched exponential relaxation is a ubiquitous feature of homogeneous glasses. The stretched exponential decay function can be derived from the diffusion-trap model, which predicts certain critical values of the fractional stretching exponent, β. In practical implementations of glass relaxation models, it is computationally convenient to represent the stretched exponential function as a Prony series of simple exponentials. Here, we perform a comprehensive mathematical analysis of the Prony series approximation of the stretched exponential relaxation, including optimized coefficients for certain critical values of β. The fitting quality of the Prony series is analyzed as a function of the number of terms in the series. With a sufficient number of terms, the Prony series can accurately capture the time evolution of the stretched exponential function, including its "fat tail" at long times. However, it is unable to capture the divergence of the first-derivative of the stretched exponential function in the limit of zero time. We also present a frequency-domain analysis of the Prony series representation of the stretched exponential function and discuss its physical implications for the modeling of glass relaxation behavior.
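A minimal numerical sketch of the Prony approximation follows; the term count and relaxation-time grid are illustrative choices, not the paper's optimized coefficients. Relaxation times are fixed on a logarithmic grid and the weights are found by ordinary least squares.

```python
import numpy as np

# Approximate the stretched exponential exp(-t**beta) by a Prony series:
# a sum of simple exponentials with fixed, log-spaced relaxation times
# and least-squares weights. Grid and term count are illustrative.
beta = 0.5                      # a critical stretching exponent
t = np.logspace(-3, 2, 400)
target = np.exp(-t ** beta)

taus = np.logspace(-4, 3, 12)   # fixed relaxation times
A = np.exp(-t[:, None] / taus[None, :])
w, *_ = np.linalg.lstsq(A, target, rcond=None)
approx = A @ w
max_err = float(np.max(np.abs(approx - target)))
```

Even this unoptimized 12-term series tracks the stretched exponential, including its "fat tail", to within a couple of percent over five decades of time, which is why Prony representations are computationally attractive in relaxation models.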

  17. Exponential order statistic models of software reliability growth

    NASA Technical Reports Server (NTRS)

    Miller, D. R.

    1985-01-01

    Failure times of a software reliability growth process are modeled as order statistics of independent, nonidentically distributed exponential random variables. The Jelinski-Moranda, Goel-Okumoto, Littlewood, Musa-Okumoto Logarithmic, and Power Law models are all special cases of Exponential Order Statistic Models, but the class contains many additional examples as well. Various characterizations, properties and examples of this class of models are developed and presented.
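A small sketch of the model class (all parameter values are illustrative): failure times are the sorted values of independent exponential draws, one per residual fault, and the Jelinski-Moranda model is recovered when all per-fault rates are equal.

```python
import numpy as np

rng = np.random.default_rng(42)

def eos_failure_times(rates, rng):
    """One realization of an Exponential Order Statistic (EOS) process:
    one independent exponential draw per residual fault, then sorted."""
    return np.sort(rng.exponential(1.0 / np.asarray(rates)))

# Jelinski-Moranda is the equal-rates special case (rates illustrative).
n_faults, lam = 50, 0.1
times = eos_failure_times(np.full(n_faults, lam), rng)

# Sanity check against theory: the first order statistic of n iid
# Exp(lam) draws is Exp(n*lam), so its mean should be near 1/(n*lam).
mean_first = float(np.mean([eos_failure_times(np.full(n_faults, lam), rng)[0]
                            for _ in range(20000)]))
```

Nonidentical rates (e.g. drawn from a gamma distribution) give the other named special cases; only the `rates` argument changes.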

  18. Statistical assessment of bi-exponential diffusion weighted imaging signal characteristics induced by intravoxel incoherent motion in malignant breast tumors

    PubMed Central

    Wong, Oi Lei; Lo, Gladys G.; Chan, Helen H. L.; Wong, Ting Ting; Cheung, Polly S. Y.

    2016-01-01

    Background The purpose of this study is to statistically assess whether the bi-exponential intravoxel incoherent motion (IVIM) model characterizes the diffusion weighted imaging (DWI) signal of malignant breast tumors better than the mono-exponential Gaussian diffusion model. Methods 3 T DWI data of 29 malignant breast tumors were retrospectively included. Linear least-square mono-exponential fitting and segmented least-square bi-exponential fitting were used for apparent diffusion coefficient (ADC) and IVIM parameter quantification, respectively. The F-test and the Akaike Information Criterion (AIC) were used to statistically assess the preference for the mono-exponential or bi-exponential model using region-of-interest (ROI)-averaged and voxel-wise analysis. Results For ROI-averaged analysis, 15 tumors were significantly better fitted by the bi-exponential function and 14 tumors exhibited mono-exponential behavior. The calculated ADC, D (true diffusion coefficient) and f (pseudo-diffusion fraction) showed no significant differences between mono-exponential and bi-exponential preferable tumors. Voxel-wise analysis revealed that 27 tumors contained more voxels exhibiting mono-exponential DWI decay, while only 2 tumors presented more bi-exponential decay voxels. ADC was consistently and significantly larger than D for both ROI-averaged and voxel-wise analysis. Conclusions Although the presence of an IVIM effect in malignant breast tumors could be suggested, statistical assessment shows that bi-exponential fitting does not necessarily better represent the DWI signal decay in breast cancer under a clinically typical acquisition protocol and signal-to-noise ratio (SNR). Our study indicates the importance of statistically examining the breast cancer DWI signal characteristics in practice. PMID:27709078

  19. Turbulent particle transport in streams: can exponential settling be reconciled with fluid mechanics?

    PubMed

    McNair, James N; Newbold, J Denis

    2012-05-07

    Most ecological studies of particle transport in streams that focus on fine particulate organic matter or benthic invertebrates use the Exponential Settling Model (ESM) to characterize the longitudinal pattern of particle settling on the bed. The ESM predicts that if particles are released into a stream, the proportion that have not yet settled will decline exponentially with transport time or distance and will be independent of the release elevation above the bed. To date, no credible basis in fluid mechanics has been established for this model, nor has it been rigorously tested against more-mechanistic alternative models. One alternative is the Local Exchange Model (LEM), which is a stochastic advection-diffusion model that includes both longitudinal and vertical spatial dimensions and is based on classical fluid mechanics. The LEM predicts that particle settling will be non-exponential in the near field but will become exponential in the far field, providing a new theoretical justification for far-field exponential settling that is based on plausible fluid mechanics. We review properties of the ESM and LEM and compare these with available empirical evidence. Most evidence supports the prediction of both models that settling will be exponential in the far field but contradicts the ESM's prediction that a single exponential distribution will hold for all transport times and distances. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Exponential Decay of Dispersion-Managed Solitons for General Dispersion Profiles

    NASA Astrophysics Data System (ADS)

    Green, William R.; Hundertmark, Dirk

    2016-02-01

    We show that any weak solution of the dispersion management equation describing dispersion-managed solitons together with its Fourier transform decay exponentially. This strong regularity result extends a recent result of Erdoğan, Hundertmark, and Lee in two directions, to arbitrary non-negative average dispersion and, more importantly, to rather general dispersion profiles, which cover most, if not all, physically relevant cases.

  1. Psychophysics of time perception and intertemporal choice models

    NASA Astrophysics Data System (ADS)

    Takahashi, Taiki; Oono, Hidemi; Radford, Mark H. B.

    2008-03-01

    Intertemporal choice and the psychophysics of time perception have been attracting attention in econophysics and neuroeconomics. Several models have been proposed for intertemporal choice: exponential discounting; general hyperbolic discounting (exponential discounting with the logarithmic time perception of the Weber-Fechner law, i.e., a q-exponential discount model based on Tsallis's statistics); simple hyperbolic discounting; and Stevens' power law-exponential discounting (exponential discounting with Stevens' power time perception). In order to examine the fitness of these models to behavioral data, we estimated the parameters and AICc (Akaike Information Criterion with small-sample correction) of the intertemporal choice models by assessing the points of subjective equality (indifference points) at seven delays. Our results show that the order of goodness-of-fit for both group and individual data was [Weber-Fechner discounting (general hyperbola) > Stevens' power law discounting > simple hyperbolic discounting > exponential discounting], indicating that human time perception in intertemporal choice may follow the Weber-Fechner law. Implications of the results for neuropsychopharmacological treatments of addiction and for the biophysical processing underlying temporal discounting and time perception are discussed.
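The model-comparison exercise can be sketched on synthetic rather than behavioral data: indifference points generated by a simple hyperbolic discounter are fitted by both the hyperbolic and the exponential discount functions, and the hyperbolic fit wins on summed squared error. The delays and parameter values below are illustrative, not the study's data.

```python
import numpy as np

# Synthetic indifference points from a hyperbolic discounter,
# V(t) = 1 / (1 + k*t); delays in days (illustrative).
delays = np.array([1.0, 7.0, 30.0, 90.0, 180.0, 365.0, 730.0])
true_k = 0.01
data = 1.0 / (1.0 + true_k * delays)

def sse(pred):
    return float(np.sum((pred - data) ** 2))

# One free parameter each, so a grid search over k suffices here.
ks = np.linspace(1e-4, 0.05, 2000)
sse_hyp = min(sse(1.0 / (1.0 + k * delays)) for k in ks)
sse_exp = min(sse(np.exp(-k * delays)) for k in ks)
```

With equal parameter counts and sample sizes, the AICc ordering reduces to the SSE ordering, so the hyperbolic model is preferred here by construction; on real choice data the same machinery ranks all four candidate models.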

  2. Non-thermal plasma destruction of allyl alcohol in waste gas: kinetics and modelling

    NASA Astrophysics Data System (ADS)

    DeVisscher, A.; Dewulf, J.; Van Durme, J.; Leys, C.; Morent, R.; Van Langenhove, H.

    2008-02-01

    Non-thermal plasma treatment is a promising technique for the destruction of volatile organic compounds in waste gas. A relatively unexplored technique is the atmospheric negative dc multi-pin-to-plate glow discharge. This paper reports experimental results on allyl alcohol degradation and ozone production in this type of plasma. A new model was developed to describe these processes quantitatively. The model contains a detailed chemical degradation scheme, and describes the physics of the plasma by assuming that the fraction of electrons that takes part in chemical reactions is an exponential function of the reduced field. The model captured the experimental kinetic data with a standard deviation of less than 2 ppm.

  3. Suppression of superconductivity in disordered interacting wires.

    PubMed

    Pesin, D A; Andreev, A V

    2006-09-15

    We study superconductivity suppression due to thermal fluctuations in disordered wires using the replica nonlinear sigma model (NLσM). We show that in addition to the thermal phase slips there is another type of fluctuation that results in a finite resistivity. These fluctuations are described by saddle points of the NLσM and cannot be treated within the Ginzburg-Landau approach. The contribution of such fluctuations to the wire resistivity is evaluated with exponential accuracy. The magnetoresistance associated with this contribution is negative.

  4. Stretched-to-compressed-exponential crossover observed in the electrical degradation kinetics of some spinel-metallic screen-printed structures

    NASA Astrophysics Data System (ADS)

    Balitska, V.; Shpotyuk, O.; Brunner, M.; Hadzaman, I.

    2018-02-01

    Thermally-induced (170 °C) degradation-relaxation kinetics is examined in screen-printed structures composed of spinel Cu0.1Ni0.1Co1.6Mn1.2O4 ceramics with conductive Ag or Ag-Pd layered electrodes. Structural inhomogeneities due to Ag and Ag-Pd diffusants in the spinel phase environment play a decisive role in the non-exponential kinetics of the negative relative resistance drift. If Ag migration in the spinel is inhibited by Pd addition due to the Ag-Pd alloy, the kinetics attains stretched exponential behavior with an exponent of ∼0.58, typical for one-stage diffusion in structurally-dispersive media. Under deep Ag penetration into the spinel ceramics, as for thick films with Ag-layered electrodes, the degradation kinetics drastically changes, attaining the features of a two-step diffusion process governed by a compressed-exponential dependence with a power index of ∼1.68. The crossover from stretched- to compressed-exponential kinetics in spinel-metallic structures is mapped onto the free energy landscape of a non-barrier multi-well system under strong perturbation from equilibrium, showing a transition with a characteristic downhill scenario resulting in faster-than-exponential decay.

  5. A Modified Tri-Exponential Model for Multi-b-value Diffusion-Weighted Imaging: A Method to Detect the Strictly Diffusion-Limited Compartment in Brain

    PubMed Central

    Zeng, Qiang; Shi, Feina; Zhang, Jianmin; Ling, Chenhan; Dong, Fei; Jiang, Biao

    2018-01-01

    Purpose: To present a new modified tri-exponential model for diffusion-weighted imaging (DWI) to detect the strictly diffusion-limited compartment, and to compare it with the conventional bi- and tri-exponential models. Methods: Multi-b-value diffusion-weighted imaging (DWI) with 17 b-values up to 8,000 s/mm2 was performed on six volunteers. The corrected Akaike information criterion (AICc) and squared predicted errors (SPE) were calculated to compare the three models. Results: The mean f0 values ranged from 11.9% to 18.7% in white matter ROIs and from 1.2% to 2.7% in gray matter ROIs. In all white matter ROIs, the AICcs of the modified tri-exponential model were the lowest (p < 0.05 for five ROIs), indicating that the new model has the best fit among these models; the SPEs of the bi-exponential model were the highest (p < 0.05), suggesting the bi-exponential model is unable to predict the signal intensity at ultra-high b-values. The mean ADCvery-slow values were extremely low in white matter (1-7 × 10−6 mm2/s), but not in gray matter (251-445 × 10−6 mm2/s), indicating that the conventional tri-exponential model fails to represent a special compartment. Conclusions: The strictly diffusion-limited compartment may be an important component in white matter. The new model fits better than the other two models, and may provide additional information. PMID:29535599

  6. Interspike interval correlation in a stochastic exponential integrate-and-fire model with subthreshold and spike-triggered adaptation.

    PubMed

    Shiau, LieJune; Schwalger, Tilo; Lindner, Benjamin

    2015-06-01

    We study the spike statistics of an adaptive exponential integrate-and-fire neuron stimulated by white Gaussian current noise. We derive analytical approximations for the coefficient of variation and the serial correlation coefficient of the interspike interval assuming that the neuron operates in the mean-driven tonic firing regime and that the stochastic input is weak. Our result for the serial correlation coefficient has the form of a geometric sequence and is confirmed by the comparison to numerical simulations. The theory predicts various patterns of interval correlations (positive or negative at lag one, monotonically decreasing or oscillating) depending on the strength of the spike-triggered and subthreshold components of the adaptation current. In particular, for pure subthreshold adaptation we find strong positive ISI correlations that are usually ascribed to positive correlations in the input current. Our results i) provide an alternative explanation for interspike-interval correlations observed in vivo, ii) may be useful in fitting point neuron models to experimental data, and iii) may be instrumental in exploring the role of adaptation currents for signal detection and signal transmission in single neurons.
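A simplified illustration of adaptation-induced interval correlations follows. This hedged sketch omits the exponential spike-initiation term of the full AdEx model and uses illustrative parameters, not the paper's: a long interspike interval lets the adaptation variable decay, making the next interval shorter, which shows up as a negative lag-one serial correlation coefficient.

```python
import numpy as np

# Leaky integrate-and-fire neuron with a spike-triggered adaptation
# current, driven by mean input plus white noise. The exponential
# spike-initiation term of AdEx is omitted; parameters illustrative.
rng = np.random.default_rng(1)
dt, n_steps = 0.01, 1_000_000
tau_m, tau_w = 1.0, 5.0          # membrane and adaptation time constants
mu, b_w = 2.0, 0.3               # suprathreshold drive, adaptation jump
noise = np.sqrt(2.0 * 0.1 * dt) * rng.standard_normal(n_steps)

v, w, spikes = 0.0, 0.0, []
for i in range(n_steps):
    v += dt * (-v + mu - w) / tau_m + noise[i]
    w += dt * (-w / tau_w)
    if v >= 1.0:                 # threshold crossing: spike, reset, adapt
        spikes.append(i * dt)
        v = 0.0
        w += b_w

isi = np.diff(spikes)
d = isi - isi.mean()
rho1 = float(np.sum(d[:-1] * d[1:]) /
             np.sqrt(np.sum(d[:-1] ** 2) * np.sum(d[1:] ** 2)))
```

With pure spike-triggered adaptation in the mean-driven regime, rho1 comes out negative, in line with the pattern the paper's analytical theory predicts for that parameter corner; the paper additionally covers subthreshold adaptation, which can flip the sign.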

  7. Investigation of hyperelastic models for nonlinear elastic behavior of demineralized and deproteinized bovine cortical femur bone.

    PubMed

    Hosseinzadeh, M; Ghoreishi, M; Narooei, K

    2016-06-01

    In this study, the hyperelastic models of demineralized and deproteinized bovine cortical femur bone were investigated and appropriate models were developed. Using uniaxial compression test data, the strain energy versus stretch was calculated and appropriate hyperelastic strain energy functions were fitted to the data in order to calculate the material parameters. To obtain the mechanical behavior under other loading conditions, the hyperelastic strain energy equations were investigated for pure shear and equi-biaxial tension loadings. The results showed that the Mooney-Rivlin and Ogden models cannot predict the mechanical response of demineralized and deproteinized bovine cortical femur bone accurately, while the general exponential-exponential and general exponential-power law models show good agreement with the experimental results. To investigate the sensitivity of the hyperelastic models, a variation of 10% in the material parameters was performed and the results indicated acceptable stability for the general exponential-exponential and general exponential-power law models. Finally, the uniaxial tension and compression of cortical femur bone were studied using the finite element method in a VUMAT user subroutine of the ABAQUS software, and the computed stress-stretch curves showed good agreement with the experimental data. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. A Stochastic Super-Exponential Growth Model for Population Dynamics

    NASA Astrophysics Data System (ADS)

    Avila, P.; Rekker, A.

    2010-11-01

    A super-exponential growth model with environmental noise has been studied analytically. Super-exponential growth rate is a property of dynamical systems exhibiting endogenous nonlinear positive feedback, i.e., of self-reinforcing systems. Environmental noise acts on the growth rate multiplicatively and is assumed to be Gaussian white noise in the Stratonovich interpretation. An analysis of the stochastic super-exponential growth model with derivations of exact analytical formulae for the conditional probability density and the mean value of the population abundance are presented. Interpretations and various applications of the results are discussed.

  9. Is the shape of the decline in risk following quitting smoking similar for squamous cell carcinoma and adenocarcinoma of the lung? A quantitative review using the negative exponential model.

    PubMed

    Fry, John S; Lee, Peter N; Forey, Barbara A; Coombs, Katharine J

    2015-06-01

    One possible contributor to the reported rise in the ratio of adenocarcinoma to squamous cell carcinoma of the lung may be differences in the pattern of decline in risk following quitting for the two lung cancer types. Earlier, using data from 85 studies comparing overall lung cancer risks in current smokers, quitters (by time quit) and never smokers, we fitted the negative exponential model, deriving an estimate of 9.93 years for the half-life - the time when the excess risk for quitters compared to never smokers becomes half that for continuing smokers. Here we applied the same techniques to data from 16 studies providing RRs specific for lung cancer type. From the 13 studies where the half-life was estimable for each type, we derived estimates of 11.68 years (95% CI 10.22-13.34) for squamous cell carcinoma and 14.45 years (11.92-17.52) for adenocarcinoma. The ratio of the half-lives was estimated as 1.32 (95% CI 1.20-1.46, p<0.001). The slower decline in quitters for adenocarcinoma, evident in subgroups by sex, age and other factors, may be one of the factors contributing to the reported rise in the ratio of adenocarcinoma to squamous cell carcinoma. Others include changes in the diagnosis and classification of lung cancer. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
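
    A minimal sketch of the negative exponential model described above: the excess risk in quitters (relative to never smokers, as a fraction of the continuing smoker's excess) decays as exp(-ln2 * t / H), where H is the half-life. The function name and the 10-year example are illustrative only.

```python
import math

def excess_risk_fraction(years_quit, half_life):
    """Negative exponential model: fraction of the continuing smoker's
    excess risk (over never smokers) remaining t years after quitting."""
    return math.exp(-math.log(2) * years_quit / half_life)

# Illustrative comparison using the half-lives estimated in the review:
# 11.68 y for squamous cell carcinoma, 14.45 y for adenocarcinoma.
f_squam = excess_risk_fraction(10.0, 11.68)
f_adeno = excess_risk_fraction(10.0, 14.45)
```

    Ten years after quitting, a larger fraction of the excess adenocarcinoma risk remains, consistent with the slower decline reported above.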

  10. A comparative assessment of preclinical chemotherapeutic response of tumors using quantitative non-Gaussian diffusion MRI

    PubMed Central

    Xu, Junzhong; Li, Ke; Smith, R. Adam; Waterton, John C.; Zhao, Ping; Ding, Zhaohua; Does, Mark D.; Manning, H. Charles; Gore, John C.

    2016-01-01

    Background Diffusion-weighted MRI (DWI) signal attenuation is often not mono-exponential (i.e. non-Gaussian diffusion) at stronger diffusion weighting. Several non-Gaussian diffusion models have been developed and may provide new information or higher sensitivity compared with the conventional apparent diffusion coefficient (ADC) method. However, the relative merits of these models for detecting tumor therapeutic response are not fully clear. Methods Conventional ADC and three widely-used non-Gaussian models (bi-exponential, stretched exponential, and statistical model) were implemented and compared for assessing SW620 human colon cancer xenografts responding to barasertib, an agent known to induce apoptosis via polyploidy. The Bayesian Information Criterion (BIC) was used for model selection among the three non-Gaussian models. Results Tumor volume, histology, conventional ADC, and all three non-Gaussian DWI models showed significant differences between control and treatment groups after four days of treatment. However, only the non-Gaussian models detected significant changes after two days of treatment. For every treatment or control group, over 65.7% of tumor voxels indicated that the bi-exponential model is strongly or very strongly preferred. Conclusion Non-Gaussian DWI model-derived biomarkers are capable of detecting chemotherapeutic response of tumors earlier than conventional ADC and tumor volume. The bi-exponential model provides better fitting than the statistical and stretched exponential models for the tumor and treatment models used in the current work. PMID:27919785
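
    The non-Gaussian signal models and the BIC criterion mentioned above can be written compactly as below. Parameter names (adc, ddc, alpha, f, d_fast, d_slow) are our own notation, the fitting step itself is omitted, and the b-values are illustrative.

```python
import numpy as np

# Normalized DWI signal decay models, S(b)/S0, as commonly defined
# in the non-Gaussian diffusion literature.
def mono_exp(b, adc):                  # conventional ADC model
    return np.exp(-b * adc)

def stretched_exp(b, ddc, alpha):      # stretched exponential
    return np.exp(-(b * ddc) ** alpha)

def bi_exp(b, f, d_fast, d_slow):      # bi-exponential (two pools)
    return f * np.exp(-b * d_fast) + (1 - f) * np.exp(-b * d_slow)

def bic(residuals, n_params):
    """Bayesian Information Criterion from least-squares residuals
    (Gaussian-error form); lower BIC indicates the preferred model."""
    n = len(residuals)
    rss = float(np.sum(np.square(residuals)))
    return n * np.log(rss / n) + n_params * np.log(n)

# Example decay curve at typical b-values (s/mm^2):
b = np.array([0.0, 500.0, 1000.0, 1500.0, 2000.0])
signal = bi_exp(b, 0.3, 2.0e-3, 2.0e-4)
```

    After fitting each model per voxel, comparing their BIC values voxel-wise reproduces the kind of model-preference map described in the results.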

  11. Proceedings of the Third International Workshop on Multistrategy Learning, May 23-25 Harpers Ferry, WV.

    DTIC Science & Technology

    1996-09-16

    ...approaches are: adaptive filtering; single exponential smoothing (Brown, 1963); the Box-Jenkins methodology, i.e. ARIMA modeling (Box and Jenkins, 1976); linear exponential smoothing, i.e. Holt's two-parameter approach (Holt et al., 1960); and Winters' three-parameter method (Winters, 1960). However, there are two very crucial disadvantages: the most important point in ARIMA modeling is model identification. As shown in...
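
    For reference, two of the smoothing approaches named in the list can be sketched in a few lines. This is a generic textbook formulation of Brown's single exponential smoothing and Holt's two-parameter linear method, not code from the proceedings.

```python
def single_exponential_smoothing(series, alpha):
    """Brown's single exponential smoothing:
       s_t = alpha * x_t + (1 - alpha) * s_{t-1}."""
    s = [series[0]]
    for x in series[1:]:
        s.append(alpha * x + (1 - alpha) * s[-1])
    return s

def holt_linear(series, alpha, beta):
    """Holt's two-parameter exponential smoothing (level + trend)."""
    level, trend = series[0], series[1] - series[0]
    fitted = [level]
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        fitted.append(level)
    return fitted, level, trend

# On a perfectly linear series, Holt's method tracks it exactly.
data = [2.0, 4.0, 6.0, 8.0, 10.0]
fitted, level, trend = holt_linear(data, 0.5, 0.5)
```

    Unlike these recursive methods, ARIMA modeling requires an explicit model-identification step, which is the disadvantage the excerpt alludes to.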

  12. Use of the Exponential and Exponentiated Demand Equations to Assess the Behavioral Economics of Negative Reinforcement

    PubMed Central

    Fragale, Jennifer E. C.; Beck, Kevin D.; Pang, Kevin C. H.

    2017-01-01

    Abnormal motivation and hedonic assessment of aversive stimuli are symptoms of anxiety and depression. Symptoms influenced by motivation and anhedonia predict treatment success or resistance. Therefore, a translational approach to the study of negatively motivated behaviors is needed. We describe a novel use of behavioral economics demand curve analysis to investigate negative reinforcement in animals that separates hedonic assessment of footshock termination (i.e., relief) from motivation to escape footshock. In outbred Sprague Dawley (SD) rats, relief increased as shock intensity increased. Likewise, motivation to escape footshock increased as shock intensity increased. To demonstrate the applicability to anxiety disorders, hedonic and motivational components of negative reinforcement were investigated in anxiety vulnerable Wistar Kyoto (WKY) rats. WKY rats demonstrated increased motivation for shock cessation with no difference in relief as compared to control SD rats, consistent with a negative bias for motivation in anxiety vulnerability. Moreover, motivation was positively correlated with relief in SD, but not in WKY. This study is the first to assess the hedonic and motivational components of negative reinforcement using behavioral economic analysis. This procedure can be used to investigate positive and negative reinforcement in humans and animals to gain a better understanding of the importance of motivated behavior in stress-related disorders. PMID:28270744
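
    The demand-curve analysis named in the title presumably builds on the exponential demand equation of Hursh and Silberberg and its exponentiated variant (Koffarnus et al.); a sketch of the exponentiated form is below. The parameter values are illustrative, not the study's estimates.

```python
import math

def exponentiated_demand(cost, q0, alpha, k):
    """Exponentiated demand equation (exponentiated variant of the
    Hursh-Silberberg exponential demand model):
        Q = Q0 * 10**(k * (exp(-alpha * Q0 * C) - 1))
    Unlike the log form, it handles zero-consumption observations
    directly. q0 = demand intensity, alpha = demand elasticity."""
    return q0 * 10 ** (k * (math.exp(-alpha * q0 * cost) - 1))

# Consumption equals Q0 at zero cost and decays as cost rises.
q_free = exponentiated_demand(0.0, q0=100.0, alpha=0.005, k=2.0)
q_costly = exponentiated_demand(10.0, q0=100.0, alpha=0.005, k=2.0)
```

    In the negative-reinforcement setting above, q0 plays the role of the hedonic value (relief) and alpha the motivation-related elasticity of escape responding.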

  13. Exponential model for option prices: Application to the Brazilian market

    NASA Astrophysics Data System (ADS)

    Ramos, Antônio M. T.; Carvalho, J. A.; Vasconcelos, G. L.

    2016-03-01

    In this paper we report an empirical analysis of the Ibovespa index of the São Paulo Stock Exchange and its respective option contracts. We compare the empirical data on the Ibovespa options with two option pricing models, namely the standard Black-Scholes model and an empirical model that assumes that the returns are exponentially distributed. It is found that at times near the option expiration date the exponential model performs better than the Black-Scholes model, in the sense that it fits the empirical data better than does the latter model.
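
    For context, the Black-Scholes benchmark the paper compares against can be computed in closed form. This is the standard European call formula (using math.erf for the normal CDF), with illustrative inputs rather than Ibovespa data.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(s, k, t, r, sigma):
    """Black-Scholes price of a European call: spot s, strike k,
    time to expiry t (years), risk-free rate r, volatility sigma."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

# Illustrative at-the-money example.
price = black_scholes_call(s=100.0, k=100.0, t=0.25, r=0.1, sigma=0.2)
```

    The exponential model studied in the paper replaces the Gaussian return distribution implicit in this formula with an exponential one; that pricing formula is not reproduced here.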

  14. Possible stretched exponential parametrization for humidity absorption in polymers.

    PubMed

    Hacinliyan, A; Skarlatos, Y; Sahin, G; Atak, K; Aybar, O O

    2009-04-01

    Polymer thin films have irregular transient current characteristics under constant voltage. In hydrophilic and hydrophobic polymers, the irregularity is also known to depend on the humidity absorbed by the polymer sample. Different stretched exponential models are studied and it is shown that the absorption of humidity as a function of time can be adequately modelled by a class of these stretched exponential absorption models.
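
    A common stretched-exponential (Kohlrausch-Williams-Watts type) parametrization of absorption is sketched below; the exact functional form and parameters used in the paper may differ, so treat this as a generic illustration.

```python
import math

def stretched_exponential_uptake(t, m_inf, tau, beta):
    """Stretched-exponential absorption as a function of time:
        M(t) = M_inf * (1 - exp(-(t / tau)**beta)),  0 < beta <= 1.
    beta = 1 recovers simple first-order (exponential) uptake."""
    return m_inf * (1.0 - math.exp(-((t / tau) ** beta)))

# At t = tau the uptake reaches the fraction 1 - 1/e of saturation,
# independently of the stretching exponent beta.
m_at_tau = stretched_exponential_uptake(2.0, m_inf=1.0, tau=2.0, beta=0.6)
```

    Fitting beta and tau to measured humidity-uptake curves is the kind of parametrization exercise the abstract describes.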

  15. Negative Correlations in Visual Cortical Networks

    PubMed Central

    Chelaru, Mircea I.; Dragoi, Valentin

    2016-01-01

    The amount of information encoded by cortical circuits depends critically on the capacity of nearby neurons to exhibit trial-to-trial (noise) correlations in their responses. Depending on their sign and relationship to signal correlations, noise correlations can either increase or decrease the population code accuracy relative to uncorrelated neuronal firing. Whereas positive noise correlations have been extensively studied using experimental and theoretical tools, the functional role of negative correlations in cortical circuits has remained elusive. We addressed this issue by performing multiple-electrode recording in the superficial layers of the primary visual cortex (V1) of alert monkey. Despite the fact that positive noise correlations decayed exponentially with the difference in the orientation preference between cells, negative correlations were uniformly distributed across the population. Using a statistical model for Fisher Information estimation, we found that a mild increase in negative correlations causes a sharp increase in network accuracy even when mean correlations were held constant. To examine the variables controlling the strength of negative correlations, we implemented a recurrent spiking network model of V1. We found that increasing local inhibition and reducing excitation causes a decrease in the firing rates of neurons while increasing the negative noise correlations, which in turn increase the population signal-to-noise ratio and network accuracy. Altogether, these results contribute to our understanding of the neuronal mechanism involved in the generation of negative correlations and their beneficial impact on cortical circuit function. PMID:25217468

  16. Friends or Foes? Relational Dissonance and Adolescent Psychological Wellbeing

    PubMed Central

    Bond, Lyndal; Lusher, Dean; Williams, Ian; Butler, Helen

    2014-01-01

    The interaction of positive and negative relationships (i.e. I like you, but you dislike me – referred to as relational dissonance) is an underexplored phenomenon. Further, it is often only poor (or negative) mental health that is examined in relation to social networks, with little regard for positive psychological wellbeing. Finally, these issues are compounded by methodological constraints. This study explores a new concept of relational dissonance alongside mutual antipathies and friendships and their association with mental health using multivariate exponential random graph models with an Australian sample of secondary school students. Results show male students with relationally dissonant ties have lower positive mental health measures. Girls with relationally dissonant ties have lower depressed mood, but those girls being targeted by negative ties are more likely to have depressed mood. These findings have implications for the development of interventions focused on promoting adolescent wellbeing and consideration of the appropriate measurement of wellbeing and mental illness. PMID:24498257

  17. Universality in stochastic exponential growth.

    PubMed

    Iyer-Biswas, Srividya; Crooks, Gavin E; Scherer, Norbert F; Dinner, Aaron R

    2014-07-11

    Recent imaging data for single bacterial cells reveal that their mean sizes grow exponentially in time and that their size distributions collapse to a single curve when rescaled by their means. An analogous result holds for the division-time distributions. A model is needed to delineate the minimal requirements for these scaling behaviors. We formulate a microscopic theory of stochastic exponential growth as a Master Equation that accounts for these observations, in contrast to existing quantitative models of stochastic exponential growth (e.g., the Black-Scholes equation or geometric Brownian motion). Our model, the stochastic Hinshelwood cycle (SHC), is an autocatalytic reaction cycle in which each molecular species catalyzes the production of the next. By finding exact analytical solutions to the SHC and the corresponding first passage time problem, we uncover universal signatures of fluctuations in exponential growth and division. The model makes minimal assumptions, and we describe how more complex reaction networks can reduce to such a cycle. We thus expect similar scalings to be discovered in stochastic processes resulting in exponential growth that appear in diverse contexts such as cosmology, finance, technology, and population growth.

  19. Tropisms of Avena coleoptiles: sine law for gravitropism, exponential law for photogravitropic equilibrium.

    PubMed

    Galland, Paul

    2002-09-01

    The quantitative relation between gravitropism and phototropism was analyzed for light-grown coleoptiles of Avena sativa (L.). With respect to gravitropism the coleoptiles obeyed the sine law. To study the interaction between light and gravity, coleoptiles were inclined at variable angles and irradiated for 7 h with unilateral blue light (466 nm) impinging at right angles relative to the axis of the coleoptile. The phototropic stimulus was applied from the side opposite to the direction of gravitropic bending. The fluence rate that was required to counteract the negative gravitropism increased exponentially with the sine of the inclination angle. To achieve balance, a linear increase in the gravitropic stimulus required compensation by an exponential increase in the counteracting phototropic stimulus. The establishment of photogravitropic equilibrium during continuous unilateral irradiation is thus determined by two different laws: the well-known sine law for gravitropism and a novel exponential law for phototropism described in this work.

  20. Two Components of Voltage-Dependent Inactivation in Cav1.2 Channels Revealed by Its Gating Currents

    PubMed Central

    Ferreira, Gonzalo; Ríos, Eduardo; Reyes, Nicolás

    2003-01-01

    Voltage-dependent inactivation (VDI) was studied through its effects on the voltage sensor in Cav1.2 channels expressed in tsA 201 cells. Two kinetically distinct phases of VDI in onset and recovery suggest the presence of dual VDI processes. Upon increasing duration of conditioning depolarizations, the half-distribution potential (V1/2) of intramembranous mobile charge was negatively shifted as a sum of two exponential terms, with time constants 0.5 s and 4 s, and relative amplitudes near 50% each. This kinetic behavior was consistent with that of the increment of maximal charge related to inactivation (Qn). Recovery from inactivation was also accompanied by a reduction of Qn that varied with recovery time as a sum of two exponentials. The amplitudes of the corresponding exponential terms were strongly correlated in onset and recovery, indicating that channels recover rapidly from fast VDI and slowly from slow VDI. Similar to charge “immobilization,” the charge moved in the repolarization (OFF) transient became slower during onset of fast VDI. Slow VDI had, instead, hallmarks of interconversion of charge. Confirming the mechanistic duality, fast VDI virtually disappeared when Li+ carried the current. A nine-state model with parallel fast and slow inactivation pathways from the open state reproduces most of the observations. PMID:12770874

  1. Phenomenology of stochastic exponential growth

    NASA Astrophysics Data System (ADS)

    Pirjol, Dan; Jafarpour, Farshid; Iyer-Biswas, Srividya

    2017-06-01

    Stochastic exponential growth is observed in a variety of contexts, including molecular autocatalysis, nuclear fission, population growth, inflation of the universe, viral social media posts, and financial markets. Yet literature on modeling the phenomenology of these stochastic dynamics has predominantly focused on one model, geometric Brownian motion (GBM), which can be described as the solution of a Langevin equation with linear drift and linear multiplicative noise. Using recent experimental results on stochastic exponential growth of individual bacterial cell sizes, we motivate the need for a more general class of phenomenological models of stochastic exponential growth, which are consistent with the observation that the mean-rescaled distributions are approximately stationary at long times. We show that this behavior is not consistent with GBM, instead it is consistent with power-law multiplicative noise with positive fractional powers. Therefore, we consider this general class of phenomenological models for stochastic exponential growth, provide analytical solutions, and identify the important dimensionless combination of model parameters, which determines the shape of the mean-rescaled distribution. We also provide a prescription for robustly inferring model parameters from experimentally observed stochastic growth trajectories.
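
    The Langevin dynamics discussed above (linear drift with power-law multiplicative noise, reducing to GBM when the noise exponent equals 1) can be simulated with a simple Euler-Maruyama sketch. Step size, parameter values, and function names are illustrative choices, not the authors' code.

```python
import numpy as np

def euler_maruyama(x0, drift_rate, noise_amp, alpha, dt, n_steps, rng):
    """Simulate dX = a*X dt + b*X**alpha dW by the Euler-Maruyama scheme.
    alpha = 1 recovers geometric Brownian motion; fractional alpha gives
    the power-law multiplicative noise discussed in the text."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        x[i + 1] = x[i] + drift_rate * x[i] * dt + noise_amp * x[i]**alpha * dw
    return x

rng = np.random.default_rng(1)
gbm_path = euler_maruyama(1.0, 0.5, 0.1, 1.0, 1e-3, 2000, rng)   # GBM
frac_path = euler_maruyama(1.0, 0.5, 0.1, 0.75, 1e-3, 2000, rng) # fractional
```

    Rescaling an ensemble of such trajectories by the ensemble mean at each time and inspecting whether the rescaled distribution becomes stationary is the diagnostic the paper uses to discriminate between the two noise models.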

  2. Parameter estimation and order selection for an empirical model of VO2 on-kinetics.

    PubMed

    Alata, O; Bernard, O

    2007-04-27

    In humans, VO2 on-kinetics are noisy numerical signals that reflect the pulmonary oxygen exchange kinetics at the onset of exercise. They are empirically modelled as a sum of an offset and delayed exponentials. The number of delayed exponentials, i.e. the order of the model, is commonly supposed to be 1 for low-intensity exercises and 2 for high-intensity exercises. As no ground truth has ever been provided to validate these postulates, physiologists still need statistical methods to verify their hypotheses about the number of exponentials of the VO2 on-kinetics, especially in the case of high-intensity exercises. Our objectives are first to develop accurate methods for estimating the parameters of the model at a fixed order, and then to propose statistical tests for selecting the appropriate order. In this paper, we provide, on simulated data, the performance of simulated annealing for estimating model parameters and the performance of information criteria for selecting the order. These simulated data are generated with both single-exponential and double-exponential models and corrupted by white Gaussian noise. The performances are given at various signal-to-noise ratios (SNR). Considering parameter estimation, results show that the confidence of the estimated parameters improves with increasing SNR of the response to be fitted. Considering model selection, results show that information criteria are appropriate statistical criteria for selecting the number of exponentials.

  3. Is it growing exponentially fast? -- Impact of assuming exponential growth for characterizing and forecasting epidemics with initial near-exponential growth dynamics.

    PubMed

    Chowell, Gerardo; Viboud, Cécile

    2016-10-01

    The increasing use of mathematical models for epidemic forecasting has highlighted the importance of designing models that capture the baseline transmission characteristics in order to generate reliable epidemic forecasts. Improved models for epidemic forecasting could be achieved by identifying signature features of epidemic growth, which could inform the design of models of disease spread and reveal important characteristics of the transmission process. In particular, it is often taken for granted that the early growth phase of different growth processes in nature follow early exponential growth dynamics. In the context of infectious disease spread, this assumption is often convenient to describe a transmission process with mass action kinetics using differential equations and generate analytic expressions and estimates of the reproduction number. In this article, we carry out a simulation study to illustrate the impact of incorrectly assuming an exponential-growth model to characterize the early phase (e.g., 3-5 disease generation intervals) of an infectious disease outbreak that follows near-exponential growth dynamics. Specifically, we assess the impact on: 1) goodness of fit, 2) bias on the growth parameter, and 3) the impact on short-term epidemic forecasts. Designing transmission models and statistical approaches that more flexibly capture the profile of epidemic growth could lead to enhanced model fit, improved estimates of key transmission parameters, and more realistic epidemic forecasts.
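
    A common way to interpolate between exponential and sub-exponential early growth is the generalized-growth model dC/dt = r*C**p, which has a closed-form solution; the sketch below is a generic illustration of that model, not the authors' code, and the parameter values are arbitrary.

```python
import math

def generalized_growth(t, c0, r, p):
    """Cumulative case count under the generalized-growth model
    dC/dt = r * C**p. p = 1 gives exponential growth; 0 <= p < 1
    gives sub-exponential (e.g. polynomial) growth, with solution
        C(t) = (c0**(1-p) + r*(1-p)*t)**(1/(1-p))."""
    if p == 1.0:
        return c0 * math.exp(r * t)
    return (c0 ** (1.0 - p) + r * (1.0 - p) * t) ** (1.0 / (1.0 - p))

# After 10 generation intervals, the exponential assumption (p = 1)
# predicts far more cases than a near-exponential process (p = 0.8).
exp_cases = generalized_growth(10.0, 1.0, 0.5, 1.0)
sub_cases = generalized_growth(10.0, 1.0, 0.5, 0.8)
```

    Fitting p jointly with r, instead of fixing p = 1, is precisely the flexibility the article argues reduces bias in growth-parameter estimates and improves short-term forecasts.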

  4. A gamma variate model that includes stretched exponential is a better fit for gastric emptying data from mice

    PubMed Central

    Bajzer, Željko; Gibbons, Simon J.; Coleman, Heidi D.; Linden, David R.

    2015-01-01

    Noninvasive breath tests for gastric emptying are important techniques for understanding the changes in gastric motility that occur in disease or in response to drugs. Mice are often used as an animal model; however, the gamma variate model currently used for data analysis does not always fit the data appropriately. The aim of this study was to determine appropriate mathematical models to better fit mouse gastric emptying data including when two peaks are present in the gastric emptying curve. We fitted 175 gastric emptying data sets with two standard models (gamma variate and power exponential), with a gamma variate model that includes stretched exponential and with a proposed two-component model. The appropriateness of the fit was assessed by the Akaike Information Criterion. We found that extension of the gamma variate model to include a stretched exponential improves the fit, which allows for a better estimation of T_1/2 and T_lag. When two distinct peaks in gastric emptying are present, a two-component model is required for the most appropriate fit. We conclude that use of a stretched exponential gamma variate model and when appropriate a two-component model will result in a better estimate of physiologically relevant parameters when analyzing mouse gastric emptying data. PMID:26045615

  5. Constraining f(T) teleparallel gravity by big bang nucleosynthesis: f(T) cosmology and BBN.

    PubMed

    Capozziello, S; Lambiase, G; Saridakis, E N

    2017-01-01

    We use Big Bang Nucleosynthesis (BBN) observational data on the primordial abundance of light elements to constrain f(T) gravity. The three most studied viable f(T) models, namely the power law, the exponential and the square-root exponential, are considered, and the BBN bounds are adopted in order to extract constraints on their free parameters. For the power-law model, we find that the constraints are in agreement with those obtained using late-time cosmological data. For the exponential and the square-root exponential models, we show that for reliable regions of parameter space they always satisfy the BBN bounds. We conclude that viable f(T) models can successfully satisfy the BBN constraints.

  6. The matrix exponential in transient structural analysis

    NASA Technical Reports Server (NTRS)

    Minnetyan, Levon

    1987-01-01

    The primary usefulness of the presented theory is its ability to represent the effects of high-frequency linear response accurately without requiring very small time steps in the analysis of dynamic response. The matrix exponential contains a series approximation to the dynamic model. However, unlike the usual analysis procedure, which truncates the high-frequency response, the approximation in the matrix exponential solution is in the time domain. Truncating the series solution for the matrix exponential makes the solution inaccurate after a certain time; up to that time, however, the solution is extremely accurate, including all high-frequency effects. By taking finite time increments, the exponential matrix solution can compute the response very accurately. The use of the matrix exponential in structural dynamics is demonstrated by simulating the free vibration response of multi-degree-of-freedom models of cantilever beams.
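
    A minimal sketch of the approach: write the equation of motion in first-order form z' = Az and propagate the state with the one-step matrix exponential exp(A*dt), here approximated by scaling-and-squaring with a truncated Taylor series. The single-oscillator example, function names, and tolerances are illustrative, not from the report.

```python
import numpy as np

def expm_taylor(a, n_terms=20, squarings=10):
    """Matrix exponential by scaling-and-squaring with a truncated
    Taylor series (a small-scale stand-in for scipy.linalg.expm)."""
    a = np.asarray(a, dtype=float) / 2.0**squarings
    result, term = np.eye(len(a)), np.eye(len(a))
    for k in range(1, n_terms):
        term = term @ a / k
        result = result + term
    for _ in range(squarings):
        result = result @ result
    return result

# Undamped oscillator x'' + w^2 x = 0 in first-order form z = (x, v).
w, dt = 2.0, 0.01
A = np.array([[0.0, 1.0], [-w**2, 0.0]])
phi = expm_taylor(A * dt)          # one-step propagator exp(A*dt)
z = np.array([1.0, 0.0])           # released from x = 1 at rest
for _ in range(int(round(np.pi / (w * dt)))):   # advance half a period
    z = phi @ z
```

    Because the propagator is (to series accuracy) exact for the linear model, the step size dt is limited only by output resolution, not by the highest frequency present, which is the point the abstract makes.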

  7. Modeling of magnitude distributions by the generalized truncated exponential distribution

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias

    2015-01-01

    The probability distribution of the magnitude can be modeled by an exponential distribution according to the Gutenberg-Richter relation. Two alternatives are the truncated exponential distribution (TED) and the cutoff exponential distribution (CED). The TED is frequently used in seismic hazard analysis although it has a weak point: when two TEDs with equal parameters except the upper bound magnitude are mixed, then the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters except the upper bound magnitude. This weakness is a principal problem as seismic regions are constructed scientific objects and not natural units. We overcome it by the generalization of the abovementioned exponential distributions: the generalized truncated exponential distribution (GTED). Therein, identical exponential distributions are mixed by the probability distribution of the correct cutoff points. This distribution model is flexible in the vicinity of the upper bound magnitude and is equal to the exponential distribution for smaller magnitudes. Additionally, the exponential distributions TED and CED are special cases of the GTED. We discuss the possible ways of estimating its parameters and introduce the normalized spacing for this purpose. Furthermore, we present methods for geographic aggregation and differentiation of the GTED and demonstrate the potential and universality of our simple approach by applying it to empirical data. The considerable improvement by the GTED in contrast to the TED is indicated by a large difference between the corresponding values of the Akaike information criterion.
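
    For reference, the TED discussed above has a simple closed-form CDF and can be sampled by inversion; a sketch is below. The parameter values (beta ≈ 2.3, i.e. a Gutenberg-Richter b-value near 1, and bounds 4.0-8.0) are illustrative, and the GTED generalization is not implemented here.

```python
import math
import random

def ted_cdf(m, beta, m_min, m_max):
    """CDF of the (doubly) truncated exponential magnitude model:
       F(m) = (1 - exp(-beta*(m - m_min))) / (1 - exp(-beta*(m_max - m_min)))."""
    num = 1.0 - math.exp(-beta * (m - m_min))
    den = 1.0 - math.exp(-beta * (m_max - m_min))
    return num / den

def ted_sample(beta, m_min, m_max, rng):
    """Draw one magnitude from the TED by inverse-CDF sampling."""
    u = rng.random()
    den = 1.0 - math.exp(-beta * (m_max - m_min))
    return m_min - math.log(1.0 - u * den) / beta

rng = random.Random(42)
mags = [ted_sample(2.3, 4.0, 8.0, rng) for _ in range(10_000)]
```

    The GTED described in the abstract replaces the single sharp upper bound m_max with a probability distribution over cutoff points, mixing such truncated exponentials.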

  8. Investigation of non-Gaussian effects in the Brazilian option market

    NASA Astrophysics Data System (ADS)

    Sosa-Correa, William O.; Ramos, Antônio M. T.; Vasconcelos, Giovani L.

    2018-04-01

    An empirical study of the Brazilian option market is presented in light of three option pricing models, namely the Black-Scholes model, the exponential model, and a model based on a power law distribution, the so-called q-Gaussian distribution or Tsallis distribution. It is found that the q-Gaussian model performs better than the Black-Scholes model in about one third of the option chains analyzed. But among these cases, the exponential model performs better than the q-Gaussian model in 75% of the time. The superiority of the exponential model over the q-Gaussian model is particularly impressive for options close to the expiration date, where its success rate rises above ninety percent.

  9. Separating OR, SUM, and XOR Circuits.

    PubMed

    Find, Magnus; Göös, Mika; Järvisalo, Matti; Kaski, Petteri; Koivisto, Mikko; Korhonen, Janne H

    2016-08-01

    Given a boolean n × n matrix A we consider arithmetic circuits for computing the transformation x ↦ Ax over different semirings. Namely, we study three circuit models: monotone OR-circuits, monotone SUM-circuits (addition of non-negative integers), and non-monotone XOR-circuits (addition modulo 2). Our focus is on separating OR-circuits from the two other models in terms of circuit complexity: We show how to obtain matrices that admit OR-circuits of size O(n) but require SUM-circuits of size Ω(n^(3/2)/log^2 n). We also consider the task of rewriting a given OR-circuit as a XOR-circuit and prove that any subquadratic-time algorithm for this task violates the strong exponential time hypothesis.

  10. Power law incidence rate in epidemic models. Comment on: "Mathematical models to characterize early epidemic growth: A review" by Gerardo Chowell et al.

    NASA Astrophysics Data System (ADS)

    Allen, Linda J. S.

    2016-09-01

    Dr. Chowell and colleagues emphasize the importance of considering a variety of modeling approaches to characterize the growth of an epidemic during the early stages [1]. A fit of data from the 2009 H1N1 influenza pandemic and the 2014-2015 Ebola outbreak to models indicates sub-exponential growth, in contrast to the classic, homogeneous-mixing SIR model with exponential growth. With incidence rate βSI/N and S approximately equal to the total population size N, the number of new infections in an SIR epidemic model grows exponentially as in the differential equation,

  11. A Simulation To Model Exponential Growth.

    ERIC Educational Resources Information Center

    Appelbaum, Elizabeth Berman

    2000-01-01

    Describes a simulation using dice-tossing students in a population cluster to model the growth of cancer cells. This growth is recorded in a scatterplot and compared to an exponential function graph. (KHR)

  12. Two-stage unified stretched-exponential model for time-dependence of threshold voltage shift under positive-bias-stresses in amorphous indium-gallium-zinc oxide thin-film transistors

    NASA Astrophysics Data System (ADS)

    Jeong, Chan-Yong; Kim, Hee-Joong; Hong, Sae-Young; Song, Sang-Hun; Kwon, Hyuck-In

    2017-08-01

    In this study, we show that the two-stage unified stretched-exponential model can more exactly describe the time-dependence of the threshold voltage shift (ΔV_TH) under long-term positive-bias-stresses compared to the traditional stretched-exponential model in amorphous indium-gallium-zinc oxide (a-IGZO) thin-film transistors (TFTs). ΔV_TH is mainly dominated by electron trapping at short stress times, and the contribution of trap state generation becomes significant with an increase in the stress time. The two-stage unified stretched-exponential model can provide useful information not only for evaluating the long-term electrical stability and lifetime of the a-IGZO TFT but also for understanding the stress-induced degradation mechanism in a-IGZO TFTs.

  13. Singularities in loop quantum cosmology.

    PubMed

    Cailleteau, Thomas; Cardoso, Antonio; Vandersloot, Kevin; Wands, David

    2008-12-19

    We show that simple scalar field models can give rise to curvature singularities in the effective Friedmann dynamics of loop quantum cosmology (LQC). We find singular solutions for spatially flat Friedmann-Robertson-Walker cosmologies with a canonical scalar field and a negative exponential potential, or with a phantom scalar field and a positive potential. While LQC avoids big bang or big rip type singularities, we find sudden singularities where the Hubble rate is bounded, but the Ricci curvature scalar diverges. We conclude that the effective equations of LQC are not in themselves sufficient to avoid the occurrence of curvature singularities.

  14. A New Insight into the Earthquake Recurrence Studies from the Three-parameter Generalized Exponential Distributions

    NASA Astrophysics Data System (ADS)

    Pasari, S.; Kundu, D.; Dikshit, O.

    2012-12-01

Earthquake recurrence interval is one of the important ingredients in probabilistic seismic hazard assessment (PSHA) for any location. Exponential, gamma, Weibull and lognormal distributions are well-established probability models for recurrence interval estimation, but they have certain shortcomings, so it is worth searching for alternative distributions. In this paper, we introduce a three-parameter (location, scale and shape) exponentiated exponential distribution and investigate its scope as an alternative to the aforementioned distributions in earthquake recurrence studies. This distribution is a particular member of the exponentiated Weibull family. Despite its complicated form, it is widely accepted in medical and biological applications. Furthermore, it shares many physical properties with the gamma and Weibull families. Unlike the gamma distribution, the hazard function of the generalized exponential distribution can be computed easily even when the shape parameter is not an integer. To assess the plausibility of this model, a complete and homogeneous earthquake catalogue of 20 events (M ≥ 7.0) spanning the period 1846 to 1995 from the North-East Himalayan region (20-32 deg N and 87-100 deg E) has been used. The model parameters are estimated using the maximum likelihood estimator (MLE) and the method of moments estimator (MOME). No geological or geophysical evidence has been considered in this calculation. The estimated conditional probability becomes quite high after about a decade for an elapsed time of 17 years (i.e. 2012). Moreover, this study shows that the generalized exponential distribution fits the above data more closely than the conventional models, and hence it is tentatively concluded that the generalized exponential distribution can be effectively considered in earthquake recurrence studies.
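The generalized (exponentiated) exponential distribution named here has CDF F(x) = (1 − e^(−λ(x−μ)))^α for x > μ, and, as the abstract notes, its hazard is easy to evaluate even for non-integer shape. A small sketch with illustrative parameters (not fitted to the Himalayan catalogue):

```python
import math

def gen_exp_cdf(x, alpha, lam, mu=0.0):
    """CDF of the three-parameter generalized (exponentiated) exponential
    distribution: F(x) = (1 - exp(-lam*(x - mu)))**alpha for x > mu."""
    if x <= mu:
        return 0.0
    return (1.0 - math.exp(-lam * (x - mu))) ** alpha

def gen_exp_pdf(x, alpha, lam, mu=0.0):
    """Density f(x) = alpha*lam*(1 - e^{-lam(x-mu)})^{alpha-1} * e^{-lam(x-mu)}."""
    if x <= mu:
        return 0.0
    z = math.exp(-lam * (x - mu))
    return alpha * lam * (1.0 - z) ** (alpha - 1.0) * z

def gen_exp_hazard(x, alpha, lam, mu=0.0):
    """Hazard h(x) = f(x) / (1 - F(x)); closed-form evaluation needs no
    integer shape parameter, unlike the gamma distribution."""
    return gen_exp_pdf(x, alpha, lam, mu) / (1.0 - gen_exp_cdf(x, alpha, lam, mu))

h = gen_exp_hazard(10.0, 1.5, 0.1)  # hazard a decade in, for illustrative shape/scale
```

For α = 1 the hazard reduces to the constant λ of the plain exponential, which is a quick sanity check on the formulas.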

  15. Theory for Transitions Between Exponential and Stationary Phases: Universal Laws for Lag Time

    NASA Astrophysics Data System (ADS)

    Himeoka, Yusuke; Kaneko, Kunihiko

    2017-04-01

The quantitative characterization of bacterial growth has attracted substantial attention since Monod's pioneering study. Theoretical and experimental works have uncovered several laws describing the exponential growth phase, in which the number of cells grows exponentially. However, microorganism growth also exhibits lag, stationary, and death phases under starvation conditions, in which cell growth is highly suppressed, and for these phases quantitative laws and theories are markedly underdeveloped. In fact, the models commonly adopted for the exponential phase, which consist of autocatalytic chemical components including ribosomes, can only show exponential growth or decay of a population; thus, phases that halt growth are not realized. Here, we propose a simple, coarse-grained cell model that includes an extra class of macromolecular components in addition to the autocatalytic active components that facilitate cellular growth. These extra components form a complex with the active components to inhibit the catalytic process. Depending on the nutrient condition, the model exhibits the typical transitions among the lag, exponential, stationary, and death phases. Furthermore, the lag time needed for growth recovery after starvation scales with the square root of the starvation time and is inversely related to the maximal growth rate. This agrees with experimental observations, in which the length of cell starvation is memorized in the slow accumulation of molecules. Moreover, the distribution of lag times among cells is skewed, with a long tail; if the starvation time is longer, an exponential tail appears, which is also consistent with experimental data. Our theory further predicts a strong dependence of lag time on the speed of substrate depletion, which can be tested experimentally. The present model and theoretical analysis provide universal growth laws beyond the exponential phase, offering insight into how cells halt growth without entering the death phase.

  16. Explorations in dark energy

    NASA Astrophysics Data System (ADS)

    Bozek, Brandon

This dissertation describes three research projects on the topic of dark energy. The first project is an analysis of a scalar field model of dark energy with an exponential potential, using the Dark Energy Task Force (DETF) simulated data models. Using Markov Chain Monte Carlo sampling techniques, we examine the ability of each simulated data set to constrain the parameter space of the exponential potential for data sets based on a cosmological constant and on a specific exponential scalar field model. We compare our results with the constraining power calculated by the DETF using their "w0-wa" parameterization of the dark energy. We find that the respective increases in constraining power from one stage to the next produced by our analysis are consistent with the DETF results. To further investigate the potential impact of future experiments, we also generate simulated data for an exponential-model background cosmology which cannot be distinguished from a cosmological constant at DETF Stage 2, and show that for this cosmology good DETF Stage 4 data would exclude a cosmological constant by better than 3σ. The second project repeats this analysis for an Inverse Power Law (IPL), or "Ratra-Peebles" (RP), model. This model belongs to a popular subset of scalar field quintessence models that exhibit "tracking" behavior, which makes it particularly interesting theoretically. We find that the relative increase in constraining power on the parameter space of this model is consistent with what was found in the first project and in the DETF report. We also show, using a background cosmology based on an IPL scalar field model that is consistent with a cosmological constant given Stage 2 data, that good DETF Stage 4 data would exclude a cosmological constant by better than 3σ. The third project extends the Causal Entropic Principle to predict the preferred curvature within the "multiverse". The Causal Entropic Principle (Bousso, et al.) provides an alternative to anthropic attempts to predict our observed value of the cosmological constant by calculating the entropy created within a causal diamond. We find that values larger than ρ_k = 40ρ_m are disfavored by more than 99.99%, with a peak value at ρ_Λ = 7.9 × 10^-123 and ρ_k = 4.3ρ_m for open universes. For universes that allow only positive curvature, or both positive and negative curvature, we find a correlation between curvature and dark energy that leads to an extended region of preferred values. Our universe is found to be disfavored to an extent depending on the priors on curvature. We also provide a comparison to previous anthropic constraints on open universes and discuss future directions for this work.

  17. Item Response Theory to Quantify Longitudinal Placebo and Paliperidone Effects on PANSS Scores in Schizophrenia.

    PubMed

    Krekels, Ehj; Novakovic, A M; Vermeulen, A M; Friberg, L E; Karlsson, M O

    2017-08-01

As biomarkers are lacking, multi-item questionnaire-based tools like the Positive and Negative Syndrome Scale (PANSS) are used to quantify disease severity in schizophrenia. Analyzing composite PANSS scores as continuous data discards information and violates the numerical nature of the scale. Here, a longitudinal analysis based on Item Response Theory is presented, using PANSS data from phase III clinical trials. Latent disease severity variables were derived from item-level data on each of the positive, negative, and general PANSS subscales. On all subscales, the time course of the placebo response was best described with Weibull models, and paliperidone's effect was described with dose-independent functions using exponential models for the onset of the full effect. Placebo and drug effects were most pronounced on the positive subscale. The final model successfully describes the time course of treatment effects at the level of individual PANSS items, of all PANSS subscales, and of the total score. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.

  18. Design and implementation of the NaI(Tl)/CsI(Na) detectors output signal generator

    NASA Astrophysics Data System (ADS)

    Zhou, Xu; Liu, Cong-Zhan; Zhao, Jian-Ling; Zhang, Fei; Zhang, Yi-Fei; Li, Zheng-Wei; Zhang, Shuo; Li, Xu-Fang; Lu, Xue-Feng; Xu, Zhen-Ling; Lu, Fang-Jun

    2014-02-01

We designed and implemented a signal generator that simulates the output of the pre-amplifiers of the NaI(Tl)/CsI(Na) detectors onboard the Hard X-ray Modulation Telescope (HXMT). Developing the FPGA (Field Programmable Gate Array) logic in the VHDL language and adding a random constituent, we produced a double-exponential random pulse signal generator. The statistical distribution of the signal amplitude is programmable, and the occurrence time intervals between adjacent signals statistically follow a negative exponential distribution.
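The two statistical properties described can be sketched in software: exponentially distributed intervals between events, and a double-exponential pulse shape per event. A minimal host-side model with assumed rate and time constants (the HXMT electronics' actual constants are not given here):

```python
import math
import random

def generate_pulse_train(rate, n_events, seed=7):
    """Event arrival times whose successive intervals are exponentially
    distributed with mean 1/rate (a homogeneous Poisson process)."""
    rng = random.Random(seed)
    times, t = [], 0.0
    for _ in range(n_events):
        t += rng.expovariate(rate)
        times.append(t)
    return times

def pulse_shape(t, tau_rise, tau_fall):
    """Double-exponential pulse p(t) = exp(-t/tau_fall) - exp(-t/tau_rise)
    for t >= 0, mimicking a pre-amplifier output."""
    return math.exp(-t / tau_fall) - math.exp(-t / tau_rise) if t >= 0 else 0.0

# Illustrative values: 1 kHz mean event rate, sub-microsecond shaping constants.
times = generate_pulse_train(rate=1000.0, n_events=5000)
intervals = [b - a for a, b in zip(times, times[1:])]
mean_interval = sum(intervals) / len(intervals)
```

Summing `pulse_shape(t - t_i, ...)` over the generated arrival times `t_i` yields the full simulated pre-amplifier waveform.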

  19. Exponential bound in the quest for absolute zero

    NASA Astrophysics Data System (ADS)

    Stefanatos, Dionisis

    2017-10-01

    In most studies for the quantification of the third law of thermodynamics, the minimum temperature which can be achieved with a long but finite-time process scales as a negative power of the process duration. In this article, we use our recent complete solution for the optimal control problem of the quantum parametric oscillator to show that the minimum temperature which can be obtained in this system scales exponentially with the available time. The present work is expected to motivate further research in the active quest for absolute zero.

  20. Exponential bound in the quest for absolute zero.

    PubMed

    Stefanatos, Dionisis

    2017-10-01

    In most studies for the quantification of the third law of thermodynamics, the minimum temperature which can be achieved with a long but finite-time process scales as a negative power of the process duration. In this article, we use our recent complete solution for the optimal control problem of the quantum parametric oscillator to show that the minimum temperature which can be obtained in this system scales exponentially with the available time. The present work is expected to motivate further research in the active quest for absolute zero.

  1. Self-charging of identical grains in the absence of an external field.

    PubMed

    Yoshimatsu, R; Araújo, N A M; Wurm, G; Herrmann, H J; Shinbrot, T

    2017-01-06

    We investigate the electrostatic charging of an agitated bed of identical grains using simulations, mathematical modeling, and experiments. We simulate charging with a discrete-element model including electrical multipoles and find that infinitesimally small initial charges can grow exponentially rapidly. We propose a mathematical Turing model that defines conditions for exponential charging to occur and provides insights into the mechanisms involved. Finally, we confirm the predicted exponential growth in experiments using vibrated grains under microgravity, and we describe novel predicted spatiotemporal states that merit further study.

  2. Self-charging of identical grains in the absence of an external field

    NASA Astrophysics Data System (ADS)

    Yoshimatsu, R.; Araújo, N. A. M.; Wurm, G.; Herrmann, H. J.; Shinbrot, T.

    2017-01-01

    We investigate the electrostatic charging of an agitated bed of identical grains using simulations, mathematical modeling, and experiments. We simulate charging with a discrete-element model including electrical multipoles and find that infinitesimally small initial charges can grow exponentially rapidly. We propose a mathematical Turing model that defines conditions for exponential charging to occur and provides insights into the mechanisms involved. Finally, we confirm the predicted exponential growth in experiments using vibrated grains under microgravity, and we describe novel predicted spatiotemporal states that merit further study.

  3. Something from nothing: self-charging of identical grains

    NASA Astrophysics Data System (ADS)

Shinbrot, Troy; Yoshimatsu, Ryuta; Araújo, Nuno; Wurm, Gerhard; Herrmann, Hans

    We investigate the electrostatic charging of an agitated bed of identical grains using simulations, mathematical modeling, and experiments. We simulate charging with a discrete-element model including electrical multipoles and find that infinitesimally small initial charges can grow exponentially rapidly. We propose a mathematical Turing model that defines conditions for exponential charging to occur and provides insights into the mechanisms involved. Finally, we confirm the predicted exponential growth in experiments using vibrated grains under microgravity, and we describe novel predicted spatiotemporal states that merit further study. I acknowledge support from NSF/DMR, award 1404792.

  4. Bayesian inference based on dual generalized order statistics from the exponentiated Weibull model

    NASA Astrophysics Data System (ADS)

    Al Sobhi, Mashail M.

    2015-02-01

Bayesian estimates of the two parameters and the reliability function of the exponentiated Weibull model are obtained based on dual generalized order statistics (DGOS). Bayesian prediction bounds for future DGOS from the exponentiated Weibull model are also obtained. Symmetric and asymmetric loss functions are considered for the Bayesian computations. Markov chain Monte Carlo (MCMC) methods are used for computing the Bayes estimates and prediction bounds. The results are specialized to lower record values, and comparisons are made between Bayesian and maximum likelihood estimators via Monte Carlo simulation.

  5. State of charge modeling of lithium-ion batteries using dual exponential functions

    NASA Astrophysics Data System (ADS)

    Kuo, Ting-Jung; Lee, Kung-Yen; Huang, Chien-Kang; Chen, Jau-Horng; Chiu, Wei-Li; Huang, Chih-Fang; Wu, Shuen-De

    2016-05-01

A mathematical model is developed by fitting the discharging curve of LiFePO4 batteries and is used to investigate the relationship between the state of charge and the closed-circuit voltage. The proposed model consists of two exponential terms and a constant term, which closely match the characteristics of the dual equivalent RC circuits representing a LiFePO4 battery. One exponential term represents the stable discharging behavior, the other represents the unstable discharging behavior, and the constant term represents the cut-off voltage.
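A sketch of such a dual-exponential closed-circuit-voltage model, with hypothetical coefficients (the paper's fitted values are not reproduced), plus a bisection inversion to read the state of charge back from a measured voltage:

```python
import math

# Hypothetical coefficients for illustration only: one exponential for the
# stable discharge region, one for the sharp low-SOC drop, and a constant
# near the cut-off voltage.
A, B = 0.05, 1.2     # stable term: A * exp(B * soc)
C, D = -0.80, -15.0  # unstable term: C * exp(D * soc)
E = 3.20             # constant (cut-off-related) term

def ccv(soc):
    """Closed-circuit voltage as a dual-exponential function of SOC in [0, 1]."""
    return A * math.exp(B * soc) + C * math.exp(D * soc) + E

def soc_from_voltage(v, lo=0.0, hi=1.0, tol=1e-9):
    """Invert the (monotonically increasing) model by bisection."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if ccv(mid) < v:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

soc_est = soc_from_voltage(ccv(0.6))  # round-trip: voltage at SOC 0.6 back to SOC
```

With these coefficients the derivative of `ccv` is strictly positive on [0, 1], so the bisection inversion is well defined; any fitted coefficient set should be checked for the same monotonicity before inverting.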

  6. Self-charging of identical grains in the absence of an external field

    PubMed Central

    Yoshimatsu, R.; Araújo, N. A. M.; Wurm, G.; Herrmann, H. J.; Shinbrot, T.

    2017-01-01

    We investigate the electrostatic charging of an agitated bed of identical grains using simulations, mathematical modeling, and experiments. We simulate charging with a discrete-element model including electrical multipoles and find that infinitesimally small initial charges can grow exponentially rapidly. We propose a mathematical Turing model that defines conditions for exponential charging to occur and provides insights into the mechanisms involved. Finally, we confirm the predicted exponential growth in experiments using vibrated grains under microgravity, and we describe novel predicted spatiotemporal states that merit further study. PMID:28059124

  7. Piecewise exponential models to assess the influence of job-specific experience on the hazard of acute injury for hourly factory workers

    PubMed Central

    2013-01-01

Background An inverse relationship between experience and risk of injury has been observed in many occupations. Due to statistical challenges, however, it has been difficult to characterize the role of experience on the hazard of injury. In particular, because the time observed up to injury is equivalent to the amount of experience accumulated, the baseline hazard of injury becomes the main parameter of interest, excluding Cox proportional hazards models as applicable methods for consideration. Methods Using a data set of 81,301 hourly production workers of a global aluminum company at 207 US facilities, we compared competing parametric models for the baseline hazard to assess whether experience affected the hazard of injury at hire and after later job changes. Specific models considered included the exponential, the Weibull, and two two-piece exponential models (one hypothesis-driven and one data-driven), used to formally test the null hypothesis that experience does not impact the hazard of injury. Results We highlighted the advantages of our comparative approach and the interpretability of our selected model: a two-piece exponential model that allowed the baseline hazard of injury to change with experience. Our findings suggested a 30% increase in the hazard in the first year after job initiation and/or change. Conclusions Piecewise exponential models may be particularly useful in modeling risk of injury as a function of experience and have the additional benefit of interpretability over other similarly flexible models. PMID:23841648
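A two-piece exponential model of this kind is simple to write down: a constant hazard that is elevated for the first year after job initiation or change, then drops to the baseline. A sketch using the reported ~30% first-year excess and an invented baseline rate:

```python
import math

def hazard(t, lam_late, excess=1.3, cut=1.0):
    """Two-piece constant hazard: `excess` * lam_late during the first `cut`
    years of job-specific experience, lam_late afterwards (the paper reports
    roughly a 30% excess in the first year)."""
    return excess * lam_late if t < cut else lam_late

def survival(t, lam_late, excess=1.3, cut=1.0):
    """Survival function from integrating the piecewise-constant hazard:
    S(t) = exp(-cumulative hazard up to t)."""
    if t < cut:
        return math.exp(-excess * lam_late * t)
    return math.exp(-excess * lam_late * cut - lam_late * (t - cut))

lam = 0.05  # hypothetical baseline rate of acute injury per person-year
```

The change-point `cut` is what distinguishes the hypothesis-driven variant (fixed at one year) from a data-driven variant (estimated from the likelihood).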

  8. A Simulation of the ECSS Help Desk with the Erlang a Model

    DTIC Science & Technology

    2011-03-01

a popular distribution is the exponential distribution, as shown in Figure 3 (Figure 3: Exponential Distribution; Bourke, 2001). [Reference: Bourke, P. (2001, January). Miscellaneous Functions. Retrieved January 22, 2011, from http://local.wasp.uwa.edu.au]

  9. Method for nonlinear exponential regression analysis

    NASA Technical Reports Server (NTRS)

    Junkin, B. G.

    1972-01-01

    Two computer programs developed according to two general types of exponential models for conducting nonlinear exponential regression analysis are described. Least squares procedure is used in which the nonlinear problem is linearized by expanding in a Taylor series. Program is written in FORTRAN 5 for the Univac 1108 computer.

  10. Detecting biodiversity hotspots by species-area relationships: a case study of Mediterranean beetles.

    PubMed

    Fattorini, Simone

    2006-08-01

Any method of identifying hotspots should take into account the effect of area on species richness. I examined the importance of the species-area relationship in determining tenebrionid (Coleoptera: Tenebrionidae) hotspots on the Aegean Islands (Greece). Thirty-two islands and 170 taxa (species and subspecies) were included in this study. I tested several species-area relationship models with linear and nonlinear regressions, including the power, exponential, negative exponential, logistic, Gompertz, Weibull, Lomolino, and He-Legendre functions. Islands with positive residuals were identified as hotspots. I also analyzed the values of the C parameter of the power function and the simple species-area ratios. Species richness was significantly correlated with island area for all models. The power function model was the most convenient one. Most functions, however, identified certain islands as hotspots. The importance of endemics in insular biotas should be evaluated carefully because they are of high conservation concern. The simple use of the species-area relationship can be problematic when areas with no endemics are included. Therefore, the importance of endemics should be evaluated according to different methods, such as percentages, to take into account different levels of endemism and different kinds of "endemics" (e.g., endemic to single islands vs. endemic to the archipelago). Because the species-area relationship is a key pattern in ecology, my findings can be applied at broader scales.
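The hotspot criterion described, positive residuals from a fitted species-area curve, can be sketched for the power function model S = cA^z via ordinary least squares on the log-log scale. The data below are synthetic (not the Aegean tenebrionid data); island 2 is artificially enriched so that it shows up as a hotspot:

```python
import math

def fit_power_sar(areas, richness):
    """Fit S = c * A^z by least squares on log S = log c + z log A and
    return (c, z, residuals); positive residuals mark candidate hotspots."""
    xs = [math.log(a) for a in areas]
    ys = [math.log(s) for s in richness]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    z = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    log_c = my - z * mx
    residuals = [y - (log_c + z * x) for x, y in zip(xs, ys)]
    return math.exp(log_c), z, residuals

# Synthetic islands following S = 10 * A^0.3, except island 2 (doubled richness).
areas = [1.0, 10.0, 100.0, 1000.0]
richness = [10.0, 10.0 * 10 ** 0.3, 2.0 * 10.0 * 100 ** 0.3, 10.0 * 1000 ** 0.3]
c, z, residuals = fit_power_sar(areas, richness)
hotspots = [i for i, r in enumerate(residuals) if r > 0]
```

Because the enriched island pulls the fitted line toward itself, the other islands end up with slightly negative residuals; only the enriched one has a positive residual, which is exactly the hotspot logic of the record.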

  11. Separating OR, SUM, and XOR Circuits☆

    PubMed Central

    Find, Magnus; Göös, Mika; Järvisalo, Matti; Kaski, Petteri; Koivisto, Mikko; Korhonen, Janne H.

    2017-01-01

Given a boolean n × n matrix A, we consider arithmetic circuits for computing the transformation x ↦ Ax over different semirings. Namely, we study three circuit models: monotone OR-circuits, monotone SUM-circuits (addition of non-negative integers), and non-monotone XOR-circuits (addition modulo 2). Our focus is on separating OR-circuits from the two other models in terms of circuit complexity: We show how to obtain matrices that admit OR-circuits of size O(n), but require SUM-circuits of size Ω(n^(3/2)/log^2 n). We consider the task of rewriting a given OR-circuit as an XOR-circuit and prove that any subquadratic-time algorithm for this task violates the strong exponential time hypothesis. PMID:28529379

  12. A cross-sectional controlled developmental study of neuropsychological functions in patients with glutaric aciduria type I.

    PubMed

    Boy, Nikolas; Heringer, Jana; Haege, Gisela; Glahn, Esther M; Hoffmann, Georg F; Garbade, Sven F; Kölker, Stefan; Burgard, Peter

    2015-12-22

Glutaric aciduria type I (GA-I) is an inherited metabolic disease due to deficiency of glutaryl-CoA dehydrogenase (GCDH). Cognitive functions are generally thought to be spared, but have not yet been studied in detail. Thirty patients detected by newborn screening (n = 13), high-risk screening (n = 3) or targeted metabolic testing (n = 14) were studied for simple reaction time (SRT), continuous performance (CP), visual working memory (VWM), visual-motor coordination (Tracking) and visual search (VS). Dystonia (n = 13 patients) was categorized using the Barry-Albright Dystonia Scale (BADS). Patients were compared with 196 healthy controls. Developmental functions of cognitive performance were analysed using a negative exponential function model. BADS scores correlated with speed tests but not with tests measuring stability or higher cognitive functions without time constraints. Developmental functions of GA-I patients significantly differed from controls for SRT and VS but not for VWM, and showed obvious trends for CP and Tracking. Dystonic patients were slower in SRT and CP but reached their performance asymptote similarly to asymptomatic patients and controls in all tests. Asymptomatic patients did not differ from controls, except for significantly better results in Tracking and a trend towards slower reactions in visual search. Data across all age groups of patients and controls fitted a model of negative exponential development well. Dystonic patients predominantly showed motor speed impairment, whereas performance improved with higher cognitive load. Patients without motor symptoms did not differ from controls. Developmental functions of cognitive performance were similar in patients and controls. Performance in tests with higher cognitive demand might be preserved in GA-I, even in patients with striatal degeneration.

  13. Competing risk models in reliability systems, an exponential distribution model with Bayesian analysis approach

    NASA Astrophysics Data System (ADS)

    Iskandar, I.

    2018-03-01

The exponential distribution is the most widely used distribution in reliability analysis. It is well suited to representing the lengths of life in many cases and has a simple statistical form; its characteristic is a constant hazard rate. The exponential distribution is a special case of the Weibull distribution. In this paper, we introduce the basic notions that constitute an exponential competing-risks model in reliability analysis using a Bayesian approach and present the corresponding analytic methods. The cases are limited to models with independent causes of failure, and a non-informative prior distribution is used in our analysis. We describe the likelihood function, followed by the posterior function and the point, interval, hazard function, and reliability estimates. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
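For the independent-exponential special case described here, the crude probability of failure from a given cause has a closed form: with constant cause-specific rates λᵢ, cause i fails the system first with probability λᵢ/Σλⱼ, and the system lifetime is itself exponential with rate Σλⱼ. A minimal sketch with invented rates:

```python
def crude_probabilities(rates):
    """Independent exponential competing risks: returns the crude probability
    that each cause is the first to fail the system (lam_i / sum of rates)
    together with the system's total failure rate."""
    total = sum(rates)
    return [r / total for r in rates], total

# Illustrative failure rates for three independent causes (per 1000 hours).
probs, system_rate = crude_probabilities([0.5, 1.0, 1.5])
```

The Bayesian machinery in the record then places a (non-informative) prior on the λᵢ and propagates it through exactly these quantities.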

  14. Central Limit Theorem for Exponentially Quasi-local Statistics of Spin Models on Cayley Graphs

    NASA Astrophysics Data System (ADS)

    Reddy, Tulasi Ram; Vadlamani, Sreekar; Yogeshwaran, D.

    2018-04-01

Central limit theorems for linear statistics of lattice random fields (including spin models) are usually proven under suitable mixing conditions or quasi-associativity. Many interesting examples of spin models do not satisfy mixing conditions, and on the other hand, it does not seem easy to show central limit theorem for local statistics via quasi-associativity. In this work, we prove general central limit theorems for local statistics and exponentially quasi-local statistics of spin models on discrete Cayley graphs with polynomial growth. Further, we supplement these results by proving similar central limit theorems for random fields on discrete Cayley graphs taking values in a countable space, but under the stronger assumptions of α-mixing (for local statistics) and exponential α-mixing (for exponentially quasi-local statistics). All our central limit theorems assume a suitable variance lower bound like many others in the literature. We illustrate our general central limit theorem with specific examples of lattice spin models and statistics arising in computational topology, statistical physics and random networks. Examples of clustering spin models include quasi-associated spin models with fast decaying covariances like the off-critical Ising model, level sets of Gaussian random fields with fast decaying covariances like the massive Gaussian free field and determinantal point processes with fast decaying kernels. Examples of local statistics include intrinsic volumes, face counts, component counts of random cubical complexes while exponentially quasi-local statistics include nearest neighbour distances in spin models and Betti numbers of sub-critical random cubical complexes.

  15. Effect of water-based recovery on blood lactate removal after high-intensity exercise.

    PubMed

    Lucertini, Francesco; Gervasi, Marco; D'Amen, Giancarlo; Sisti, Davide; Rocchi, Marco Bruno Luigi; Stocchi, Vilberto; Benelli, Piero

    2017-01-01

This study assessed the effectiveness of water immersion to shoulder depth in enhancing blood lactate removal during active and passive recovery after short-duration high-intensity exercise. Seventeen cyclists underwent active water- and land-based recoveries and passive water- and land-based recoveries. The recovery conditions lasted 31 minutes each and started after the identification of each cyclist's blood lactate accumulation peak, induced by a 30-second all-out sprint on a cycle ergometer. Active recoveries were performed on a cycle ergometer at 70% of the oxygen consumption corresponding to the lactate threshold (the control for the intensity was oxygen consumption), while passive recoveries were performed with subjects at rest and seated on the cycle ergometer. Blood lactate concentration was measured 8 times during each recovery condition, and lactate clearance was modeled with a negative exponential function using non-linear regression. Actual active recovery intensity was compared with the target intensity (one-sample t-test) and passive recovery intensities were compared between environments (paired-sample t-tests). Non-linear regression parameters (coefficients of the exponential decay of lactate; predicted resting lactate; predicted delta decreases in lactate) were compared between environments (linear mixed-model analyses for repeated measures), separately for the active and passive recovery modes. Active recovery intensities did not differ significantly from the target oxygen consumption, whereas passive recovery resulted in a slightly lower oxygen consumption when performed while immersed in water rather than on land. The exponential decay of blood lactate was not significantly different in water- or land-based recoveries in either active or passive recovery conditions. In conclusion, water immersion at 29°C would not appear to be an effective practice for improving post-exercise lactate removal in either the active or passive recovery modes.
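The lactate-clearance model used here, L(t) = L_rest + ΔL·e^(−kt), can be fitted by simple linear regression on log(L − L_rest) once a resting value is assumed. A sketch on noiseless synthetic data (all parameters invented, not the study's estimates):

```python
import math

def fit_negative_exponential(times, lactate, resting):
    """Given an assumed resting lactate (the asymptote of the model
    L(t) = resting + dL * exp(-k * t)), estimate the decay constant k and
    initial excess dL by linear regression on log(L - resting)."""
    ys = [math.log(l - resting) for l in lactate]
    n = len(times)
    mx, my = sum(times) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times, ys))
             / sum((x - mx) ** 2 for x in times))
    return -slope, math.exp(my - slope * mx)  # (k, dL)

# Synthetic recovery curve: resting 1.2 mmol/L, 10 mmol/L excess, k = 0.12 /min,
# sampled over a 31-minute recovery.
times = [0, 5, 10, 15, 20, 25, 31]
lactate = [1.2 + 10.0 * math.exp(-0.12 * t) for t in times]
k, dL = fit_negative_exponential(times, lactate, resting=1.2)
```

In practice the study fitted all three parameters by non-linear regression; this log-linear shortcut assumes the asymptote is known and the data are noise-free, which is why it recovers the synthetic k and ΔL exactly.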

  16. New dimensions for wound strings: The modular transformation of geometry to topology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGreevy, John; Silverstein, Eva; Starr, David

    2007-02-15

We show, using a theorem of Milnor and Margulis, that string theory on compact negatively curved spaces grows new effective dimensions as the space shrinks, generalizing and contextualizing the results in E. Silverstein, Phys. Rev. D 73, 086004 (2006). Milnor's theorem relates negative sectional curvature on a compact Riemannian manifold to exponential growth of its fundamental group, which translates in string theory to a higher effective central charge arising from winding strings. This exponential density of winding modes is related by modular invariance to the infrared small perturbation spectrum. Using self-consistent approximations valid at large radius, we analyze this correspondence explicitly in a broad set of time-dependent solutions, finding precise agreement between the effective central charge and the corresponding infrared small perturbation spectrum. This indicates a basic relation between geometry, topology, and dimensionality in string theory.

  17. Generation of net sediment transport by velocity skewness in oscillatory sheet flow

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Li, Yong; Chen, Genfa; Wang, Fujun; Tang, Xuelin

    2018-01-01

This study utilizes a qualitative approach and a two-phase numerical model to investigate net sediment transport caused by velocity skewness beneath oscillatory sheet flow and current. The qualitative approach is derived from a pseudo-laminar approximation of the boundary layer velocity and an exponential approximation of the concentration. The two-phase model reproduces well the instantaneous erosion depth, sediment flux, boundary layer thickness, and sediment transport rate. In particular, it illustrates the difference between the positive and negative flow stages caused by velocity skewness, which is considerably important in determining the net boundary layer flow and sediment transport direction. The two-phase model also explains the effects of sediment diameter and phase lag on sediment transport by comparison with instantaneous-type formulas, to better illustrate the velocity skewness effect. In previous studies of sheet flow transport in pure velocity-skewed flows, net sediment transport was attributed only to the phase-lag effect. In the present study, with the qualitative approach and the two-phase model, the phase-lag effect is shown to be important but not sufficient for net sediment transport beneath pure velocity-skewed flow and current; the asymmetric wave boundary layer development between the positive and negative flow stages also contributes to the sediment transport.

  18. Wave propagation model of heat conduction and group speed

    NASA Astrophysics Data System (ADS)

    Zhang, Long; Zhang, Xiaomin; Peng, Song

    2018-03-01

In view of the finite-relaxation model of non-Fourier heat conduction, the Cattaneo-Vernotte (CV) model and Fourier's law are compared in this work with respect to their wave propagation modes. Translation of the independent variable is applied to solve the partial differential equation. Results show that the general form of the time-spatial distribution of temperature for the three media comprises two solutions, corresponding to positive and negative logarithmic heating rates. The former shows that a group of heat waves whose spatial distribution follows an exponential law propagates at a group speed; the speed of propagation is related to the logarithmic heating rate, and the total speed of all the possible heat waves combines to form the group speed of the wave propagation. The latter indicates that the spatial distribution of temperature, which follows an exponential law, decays with time. These features show that propagation accelerates when heated and decelerates when cooled. For media that follow Fourier's law, corresponding to a positive heating rate, the propagation mode is also considered the propagation of a group of heat waves, because the group speed has no upper bound. For the finite-relaxation model of non-Fourier media, the interval of group speeds is bounded, and the maximum speed is obtained when the logarithmic heating rate is exactly the reciprocal of the relaxation time. For the CV model of a non-Fourier medium, the interval of group speeds is also bounded, and the maximum value is obtained when the logarithmic heating rate is infinite.

  19. Population density approach for discrete mRNA distributions in generalized switching models for stochastic gene expression.

    PubMed

    Stinchcombe, Adam R; Peskin, Charles S; Tranchina, Daniel

    2012-06-01

    We present a generalization of a population density approach for modeling and analysis of stochastic gene expression. In the model, the gene of interest fluctuates stochastically between an inactive state, in which transcription cannot occur, and an active state, in which discrete transcription events occur; the individual mRNA molecules are degraded stochastically in an independent manner. This sort of model, in its simplest form with exponential dwell times, has been used to explain experimental estimates of the discrete distribution of random mRNA copy number. In our generalization, the random dwell times in the inactive and active states, T_{0} and T_{1}, respectively, are independent random variables drawn from any specified distributions. Consequently, the probability per unit time of switching out of a state depends on the time since entering that state. Our method exploits a connection between the fully discrete random process and a related continuous process. We present numerical methods for computing steady-state mRNA distributions and an analytical derivation of the mRNA autocovariance function. We find that empirical estimates of the steady-state mRNA probability mass function from Monte Carlo simulations of laboratory data do not allow one to distinguish between underlying models with exponential and nonexponential dwell times in some relevant parameter regimes. However, in these parameter regimes and where the autocovariance function has negative lobes, the autocovariance function disambiguates the two types of models. Our results strongly suggest that temporal data beyond the autocovariance function are required in general to characterize gene switching.

  20. Small regulatory RNA-induced growth rate heterogeneity of Bacillus subtilis.

    PubMed

    Mars, Ruben A T; Nicolas, Pierre; Ciccolini, Mariano; Reilman, Ewoud; Reder, Alexander; Schaffer, Marc; Mäder, Ulrike; Völker, Uwe; van Dijl, Jan Maarten; Denham, Emma L

    2015-03-01

    Isogenic bacterial populations can consist of cells displaying heterogeneous physiological traits. Small regulatory RNAs (sRNAs) could affect this heterogeneity since they act by fine-tuning mRNA or protein levels to coordinate the appropriate cellular behavior. Here we show that the sRNA RnaC/S1022 from the Gram-positive bacterium Bacillus subtilis can suppress exponential growth by modulation of the transcriptional regulator AbrB. Specifically, the post-transcriptional abrB-RnaC/S1022 interaction allows B. subtilis to increase the cell-to-cell variation in AbrB protein levels, despite strong negative autoregulation of the abrB promoter. This behavior is consistent with existing mathematical models of sRNA action, thus suggesting that induction of protein expression noise could be a new general aspect of sRNA regulation. Importantly, we show that the sRNA-induced diversity in AbrB levels generates heterogeneity in growth rates during the exponential growth phase. Based on these findings, we hypothesize that the resulting subpopulations of fast- and slow-growing B. subtilis cells reflect a bet-hedging strategy for enhanced survival of unfavorable conditions.

  1. Mathematical Modeling of Extinction of Inhomogeneous Populations

    PubMed Central

    Karev, G.P.; Kareva, I.

    2016-01-01

    Mathematical models of population extinction have a variety of applications in such areas as ecology, paleontology and conservation biology. Here we propose and investigate two types of sub-exponential models of population extinction. Unlike in the more traditional exponential models, the population lifetime in sub-exponential models is finite. In the first model, the population is assumed to be composed of clones that are independent of each other. In the second model, we assume that the size of the population as a whole decreases according to the sub-exponential equation. We then investigate the “unobserved heterogeneity”, i.e. the underlying inhomogeneous population model, and calculate the distribution of frequencies of clones for both models. We show that the dynamics of frequencies in the first model is governed by the principle of minimum of Tsallis information loss. In the second model, the notion of “internal population time” is proposed; with respect to the internal time, the dynamics of frequencies is governed by the principle of minimum of Shannon information loss. The results of this analysis show that the principle of minimum of information loss is the underlying law for the evolution of a broad class of models of population extinction. Finally, we propose a possible application of this modeling framework to mechanisms underlying time perception. PMID:27090117
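
    A common textbook form of the sub-exponential decay referenced above (the authors' exact parameterization may differ) makes the finite lifetime explicit:

```latex
\frac{dN}{dt} = -k N^{\alpha}, \quad 0 < \alpha < 1
\;\;\Longrightarrow\;\;
N(t) = \left( N_0^{\,1-\alpha} - (1-\alpha)\,k\,t \right)^{\frac{1}{1-\alpha}},
```

    so the population reaches N = 0 at the finite time T = N_0^{1-\alpha} / ((1-\alpha)k), whereas the exponential model (\alpha = 1) only decays asymptotically and never reaches zero.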

  2. Using Exponential Smoothing to Specify Intervention Models for Interrupted Time Series.

    ERIC Educational Resources Information Center

    Mandell, Marvin B.; Bretschneider, Stuart I.

    1984-01-01

    The authors demonstrate how exponential smoothing can play a role in the identification of the intervention component of an interrupted time-series design model that is analogous to the role that the sample autocorrelation and partial autocorrelation functions serve in the identification of the noise portion of such a model. (Author/BW)
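
    To make the idea concrete, here is a minimal sketch (invented data and an illustrative smoothing constant, not the authors' procedure) of how one-step-ahead exponential smoothing forecasts expose an abrupt intervention through their residuals:

```python
# Minimal sketch: one-step-ahead simple exponential smoothing applied to
# a series with an abrupt level shift (an "intervention"). The data and
# the smoothing constant alpha are invented for illustration.

def exp_smooth(series, alpha):
    """Return one-step-ahead simple exponential smoothing forecasts."""
    forecast = [series[0]]  # initialize with the first observation
    for y in series[:-1]:
        forecast.append(alpha * y + (1 - alpha) * forecast[-1])
    return forecast

# A flat series with a level shift at t = 5.
series = [10.0, 10.0, 10.0, 10.0, 10.0, 15.0, 15.0, 15.0, 15.0, 15.0]
fc = exp_smooth(series, alpha=0.5)
residuals = [y - f for y, f in zip(series, fc)]

# The largest one-step-ahead forecast error flags the intervention point.
print(residuals.index(max(residuals)))  # → 5
```

    The spike in forecast error at the shift is the signature that lets smoothing play the identification role described in the abstract.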

  3. A new mechanistic growth model for simultaneous determination of lag phase duration and exponential growth rate and a new Belehdradek-type model for evaluating the effect of temperature on growth rate

    USDA-ARS?s Scientific Manuscript database

    A new mechanistic growth model was developed to describe microbial growth under isothermal conditions. The new mathematical model was derived from the basic observation of bacterial growth that may include lag, exponential, and stationary phases. With this model, the lag phase duration and exponen...

  4. Anomalous NMR Relaxation in Cartilage Matrix Components and Native Cartilage: Fractional-Order Models

    PubMed Central

    Magin, Richard L.; Li, Weiguo; Velasco, M. Pilar; Trujillo, Juan; Reiter, David A.; Morgenstern, Ashley; Spencer, Richard G.

    2011-01-01

    We present a fractional-order extension of the Bloch equations to describe anomalous NMR relaxation phenomena (T1 and T2). The model has solutions in the form of Mittag-Leffler and stretched exponential functions that generalize conventional exponential relaxation. Such functions have been shown by others to be useful for describing dielectric and viscoelastic relaxation in complex, heterogeneous materials. Here, we apply these fractional-order T1 and T2 relaxation models to experiments performed at 9.4 and 11.7 Tesla on type I collagen gels, chondroitin sulfate mixtures, and to bovine nasal cartilage (BNC), a largely isotropic and homogeneous form of cartilage. The results show that the fractional-order analysis captures important features of NMR relaxation that are typically described by multi-exponential decay models. We find that the T2 relaxation of BNC can be described in a unique way by a single fractional-order parameter (α), in contrast to the lack of uniqueness of multi-exponential fits in the realistic setting of a finite signal-to-noise ratio. No anomalous behavior of T1 was observed in BNC. In the single-component gels, for T2 measurements, increasing the concentration of the largest components of cartilage matrix, collagen and chondroitin sulfate, results in a decrease in α, reflecting a more restricted aqueous environment. The quality of the curve fits obtained using Mittag-Leffler and stretched exponential functions is in some cases superior to that obtained using mono- and bi-exponential models. In both gels and BNC, α appears to account for microstructural complexity in the setting of an altered distribution of relaxation times. This work suggests the utility of fractional-order models to describe T2 NMR relaxation processes in biological tissues. PMID:21498095

  5. Anomalous NMR relaxation in cartilage matrix components and native cartilage: Fractional-order models

    NASA Astrophysics Data System (ADS)

    Magin, Richard L.; Li, Weiguo; Pilar Velasco, M.; Trujillo, Juan; Reiter, David A.; Morgenstern, Ashley; Spencer, Richard G.

    2011-06-01

    We present a fractional-order extension of the Bloch equations to describe anomalous NMR relaxation phenomena (T1 and T2). The model has solutions in the form of Mittag-Leffler and stretched exponential functions that generalize conventional exponential relaxation. Such functions have been shown by others to be useful for describing dielectric and viscoelastic relaxation in complex, heterogeneous materials. Here, we apply these fractional-order T1 and T2 relaxation models to experiments performed at 9.4 and 11.7 Tesla on type I collagen gels, chondroitin sulfate mixtures, and to bovine nasal cartilage (BNC), a largely isotropic and homogeneous form of cartilage. The results show that the fractional-order analysis captures important features of NMR relaxation that are typically described by multi-exponential decay models. We find that the T2 relaxation of BNC can be described in a unique way by a single fractional-order parameter (α), in contrast to the lack of uniqueness of multi-exponential fits in the realistic setting of a finite signal-to-noise ratio. No anomalous behavior of T1 was observed in BNC. In the single-component gels, for T2 measurements, increasing the concentration of the largest components of cartilage matrix, collagen and chondroitin sulfate, results in a decrease in α, reflecting a more restricted aqueous environment. The quality of the curve fits obtained using Mittag-Leffler and stretched exponential functions is in some cases superior to that obtained using mono- and bi-exponential models. In both gels and BNC, α appears to account for micro-structural complexity in the setting of an altered distribution of relaxation times. This work suggests the utility of fractional-order models to describe T2 NMR relaxation processes in biological tissues.

  6. Teaching the Verhulst Model: A Teaching Experiment in Covariational Reasoning and Exponential Growth

    ERIC Educational Resources Information Center

    Castillo-Garsow, Carlos

    2010-01-01

    Both Thompson and the duo of Confrey and Smith describe how students might be taught to build "ways of thinking" about exponential behavior by coordinating the covariation of two changing quantities, however, these authors build exponential behavior from different meanings of covariation. Confrey and Smith advocate beginning with discrete additive…

  7. Review of "Going Exponential: Growing the Charter School Sector's Best"

    ERIC Educational Resources Information Center

    Garcia, David

    2011-01-01

    This Progressive Policy Institute report argues that charter schools should be expanded rapidly and exponentially. Citing exponential growth organizations, such as Starbucks and Apple, as well as the rapid growth of molds, viruses and cancers, the report advocates for similar growth models for charter schools. However, there is no explanation of…

  8. Iterative algorithms for a non-linear inverse problem in atmospheric lidar

    NASA Astrophysics Data System (ADS)

    Denevi, Giulia; Garbarino, Sara; Sorrentino, Alberto

    2017-08-01

    We consider the inverse problem of retrieving aerosol extinction coefficients from Raman lidar measurements. In this problem the unknown and the data are related through the exponential of a linear operator, the unknown is non-negative, and the data follow the Poisson distribution. Standard methods work on the log-transformed data and solve the resulting linear inverse problem, but fail to take the noise statistics into account. In this study we show that proper modelling of the noise distribution can substantially improve the quality of the reconstructed extinction profiles. To achieve this goal, we consider the non-linear inverse problem with a non-negativity constraint, and propose two iterative algorithms derived using the Karush-Kuhn-Tucker conditions. We validate the algorithms with synthetic and experimental data. As expected, the proposed algorithms outperform standard methods in terms of sensitivity to noise and reliability of the estimated profile.

  9. Correlation between the change in the kinetics of the ribosomal RNA rrnB P2 promoter and the transition from lag to exponential phase with Pseudomonas fluorescens.

    PubMed

    McKellar, Robin C

    2008-01-15

    Developing accurate mathematical models to describe the pre-exponential lag phase in food-borne pathogens presents a considerable challenge to food microbiologists. While the growth rate is influenced by current environmental conditions, the lag phase is affected in addition by the history of the inoculum. A deeper understanding of physiological changes taking place during the lag phase would improve accuracy of models, and in earlier studies a strain of Pseudomonas fluorescens containing the Tn7-luxCDABE gene cassette regulated by the rRNA promoter rrnB P2 was used to measure the influence of starvation, growth temperature and sub-lethal heating on promoter expression and subsequent growth. The present study expands the models developed earlier to include a model which describes the change from exponential to linear increase in promoter expression with time when the exponential phase of growth commences. A two-phase linear model with Poisson weighting was used to estimate the lag (LPDLin) and the rate (RLin) for this linear increase in bioluminescence. The Spearman rank correlation coefficient (r=0.830) between the LPDLin and the growth lag phase (LPDOD) was extremely significant (P

  10. An experimental comparison of several current viscoplastic constitutive models at elevated temperature

    NASA Technical Reports Server (NTRS)

    James, G. H.; Imbrie, P. K.; Hill, P. S.; Allen, D. H.; Haisler, W. E.

    1988-01-01

    Four current viscoplastic models are compared experimentally for Inconel 718 at 593 C. This material system responds with apparent negative strain rate sensitivity, undergoes cyclic work softening, and is susceptible to low cycle fatigue. A series of tests were performed to create a data base from which to evaluate material constants. A method to evaluate the constants is developed which draws on common assumptions for this type of material, recent advances by other researchers, and iterative techniques. A complex history test, not used in calculating the constants, is then used to compare the predictive capabilities of the models. The combination of exponentially based inelastic strain rate equations and dynamic recovery is shown to model this material system with the greatest success. The method of constant calculation developed was successfully applied to the complex material response encountered. Backstress measuring tests were found to be invaluable and to warrant further development.

  11. Non-perturbative reheating and Nnaturalness

    NASA Astrophysics Data System (ADS)

    Hardy, Edward

    2017-11-01

    We study models in which reheating happens only through non-perturbative processes. The energy transferred can be exponentially suppressed unless the inflaton is coupled to a particle with a parametrically small mass. Additionally, in some models a light scalar with a negative mass squared parameter leads to much more efficient reheating than one with a positive mass squared of the same magnitude. If a theory contains many sectors similar to the Standard Model coupled to the inflaton via their Higgses, such dynamics can realise the Nnaturalness solution to the hierarchy problem. A sector containing a light Higgs with a non-zero vacuum expectation value is dominantly reheated and there is little energy transferred to the other sectors, consistent with cosmological constraints. The inflaton must decouple from other particles and have a flat potential at large field values, in which case the visible sector UV cutoff can be raised to 10 TeV in a simple model.

  12. The impacts of precipitation amount simulation on hydrological modeling in Nordic watersheds

    NASA Astrophysics Data System (ADS)

    Li, Zhi; Brissette, Fancois; Chen, Jie

    2013-04-01

    Stochastic modeling of daily precipitation is very important for hydrological modeling, especially when no observed data are available. Precipitation is usually modeled with a two-component model: occurrence generation and amount simulation. For occurrence simulation, the most common method is the first-order two-state Markov chain, owing to its simplicity and good performance. However, various probability distributions have been reported for simulating precipitation amount, and spatiotemporal differences exist in the applicability of different distribution models. Assessing the applicability of different distribution models is therefore necessary in order to provide more accurate precipitation information. Six precipitation probability distributions (exponential, Gamma, Weibull, skewed normal, mixed exponential, and hybrid exponential/Pareto distributions) are directly and indirectly evaluated on their ability to reproduce the observed time series of precipitation amount. Data from 24 weather stations and two watersheds (Chute-du-Diable and Yamaska) in the province of Quebec (Canada) are used for this assessment. Various indices and statistics, such as the mean, variance, frequency distribution, and extreme values, are used to quantify the performance in simulating precipitation and discharge. Performance in reproducing key statistics of the precipitation time series is well correlated with the number of parameters of the distribution function: the three-parameter models outperform the others, with the mixed exponential distribution being the best at simulating daily precipitation. The advantage of using more complex precipitation distributions is not as clear-cut when the simulated time series are used to drive a hydrological model. While the advantage of functions with more parameters is less obvious there, the mixed exponential distribution nonetheless appears to be the best candidate for hydrological modeling.
The implications of choosing a distribution function with respect to hydrological modeling and climate change impact studies are also discussed.
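
    As an illustration of the direct evaluation step (a generic sketch with synthetic data, not the study's code), candidate amount distributions can be fitted to wet-day amounts and compared by log-likelihood:

```python
# Illustrative sketch: fit two candidate wet-day amount distributions
# and compare them by log-likelihood. The data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic daily precipitation amounts (mm), drawn from a skewed gamma law.
amounts = rng.gamma(shape=0.8, scale=6.0, size=2000)

# Fit with the location fixed at zero, as is usual for precipitation amounts.
loc_e, scale_e = stats.expon.fit(amounts, floc=0)
a_g, loc_g, scale_g = stats.gamma.fit(amounts, floc=0)

ll_expon = stats.expon.logpdf(amounts, loc_e, scale_e).sum()
ll_gamma = stats.gamma.logpdf(amounts, a_g, loc_g, scale_g).sum()

# The two-parameter gamma nests the one-parameter exponential (shape = 1),
# so its maximized log-likelihood should be at least as large.
print(ll_gamma > ll_expon)  # → True
```

    In practice such comparisons would also penalize the extra parameters (e.g. with AIC) and be repeated station by station, as the study's indirect hydrological evaluation suggests.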

  13. Exponential Acceleration of VT Seismicity in the Years Prior to Major Eruptions of Basaltic Volcanoes

    NASA Astrophysics Data System (ADS)

    Lengline, O.; Marsan, D.; Got, J.; Pinel, V.

    2007-12-01

    The evolution of the seismicity at three basaltic volcanoes (Kilauea, Mauna Loa and Piton de la Fournaise) is analysed during phases of magma accumulation. We show that the VT seismicity during these time periods is characterized by an exponential increase on a long time scale (years). Such an exponential acceleration can be explained by a model of seismicity forced by the replenishment of a magmatic reservoir. The increase in stress in the edifice caused by this replenishment is modeled. This stress history leads to a cumulative number of damage events, i.e., VT earthquakes, following the same exponential increase as found for the seismicity. A long-term seismicity precursor is thus detected at basaltic volcanoes. Although this precursory signal is not able to predict the onset times of future eruptions (as no diverging point is present in the model), it may help mitigate volcanic hazards.

  14. Multiserver Queueing Model subject to Single Exponential Vacation

    NASA Astrophysics Data System (ADS)

    Vijayashree, K. V.; Janani, B.

    2018-04-01

    A multi-server queueing model subject to a single exponential vacation is considered. Arrivals join the queue according to a Poisson process, and service takes place according to an exponential distribution. Whenever the system becomes empty, all the servers go on vacation and return after a fixed interval of time. The servers then start providing service if there are waiting customers; otherwise they wait for the next busy period. The vacation times are also assumed to be exponentially distributed. In this paper, the stationary and transient probabilities for the number of customers during the idle and functional states of the servers are obtained explicitly. Numerical illustrations are added to visualize the effect of the various parameters.

  15. Changes in speed distribution: Applying aggregated safety effect models to individual vehicle speeds.

    PubMed

    Vadeby, Anna; Forsman, Åsa

    2017-06-01

    This study investigated the effect of applying two aggregated models (the Power model and the Exponential model) to individual vehicle speeds instead of mean speeds. This is of particular interest when the measure introduced affects different parts of the speed distribution differently. The aim was to examine how the estimated overall risk was affected when assuming the models are valid on an individual vehicle level. Speed data from two applications of speed measurements were used in the study: an evaluation of movable speed cameras and a national evaluation of new speed limits in Sweden. The results showed that when applied on individual vehicle speed level compared with aggregated level, there was essentially no difference between these for the Power model in the case of injury accidents. However, for fatalities the difference was greater, especially for roads with new cameras where those driving fastest reduced their speed the most. For the case with new speed limits, the individual approach estimated a somewhat smaller effect, reflecting that changes in the 15th percentile (P15) were somewhat larger than changes in P85 in this case. For the Exponential model there was also a clear, although small, difference between applying the model to mean speed changes and individual vehicle speed changes when speed cameras were used. This applied both for injury accidents and fatalities. There were also larger effects for the Exponential model than for the Power model, especially for injury accidents. In conclusion, applying the Power or Exponential model to individual vehicle speeds is an alternative that provides reasonable results in relation to the original Power and Exponential models, but more research is needed to clarify the shape of the individual risk curve. It is not surprising that the impact on severe traffic crashes was larger in situations where those driving fastest reduced their speed the most. 
Further investigations on use of the Power and/or the Exponential model at individual vehicle level would require more data on the individual level from a range of international studies. Copyright © 2017 Elsevier Ltd. All rights reserved.
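
    The contrast between the two application levels can be sketched as follows (invented speeds; the exponent p = 4 is the conventional Power-model value for fatal accidents):

```python
# Hedged sketch of the Power model: the accident ratio after a speed
# change is (v_after / v_before) ** p, with p = 4 conventionally used
# for fatal accidents. The speeds below are invented; the fastest
# drivers reduce speed the most, as observed at the camera sites.

def power_model_ratio(v_before, v_after, p):
    return (v_after / v_before) ** p

before = [70.0, 90.0, 110.0]
after = [68.0, 85.0, 100.0]

# Aggregated application: plug in mean speeds.
mean_ratio = power_model_ratio(sum(before) / 3, sum(after) / 3, p=4)

# Individual application: average the per-vehicle risk ratios.
indiv_ratio = sum(power_model_ratio(b, a, p=4)
                  for b, a in zip(before, after)) / 3

# With uneven speed changes the two levels diverge; here the individual
# approach gives a ratio closer to 1, i.e. a somewhat smaller estimated effect.
print(indiv_ratio > mean_ratio)  # → True
```

    With identical per-vehicle changes the two levels coincide; the divergence appears exactly when the measure shifts different parts of the speed distribution differently, which is the study's point.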

  16. Modeling the expenditure and reconstitution of work capacity above critical power.

    PubMed

    Skiba, Philip Friere; Chidnok, Weerapong; Vanhatalo, Anni; Jones, Andrew M

    2012-08-01

    The critical power (CP) model includes two constants: the CP and the W' [P = (W' / t) + CP]. The W' is the finite work capacity available above CP. Power output above CP results in depletion of the W'; complete depletion of the W' results in exhaustion. Monitoring the W' may be valuable to athletes during training and competition. Our purpose was to develop a function describing the dynamic state of the W' during intermittent exercise. After determination of V˙O(2max), CP, and W', seven subjects completed four separate exercise tests on a cycle ergometer on different days. Each protocol comprised a set of intervals: 60 s at a severe power output, followed by 30-s recovery at a lower prescribed power output. The intervals were repeated until exhaustion. These data were entered into a continuous equation predicting the balance of W' remaining, assuming exponential reconstitution of the W'. The time constant was varied by an iterative process until the modeled W' remaining equaled 0 at the point of exhaustion. The time constants of W' recharge were negatively correlated with the difference between sub-CP recovery power and CP. The relationship was best fit by an exponential (r = 0.77). The model-predicted W' balance correlated with the temporal course of the rise in V˙O(2) (r = 0.82-0.96). The model accurately predicted exhaustion of the W' in a competitive cyclist during a road race. We have developed a function to track the dynamic state of the W' during intermittent exercise. This may have important implications for the planning and real-time monitoring of athletic performance.
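
    The depletion/recharge bookkeeping can be sketched as below (a simplified discrete-time variant of the W' balance idea; the constants and the fixed recovery time constant are illustrative, whereas the study fits the time constant to the recovery power):

```python
# Simplified discrete-time sketch of tracking the W' balance: linear
# depletion above CP, exponential recharge below it. CP, W', and the
# fixed recovery time constant TAU are invented illustrative values.
import math

CP = 250.0         # critical power, W
W_PRIME = 20000.0  # finite work capacity above CP, J
TAU = 300.0        # recovery time constant, s (assumed constant here)

def w_bal(power_series, dt=1.0):
    """Return the remaining W' after each time step."""
    bal = W_PRIME
    history = []
    for p in power_series:
        if p > CP:
            bal -= (p - CP) * dt  # deplete in proportion to power above CP
        else:
            deficit = W_PRIME - bal
            bal = W_PRIME - deficit * math.exp(-dt / TAU)  # exponential recharge
        history.append(bal)
    return history

# One interval: 60 s at a severe 400 W, then 30 s of recovery at 100 W.
trace = w_bal([400.0] * 60 + [100.0] * 30)
print(trace[59])  # → 11000.0 (20000 J minus 150 J/s for 60 s)
```

    Exhaustion would be flagged wherever the tracked balance reaches zero, which is the real-time monitoring use the abstract envisions.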

  17. Radiofrequency ablation: importance of background tissue electrical conductivity--an agar phantom and computer modeling study.

    PubMed

    Solazzo, Stephanie A; Liu, Zhengjun; Lobo, S Melvyn; Ahmed, Muneeb; Hines-Peralta, Andrew U; Lenkinski, Robert E; Goldberg, S Nahum

    2005-08-01

    To determine whether radiofrequency (RF)-induced heating can be correlated with background electrical conductivity in a controlled experimental phantom environment mimicking different background tissue electrical conductivities, and to determine the potential electrical and physical basis for such a correlation by using computer modeling. The effect of background tissue electrical conductivity on RF-induced heating was studied in a controlled system of 80 two-compartment agar phantoms (with inner wells of 0.3%, 1.0%, or 36.0% NaCl) with background conductivity that varied from 0.6% to 5.0% NaCl. Mathematical modeling of the relationship between electrical conductivity and temperatures 2 cm from the electrode (T2cm) was performed. Next, computer simulation of RF heating by using two-dimensional finite-element analysis (ETherm) was performed with parameters selected to approximate the agar phantoms. Resultant heating, in terms of both the T2cm and the distance of defined thermal isotherms from the electrode surface, was calculated and compared with the phantom data. Additionally, electrical and thermal profiles were determined by using the computer modeling data and correlated by using linear regression analysis. For each inner compartment NaCl concentration, a negative exponential relationship was established between increased background NaCl concentration and the T2cm (R2 = 0.64-0.78). Similar negative exponential relationships (R2 > 0.97) were observed for the computer modeling. Correlation values (R2) between the computer and experimental data were 0.9, 0.9, and 0.55 for the 0.3%, 1.0%, and 36.0% inner NaCl concentrations, respectively. Plotting of the electrical field generated around the RF electrode identified the potential for a dramatic local change in electrical field distribution (i.e., a second electrical peak ["E-peak"]) occurring at the interface between the two compartments of varied background electrical conductivity.
Linear correlations between the E-peak and heating at the T2cm (R2 = 0.98-1.00) and at the 50 degrees C isotherm (R2 = 0.99-1.00) were established. These results demonstrate the strong relationship between background tissue conductivity and RF heating and further explain the electrical phenomena that occur in a two-compartment system.

  18. Discrete time rescaling theorem: determining goodness of fit for discrete time statistical models of neural spiking.

    PubMed

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-10-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time-rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model's spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies on assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time-rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time-rescaling theorem that analytically corrects for the effects of finite resolution. This allows us to define a rescaled time that is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting generalized linear models to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false-positive rate of the KS test and greatly increasing the reliability of model evaluation based on the time-rescaling theorem.
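
    The continuous-time rescaling check is easy to illustrate for the simplest case, a homogeneous Poisson spike train with known rate (synthetic data; the paper's discrete-time corrections are not shown here):

```python
# Toy illustration of the continuous-time rescaling check: for a
# homogeneous Poisson spike train with known rate, the rescaled ISIs
# (rate * ISI) are unit-exponential, so the KS statistic against the
# exponential distribution should be near zero. Synthetic data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rate = 20.0                        # spikes per second
isis = rng.exponential(1.0 / rate, size=1000)

rescaled = rate * isis             # time rescaling for a constant-rate model
ks = stats.kstest(rescaled, "expon")
print(ks.statistic < 0.1)  # near zero when the model is correct
```

    The paper's point is that this clean behavior breaks down once time is binned: with discrete-time models the rescaled ISIs are no longer exponential, and the reference distribution must be simulated or corrected analytically.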

  19. Automatic selection of arterial input function using tri-exponential models

    NASA Astrophysics Data System (ADS)

    Yao, Jianhua; Chen, Jeremy; Castro, Marcelo; Thomasson, David

    2009-02-01

    Dynamic Contrast Enhanced MRI (DCE-MRI) is one method for drug and tumor assessment. Selecting a consistent arterial input function (AIF) is necessary to calculate tissue and tumor pharmacokinetic parameters in DCE-MRI. This paper presents an automatic and robust method to select the AIF. The first stage is artery detection and segmentation, where knowledge about artery structure and the dynamic signal intensity temporal properties of DCE-MRI is employed. The second stage is AIF model fitting and selection. A tri-exponential model is fitted for every candidate AIF using the Levenberg-Marquardt method, and the best fitted AIF is selected. Our method has been applied in DCE-MRIs of four different body parts: breast, brain, liver and prostate. The success rate in artery segmentation for 19 cases was 89.6% ± 15.9%. The pharmacokinetic parameters computed from the automatically selected AIFs are highly correlated with those from manually determined AIFs (R2 = 0.946, P(T<=t) = 0.09). Our imaging-based tri-exponential AIF model demonstrated significant improvement over a previously proposed bi-exponential model.
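
    A minimal sketch of the model-fitting step (synthetic curve and invented parameters; scipy's curve_fit uses the Levenberg-Marquardt algorithm by default for unbounded problems):

```python
# Sketch of fitting a tri-exponential curve with non-linear least squares.
# The amplitudes, decay rates, and noise level below are invented.
import numpy as np
from scipy.optimize import curve_fit

def tri_exp(t, a1, m1, a2, m2, a3, m3):
    return (a1 * np.exp(-m1 * t)
            + a2 * np.exp(-m2 * t)
            + a3 * np.exp(-m3 * t))

t = np.linspace(0.0, 10.0, 200)
true_params = (5.0, 2.0, 3.0, 0.5, 1.0, 0.05)
rng = np.random.default_rng(2)
y = tri_exp(t, *true_params) + rng.normal(0.0, 0.01, t.size)

# Multi-exponential fits are ill-conditioned; a starting guess in the
# right neighborhood matters.
popt, _ = curve_fit(tri_exp, t, y, p0=(4.0, 1.5, 2.0, 0.4, 1.0, 0.1),
                    maxfev=20000)
rms = float(np.sqrt(np.mean((y - tri_exp(t, *popt)) ** 2)))
print(rms < 0.05)  # the residual should land near the noise level
```

    In the paper's pipeline this fit quality is what ranks the candidate AIFs so the best-fitted one can be selected automatically.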

  20. Comparison of kinetic model for biogas production from corn cob

    NASA Astrophysics Data System (ADS)

    Shitophyta, L. M.; Maryudi

    2018-04-01

    Energy demand increases every day, while energy sources, especially fossil fuels, are progressively depleted. One solution to this depletion is to provide renewable energies such as biogas. Biogas can be generated from corn cob and food waste. In this study, biogas production was carried out by solid-state anaerobic digestion. The steps of biogas production were the preparation of feedstock, the solid-state anaerobic digestion, and the measurement of biogas volume. The study was conducted at total solids (TS) contents of 20%, 22%, and 24%. The aim of this research was to compare kinetic models of biogas production from corn cob with food waste as a co-digestate, using linear, exponential, and first-order kinetic models. The results showed that the exponential equation correlated better than the linear equation with the ascending portion of the biogas production curve; conversely, the linear equation correlated better with the descending portion. The correlation values for the first-order kinetic model were the smallest of the three models.

  1. Modeling of single event transients with dual double-exponential current sources: Implications for logic cell characterization

    DOE PAGES

    Black, Dolores Archuleta; Robinson, William H.; Wilcox, Ian Zachary; ...

    2015-08-07

    Single event effects (SEE) are a reliability concern for modern microelectronics. Bit corruptions can be caused by single event upsets (SEUs) in the storage cells or by sampling single event transients (SETs) from a logic path. Accurate prediction of soft error susceptibility from SETs therefore requires good models to convert collected charge into compact descriptions of the current injection process. This paper describes a simple, yet effective, method to model the current waveform resulting from a charge collection event for SET circuit simulations. The model uses two double-exponential current sources in parallel, and the results illustrate why a conventional model based on one double-exponential source can be incomplete. Furthermore, a small set of logic cells with varying input conditions, drive strengths, and output loadings is simulated to extract the parameters for the dual double-exponential current sources. As a result, the parameters are based upon both the node capacitance and the restoring current (i.e., drive strength) of the logic cell.
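
The dual-source idea can be sketched as the sum of two double-exponential pulses. The amplitudes and time constants below are illustrative placeholders, not values from the paper:

```python
import math

def double_exp(t, i0, tau_rise, tau_fall):
    # single double-exponential pulse: zero at t=0, rises, peaks, then decays
    return i0 * (math.exp(-t/tau_fall) - math.exp(-t/tau_rise))

def set_current(t):
    # dual source: a fast "prompt" component plus a slower "plateau" component
    # (all parameter values are hypothetical, in arbitrary time/current units)
    return (double_exp(t, 1.0, 0.005, 0.05) +   # fast component
            double_exp(t, 0.2, 0.05, 0.50))     # slow component

samples = [set_current(0.01*n) for n in range(200)]
```

The superposition lets the waveform keep a prompt spike and a long tail at the same time, which a single double-exponential cannot reproduce well.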

  2. [Application of exponential smoothing method in prediction and warning of epidemic mumps].

    PubMed

    Shi, Yun-ping; Ma, Jia-qi

    2010-06-01

    To analyze daily data on epidemic mumps in a province from 2004 to 2008 and to establish an exponential smoothing model for prediction and early warning. Epidemic mumps in 2008 was predicted and warned against by calculating the 7-day moving sum of daily reported mumps cases during 2005-2008, removing the effect of weekends, and applying exponential smoothing to the data from 2005 to 2007. The Holt-Winters exponential smoothing model performed well: the warning sensitivity was 76.92%, the specificity was 83.33%, and the timely rate was 80%. It is practicable to use the exponential smoothing method to warn against epidemic mumps.
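
The study uses Holt-Winters smoothing; as a minimal sketch of the underlying recursion, the snippet below applies simple (single) exponential smoothing to invented daily case counts and flags days that exceed the previous smoothed level by an arbitrary threshold factor:

```python
def exp_smooth(series, alpha=0.3):
    # simple exponential smoothing: s_t = alpha*x_t + (1-alpha)*s_{t-1}
    s = [series[0]]
    for x in series[1:]:
        s.append(alpha*x + (1-alpha)*s[-1])
    return s

counts = [12, 15, 14, 30, 28, 26, 18, 16]   # illustrative daily case counts
smoothed = exp_smooth(counts)

# a warning rule could flag days where the observed count exceeds the
# previous smoothed level by some threshold factor (1.5 is arbitrary here)
alerts = [i for i in range(1, len(counts)) if counts[i] > 1.5*smoothed[i-1]]
```

A full Holt-Winters model adds trend and seasonal recursions to this same idea.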

  3. Policy Effects in Hyperbolic vs. Exponential Models of Consumption and Retirement

    PubMed Central

    Gustman, Alan L.; Steinmeier, Thomas L.

    2012-01-01

    This paper constructs a structural retirement model with hyperbolic preferences and uses it to estimate the effect of several potential Social Security policy changes. Estimated effects of policies are compared using two models, one with hyperbolic preferences and one with standard exponential preferences. Sophisticated hyperbolic discounters may accumulate substantial amounts of wealth for retirement. We find it is frequently difficult to distinguish empirically between models with the two types of preferences on the basis of asset accumulation paths or consumption paths around the period of retirement. Simulations suggest that, despite the much higher initial time preference rate, individuals with hyperbolic preferences may actually value a real annuity more than individuals with exponential preferences who have accumulated roughly equal amounts of assets. This appears to be especially true for individuals with relatively high time preference rates or who have low assets for whatever reason. This affects the tradeoff between current benefits and future benefits on which many of the retirement incentives of the Social Security system rest. Simulations involving increasing the early entitlement age and increasing the delayed retirement credit do not show a great deal of difference whether exponential or hyperbolic preferences are used, but simulations for eliminating the earnings test show a non-trivially greater effect when exponential preferences are used. PMID:22711946

  4. Policy Effects in Hyperbolic vs. Exponential Models of Consumption and Retirement.

    PubMed

    Gustman, Alan L; Steinmeier, Thomas L

    2012-06-01

    This paper constructs a structural retirement model with hyperbolic preferences and uses it to estimate the effect of several potential Social Security policy changes. Estimated effects of policies are compared using two models, one with hyperbolic preferences and one with standard exponential preferences. Sophisticated hyperbolic discounters may accumulate substantial amounts of wealth for retirement. We find it is frequently difficult to distinguish empirically between models with the two types of preferences on the basis of asset accumulation paths or consumption paths around the period of retirement. Simulations suggest that, despite the much higher initial time preference rate, individuals with hyperbolic preferences may actually value a real annuity more than individuals with exponential preferences who have accumulated roughly equal amounts of assets. This appears to be especially true for individuals with relatively high time preference rates or who have low assets for whatever reason. This affects the tradeoff between current benefits and future benefits on which many of the retirement incentives of the Social Security system rest. Simulations involving increasing the early entitlement age and increasing the delayed retirement credit do not show a great deal of difference whether exponential or hyperbolic preferences are used, but simulations for eliminating the earnings test show a non-trivially greater effect when exponential preferences are used.

  5. On non-exponential cosmological solutions with two factor spaces of dimensions m and 1 in the Einstein-Gauss-Bonnet model with a Λ-term

    NASA Astrophysics Data System (ADS)

    Ernazarov, K. K.

    2017-12-01

    We consider an (m + 2)-dimensional Einstein-Gauss-Bonnet (EGB) model with a cosmological Λ-term. We restrict the metrics to diagonal ones and find, for a certain Λ = Λ(m), a class of cosmological solutions with non-exponential time dependence of the two scale factors of dimensions m > 2 and 1. Any solution from this class describes an accelerated expansion of the m-dimensional subspace and tends asymptotically to an isotropic solution with exponential dependence of the scale factors.

  6. A method for nonlinear exponential regression analysis

    NASA Technical Reports Server (NTRS)

    Junkin, B. G.

    1971-01-01

    A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix was derived and then applied to the nominal estimate to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
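
The iteration described in this abstract can be sketched as a Gauss-Newton loop on a two-parameter decay model y = A·exp(-k·t): linearize the residuals via a first-order Taylor expansion, solve the 2×2 normal equations for the correction, and repeat. Data and starting values below are invented and noise-free:

```python
import math

def gauss_newton_decay(t, y, A, k, iters=20):
    # fit y ≈ A*exp(-k*t): linearize residuals in (A, k), solve the 2x2
    # normal equations (J^T J) d = J^T r for the correction, and repeat
    for _ in range(iters):
        r  = [yi - A*math.exp(-k*ti) for ti, yi in zip(t, y)]
        jA = [math.exp(-k*ti)        for ti in t]    # d(model)/dA
        jk = [-A*ti*math.exp(-k*ti)  for ti in t]    # d(model)/dk
        a11 = sum(j*j for j in jA)
        a12 = sum(p*q for p, q in zip(jA, jk))
        a22 = sum(j*j for j in jk)
        b1  = sum(p*ri for p, ri in zip(jA, r))
        b2  = sum(q*ri for q, ri in zip(jk, r))
        det = a11*a22 - a12*a12
        A  += ( a22*b1 - a12*b2) / det
        k  += (-a12*b1 + a11*b2) / det
    return A, k

t = [0.5*i for i in range(10)]
y = [3.0*math.exp(-0.8*ti) for ti in t]      # synthetic noise-free decay data
A_hat, k_hat = gauss_newton_decay(t, y, A=2.0, k=0.5)
```

In practice (as the abstract notes) the starting values would come from a preliminary linear (log-domain) fit, and iteration would stop on a convergence criterion rather than a fixed count.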

  7. MRI quantification of diffusion and perfusion in bone marrow by intravoxel incoherent motion (IVIM) and non-negative least square (NNLS) analysis.

    PubMed

    Marchand, A J; Hitti, E; Monge, F; Saint-Jalmes, H; Guillin, R; Duvauferrier, R; Gambarota, G

    2014-11-01

    To assess the feasibility of measuring diffusion and the perfusion fraction in vertebral bone marrow using the intravoxel incoherent motion (IVIM) approach, and to compare two fitting methods, i.e., the non-negative least squares (NNLS) algorithm and the more commonly used Levenberg-Marquardt (LM) non-linear least squares algorithm, for the analysis of IVIM data. MRI experiments were performed on fifteen healthy volunteers with a diffusion-weighted echo-planar imaging (EPI) sequence at five different b-values (0, 50, 100, 200, 600 s/mm(2)), in combination with an STIR module to suppress the lipid signal. Diffusion signal decays in the first lumbar vertebra (L1) were fitted to a bi-exponential function using the LM algorithm and further analyzed with the NNLS algorithm to calculate the values of the apparent diffusion coefficient (ADC), pseudo-diffusion coefficient (D*) and perfusion fraction. The NNLS analysis revealed two diffusion components in only seven out of fifteen volunteers, with ADC=0.60±0.09 (10(-3) mm(2)/s), D*=28±9 (10(-3) mm(2)/s) and perfusion fraction=14%±6%. The values obtained by the LM bi-exponential fit were: ADC=0.45±0.27 (10(-3) mm(2)/s), D*=63±145 (10(-3) mm(2)/s) and perfusion fraction=27%±17%. Furthermore, the LM algorithm yielded values of the perfusion fraction in cases where the decay was not bi-exponential, as assessed by NNLS analysis. The IVIM approach allows for measuring diffusion and the perfusion fraction in vertebral bone marrow; its reliability can be improved by using NNLS, which identifies the diffusion decays that display bi-exponential behavior. Copyright © 2014 Elsevier Inc. All rights reserved.
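
For reference, the bi-exponential IVIM signal model fitted in this study has the form S(b)/S0 = f·exp(-b·D*) + (1-f)·exp(-b·ADC). The sketch below evaluates it at the protocol's b-values using the parameter values the NNLS analysis reported:

```python
import math

def ivim_signal(b, f, d_star, adc):
    # bi-exponential IVIM: perfusion (pseudo-diffusion) pool + tissue diffusion pool
    return f*math.exp(-b*d_star) + (1.0 - f)*math.exp(-b*adc)

# values reported by the NNLS analysis (ADC and D* in mm^2/s, b in s/mm^2)
f, d_star, adc = 0.14, 28e-3, 0.60e-3
b_values = [0, 50, 100, 200, 600]
signals = [ivim_signal(b, f, d_star, adc) for b in b_values]
```

The fast D* component is essentially gone by b ≈ 200 s/mm², which is why the perfusion fraction is estimated from the low-b part of the decay.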

  8. Dynamic modeling of sludge compaction and consolidation processes in wastewater secondary settling tanks.

    PubMed

    Abusam, A; Keesman, K J

    2009-01-01

    The double exponential settling model is the widely accepted model for wastewater secondary settling tanks. However, this model does not accurately estimate solids concentrations in the settler underflow stream, mainly because sludge compression and consolidation processes are not considered. In activated sludge systems, accurate estimation of the solids in the underflow stream will facilitate the calibration process and can lead to correct estimates particularly of kinetic parameters related to biomass growth. Using principles of compaction and consolidation, as in soil mechanics, a dynamic model of the sludge consolidation processes taking place in secondary settling tanks is developed and incorporated into the commonly used double exponential settling model. The modified double exponential model is calibrated and validated using data obtained from a full-scale wastewater treatment plant. Good agreement between predicted and measured data confirmed the validity of the modified model.
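
The double exponential settling velocity function referred to here (commonly attributed to Takács et al.) can be sketched as follows; the parameter defaults are typical benchmark values quoted in the activated-sludge modeling literature and are shown for illustration only:

```python
import math

def settling_velocity(x, x_min=0.0, v0=474.0, v_max=250.0,
                      rh=5.76e-4, rp=2.86e-3):
    # double exponential (Takács-type) settling velocity in m/d;
    # x is the solids concentration in g/m^3; rh governs hindered
    # settling, rp the low-concentration (flocculant) regime
    xs = x - x_min
    v = v0*(math.exp(-rh*xs) - math.exp(-rp*xs))
    return max(0.0, min(v_max, v))

velocities = [settling_velocity(x) for x in (100.0, 1000.0, 3000.0, 8000.0)]
```

The velocity rises, saturates, then falls off steeply at high concentration; the paper's contribution is to add compression/consolidation behavior that this function alone does not capture.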

  9. Discrete Time Rescaling Theorem: Determining Goodness of Fit for Discrete Time Statistical Models of Neural Spiking

    PubMed Central

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-01-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time rescaling theorem provides a goodness of fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model’s spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies upon assumptions of continuously defined time and instantaneous events. However spikes have finite width and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time rescaling theorem which analytically corrects for the effects of finite resolution. This allows us to define a rescaled time which is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting Generalized Linear Models (GLMs) to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false positive rate of the KS test and greatly increasing the reliability of model evaluation based upon the time rescaling theorem. PMID:20608868
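
The continuous-time rescaling the paper builds on can be illustrated numerically. For a constant-rate model the rescaled intervals are simply rate × ISI, and if the model is correct they should be Exp(1); the sketch below checks this with a one-sample KS statistic (simulated data, so the discretization issue the paper addresses does not arise here):

```python
import math
import random

random.seed(0)

# simulate a homogeneous Poisson spike train with rate lam (illustrative)
lam, n = 5.0, 2000
isis = [random.expovariate(lam) for _ in range(n)]

# time rescaling: tau_k is the integral of the model rate over the ISI;
# for a constant rate this is just lam * isi, and the tau_k should be
# Exp(1)-distributed if the "model" (rate lam) is correct
taus = sorted(lam*isi for isi in isis)

# one-sample KS statistic against the Exp(1) CDF, F(x) = 1 - exp(-x)
ks = max(max(abs((i+1)/n - (1 - math.exp(-x))),
             abs(i/n - (1 - math.exp(-x))))
         for i, x in enumerate(taus))
```

A discrete-time (binned) model breaks this exponentiality, which is exactly the failure mode the two corrections in the paper address.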

  10. Exponential quantum spreading in a class of kicked rotor systems near high-order resonances

    NASA Astrophysics Data System (ADS)

    Wang, Hailong; Wang, Jiao; Guarneri, Italo; Casati, Giulio; Gong, Jiangbin

    2013-11-01

    Long-lasting exponential quantum spreading was recently found in a simple but very rich dynamical model, namely, an on-resonance double-kicked rotor model [J. Wang, I. Guarneri, G. Casati, and J. B. Gong, Phys. Rev. Lett. 107, 234104 (2011)]. The underlying mechanism, unrelated to the chaotic motion in the classical limit but resting on quasi-integrable motion in a pseudoclassical limit, is identified for one special case. By presenting a detailed study of the same model, this work offers a framework to explain long-lasting exponential quantum spreading under much more general conditions. In particular, we adopt the so-called “spinor” representation to treat the kicked-rotor dynamics under high-order resonance conditions and then exploit the Born-Oppenheimer approximation to understand the dynamical evolution. It is found that the existence of a flat band (or an effectively flat band) is one important feature behind why and how the exponential dynamics emerges. It is also found that a quantitative prediction of the exponential spreading rate based on an interesting and simple pseudoclassical map may be inaccurate. In addition to general interests regarding the question of how exponential behavior in quantum systems may persist for a long time scale, our results should motivate further studies toward a better understanding of high-order resonance behavior in δ-kicked quantum systems.

  11. A decades-long fast-rise-exponential-decay flare in low-luminosity AGN NGC 7213

    NASA Astrophysics Data System (ADS)

    Yan, Zhen; Xie, Fu-Guo

    2018-03-01

    We analysed the four-decades-long X-ray light curve of the low-luminosity active galactic nucleus (LLAGN) NGC 7213 and discovered a fast-rise-exponential-decay (FRED) pattern, i.e. the X-ray luminosity increased by a factor of ≈4 within 200 d, and then decreased exponentially with an e-folding time ≈8116 d (≈22.2 yr). For the theoretical understanding of the observations, we examined three variability models proposed in the literature: the thermal-viscous disc instability model, the radiation pressure instability model, and the TDE model. We find that a delayed tidal disruption of a main-sequence star is most favourable; either the thermal-viscous disc instability model or radiation pressure instability model fails to explain some key properties observed, thus we argue them unlikely.

  12. Characterization of continuously distributed cortical water diffusion rates with a stretched-exponential model.

    PubMed

    Bennett, Kevin M; Schmainda, Kathleen M; Bennett, Raoqiong Tong; Rowe, Daniel B; Lu, Hanbing; Hyde, James S

    2003-10-01

    Experience with diffusion-weighted imaging (DWI) shows that signal attenuation is consistent with a multicompartmental theory of water diffusion in the brain. The source of this so-called nonexponential behavior is a topic of debate, because the cerebral cortex contains considerable microscopic heterogeneity and is therefore difficult to model. To account for this heterogeneity and understand its implications for current models of diffusion, a stretched-exponential function was developed to describe diffusion-related signal decay as a continuous distribution of sources decaying at different rates, with no assumptions made about the number of participating sources. DWI experiments were performed using a spin-echo diffusion-weighted pulse sequence with b-values of 500-6500 s/mm(2) in six rats. Signal attenuation curves were fit to a stretched-exponential function, and 20% of the voxels were better fit to the stretched-exponential model than to a biexponential model, even though the latter model had one more adjustable parameter. Based on the calculated intravoxel heterogeneity measure, the cerebral cortex contains considerable heterogeneity in diffusion. The use of a distributed diffusion coefficient (DDC) is suggested to measure mean intravoxel diffusion rates in the presence of such heterogeneity. Copyright 2003 Wiley-Liss, Inc.
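
The stretched-exponential signal model described here is S(b) = S0·exp(-(b·DDC)^α), where α ∈ (0,1] measures intravoxel heterogeneity and α = 1 recovers the mono-exponential decay. A minimal evaluation (parameter values illustrative):

```python
import math

def stretched_exp(b, s0, ddc, alpha):
    # S(b) = S0 * exp(-(b*DDC)**alpha); alpha=1 is mono-exponential,
    # alpha<1 models a continuous distribution of decay rates
    return s0 * math.exp(-((b*ddc)**alpha))

ddc = 1.0e-3                               # distributed diffusion coeff., mm^2/s
mono   = stretched_exp(3000, 1.0, ddc, 1.0)   # homogeneous voxel
hetero = stretched_exp(3000, 1.0, ddc, 0.7)   # heterogeneous voxel
```

At high b-values the stretched decay retains more signal than the mono-exponential one, which is the heavy-tail behavior the study attributes to microscopic heterogeneity.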

  13. The generalized truncated exponential distribution as a model for earthquake magnitudes

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias

    2015-04-01

    The random distribution of small, medium and large earthquake magnitudes follows an exponential distribution (ED) according to the Gutenberg-Richter relation. But a magnitude distribution is truncated in the range of very large magnitudes because earthquake energy is finite, and the upper tail of the exponential distribution does not fit observations well. Hence the truncated exponential distribution (TED) is frequently applied for modelling magnitude distributions in seismic hazard and risk analysis. The TED has a weak point: when two TEDs with equal parameters, except for the upper bound magnitude, are mixed, the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters, except for the upper bound magnitude. This weakness is a principal problem, as seismic regions are constructed scientific objects and not natural units. It also applies to alternative distribution models. The presented generalized truncated exponential distribution (GTED) overcomes this weakness. The ED and the TED are special cases of the GTED. Different issues of statistical inference are also discussed, and an example with empirical data is presented in the current contribution.
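
The TED discussed here has CDF F(m) = (1 - e^(-β(m-m0))) / (1 - e^(-β(mmax-m0))) on [m0, mmax], and can be sampled by inverting that CDF. A sketch with illustrative bounds and a β corresponding to a Gutenberg-Richter b-value of 1:

```python
import math
import random

def ted_cdf(m, m0, m_max, beta):
    # truncated exponential CDF for magnitudes m0 <= m <= m_max
    z = 1 - math.exp(-beta*(m_max - m0))
    return (1 - math.exp(-beta*(m - m0))) / z

def ted_sample(m0, m_max, beta, rng):
    # inverse-CDF sampling: solve F(m) = u for uniform u
    z = 1 - math.exp(-beta*(m_max - m0))
    u = rng.random()
    return m0 - math.log(1 - u*z)/beta

rng = random.Random(1)
beta = math.log(10.0)        # b-value of 1 in the Gutenberg-Richter relation
samples = [ted_sample(4.0, 8.0, beta, rng) for _ in range(1000)]
```

The weakness the paper targets is that mixing two such distributions with different m_max values does not yield another TED.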

  14. CMB constraints on β-exponential inflationary models

    NASA Astrophysics Data System (ADS)

    Santos, M. A.; Benetti, M.; Alcaniz, J. S.; Brito, F. A.; Silva, R.

    2018-03-01

    We analyze a class of generalized inflationary models proposed in ref. [1], known as β-exponential inflation. We show that this kind of potential can arise in the context of brane cosmology, where the field describing the size of the extra-dimension is interpreted as the inflaton. We discuss the observational viability of this class of model in light of the latest Cosmic Microwave Background (CMB) data from the Planck Collaboration through a Bayesian analysis, and impose tight constraints on the model parameters. We find that the CMB data alone prefer weakly the minimal standard model (ΛCDM) over the β-exponential inflation. However, when current local measurements of the Hubble parameter, H0, are considered, the β-inflation model is moderately preferred over the ΛCDM cosmology, making the study of this class of inflationary models interesting in the context of the current H0 tension.

  15. Firing patterns in the adaptive exponential integrate-and-fire model.

    PubMed

    Naud, Richard; Marcille, Nicolas; Clopath, Claudia; Gerstner, Wulfram

    2008-11-01

    For simulations of large spiking neuron networks, an accurate, simple and versatile single-neuron modeling framework is required. Here we explore the versatility of a simple two-equation model: the adaptive exponential integrate-and-fire neuron. We show that this model generates multiple firing patterns depending on the choice of parameter values, and present a phase diagram describing the transition from one firing type to another. We give an analytical criterion to distinguish between continuous adaption, initial bursting, regular bursting and two types of tonic spiking. Also, we report that the deterministic model is capable of producing irregular spiking when stimulated with constant current, indicating low-dimensional chaos. Lastly, the simple model is fitted to real experiments of cortical neurons under step current stimulation. The results provide support for the suitability of simple models such as the adaptive exponential integrate-and-fire neuron for large network simulations.
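
The two-equation model can be sketched with forward-Euler integration. The parameter values below are typical textbook values for this model, and the drive current is arbitrary; this is an illustration of the dynamics (here, spike-frequency adaptation), not a reproduction of the paper's fits:

```python
import math

def adex(i_pa, t_ms=500.0, dt=0.05):
    # adaptive exponential integrate-and-fire, forward-Euler integration
    C, gL, EL  = 281.0, 30.0, -70.6     # pF, nS, mV
    VT, dT     = -50.4, 2.0             # mV (threshold, slope factor)
    a, tau_w, b = 4.0, 144.0, 80.5      # nS, ms, pA (adaptation)
    V, w, spikes = EL, 0.0, []
    for n in range(int(t_ms/dt)):
        dV = (-gL*(V-EL) + gL*dT*math.exp((V-VT)/dT) - w + i_pa) / C
        dw = (a*(V-EL) - w) / tau_w
        V += dt*dV
        w += dt*dw
        if V >= 0.0:                    # spike: record, reset, add adaptation
            spikes.append(n*dt)
            V = EL
            w += b
    return spikes

spikes = adex(1000.0)                   # 1 nA step current (illustrative)
isis = [t1 - t0 for t0, t1 in zip(spikes, spikes[1:])]
```

With these parameters the adaptation current w grows with each spike, so interspike intervals lengthen over the step — one of the firing patterns the phase diagram in the paper classifies.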

  16. Landscape movements of Anopheles gambiae malaria vector mosquitoes in rural Gambia.

    PubMed

    Thomas, Christopher J; Cross, Dónall E; Bøgh, Claus

    2013-01-01

    For malaria control in Africa it is crucial to characterise the dispersal of its most efficient vector, Anopheles gambiae, in order to target interventions and assess their impact spatially. Our study is, we believe, the first to present a statistical model of dispersal probability against distance from breeding habitat to human settlements for this important disease vector. We undertook post-hoc analyses of mosquito catches made in The Gambia to derive statistical dispersal functions for An. gambiae sensu lato collected in 48 villages at varying distances to alluvial larval habitat along the River Gambia. The proportion dispersing declined exponentially with distance, and we estimated that 90% of movements were within 1.7 km. Although a 'heavy-tailed' distribution is considered biologically more plausible due to active dispersal by mosquitoes seeking blood meals, there was no statistical basis for choosing it over a negative exponential distribution. Using a simple random walk model with daily survival and movements previously recorded in Burkina Faso, we were able to reproduce the dispersal probabilities observed in The Gambia. Our results provide an important quantification of the probability of An. gambiae s.l. dispersal in a rural African setting typical of many parts of the continent. However, dispersal will be landscape specific and in order to generalise to other spatial configurations of habitat and hosts it will be necessary to produce tractable models of mosquito movements for operational use. We show that simple random walk models have potential. Consequently, there is a pressing need for new empirical studies of An. gambiae survival and movements in different settings to drive this development.
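
The negative exponential dispersal kernel used in this study implies P(distance ≤ d) = 1 - exp(-λd). Calibrating λ so that 90% of movements fall within 1.7 km, as reported, fixes the whole curve:

```python
import math

# negative exponential dispersal: P(distance <= d) = 1 - exp(-lam*d);
# choose lam so that 90% of movements fall within 1.7 km (as reported)
lam = math.log(10.0) / 1.7          # solves 1 - exp(-lam*1.7) = 0.9

def frac_within(d_km):
    return 1.0 - math.exp(-lam*d_km)

median_km = math.log(2.0) / lam     # distance containing half of movements
```

This kind of back-calculation is useful for targeting interventions: the same λ gives the fraction of vector movements reaching any distance from larval habitat.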

  17. Exact simulation of integrate-and-fire models with exponential currents.

    PubMed

    Brette, Romain

    2007-10-01

    Neural networks can be simulated exactly using event-driven strategies, in which the algorithm advances directly from one spike to the next spike. It applies to neuron models for which we have (1) an explicit expression for the evolution of the state variables between spikes and (2) an explicit test on the state variables that predicts whether and when a spike will be emitted. In a previous work, we proposed a method that allows exact simulation of an integrate-and-fire model with exponential conductances, with the constraint of a single synaptic time constant. In this note, we propose a method, based on polynomial root finding, that applies to integrate-and-fire models with exponential currents, with possibly many different synaptic time constants. Models can include biexponential synaptic currents and spike-triggered adaptation currents.

  18. Non-Gaussian analysis of diffusion weighted imaging in head and neck at 3T: a pilot study in patients with nasopharyngeal carcinoma.

    PubMed

    Yuan, Jing; Yeung, David Ka Wai; Mok, Greta S P; Bhatia, Kunwar S; Wang, Yi-Xiang J; Ahuja, Anil T; King, Ann D

    2014-01-01

    To investigate non-Gaussian diffusion in head and neck diffusion weighted imaging (DWI) at 3 Tesla and to compare advanced non-Gaussian diffusion models, including diffusion kurtosis imaging (DKI), the stretched-exponential model (SEM), intravoxel incoherent motion (IVIM) and a statistical model, in patients with nasopharyngeal carcinoma (NPC). After ethics approval was granted, 16 patients with NPC were examined using DWI performed at 3T employing an extended b-value range from 0 to 1500 s/mm(2). DWI signals were fitted to the mono-exponential and non-Gaussian diffusion models in the primary tumor, metastatic nodes, spinal cord and muscle. Non-Gaussian parameter maps were generated and compared to apparent diffusion coefficient (ADC) maps in NPC. Diffusion in NPC exhibited non-Gaussian behavior over the extended b-value range. Non-Gaussian models achieved significantly better fitting of the DWI signal than the mono-exponential model. Non-Gaussian diffusion coefficients differed substantially from the mono-exponential ADC both in magnitude and in histogram distribution. Non-Gaussian diffusivity in head and neck tissues and NPC lesions can be assessed using non-Gaussian diffusion models. Non-Gaussian DWI analysis may reveal additional tissue properties beyond ADC and holds potential as a complementary tool for NPC characterization.

  19. On Statistical Modeling of Sequencing Noise in High Depth Data to Assess Tumor Evolution

    NASA Astrophysics Data System (ADS)

    Rabadan, Raul; Bhanot, Gyan; Marsilio, Sonia; Chiorazzi, Nicholas; Pasqualucci, Laura; Khiabanian, Hossein

    2018-07-01

    One cause of cancer mortality is tumor evolution to therapy-resistant disease. First line therapy often targets the dominant clone, and drug resistance can emerge from preexisting clones that gain fitness through therapy-induced natural selection. Such mutations may be identified using targeted sequencing assays by analysis of noise in high-depth data. Here, we develop a comprehensive, unbiased model for sequencing error background. We find that noise in sufficiently deep DNA sequencing data can be approximated by aggregating negative binomial distributions. Mutations with frequencies above noise may have prognostic value. We evaluate our model with simulated exponentially expanded populations as well as data from cell line and patient sample dilution experiments, demonstrating its utility in prognosticating tumor progression. Our results may have the potential to identify significant mutations that can cause recurrence. These results are relevant in the pretreatment clinical setting to determine appropriate therapy and prepare for potential recurrence.

  20. On Statistical Modeling of Sequencing Noise in High Depth Data to Assess Tumor Evolution

    NASA Astrophysics Data System (ADS)

    Rabadan, Raul; Bhanot, Gyan; Marsilio, Sonia; Chiorazzi, Nicholas; Pasqualucci, Laura; Khiabanian, Hossein

    2017-12-01

    One cause of cancer mortality is tumor evolution to therapy-resistant disease. First line therapy often targets the dominant clone, and drug resistance can emerge from preexisting clones that gain fitness through therapy-induced natural selection. Such mutations may be identified using targeted sequencing assays by analysis of noise in high-depth data. Here, we develop a comprehensive, unbiased model for sequencing error background. We find that noise in sufficiently deep DNA sequencing data can be approximated by aggregating negative binomial distributions. Mutations with frequencies above noise may have prognostic value. We evaluate our model with simulated exponentially expanded populations as well as data from cell line and patient sample dilution experiments, demonstrating its utility in prognosticating tumor progression. Our results may have the potential to identify significant mutations that can cause recurrence. These results are relevant in the pretreatment clinical setting to determine appropriate therapy and prepare for potential recurrence.

  1. Is a matrix exponential specification suitable for the modeling of spatial correlation structures?

    PubMed Central

    Strauß, Magdalena E.; Mezzetti, Maura; Leorato, Samantha

    2018-01-01

    This paper investigates the adequacy of the matrix exponential spatial specifications (MESS) as an alternative to the widely used spatial autoregressive models (SAR). To provide as complete a picture as possible, we extend the analysis to all the main spatial models governed by matrix exponentials comparing them with their spatial autoregressive counterparts. We propose a new implementation of Bayesian parameter estimation for the MESS model with vague prior distributions, which is shown to be precise and computationally efficient. Our implementations also account for spatially lagged regressors. We further allow for location-specific heterogeneity, which we model by including spatial splines. We conclude by comparing the performances of the different model specifications in applications to a real data set and by running simulations. Both the applications and the simulations suggest that the spatial splines are a flexible and efficient way to account for spatial heterogeneities governed by unknown mechanisms. PMID:29492375
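
The MESS replaces the SAR spatial filter (I - ρW) with a matrix exponential e^(αW), which is always invertible, with inverse e^(-αW). A minimal pure-Python sketch of the matrix exponential via its truncated power series (the weights matrix and α below are invented for illustration):

```python
def mat_mul(A, B):
    # naive square-matrix product
    n = len(A)
    return [[sum(A[i][k]*B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(A, terms=40):
    # truncated power series exp(A) = sum_k A^k / k!
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = mat_mul(term, A)                    # term <- term * A / k = A^k/k!
        term = [[v/k for v in row] for row in term]
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

# row-standardised weights for a 3-node chain graph (illustrative)
W = [[0.0, 1.0, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 1.0, 0.0]]
alpha = -0.4
S = mat_exp([[alpha*w for w in row] for row in W])   # MESS spatial filter
```

Production code would use a dedicated routine (e.g. a scaling-and-squaring implementation) rather than the raw series, but the series makes the construction transparent.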

  2. Bayesian exponential random graph modelling of interhospital patient referral networks.

    PubMed

    Caimo, Alberto; Pallotti, Francesca; Lomi, Alessandro

    2017-08-15

    Using original data that we have collected on referral relations between 110 hospitals serving a large regional community, we show how recently derived Bayesian exponential random graph models may be adopted to illuminate core empirical issues in research on relational coordination among healthcare organisations. We show how a rigorous Bayesian computation approach supports a fully probabilistic analytical framework that alleviates well-known problems in the estimation of model parameters of exponential random graph models. We also show how the main structural features of interhospital patient referral networks that prior studies have described can be reproduced with accuracy by specifying the system of local dependencies that produce - but at the same time are induced by - decentralised collaborative arrangements between hospitals. Copyright © 2017 John Wiley & Sons, Ltd.

  3. [Canopy interception of sub-alpine dark coniferous communities in western Sichuan, China].

    PubMed

    Lü, Yu-liang; Liu, Shi-rong; Sun, Peng-sen; Liu, Xing-liang; Zhang, Rui-pu

    2007-11-01

    Based on field measurements of throughfall and stemflow, in combination with climatic data collected from the meteorological station adjacent to the studied sub-alpine dark coniferous forest in Wolong, Sichuan Province, canopy interception of sub-alpine dark coniferous forests was analyzed and modeled at both stand and catchment scales. The results showed that the monthly interception rate of the Fargesia nitida, Bashania fangiana-Abies faxoniana old-growth stand ranged from 33% to 72%, with an average of 48%. In the growing season, there was a linear, power, or exponential relationship between rainfall and interception, and a negative exponential relationship between rainfall and interception rate. The mean maximum canopy interception by the vegetation in the catchment was 1.74 mm, and the significant differences among the five communities occurred in the following sequence: Moss-F. nitida, B. fangiana-A. faxoniana stand > Grass-F. nitida, B. fangiana-A. faxoniana stand > Moss-Rhododendron spp.-A. faxoniana stand > Grass-Rh. spp.-A. faxoniana stand > Rh. spp. shrub. In addition, a close linear relationship existed between leaf area index (LAI) and maximum canopy interception. The simulated values of the canopy interception rate, maximum canopy interception rate and additional interception rate of the vegetation in the catchment were 39%, 25% and 14%, respectively. Simulation with the canopy interception model was better at the overall growing-season scale, with a mean relative error of 9%-14%.

  4. Calculating Formulae of Proportion Factor and Mean Neutron Exposure in the Exponential Expression of Neutron Exposure Distribution

    NASA Astrophysics Data System (ADS)

    Feng-Hua, Zhang; Gui-De, Zhou; Kun, Ma; Wen-Juan, Ma; Wen-Yuan, Cui; Bo, Zhang

    2016-07-01

    Previous studies have shown that, for the three main stages in the development and evolution of asymptotic giant branch (AGB) star s-process models, the neutron exposure distribution (DNE) in the nucleosynthesis region can always be treated as an exponential function, i.e., ρAGB(τ) = C/τ0 exp(-τ/τ0), over an effective range of neutron exposure values. However, the specific expressions for the proportion factor C and the mean neutron exposure τ0 in the exponential distribution function for different models are not completely determined in the related literature. By dissecting the basic method used to obtain the exponential DNE, and systematically analyzing the solution procedures for the neutron exposure distribution functions in different stellar models, the general formulae for calculating C and τ0, as well as their auxiliary equations, are derived. Given the discrete neutron exposure distribution Pk, the relationships of C and τ0 with the model parameters can be determined. The result of this study effectively solves the problem of analytically calculating the DNE in the current low-mass AGB star s-process nucleosynthesis model with 13C-pocket radiative burning.

  5. Formaldehyde sorption and desorption characteristics of gypsum wallboard

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthews, T.G.; Hawthorne, A.R.; Thompson, C.V.

    1987-07-01

    The sorption and subsequent desorption of formaldehyde (CH2O) vapor from unpainted gypsum wallboard have been investigated in environmental chamber experiments conducted at 23 °C, 50% relative humidity, an air exchange to board loading ratio of 0.43 m/h, and CH2O concentrations ranging from 0 to 0.50 mg/m³. Both CH2O sorption and CH2O desorption processes are described by a three-parameter, single-exponential model with an exponential lifetime of 2.9 ± 0.1 days. The storage capacity of gypsum board for CH2O vapor results in a time-dependent buffer to changes in the CH2O vapor concentration surrounding the board, but appears to cause only a weak, permanent loss mechanism for CH2O vapor. Prior to significant depletion of sorbed CH2O, desorption rates from CH2O-exposed gypsum board exhibit a linear dependence with negative slope on CH2O vapor concentration. Analogous CH2O emissions properties have been observed for pressed-wood products bonded with urea-formaldehyde resins. 17 references, 5 figures.
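    A three-parameter, single-exponential relaxation of this kind can be sketched as follows; the concentration values below are illustrative assumptions, with only the 2.9-day lifetime taken from the abstract. Because ln(C(t) − C∞) is linear in t, the lifetime can be recovered by a simple log-linear regression:

```python
import math

def single_exp(t, c_inf, c0, tau):
    """Three-parameter single-exponential model C(t) = C_inf + (C0 - C_inf) * exp(-t / tau)."""
    return c_inf + (c0 - c_inf) * math.exp(-t / tau)

# Synthetic "chamber" data using the abstract's lifetime tau = 2.9 days;
# the concentrations (mg/m3) are assumed for illustration.
tau_true, c0, c_inf = 2.9, 0.50, 0.05
days = [0.5 * k for k in range(1, 20)]
conc = [single_exp(t, c_inf, c0, tau_true) for t in days]

# Log-linear estimate: ln(C - C_inf) is linear in t with slope -1/tau.
xs = days
ys = [math.log(c - c_inf) for c in conc]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
tau_est = -1.0 / slope
```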

  6. Formaldehyde sorption and desorption characteristics of gypsum wallboard

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthews, T.G.; Hawthorne, A.R.; Thompson, C.V.

    1986-01-01

    The sorption and subsequent desorption of formaldehyde (CH2O) vapor from unpainted gypsum wallboard have been investigated in environmental chamber experiments conducted at 23 °C, 50% relative humidity, an air exchange to board loading ratio of 0.43 m/h, and CH2O concentrations ranging from 0 to 0.50 mg/m³. Both CH2O sorption and desorption processes are described using a three-parameter, single-exponential model with an exponential lifetime of 2.9 ± 0.1 days. The storage capacity of gypsum board for CH2O vapor results in a time-dependent buffer to changes in the CH2O vapor concentration surrounding the board, but appears to cause only a weak, permanent loss mechanism for CH2O vapor. Short-term CH2O desorption rates from CH2O-exposed gypsum board (prior to significant depletion of sorbed CH2O) exhibit a linear dependence with negative slope on CH2O vapor concentration, analogous to CH2O emissions from pressed-wood products bonded with urea-formaldehyde resins.

  7. Modeling the impact of post-diagnosis behavior change on HIV prevalence in Southern California men who have sex with men (MSM).

    PubMed

    Khanna, Aditya S; Goodreau, Steven M; Gorbach, Pamina M; Daar, Eric; Little, Susan J

    2014-08-01

    Our objective here is to demonstrate the population-level effects of individual-level post-diagnosis behavior change (PDBC) in Southern Californian men who have sex with men (MSM) recently diagnosed with HIV. While PDBC has been empirically documented, its population-level effects are largely unknown. To examine these effects, we develop network models derived from the exponential random graph model family. We parameterize our models using behavioral data from the Southern California Acute Infection and Early Disease Research Program, and biological data from a number of published sources. Our models incorporate vital demographic processes, biology, treatment and behavior. We find that without PDBC, HIV prevalence among MSM would be significantly higher at any reasonable frequency of testing. We also demonstrate that the higher levels of HIV risk behavior among HIV-positive men relative to HIV-negative men observed in some cross-sectional studies are consistent with individual-level PDBC.

  8. TIME SHARING WITH AN EXPLICIT PRIORITY QUEUING DISCIPLINE.

    DTIC Science & Technology

    exponentially distributed service times and an ordered priority queue. Each new arrival buys a position in this queue by offering a non-negative bribe to the...parameters is investigated through numerical examples. Finally, to maximize the expected revenue per unit time accruing from bribes, an optimization

  9. An explicit asymptotic model for the surface wave in a viscoelastic half-space based on applying Rabotnov's fractional exponential integral operators

    NASA Astrophysics Data System (ADS)

    Wilde, M. V.; Sergeeva, N. V.

    2018-05-01

    An explicit asymptotic model extracting the contribution of a surface wave to the dynamic response of a viscoelastic half-space is derived. Rabotnov's fractional exponential integral operators are used to describe the material properties. The model is derived by extracting the principal part of the poles corresponding to the surface waves after applying the Laplace and Fourier transforms. The simplified equations for the originals are written using power series expansions. A Padé approximation is constructed to unite the short-time and long-time models. The form of this approximation makes it possible to formulate the explicit model using a Rabotnov fractional exponential integral operator with parameters depending on the properties of the surface wave. The applicability of the derived models is studied by comparison with exact solutions of a model problem. It is revealed that the model based on the Padé approximation is highly effective over all the possible time domains.

  10. Global exponential periodicity and stability of discrete-time complex-valued recurrent neural networks with time-delays.

    PubMed

    Hu, Jin; Wang, Jun

    2015-06-01

    In recent years, complex-valued recurrent neural networks have been developed and analysed in depth because of their good modelling performance in applications involving complex-valued elements. When implementing continuous-time dynamical systems for simulation or computational purposes, it is necessary to use a discrete-time model that is an analogue of the continuous-time system. In this paper, we analyse a discrete-time complex-valued recurrent neural network model and obtain sufficient conditions for its global exponential periodicity and exponential stability. Simulation results for several numerical examples are presented to illustrate the theoretical results, and an application to associative memory is also given. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Exponential stability of impulsive stochastic genetic regulatory networks with time-varying delays and reaction-diffusion

    DOE PAGES

    Cao, Boqiang; Zhang, Qimin; Ye, Ming

    2016-11-29

    We present a mean-square exponential stability analysis for impulsive stochastic genetic regulatory networks (GRNs) with time-varying delays and reaction-diffusion driven by fractional Brownian motion (fBm). By constructing a Lyapunov functional and using linear matrix inequalities for the stochastic analysis, we derive sufficient conditions that guarantee the exponential stability of the stochastic model of impulsive GRNs in the mean-square sense. Corresponding results are also obtained for GRNs with constant time delays and standard Brownian motion. Finally, an example is presented to illustrate the mean-square exponential stability results.

  12. A Decreasing Failure Rate, Mixed Exponential Model Applied to Reliability.

    DTIC Science & Technology

    1981-06-01

    Trident missile systems have been observed. The mixed exponential distribution has been shown to fit the life data for the electronic equipment on...these systems. This paper discusses some of the estimation problems which occur with the decreasing failure rate mixed exponential distribution when...assumption of constant or increasing failure rate seemed to be incorrect. 2. However, the design of this electronic equipment indicated that

  13. Confronting quasi-exponential inflation with WMAP seven

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pal, Barun Kumar; Pal, Supratik; Basu, B., E-mail: barunp1985@rediffmail.com, E-mail: pal@th.physik.uni-bonn.de, E-mail: banasri@isical.ac.in

    2012-04-01

    We confront quasi-exponential models of inflation with the WMAP seven-year dataset using the Hamilton-Jacobi formalism. With a phenomenological Hubble parameter representing quasi-exponential inflation, we develop the formalism and confront the analysis with WMAP seven using the publicly available code CAMB. The observable parameters are found to fare extremely well with WMAP seven. We also obtain a tensor-to-scalar amplitude ratio that may be detectable by PLANCK.

  14. Cell Size Regulation in Bacteria

    NASA Astrophysics Data System (ADS)

    Amir, Ariel

    2014-05-01

    Various bacteria, such as the canonical gram-negative Escherichia coli or the well-studied gram-positive Bacillus subtilis, divide symmetrically after they approximately double their volume. Their size at division is not constant, but is typically distributed over a narrow range. Here, we propose an analytically tractable model for cell size control, and calculate the cell size and interdivision time distributions, as well as the correlations between these variables. We suggest ways of extracting the model parameters from experimental data, and show that existing data for E. coli support partial size control, and a particular explanation: a cell attempts to add a constant volume from the time of initiation of DNA replication to the next initiation event. This hypothesis accounts for the experimentally observed correlations between mother and daughter cells as well as the exponential dependence of size on growth rate.
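    The "constant added volume" (adder) idea can be illustrated with a toy simulation; the noise level and starting size below are assumptions, not the paper's parameterization. Under symmetric division, birth sizes settle near the added volume Δ, so division sizes cluster near 2Δ regardless of the initial size:

```python
import random

def simulate_adder(delta, n_divisions, noise=0.05, seed=0):
    """Toy 'adder' model: each cycle the cell adds roughly delta in volume,
    then divides symmetrically. Returns the list of division sizes."""
    rng = random.Random(seed)
    v_birth = 1.7 * delta          # arbitrary start, off the fixed point
    division_sizes = []
    for _ in range(n_divisions):
        added = delta * (1.0 + noise * rng.gauss(0.0, 1.0))
        v_div = v_birth + added    # grow until ~delta has been added
        division_sizes.append(v_div)
        v_birth = v_div / 2.0      # symmetric division
    return division_sizes

sizes = simulate_adder(delta=1.0, n_divisions=5000)
late = sizes[100:]                 # discard the initial transient
mean_size = sum(late) / len(late)  # should be close to 2 * delta
```

    The map v_birth → (v_birth + Δ)/2 has fixed point Δ, which is why the size distribution is narrow: deviations are halved at every division.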

  15. CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.

    PubMed

    Shalizi, Cosma Rohilla; Rinaldo, Alessandro

    2013-04-01

    The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling, or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGM's expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.

  16. A stochastic evolutionary model generating a mixture of exponential distributions

    NASA Astrophysics Data System (ADS)

    Fenner, Trevor; Levene, Mark; Loizou, George

    2016-02-01

    Recent interest in human dynamics has stimulated the investigation of the stochastic processes that explain human behaviour in various contexts, such as mobile phone networks and social media. In this paper, we extend the stochastic urn-based model proposed in [T. Fenner, M. Levene, G. Loizou, J. Stat. Mech. 2015, P08015 (2015)] so that it can generate mixture models, in particular, a mixture of exponential distributions. The model is designed to capture the dynamics of survival analysis, traditionally employed in clinical trials, reliability analysis in engineering, and more recently in the analysis of large data sets recording human dynamics. The mixture modelling approach, which is relatively simple and well understood, is very effective in capturing heterogeneity in data. We provide empirical evidence for the validity of the model, using a data set of popular search engine queries collected over a period of 114 months. We show that the survival function of these queries is closely matched by the exponential mixture solution for our model.
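    A mixture of exponentials differs qualitatively from a single exponential in having a decreasing hazard rate, which is what lets it capture heterogeneity in survival data. A minimal sketch with assumed weights and rates (not the paper's fitted values):

```python
import math

def mixture_survival(t, weights, rates):
    """Survival function of an exponential mixture: S(t) = sum_i w_i * exp(-lambda_i * t)."""
    return sum(w * math.exp(-lam * t) for w, lam in zip(weights, rates))

def hazard(t, weights, rates, eps=1e-6):
    """Numerical hazard h(t) = -d/dt log S(t)."""
    s0 = mixture_survival(t, weights, rates)
    s1 = mixture_survival(t + eps, weights, rates)
    return -(math.log(s1) - math.log(s0)) / eps

# Illustrative two-component mixture: a fast-decaying and a slow-decaying group.
w, lam = [0.6, 0.4], [2.0, 0.25]
h_early = hazard(0.1, w, lam)
h_late = hazard(5.0, w, lam)   # lower: the fast component has died out
```

    For a single exponential the hazard would be constant; the decreasing hazard here reflects the surviving population being increasingly dominated by the long-lived component.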

  17. CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS

    PubMed Central

    Shalizi, Cosma Rohilla; Rinaldo, Alessandro

    2015-01-01

    The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling, or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGM’s expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses. PMID:26166910

  18. Verification of the exponential model of body temperature decrease after death in pigs.

    PubMed

    Kaliszan, Michal; Hauser, Roman; Kaliszan, Roman; Wiczling, Paweł; Buczyński, Janusz; Penkowski, Michal

    2005-09-01

    The authors have conducted a systematic study in pigs to verify the models of post-mortem body temperature decrease currently employed in forensic medicine. Twenty-four hour automatic temperature recordings were performed in four body sites starting 1.25 h after pig killing in an industrial slaughterhouse under typical environmental conditions (19.5-22.5 degrees C). The animals had been randomly selected under a regular manufacturing process. The temperature decrease time plots drawn starting 75 min after death for the eyeball, the orbit soft tissues, the rectum and muscle tissue were found to fit the single-exponential thermodynamic model originally proposed by H. Rainy in 1868. In view of the actual intersubject variability, the addition of a second exponential term to the model was demonstrated to be statistically insignificant. Therefore, the two-exponential model for death time estimation frequently recommended in the forensic medicine literature, even if theoretically substantiated for individual test cases, provides no advantage as regards the reliability of estimation in an actual case. The improvement of the precision of time of death estimation by the reconstruction of an individual curve on the basis of two dead body temperature measurements taken 1 h apart, or taken continuously for a longer time (about 4 h), has also been proved incorrect. It was demonstrated that the reported increase of precision of time of death estimation due to use of a multiexponential model, with individual exponential terms to account for the cooling rate of the specific body sites separately, is artifactual. The results of this study support the use of the eyeball and/or the orbit soft tissues as temperature measuring sites at times shortly after death. A single-exponential model applied to the eyeball cooling has been shown to provide a very precise estimation of the time of death up to approximately 13 h after death. For the period thereafter, a better estimation of the time of death is obtained from temperature data collected from the muscles or the rectum.
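    The single-exponential cooling model underlying such estimates can be sketched and inverted for the post-mortem interval; the ambient temperature, initial temperature and rate constant below are illustrative assumptions, not values from the study:

```python
import math

def body_temp(t, t_env, t0, k):
    """Single-exponential cooling: T(t) = T_env + (T0 - T_env) * exp(-k * t)."""
    return t_env + (t0 - t_env) * math.exp(-k * t)

def time_since_death(t_measured, t_env, t0, k):
    """Invert the cooling model for the post-mortem interval (hours)."""
    return -math.log((t_measured - t_env) / (t0 - t_env)) / k

# Illustrative, assumed values: ambient 21 C, initial 37 C, rate constant 0.12 /h.
t_env, t0, k = 21.0, 37.0, 0.12
t_meas = body_temp(6.0, t_env, t0, k)          # temperature 6 h after death
pmi = time_since_death(t_meas, t_env, t0, k)   # recovers 6 h
```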

  19. Effect of multiple perfusion components on pseudo-diffusion coefficient in intravoxel incoherent motion imaging

    NASA Astrophysics Data System (ADS)

    Kuai, Zi-Xiang; Liu, Wan-Yu; Zhu, Yue-Min

    2017-11-01

    The aim of this work was to investigate the effect of multiple perfusion components on the pseudo-diffusion coefficient D* in the bi-exponential intravoxel incoherent motion (IVIM) model. Simulations were first performed to examine how the presence of multiple perfusion components influences D*. Real data from the livers (n = 31), spleens (n = 31) and kidneys (n = 31) of 31 volunteers were then acquired using DWI for the in vivo study, and the number of perfusion components in these tissues was determined, together with their perfusion fractions and D*, using an adaptive multi-exponential IVIM model. Finally, the bi-exponential model was applied to the real data, and the mean, standard deviation and coefficient of variation of D*, as well as the fitting residual, were calculated over the 31 volunteers for each of the three tissues and compared between them. The results of both the simulations and the in vivo study showed that, for the bi-exponential IVIM model, both the variance of D* and the fitting residual tended to increase when the number of perfusion components was increased or when the difference between perfusion components became large. In addition, the kidney presented the fewest perfusion components among the three tissues. The present study demonstrated that multi-component perfusion is a main factor causing high variance of D*, and that the bi-exponential model should be used only when the tissues under investigation have few perfusion components, for example the kidney.
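    The bi-exponential IVIM signal model can be sketched as follows. The parameter values are illustrative assumptions, and the segmented high-b estimate of D shown here is a common simplification, not necessarily the fitting procedure used in the study:

```python
import math

def ivim_signal(b, s0, f, d_star, d):
    """Bi-exponential IVIM model: S(b) = S0 * (f*exp(-b*D*) + (1-f)*exp(-b*D))."""
    return s0 * (f * math.exp(-b * d_star) + (1.0 - f) * math.exp(-b * d))

# Illustrative, assumed parameters (diffusivities in mm^2/s): perfusion
# fraction f, pseudo-diffusion D* much larger than tissue diffusion D.
s0, f, d_star, d = 1.0, 0.15, 0.020, 0.0010
b_values = [0, 50, 200, 400, 800]
signal = [ivim_signal(b, s0, f, d_star, d) for b in b_values]

# At high b the perfusion term has decayed away, so a log-linear slope over
# the two highest b-values recovers D (the "segmented" estimate).
d_est = (math.log(signal[-2]) - math.log(signal[-1])) / (b_values[-1] - b_values[-2])
```

    D* itself is estimated from the low-b data after subtracting the tissue term, which is exactly where the high variance discussed in the abstract arises.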

  20. An Empirical Assessment of the Form of Utility Functions

    ERIC Educational Resources Information Center

    Kirby, Kris N.

    2011-01-01

    Utility functions, which relate subjective value to physical attributes of experience, are fundamental to most decision theories. Seven experiments were conducted to test predictions of the most widely assumed mathematical forms of utility (power, log, and negative exponential), and a function proposed by Rachlin (1992). For pairs of gambles for…

  1. Non-Gaussian Analysis of Diffusion Weighted Imaging in Head and Neck at 3T: A Pilot Study in Patients with Nasopharyngeal Carcinoma

    PubMed Central

    Yuan, Jing; Yeung, David Ka Wai; Mok, Greta S. P.; Bhatia, Kunwar S.; Wang, Yi-Xiang J.; Ahuja, Anil T.; King, Ann D.

    2014-01-01

    Purpose To investigate the non-Gaussian diffusion behavior of head and neck diffusion weighted imaging (DWI) at 3 Tesla and to compare advanced non-Gaussian diffusion models, including diffusion kurtosis imaging (DKI), the stretched-exponential model (SEM), intravoxel incoherent motion (IVIM) and a statistical model, in patients with nasopharyngeal carcinoma (NPC). Materials and Methods After ethics approval was granted, 16 patients with NPC were examined using DWI performed at 3T employing an extended b-value range from 0 to 1500 s/mm2. DWI signals were fitted to the mono-exponential and non-Gaussian diffusion models for the primary tumor, metastatic nodes, spinal cord and muscle. Non-Gaussian parameter maps were generated and compared to apparent diffusion coefficient (ADC) maps in NPC. Results Diffusion in NPC exhibited non-Gaussian behavior over the extended b-value range. Non-Gaussian models achieved significantly better fitting of the DWI signal than the mono-exponential model. Non-Gaussian diffusion coefficients differed substantially from the mono-exponential ADC both in magnitude and in histogram distribution. Conclusion Non-Gaussian diffusivity in head and neck tissues and NPC lesions can be assessed using non-Gaussian diffusion models. Non-Gaussian DWI analysis may reveal additional tissue properties beyond ADC and holds potential as a complementary tool for NPC characterization. PMID:24466318

  2. Bacterial genomes lacking long-range correlations may not be modeled by low-order Markov chains: the role of mixing statistics and frame shift of neighboring genes.

    PubMed

    Cocho, Germinal; Miramontes, Pedro; Mansilla, Ricardo; Li, Wentian

    2014-12-01

    We examine in detail the relationship between exponential correlation functions and Markov models in a bacterial genome. Despite the well-known fact that Markov models generate sequences whose correlation function decays exponentially, simply constructed Markov models based on nearest-neighbor dimers (first-order), trimers (second-order), up to hexamers (fifth-order), all treating the DNA sequence as homogeneous, fail to predict the value of the exponential decay rate. Even reading-frame-specific Markov models (both first- and fifth-order) could not explain the fact that the exponential decay is very slow. Starting with the in-phase coding-DNA-sequence (CDS), we investigated correlation within a fixed-codon-position subsequence, and in artificially constructed sequences obtained by packing CDSs with out-of-phase spacers, as well as by altering the CDS length distribution through imposing an upper limit. From these targeted analyses, we conclude that the correlation in the bacterial genomic sequence is mainly due to a mixing of heterogeneous statistics at different codon positions, and that the decay of correlation is due to the possible out-of-phase arrangement of neighboring CDSs. There are also small contributions to the correlation from bases at the same codon position, as well as from non-coding sequences. These results show that the seemingly simple exponential correlation functions in the bacterial genome hide a complexity in correlation structure that is not suitable for modeling by a Markov chain over a homogeneous sequence. Other results include the use of the second-largest eigenvalue (in absolute value) to represent the 16 correlation functions, and the prediction of a 10-11 base periodicity from the hexamer frequencies. Copyright © 2014 Elsevier Ltd. All rights reserved.
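    The link between a first-order Markov chain and exponentially decaying correlations can be made concrete with a toy two-state chain (a generic illustration, not the genomic model above): the autocorrelation at lag k equals λ2^k, where λ2 is the second eigenvalue of the transition matrix.

```python
def mat_mul(a, b):
    """Multiply two 2x2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def correlation(p_stay, lag):
    """E[x_0 * x_lag] for a symmetric two-state chain over symbols {+1, -1}
    with 'stay' probability p_stay (stationary distribution is uniform)."""
    t = [[p_stay, 1 - p_stay], [1 - p_stay, p_stay]]
    tk = [[1.0, 0.0], [0.0, 1.0]]    # identity
    for _ in range(lag):
        tk = mat_mul(tk, t)
    vals = [1.0, -1.0]               # state 0 -> +1, state 1 -> -1
    return sum(0.5 * vals[i] * tk[i][j] * vals[j]
               for i in range(2) for j in range(2))

p = 0.9
corr5 = correlation(p, 5)
predicted = (2 * p - 1) ** 5         # lambda_2^k with lambda_2 = 2p - 1
```

    The eigenvalues of the symmetric transition matrix are 1 and 2p − 1, so the decay rate of the correlation is set entirely by the second eigenvalue, which is the quantity the abstract uses to summarize the 16 correlation functions.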

  3. Fractional Stability of Trunk Acceleration Dynamics of Daily-Life Walking: Toward a Unified Concept of Gait Stability

    PubMed Central

    Ihlen, Espen A. F.; van Schooten, Kimberley S.; Bruijn, Sjoerd M.; Pijnappels, Mirjam; van Dieën, Jaap H.

    2017-01-01

    Over the last decades, various measures have been introduced to assess stability during walking. All of these measures assume that gait stability may be equated with exponential stability, where dynamic stability is quantified by a Floquet multiplier or Lyapunov exponent. These specific constructs of dynamic stability assume that the gait dynamics are time independent and without phase transitions. In this case the temporal change in distance, d(t), between neighboring trajectories in state space is assumed to be an exponential function of time. However, results from walking models and empirical studies show that the assumptions of exponential stability break down in the vicinity of the phase transitions that are present in each step cycle. Here we apply a general non-exponential construct of gait stability, called fractional stability, which can define dynamic stability in the presence of phase transitions. Fractional stability employs the fractional indices, α and β, of the differential operator, which allow modeling of singularities in d(t) that cannot be captured by exponential stability. Fractional stability provided an improved fit of d(t) compared with exponential stability when applied to trunk accelerations during daily-life walking in community-dwelling older adults. Moreover, using multivariate empirical mode decomposition surrogates, we found that the singularities in d(t), which were well modeled by fractional stability, are created by phase-dependent modulation of gait. The new construct of fractional stability may represent a physiologically more valid concept of stability in the vicinity of phase transitions and may thus pave the way for a more unified concept of gait stability. PMID:28900400

  4. Lambert-Beer law in ocean waters: optical properties of water and of dissolved/suspended material, optical energy budgets.

    PubMed

    Stavn, R H

    1988-01-15

    The role of the Lambert-Beer law in ocean optics is critically examined. The Lambert-Beer law and the three-parameter model of the submarine light field are used to construct an optical energy budget for any hydrosol. It is further applied to the analytical exponential decay coefficient of the light field and used to estimate the optical properties and effects of the dissolved/suspended component in upper ocean layers. The concepts of the empirical exponential decay coefficient (diffuse attenuation coefficient) of the light field and a constant exponential decay coefficient for molecular water are analyzed quantitatively. A constant exponential decay coefficient for water is rejected. The analytical exponential decay coefficient is used to analyze optical gradients in ocean waters.
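    The exponential decay described by the Lambert-Beer law can be sketched as follows; the irradiance and attenuation values are illustrative assumptions, not values from the paper:

```python
import math

def downwelling_irradiance(e0, kd, z):
    """Lambert-Beer exponential decay with depth: E(z) = E0 * exp(-Kd * z)."""
    return e0 * math.exp(-kd * z)

# Illustrative, assumed numbers: surface irradiance 100 W/m^2 and a
# diffuse attenuation coefficient Kd = 0.1 m^-1.
e0, kd = 100.0, 0.1
e10 = downwelling_irradiance(e0, kd, 10.0)   # irradiance at 10 m depth

# Kd can be recovered empirically from irradiance measured at two depths,
# which is how the diffuse attenuation coefficient is estimated in practice.
z1, z2 = 5.0, 15.0
kd_est = math.log(downwelling_irradiance(e0, kd, z1) /
                  downwelling_irradiance(e0, kd, z2)) / (z2 - z1)
```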

  5. A review of the matrix-exponential formalism in radiative transfer

    NASA Astrophysics Data System (ADS)

    Efremenko, Dmitry S.; Molina García, Víctor; Gimeno García, Sebastián; Doicu, Adrian

    2017-07-01

    This paper outlines the matrix-exponential description of radiative transfer. The eigendecomposition method, which serves as a basis for computing the matrix exponential and for representing the solution in a discrete ordinate setting, is considered. The mathematical equivalence of the discrete ordinate method, the matrix operator method, and the matrix Riccati equations method is proved rigorously by means of the matrix-exponential formalism. For optically thin layers, approximate solution methods relying on the Padé and Taylor series approximations to the matrix exponential, as well as on the matrix Riccati equations, are presented. For optically thick layers, the asymptotic theory with higher-order corrections is derived, and parameterizations of the asymptotic functions and constants for a water-cloud model with a Gamma size distribution are obtained.
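    The Taylor-series approximation to the matrix exponential mentioned above can be sketched on a small matrix (a generic illustration of expm(A) ≈ Σ A^k/k!, not the paper's radiative-transfer discretization):

```python
import math

def mat_add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_scale(a, s):
    return [[a[i][j] * s for j in range(2)] for i in range(2)]

def expm_taylor(a, n_terms=25):
    """Truncated Taylor series for the matrix exponential of a 2x2 matrix."""
    result = [[1.0, 0.0], [0.0, 1.0]]   # identity = A^0 / 0!
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, n_terms):
        term = mat_scale(mat_mul(term, a), 1.0 / k)  # A^k / k!
        result = mat_add(result, term)
    return result

# Sanity check on a diagonal matrix, where the exact exponential is
# elementwise exp on the diagonal.
a = [[math.log(2.0), 0.0], [0.0, -1.0]]
e = expm_taylor(a)   # diagonal should be close to (2, 1/e)
```

    In practice the Taylor series is only accurate for small matrix norm (optically thin layers), which is exactly the regime the paper reserves it for; thick layers require the asymptotic treatment.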

  6. Fast radiative transfer models for retrieval of cloud properties in the back-scattering region: application to DSCOVR-EPIC sensor

    NASA Astrophysics Data System (ADS)

    Molina Garcia, Victor; Sasi, Sruthy; Efremenko, Dmitry; Doicu, Adrian; Loyola, Diego

    2017-04-01

    In this work, the requirements for the retrieval of cloud properties in the back-scattering region are described, and their application to the measurements taken by the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) is shown. Various radiative transfer models and their linearizations are implemented, and their advantages and issues are analyzed. As radiative transfer calculations in the back-scattering region are computationally time-consuming, several acceleration techniques are also studied. The radiative transfer models analyzed include the exact Discrete Ordinate method with Matrix Exponential (DOME), the Matrix Operator method with Matrix Exponential (MOME), and the approximate asymptotic and equivalent Lambertian cloud models. To reduce the computational cost of the line-by-line (LBL) calculations, the k-distribution method, the Principal Component Analysis (PCA) and a combination of the k-distribution method plus PCA are used. The linearized radiative transfer models for retrieval of cloud properties include the Linearized Discrete Ordinate method with Matrix Exponential (LDOME), the Linearized Matrix Operator method with Matrix Exponential (LMOME) and the Forward-Adjoint Discrete Ordinate method with Matrix Exponential (FADOME). These models were applied to the EPIC oxygen-A band absorption channel at 764 nm. It is shown that the approximate asymptotic and equivalent Lambertian cloud models give inaccurate results, so an offline processor for the retrieval of cloud properties in the back-scattering region requires the use of exact models such as DOME and MOME, which behave similarly. The combination of the k-distribution method plus PCA achieves accuracy similar to that of the LBL calculations, but is up to 360 times faster, and the relative errors for the computed radiances are less than 1.5% compared to the results when the exact phase function is used. Finally, the linearized models studied show similar behavior, with relative errors less than 1% for the radiance derivatives, but FADOME is 2 times faster than LDOME and 2.5 times faster than LMOME.

  7. On retrodictions of global mantle flow with assimilated surface velocities

    NASA Astrophysics Data System (ADS)

    Colli, Lorenzo; Bunge, Hans-Peter; Schuberth, Bernhard S. A.

    2016-04-01

    Modeling past states of Earth's mantle and relating them to geologic observations such as continental-scale uplift and subsidence is an effective method for testing mantle convection models. However, mantle convection is chaotic and two identical mantle models initialized with slightly different temperature fields diverge exponentially in time until they become uncorrelated, thus limiting retrodictions (i.e., reconstructions of past states of Earth's mantle obtained using present information) to the recent past. We show with 3-D spherical mantle convection models that retrodictions of mantle flow can be extended significantly if knowledge of the surface velocity field is available. Assimilating surface velocities produces in some cases negative Lyapunov times (i.e., e-folding times), implying that even a severely perturbed initial condition may evolve toward the reference state. A history of the surface velocity field for Earth can be obtained from past plate motion reconstructions for time periods of a mantle overturn, suggesting that mantle flow can be reconstructed over comparable times.
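    The exponential divergence described here is quantified by the Lyapunov exponent λ, with the Lyapunov (e-folding) time equal to 1/λ: nearby trajectories separate as d(t) ≈ d0·e^(λt). A standard toy illustration (the chaotic logistic map, not the mantle convection model) estimates λ as the orbit average of ln|f'(x)|; for r = 4 the known exact value is ln 2:

```python
import math

def lyapunov_logistic(r=4.0, n=100_000, x0=0.123):
    """Estimate the Lyapunov exponent of x -> r*x*(1-x) as the orbit
    average of ln|f'(x)|, with f'(x) = r - 2*r*x."""
    x = x0
    for _ in range(100):            # discard a short transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r - 2.0 * r * x))
        x = r * x * (1.0 - x)
    return acc / n

lam = lyapunov_logistic()
lyapunov_time = 1.0 / lam           # e-folding time of trajectory divergence
```

    A negative Lyapunov time, as reported in the abstract for assimilated surface velocities, corresponds to λ < 0, i.e. perturbed trajectories converging toward the reference state instead of diverging.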

  8. On retrodictions of global mantle flow with assimilated surface velocities

    NASA Astrophysics Data System (ADS)

    Colli, Lorenzo; Bunge, Hans-Peter; Schuberth, Bernhard S. A.

    2015-10-01

    Modeling past states of Earth's mantle and relating them to geologic observations such as continental-scale uplift and subsidence is an effective method for testing mantle convection models. However, mantle convection is chaotic and two identical mantle models initialized with slightly different temperature fields diverge exponentially in time until they become uncorrelated, thus limiting retrodictions (i.e., reconstructions of past states of Earth's mantle obtained using present information) to the recent past. We show with 3-D spherical mantle convection models that retrodictions of mantle flow can be extended significantly if knowledge of the surface velocity field is available. Assimilating surface velocities produces in some cases negative Lyapunov times (i.e., e-folding times), implying that even a severely perturbed initial condition may evolve toward the reference state. A history of the surface velocity field for Earth can be obtained from past plate motion reconstructions for time periods of a mantle overturn, suggesting that mantle flow can be reconstructed over comparable times.

  9. The dual role of friendship and antipathy relations in the marginalization of overweight children in their peer networks: The TRAILS Study.

    PubMed

    de la Haye, Kayla; Dijkstra, Jan Kornelis; Lubbers, Miranda J; van Rijsewijk, Loes; Stolk, Ronald

    2017-01-01

    Weight-based stigma compromises the social networks of overweight children. To date, research on the position of overweight children in their peer network has focused only on friendship relations, and not on negative relationship dimensions. This study examined how overweight was associated with relations of friendship and dislike (antipathies) in the peer group. Exponential random graph models (ERGMs) were used to examine friendship and antipathy relations among overweight children and their classmates, using a sub-sample from the TRacking Adolescents' Individual Lives Survey (N = 504, mean age 11.4). Findings showed that overweight children were less likely to receive friendship nominations, and were more likely to receive dislike nominations. Overweight children were also more likely than their non-overweight peers to nominate classmates that they disliked. Together, the results indicate that positive and negative peer relations are impacted by children's weight status, and are relevant to addressing the social marginalization of overweight children.

  10. Better Bet-Hedging with coupled positive and negative feedback loops

    NASA Astrophysics Data System (ADS)

    Narula, Jatin; Igoshin, Oleg

    2011-03-01

    Bacteria use the phenotypic heterogeneity associated with bistable switches to distribute the risk of activating stress response strategies like sporulation and persistence. However, bistable switches offer little control over the timing of phenotype switching, and first passage times (FPT) for individual cells are found to be exponentially distributed. We show that a genetic circuit consisting of interlinked positive and negative feedback loops allows cells to control the timing of phenotypic switching. Using a mathematical model, we find that in this system a stable high expression state and a stable low expression limit cycle coexist, and the FPT distribution for stochastic transitions between them shows multiple peaks at regular intervals. A multimodal FPT distribution allows cells to detect the persistence of stress and control the rate of phenotype transition of the population. We further show that extracellular signals from cell-cell communication that change the strength of the feedback loops can modulate the FPT distribution and allow cells even greater control in a bet-hedging strategy.
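    The baseline claim that FPTs out of a bistable state are exponentially distributed can be checked with a toy memoryless switch. This is a discrete-time caricature with hypothetical parameters, not the authors' interlinked feedback-loop model; it verifies that a constant-hazard escape gives a coefficient of variation near 1, the signature of an exponential-like FPT distribution:

```python
import random

def first_passage_times(p_switch=0.02, n_cells=20000, seed=1):
    """Memoryless (constant-hazard) switch: each time step a cell flips with
    probability p_switch, so FPTs are geometric, the discrete analogue of
    the exponential distribution."""
    rng = random.Random(seed)
    fpts = []
    for _ in range(n_cells):
        t = 1
        while rng.random() >= p_switch:
            t += 1
        fpts.append(t)
    return fpts

fpts = first_passage_times()
mean = sum(fpts) / len(fpts)                       # ≈ 1/p_switch = 50 steps
var = sum((t - mean) ** 2 for t in fpts) / len(fpts)
cv = var ** 0.5 / mean                             # ≈ 1 for exponential-like FPTs
```

    The multimodal FPT distribution of the coupled-loop circuit would, by contrast, show a CV well below 1 and regularly spaced peaks.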

  11. Growth phase-dependent induction of stationary-phase promoters of Escherichia coli in different gram-negative bacteria.

    PubMed Central

    Miksch, G; Dobrowolski, P

    1995-01-01

    RSF1010-derived plasmids carrying a fusion of a promoterless lacZ gene with the sigma s-dependent growth phase-regulated promoters of Escherichia coli, bolAp1 and fic, were constructed. The plasmids were mobilized into the gram-negative bacterial species Acetobacter methanolicus, Xanthomonas campestris, Pseudomonas putida, and Rhizobium meliloti. The beta-galactosidase activities of bacterial cultures were determined during exponential and stationary growth phases. Transcriptional activation of the fic promoter in the different bacteria was growth phase dependent as in E. coli and was initiated generally during the transition to stationary phase. The induction of the bolA promoter was also growth phase dependent in the bacteria tested. While the expression in E. coli and R. meliloti was initiated during the transition from exponential to stationary phase, the induction in A. methanolicus, P. putida, and X. campestris started some hours after stationary growth phase was reached. In all the species tested, DNA fragments hybridizing with the rpoS gene of E. coli were detected. The results show that different gram-negative bacteria possess stationary-phase-specific sigma factors that are structurally and functionally homologous to sigma s and are able to recognize the promoter sequences of both bolA and fic. PMID:7665531

  12. Bayesian Analysis for Exponential Random Graph Models Using the Adaptive Exchange Sampler.

    PubMed

    Jin, Ick Hoon; Yuan, Ying; Liang, Faming

    2013-10-01

    Exponential random graph models have been widely used in social network analysis. However, these models are extremely difficult to handle from a statistical viewpoint, because of the intractable normalizing constant and model degeneracy. In this paper, we consider a fully Bayesian analysis for exponential random graph models using the adaptive exchange sampler, which solves the intractable normalizing constant and model degeneracy issues encountered in Markov chain Monte Carlo (MCMC) simulations. The adaptive exchange sampler can be viewed as an MCMC extension of the exchange algorithm, and it generates auxiliary networks via an importance sampling procedure from an auxiliary Markov chain running in parallel. The convergence of this algorithm is established under mild conditions. The adaptive exchange sampler is illustrated using a few social networks, including the Florentine business network, molecule synthetic network, and dolphins network. The results indicate that the adaptive exchange algorithm can produce more accurate estimates than approximate exchange algorithms, while maintaining the same computational efficiency.
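    The plain exchange algorithm at the core of the sampler can be illustrated on the simplest exponential random graph, the edge-count-only (Erdős–Rényi) model, where auxiliary networks can be drawn exactly; the paper's adaptive importance-sampling refinement is omitted, and all parameter values below are illustrative assumptions:

```python
import math, random

def sample_graph(theta, m, rng):
    """Exact draw from the edge-only ERGM pi(x) ∝ exp(theta * edges(x)) on m
    possible edges: each edge is independently present with prob sigmoid(theta).
    Returns the sufficient statistic (edge count)."""
    p = 1.0 / (1.0 + math.exp(-theta))
    return sum(1 for _ in range(m) if rng.random() < p)

def exchange_sampler(s_obs, m, iters=6000, step=0.3, seed=11):
    """Exchange algorithm: an auxiliary network drawn at the proposed parameter
    cancels the intractable normalizing constants in the MH acceptance ratio."""
    rng = random.Random(seed)
    theta, chain = 0.0, []
    for _ in range(iters):
        prop = theta + rng.gauss(0.0, step)
        if -5.0 <= prop <= 5.0:  # flat prior on [-5, 5]; proposals outside are rejected
            s_aux = sample_graph(prop, m, rng)
            # log acceptance ratio reduces to (prop - theta)*(s_obs - s_aux);
            # the normalizing constants Z(theta), Z(prop) cancel exactly.
            if math.log(rng.random()) < (prop - theta) * (s_obs - s_aux):
                theta = prop
        chain.append(theta)
    return chain

m = 190                                         # dyads in a 20-node undirected graph
s_obs = sample_graph(0.5, m, random.Random(5))  # "observed" network simulated at theta = 0.5
chain = exchange_sampler(s_obs, m)
theta_hat = sum(chain[1000:]) / len(chain[1000:])  # posterior mean after burn-in
```

    For richer ERGM statistics (triangles, stars) the exact auxiliary draw is unavailable, which is precisely the difficulty the adaptive exchange sampler addresses.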

  13. Fracture analysis of a central crack in a long cylindrical superconductor with exponential model

    NASA Astrophysics Data System (ADS)

    Zhao, Yu Feng; Xu, Chi

    2018-05-01

    The fracture behavior of a long cylindrical superconductor is investigated by modeling a central crack that is induced by electromagnetic force. Based on the exponential model, the stress intensity factors (SIFs) as functions of the dimensionless parameter p and the normalized crack length a/R for the zero-field cooling (ZFC) and field-cooling (FC) processes are numerically simulated using the finite element method (FEM) and assuming a persistent current flow. As the applied field Ba decreases, the dependence of the SIFs on p and a/R in the ZFC process is exactly opposite to that observed in the FC process. Numerical results indicate that the exponential model exhibits different characteristics for the trend of the SIFs from the results obtained using the Bean and Kim models. This implies that the crack length and the trapped field have significant effects on the fracture behavior of bulk superconductors. The obtained results are useful for understanding the critical-state model of high-temperature superconductors in crack problems.

  14. Abnormal positive bias stress instability of In–Ga–Zn–O thin-film transistors with low-temperature Al{sub 2}O{sub 3} gate dielectric

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Yu-Hong; Yu, Ming-Jiue; Lin, Ruei-Ping

    2016-01-18

    Low-temperature atomic layer deposition (ALD) was employed to deposit Al{sub 2}O{sub 3} as a gate dielectric in amorphous In–Ga–Zn–O thin-film transistors fabricated at temperatures below 120 °C. The devices exhibited a negligible threshold voltage shift (ΔV{sub T}) during negative bias stress, but a more pronounced ΔV{sub T} under positive bias stress with a characteristic turnaround behavior from a positive ΔV{sub T} to a negative ΔV{sub T}. This abnormal positive bias instability is explained using a two-process model, including both electron trapping and hydrogen release and migration. Electron trapping induces the initial positive ΔV{sub T}, which can be fitted using the stretched exponential function. The breakage of residual AlO-H bonds in low-temperature ALD Al{sub 2}O{sub 3} is triggered by the energetic channel electrons. The hydrogen atoms then diffuse toward the In–Ga–Zn–O channel and induce the negative ΔV{sub T} through electron doping with power-law time dependence. A rapid partial recovery of the negative ΔV{sub T} after stress is also observed during relaxation.

  15. Multifactor analysis and simulation of the surface runoff and soil infiltration at different slope gradients

    NASA Astrophysics Data System (ADS)

    Huang, J.; Kang, Q.; Yang, J. X.; Jin, P. W.

    2017-08-01

    The surface runoff and soil infiltration exert significant influence on soil erosion. The effects of slope gradient/length (SG/SL), individual rainfall amount/intensity (IRA/IRI), vegetation cover (VC) and antecedent soil moisture (ASM) on the runoff depth (RD) and soil infiltration (INF) were evaluated in a series of natural rainfall experiments in the South of China. RD is found to correlate positively with the IRA, IRI, and ASM factors and negatively with SG and VC. RD first decreased and then increased with SG and ASM, first increased and then decreased with SL, grew linearly with IRA and IRI, and dropped exponentially with VC. Meanwhile, INF exhibits a positive correlation with SL, IRA, IRI, and VC, and a negative one with SG and ASM. INF first increased and then decreased with SG, rose linearly with SL, IRA, and IRI, increased with VC following a logit function, and fell linearly with ASM. A VC level above 60% can effectively lower the surface runoff and significantly enhance soil infiltration. Two prediction models for RD and INF, accounting for the above six factors, were constructed using the multiple nonlinear regression method. The verification of these models disclosed a high Nash-Sutcliffe coefficient and a low root-mean-square error, demonstrating the good predictability of both models.

  16. Chaotic dynamics and diffusion in a piecewise linear equation

    NASA Astrophysics Data System (ADS)

    Shahrear, Pabel; Glass, Leon; Edwards, Rod

    2015-03-01

    Genetic interactions are often modeled by logical networks in which time is discrete and all gene activity states update simultaneously. However, there is no synchronizing clock in organisms. An alternative model assumes that the logical network is preserved and plays a key role in driving the dynamics in piecewise-linear differential equations. We examine dynamics in a particular 4-dimensional equation of this class. In the equation, two of the variables form a negative feedback loop that drives a second negative feedback loop. By modifying the original equations by eliminating exponential decay, we generate a modified system that is amenable to detailed analysis. In the modified system, we can determine in detail the Poincaré (return) map on a cross section to the flow. By analyzing the eigenvalues of the map for the different trajectories, we are able to show that except for a set of measure 0, the flow must necessarily have an eigenvalue greater than 1 and hence there is sensitive dependence on initial conditions. Further, there is an irregular oscillation whose amplitude is described by a diffusive process that is well-modeled by the Irwin-Hall distribution. There is a large class of other piecewise-linear networks that might be analyzed using similar methods. The analysis gives insight into possible origins of chaotic dynamics in periodically forced dynamical systems.

  17. Optimality of cycle time and inventory decisions in a two echelon inventory system with exponential price dependent demand under credit period

    NASA Astrophysics Data System (ADS)

    Krugon, Seelam; Nagaraju, Dega

    2017-05-01

    This work proposes a two-echelon supply-chain inventory system in which the manufacturer offers a credit period to the retailer and demand is an exponential function of the retailer's unit selling price. A mathematical model is formulated to determine the optimal cycle time, retailer replenishment quantity, number of shipments, and total relevant cost of the supply chain. The major objective of the paper is to incorporate trade credit from the manufacturer to the retailer under exponential price-dependent demand; the retailer prefers to delay payments to the manufacturer. In the first stage, cost expressions for the retailer and the manufacturer are written in terms of ordering cost, carrying cost, and transportation cost; in the second stage, these expressions are combined. A MATLAB program is written to derive the optimal cycle time, retailer replenishment quantity, number of shipments, and total relevant cost of the supply chain, from which managerial insights can be drawn. From the research findings, it is evident that the total cost of the supply chain decreases as the credit period increases under exponential price-dependent demand. To analyse the influence of the model parameters, a parametric analysis is also carried out with the help of a numerical example.
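    As a minimal illustration of exponential price-dependent demand (ignoring the paper's two-echelon costs, credit period, and shipment structure; the demand parameters a and b below are hypothetical), revenue p·D(p) with D(p) = a·e^(−bp) has a closed-form maximizer at p* = 1/b, which a grid search confirms:

```python
import math

def demand(p, a=100.0, b=0.05):
    """Exponential price-dependent demand: D(p) = a * exp(-b * p)."""
    return a * math.exp(-b * p)

def revenue(p, a=100.0, b=0.05):
    return p * demand(p, a, b)

# d/dp [p * a * exp(-b*p)] = a*exp(-b*p)*(1 - b*p) = 0  =>  p* = 1/b = 20.
prices = [i * 0.1 for i in range(1, 1001)]
p_star = max(prices, key=revenue)
```

    The full model optimizes total channel cost rather than revenue, but the same exponential demand term drives the trade-off.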

  18. High-Performance Clock Synchronization Algorithms for Distributed Wireless Airborne Computer Networks with Applications to Localization and Tracking of Targets

    DTIC Science & Technology

    2010-06-01

    GMKPF represents a better and more flexible alternative to the Gaussian Maximum Likelihood (GML) and Exponential Maximum Likelihood (EML) ...accurate results relative to GML and EML when the network delays are modeled in terms of a single non-Gaussian/non-exponential distribution or as a...to the Gaussian Maximum Likelihood (GML) and Exponential Maximum Likelihood (EML) estimators for clock offset estimation in non-Gaussian or non

  19. On stable exponential cosmological solutions with non-static volume factor in the Einstein-Gauss-Bonnet model

    NASA Astrophysics Data System (ADS)

    Ivashchuk, V. D.; Ernazarov, K. K.

    2017-01-01

    An (n + 1)-dimensional gravitational model with a cosmological constant and a Gauss-Bonnet term is studied. The ansatz with diagonal cosmological metrics is adopted, and solutions with exponential dependence of the scale factors, a_i ∼ exp(v_i t), i = 1, …, n, are considered. The stability analysis of the solutions with non-static volume factor is presented. We show that the solutions with v_1 = v_2 = v_3 = H > 0 and small enough variation of the effective gravitational constant G are stable if a certain restriction on (v_i) is obeyed. New examples of stable exponential solutions with zero variation of G in dimensions D = 1 + m + 2 with m > 2 are presented.

  20. The Secular Evolution Of Disc Galaxies And The Origin Of Exponential And Double Exponential Surface Density Profiles

    NASA Astrophysics Data System (ADS)

    Elmegreen, Bruce G.

    2016-10-01

    Exponential radial profiles are ubiquitous in spiral and dwarf Irregular galaxies, but the origin of this structural form is not understood. This talk will review the observations of exponential and double exponential disks, considering both the light and the mass profiles, and the contributions from stars and gas. Several theories for this structure will also be reviewed, including primordial collapse, bar and spiral torques, clump torques, galaxy interactions, disk viscosity and other internal processes of angular momentum exchange, and stellar scattering off of clumpy structure. The only process currently known that can account for this structure in the most theoretically difficult case is stellar scattering off disk clumps. Stellar orbit models suggest that such scattering can produce exponentials even in isolated dwarf irregulars that have no bars or spirals, little shear or viscosity, and profiles that go out too far for the classical Mestel case of primordial collapse with specific angular momentum conservation.

  1. An exactly solvable, spatial model of mutation accumulation in cancer

    NASA Astrophysics Data System (ADS)

    Paterson, Chay; Nowak, Martin A.; Waclaw, Bartlomiej

    2016-12-01

    One of the hallmarks of cancer is the accumulation of driver mutations which increase the net reproductive rate of cancer cells and allow them to spread. This process has been studied in mathematical models of well mixed populations, and in computer simulations of three-dimensional spatial models. But the computational complexity of these more realistic, spatial models makes it difficult to simulate realistically large and clinically detectable solid tumours. Here we describe an exactly solvable mathematical model of a tumour featuring replication, mutation and local migration of cancer cells. The model predicts a quasi-exponential growth of large tumours, even if different fragments of the tumour grow sub-exponentially due to nutrient and space limitations. The model reproduces clinically observed tumour growth times using biologically plausible rates of cell birth, death, and migration. We also show that the expected number of accumulated driver mutations increases exponentially in time if the average fitness gain per driver is constant, and that it reaches a plateau if the gains decrease over time. We discuss the realism of the underlying assumptions and possible extensions of the model.

  2. Exponential series approaches for nonparametric graphical models

    NASA Astrophysics Data System (ADS)

    Janofsky, Eric

    Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. 
We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.

  3. The γ parameter of the stretched-exponential model is influenced by internal gradients: validation in phantoms.

    PubMed

    Palombo, Marco; Gabrielli, Andrea; De Santis, Silvia; Capuani, Silvia

    2012-03-01

    In this paper, we investigate the image contrast that characterizes anomalous and non-gaussian diffusion images obtained using the stretched exponential model. This model is based on the introduction of the γ stretched parameter, which quantifies deviation from the mono-exponential decay of diffusion signal as a function of the b-value. To date, the biophysical substrate underpinning the contrast observed in γ maps, in other words, the biophysical interpretation of the γ parameter (or the fractional order derivative in space, β parameter) is still not fully understood, although it has already been applied to investigate both animal models and human brain. Due to the ability of γ maps to reflect additional microstructural information which cannot be obtained using diffusion procedures based on gaussian diffusion, some authors propose this parameter as a measure of diffusion heterogeneity or water compartmentalization in biological tissues. Based on our recent work we suggest here that the coupling between internal and diffusion gradients provides pseudo-superdiffusion effects that are quantified by the stretched-exponential parameter γ. This means that the image contrast of Mγ maps reflects local magnetic susceptibility differences (Δχ(m)), thus highlighting better than T(2)(∗) contrast the interface between compartments characterized by Δχ(m). Thanks to this characteristic, Mγ imaging may represent an interesting tool to develop contrast-enhanced MRI for molecular imaging. The spectroscopic and imaging experiments (performed in controlled micro-beads dispersion) that are reported here strongly suggest that internal gradients, and as a consequence Δχ(m), are an important factor in fully understanding the source of contrast in anomalous diffusion methods that are based on a stretched exponential model analysis of diffusion data obtained at varying gradient strengths g. Copyright © 2012 Elsevier Inc. All rights reserved.
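    The stretched-exponential signal model underlying the γ maps, S(b)/S0 = exp(−(bD)^γ), can be fitted by a simple grid search. This sketch uses noise-free synthetic data with assumed parameter values (γ = 0.7, D = 0.8 in arbitrary units), not the phantom measurements of the paper:

```python
import math

def signal(b, D, gamma):
    """Stretched-exponential diffusion decay: S(b)/S0 = exp(-(b*D)**gamma)."""
    return math.exp(-((b * D) ** gamma))

bvals = [0.0, 0.2, 0.5, 1.0, 1.5, 2.0, 3.0]
data = [signal(b, D=0.8, gamma=0.7) for b in bvals]  # synthetic, noise-free

# Exhaustive grid over (gamma, D), minimizing the sum of squared residuals.
best = min(
    ((g / 100.0, d / 100.0)
     for g in range(30, 101) for d in range(50, 121)),
    key=lambda p: sum((signal(b, p[1], p[0]) - y) ** 2 for b, y in zip(bvals, data)),
)
gamma_hat, D_hat = best
```

    With noisy multi-b acquisitions, a nonlinear least-squares fit would replace the grid, but the recovered γ plays the same role as the map contrast discussed above.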

  4. Demonstration of fundamental statistics by studying timing of electronics signals in a physics-based laboratory

    NASA Astrophysics Data System (ADS)

    Beach, Shaun E.; Semkow, Thomas M.; Remling, David J.; Bradt, Clayton J.

    2017-07-01

    We have developed accessible methods to demonstrate fundamental statistics in several phenomena, in the context of teaching electronic signal processing in a physics-based college-level curriculum. A relationship between the exponential time-interval distribution and Poisson counting distribution for a Markov process with constant rate is derived in a novel way and demonstrated using nuclear counting. Negative binomial statistics is demonstrated as a model for overdispersion and justified by the effect of electronic noise in nuclear counting. The statistics of digital packets on a computer network are shown to be compatible with the fractal-point stochastic process leading to a power-law as well as generalized inverse Gaussian density distributions of time intervals between packets.
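    The exponential/Poisson relationship demonstrated with nuclear counting can be reproduced in a few lines: summing exponentially distributed inter-event times yields a constant-rate Markov process whose per-window counts follow a Poisson distribution (mean ≈ variance ≈ rate). The rate and number of windows below are arbitrary choices for the demonstration:

```python
import random

def poisson_counts_from_exponential_gaps(rate=4.0, windows=5000, seed=7):
    """Build a constant-rate Markov (Poisson) process from exponential
    inter-event times, then count events in unit-length windows."""
    rng = random.Random(seed)
    counts = [0] * windows
    t = 0.0
    while True:
        t += rng.expovariate(rate)  # exponential gap, mean 1/rate
        if t >= windows:
            break
        counts[int(t)] += 1
    return counts

counts = poisson_counts_from_exponential_gaps()
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
# Poisson counting statistics: mean ≈ variance ≈ rate.
```

    Overdispersion (variance exceeding the mean), as modeled by the negative binomial in the abstract, would show up here as var noticeably larger than mean.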

  5. Surface topography due to convection in a variable viscosity fluid - Application to short wavelength gravity anomalies in the central Pacific Ocean

    NASA Technical Reports Server (NTRS)

    Lin, J.; Parmentier, E. M.

    1985-01-01

    Finite difference calculations of thermal convection in a fluid layer with a viscosity exponentially decreasing with temperature are performed in the context of examining the topography and gravity anomalies due to mantle convection. The surface topography and gravity anomalies are shown to be positive over regions of ascending flow and negative over regions of descending flow; at large Rayleigh numbers the amplitude of surface topography is inferred to depend on Rayleigh number to the power of 7/9. Compositional stratification of the mantle is proposed as a mechanism for confining small-scale convection to a thin layer. A comparative analysis of the results with other available models is included.

  6. Utilizing Learners' Negative Ratings in Semantic Content-Based Recommender System for e-Learning Forum

    ERIC Educational Resources Information Center

    Albatayneh, Naji Ahmad; Ghauth, Khairil Imran; Chua, Fang-Fang

    2018-01-01

    Nowadays, most of e-learning systems embody online discussion forums as a medium for collaborative learning that supports knowledge sharing and information exchanging between learners. The exponential growth of the available shared information in e-learning online discussion forums has caused a difficulty for learners in discovering interesting…

  7. Growth and mortality of larval Myctophum affine (Myctophidae, Teleostei).

    PubMed

    Namiki, C; Katsuragawa, M; Zani-Teixeira, M L

    2015-04-01

    The growth and mortality rates of Myctophum affine larvae were analysed based on samples collected during the austral summer and winter of 2002 from south-eastern Brazilian waters. The larvae ranged in size from 2·75 to 14·00 mm standard length (L(S)). Daily increment counts from 82 sagittal otoliths showed that the age of M. affine ranged from 2 to 28 days. Three models were applied to estimate the growth rate: linear regression, exponential model and Laird-Gompertz model. The exponential model best fitted the data, and L(0) values from exponential and Laird-Gompertz models were close to the smallest larva reported in the literature (c. 2·5 mm L(S)). The average growth rate (0·33 mm day(-1)) was intermediate among lanternfishes. The mortality rate (12%) during the larval period was below average compared with other marine fish species but similar to some epipelagic fishes that occur in the area. © 2015 The Fisheries Society of the British Isles.
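    An exponential growth model of the kind fitted to the otolith data, L(t) = L0·e^(g·t), can be estimated by linear regression on log-lengths. The ages and lengths below are hypothetical numbers on the reported scale (ages 2-28 days, L0 near 2.5 mm), not the study's measurements:

```python
import math

def fit_exponential_growth(ages, lengths):
    """Least-squares fit of L(t) = L0 * exp(g * t) via linear regression
    of log(L) on t; returns (L0, g)."""
    n = len(ages)
    logs = [math.log(l) for l in lengths]
    mt, ml = sum(ages) / n, sum(logs) / n
    g = (sum((t - mt) * (y - ml) for t, y in zip(ages, logs))
         / sum((t - mt) ** 2 for t in ages))
    L0 = math.exp(ml - g * mt)
    return L0, g

ages = [2, 5, 10, 15, 20, 28]                       # days (hypothetical)
lengths = [2.5 * math.exp(0.06 * t) for t in ages]  # mm, synthetic exponential data
L0, g = fit_exponential_growth(ages, lengths)
```

    The Laird-Gompertz model adds a decelerating growth rate; on real, noisy data the three candidate models would be compared by goodness of fit, as in the abstract.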

  8. 1/f oscillations in a model of moth populations oriented by diffusive pheromones

    NASA Astrophysics Data System (ADS)

    Barbosa, L. A.; Martins, M. L.; Lima, E. R.

    2005-01-01

    An individual-based model for the population dynamics of Spodoptera frugiperda in a homogeneous environment is proposed. The model involves moths feeding on plants, mating through an anemotaxis search (i.e., oriented by odor dispersed in a current of air), and dying due to resource competition or at a maximum age. As observed in the laboratory, the females release pheromones at exponentially distributed time intervals, and it is assumed that the ranges of the male flights follow a power-law distribution. Computer simulations of the model reveal the central role of anemotaxis search for the persistence of moth population. Such stationary populations are exponentially distributed in age, exhibit random temporal fluctuations with 1/f spectrum, and self-organize in disordered spatial patterns with long-range correlations. In addition, the model results demonstrate that pest control through pheromone mass trapping is effective only if the amounts of pheromone released by the traps decay much slower than the exponential distribution for calling females.

  9. Performance of time-series methods in forecasting the demand for red blood cell transfusion.

    PubMed

    Pereira, Arturo

    2004-05-01

    Planning the future blood collection efforts must be based on adequate forecasts of transfusion demand. In this study, univariate time-series methods were investigated for their performance in forecasting the monthly demand for RBCs at one tertiary-care, university hospital. Three time-series methods were investigated: autoregressive integrated moving average (ARIMA), the Holt-Winters family of exponential smoothing models, and one neural-network-based method. The time series consisted of the monthly demand for RBCs from January 1988 to December 2002 and was divided into two segments: the older one was used to fit or train the models, and the younger to test for the accuracy of predictions. Performance was compared across forecasting methods by calculating goodness-of-fit statistics, the percentage of months in which forecast-based supply would have met the RBC demand (coverage rate), and the outdate rate. The RBC transfusion series was best fitted by a seasonal ARIMA(0,1,1)(0,1,1)(12) model. Over 1-year time horizons, forecasts generated by ARIMA or exponential smoothing lay within the +/- 10 percent interval of the real RBC demand in 79 percent of months (62% in the case of neural networks). The coverage rate for the three methods was 89, 91, and 86 percent, respectively. Over 2-year time horizons, exponential smoothing largely outperformed the other methods. Predictions by exponential smoothing lay within the +/- 10 percent interval of real values in 75 percent of the 24 forecasted months, and the coverage rate was 87 percent. Over 1-year time horizons, predictions of RBC demand generated by ARIMA or exponential smoothing are accurate enough to be of help in the planning of blood collection efforts. For longer time horizons, exponential smoothing outperforms the other forecasting methods.
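    Simple exponential smoothing, the most basic member of the Holt-Winters family used in the study, can be sketched in a few lines (α = 0.3 is an arbitrary choice here; a real monthly RBC series would need the trend and seasonal variants):

```python
def ses_forecast(series, alpha=0.3):
    """Simple exponential smoothing: s_t = alpha*x_t + (1-alpha)*s_{t-1}.
    The one-step-ahead forecast is the final smoothed level."""
    s = series[0]
    for x in series[1:]:
        s = alpha * x + (1 - alpha) * s
    return s

# On a constant series the forecast reproduces the constant.
flat = ses_forecast([120.0] * 24)
# On a trending series plain SES lags behind the latest observation,
# which is why Holt-Winters adds trend and seasonal components.
trend = ses_forecast([float(i) for i in range(24)])
```

    The lag on trending data is roughly (1 − α)/α steps, motivating the seasonal ARIMA and Holt-Winters models compared in the abstract.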

  10. The Use of Modeling Approach for Teaching Exponential Functions

    NASA Astrophysics Data System (ADS)

    Nunes, L. F.; Prates, D. B.; da Silva, J. M.

    2017-12-01

    This work presents a discussion related to the teaching and learning of mathematical content connected with the study of exponential functions in a group of freshman students enrolled in the first semester of the Science and Technology Bachelor's programme (STB) of the Federal University of Jequitinhonha and Mucuri Valleys (UFVJM). As a contextualization tool strongly mentioned in the literature, the modelling approach was used as an educational tool to contextualize the teaching-learning process of exponential functions for these students. To this end, some simple models built with the GeoGebra software were used, and Didactic Engineering was adopted as the research methodology to provide a qualitative evaluation of the investigation and its results. As a consequence of this detailed research, some interesting details about the teaching and learning process were observed, discussed and described.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aliev, Alikram N., E-mail: alikram.n.aliev@gmail.com

    We examine the black hole bomb model, which consists of a rotating black hole of five-dimensional minimal ungauged supergravity and a reflecting mirror around it. For low-frequency scalar perturbations, we find solutions to the Klein-Gordon equation in the near-horizon and far regions of the black hole spacetime. To avoid solutions with logarithmic terms, we assume that the orbital quantum number l takes on nearly, but not exactly, integer values and perform the matching of these solutions in an intermediate region. This allows us to calculate analytically the frequency spectrum of quasinormal modes, taking the limits as l approaches even or odd integers separately. We find that all l modes of scalar perturbations undergo negative damping in the regime of superradiance, resulting in exponential growth of their amplitudes. Thus, the model under consideration would exhibit the superradiant instability, eventually behaving as a black hole bomb in five dimensions.

  12. Crowding Effects in Vehicular Traffic

    PubMed Central

    Combinido, Jay Samuel L.; Lim, May T.

    2012-01-01

    While the impact of crowding on the diffusive transport of molecules within a cell is widely studied in biology, it has thus far been neglected in traffic systems where bulk behavior is the main concern. Here, we study the effects of crowding due to car density and driving fluctuations on the transport of vehicles. Using a microscopic model for traffic, we found that crowding can push car movement from a superballistic down to a subdiffusive state. The transition is also associated with a change in the shape of the probability distribution of positions from a negatively-skewed normal to an exponential distribution. Moreover, crowding broadens the distribution of cars’ trap times and cluster sizes. At steady state, the subdiffusive state persists only when there is a large variability in car speeds. We further relate our work to prior findings from random walk models of transport in cellular systems. PMID:23139762

  13. Emotional persistence in online chatting communities

    NASA Astrophysics Data System (ADS)

    Garas, Antonios; Garcia, David; Skowron, Marcin; Schweitzer, Frank

    2012-05-01

    How do users behave in online chatrooms, where they instantaneously read and write posts? We analyzed about 2.5 million posts covering various topics in Internet relay channels, and found that user activity patterns follow known power-law and stretched exponential distributions, indicating that online chat activity is not different from other forms of communication. Analysing the emotional expressions (positive, negative, neutral) of users, we revealed a remarkable persistence both for individual users and channels. That is, despite their anonymity, users tend to follow social norms in repeated interactions in online chats, which results in a specific emotional ``tone'' of the channels. We provide an agent-based model of emotional interaction, which recovers qualitatively both the activity patterns in chatrooms and the emotional persistence of users and channels. While our assumptions about agents' emotional expressions are rooted in psychology, the model allows one to test different hypotheses regarding their emotional impact in online communication.

  14. Rockfall travel distances theoretical distributions

    NASA Astrophysics Data System (ADS)

    Jaboyedoff, Michel; Derron, Marc-Henri; Pedrazzini, Andrea

    2017-04-01

    The probability of propagation of rockfalls is a key part of hazard assessment, because it permits extrapolation of the propagation probability either from partial data or purely theoretically. The propagation can be assumed to be frictional, which allows the average propagation to be described by a line of kinetic energy corresponding to the loss of energy along the path. But the loss of energy can also be modeled as a multiplicative process or a purely random process. The distributions of rockfall block stop points can be deduced from such simple models; they lead to Gaussian, inverse-Gaussian, log-normal, or negative exponential distributions. The theoretical background is presented, and comparisons of some of these models with existing data indicate that these assumptions are relevant. The results are based either on theoretical considerations or on fitting. They are potentially very useful for rockfall hazard zoning and risk assessment. This approach will need further investigation.
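    The negative exponential case arises naturally when a block has a constant probability of stopping per unit distance travelled. This toy runout simulation (all parameter values hypothetical) generates such stop points and recovers the rate by maximum likelihood, the kind of fit that would be compared with mapped block deposits:

```python
import random

def simulate_stop_points(p_stop=0.05, n_blocks=20000, dx=1.0, seed=3):
    """Toy runout model: a block advances in steps of dx and stops at each
    step with constant probability p_stop, giving geometrically
    (≈ negative exponentially) distributed travel distances."""
    rng = random.Random(seed)
    stops = []
    for _ in range(n_blocks):
        d = dx
        while rng.random() >= p_stop:
            d += dx
        stops.append(d)
    return stops

stops = simulate_stop_points()
# MLE of the exponential rate is 1/mean; here it should be close to p_stop/dx.
rate_hat = 1.0 / (sum(stops) / len(stops))
```

    A multiplicative energy-loss process would instead push the stop-point distribution toward log-normal, one of the alternatives listed above.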

  15. Emotional persistence in online chatting communities

    PubMed Central

    Garas, Antonios; Garcia, David; Skowron, Marcin; Schweitzer, Frank

    2012-01-01

    How do users behave in online chatrooms, where they instantaneously read and write posts? We analyzed about 2.5 million posts covering various topics in Internet relay channels, and found that user activity patterns follow known power-law and stretched exponential distributions, indicating that online chat activity is not different from other forms of communication. Analysing the emotional expressions (positive, negative, neutral) of users, we revealed a remarkable persistence both for individual users and channels. That is, despite their anonymity, users tend to follow social norms in repeated interactions in online chats, which results in a specific emotional "tone" of the channels. We provide an agent-based model of emotional interaction, which recovers qualitatively both the activity patterns in chatrooms and the emotional persistence of users and channels. While our assumptions about agents' emotional expressions are rooted in psychology, the model allows us to test different hypotheses regarding their emotional impact in online communication. PMID:22577512

  16. SU-E-T-259: Particle Swarm Optimization in Radial Dose Function Fitting for a Novel Iodine-125 Seed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, X; Duan, J; Popple, R

    2014-06-01

    Purpose: To determine the coefficients of bi- and tri-exponential functions for the best fit of radial dose functions of the new iodine brachytherapy source: Iodine-125 Seed AgX-100. Methods: The particle swarm optimization (PSO) method was used to search for the coefficients of the bi- and tri-exponential functions that yield the best fit to data published for a few selected radial distances from the source. The coefficients were encoded into particles, and these particles move through the search space by following their local and global best-known positions. In each generation, particles were evaluated through their fitness function and their positions were changed through their velocities. This procedure was repeated until the convergence criterion was met or the maximum generation was reached. All best particles were found in less than 1,500 generations. Results: For the I-125 seed AgX-100 considered as a point source, the maximum deviation from the published data is less than 2.9% for the bi-exponential fitting function and 0.2% for the tri-exponential fitting function. For its line source, the maximum deviation is less than 1.1% for the bi-exponential fitting function and 0.08% for the tri-exponential fitting function. Conclusion: PSO is a powerful method for searching coefficients for bi-exponential and tri-exponential fitting functions. The bi- and tri-exponential models of Iodine-125 seed AgX-100 point and line sources obtained with PSO optimization provide accurate analytical forms of the radial dose function. The tri-exponential fitting function is more accurate than the bi-exponential function.
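The PSO search described in this record can be sketched in a few dozen lines. Everything below is an illustrative assumption, not the paper's setup: the radial distances, the synthetic bi-exponential target standing in for the published AgX-100 data, the parameter bounds, and the swarm hyper-parameters (inertia w, coefficients c1/c2, swarm size).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical radial dose function samples: r in cm, g(r) dimensionless.
# The paper's published AgX-100 values are not reproduced here.
r = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0])
g = 1.2 * np.exp(-0.3 * r) - 0.2 * np.exp(-1.5 * r)  # synthetic target

def sse(p):
    """Sum of squared errors of the bi-exponential model A1*e^(-m1 r) + A2*e^(-m2 r)."""
    a1, m1, a2, m2 = p
    return np.sum((a1 * np.exp(-m1 * r) + a2 * np.exp(-m2 * r) - g) ** 2)

def pso(f, bounds, n_particles=40, n_gen=1500, w=0.7, c1=1.5, c2=1.5):
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, (n_particles, len(lo)))   # positions
    v = np.zeros_like(x)                              # velocities
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(n_gen):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # move each particle toward its personal and the global best position
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest, f(gbest)

bounds = np.array([[-5, 5], [0.01, 5], [-5, 5], [0.01, 5]])
best, err = pso(sse, bounds)
print(best, err)  # err should be near zero for this synthetic target
```

The tri-exponential case is the same search with six coefficients instead of four; only `sse` and `bounds` change.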

  17. Event-driven simulations of nonlinear integrate-and-fire neurons.

    PubMed

    Tonnelier, Arnaud; Belmabrouk, Hana; Martinez, Dominique

    2007-12-01

    Event-driven strategies have been used to simulate spiking neural networks exactly. Previous work is limited to linear integrate-and-fire neurons. In this note, we extend event-driven schemes to a class of nonlinear integrate-and-fire models. Results are presented for the quadratic integrate-and-fire model with instantaneous or exponential synaptic currents. Extensions to conductance-based currents and exponential integrate-and-fire neurons are discussed.

  18. A non-Gaussian option pricing model based on Kaniadakis exponential deformation

    NASA Astrophysics Data System (ADS)

    Moretto, Enrico; Pasquali, Sara; Trivellato, Barbara

    2017-09-01

    A way to make financial models effective is to let them represent the so-called "fat tails", i.e., extreme changes in stock prices that are regarded as almost impossible by the standard Gaussian distribution. In this article, the Kaniadakis deformation of the usual exponential function is used to define a random noise source in the dynamics of price processes capable of capturing such real market phenomena.

  19. An exponential model equation for thiamin loss in irradiated ground pork as a function of dose and temperature of irradiation

    NASA Astrophysics Data System (ADS)

    Fox, J. B.; Thayer, D. W.; Phillips, J. G.

    The effect of low-dose γ-irradiation on the thiamin content of ground pork was studied in the range of 0-14 kGy at 2°C and at radiation doses from 0.5 to 7 kGy at temperatures of -20, -10, 0, 10 and 20°C. The detailed study at 2°C showed that loss of thiamin was exponential down to 0 kGy. An exponential expression was derived for the effect of radiation dose and temperature of irradiation on thiamin loss, and compared with a previously derived general linear expression. Both models were accurate depictions of the data, but the exponential expression showed a significant decrease in the rate of loss between 0 and -10°C. This is the range over which water in meat freezes, the decrease being due to the immobilization of reactive radiolytic products of water in ice crystals.

  20. Decaying two-dimensional turbulence in a circular container.

    PubMed

    Schneider, Kai; Farge, Marie

    2005-12-09

    We present direct numerical simulations of two-dimensional decaying turbulence at initial Reynolds number 5 × 10^4 in a circular container with no-slip boundary conditions. Starting from random initial conditions, the flow rapidly exhibits self-organization into coherent vortices. We study their formation and the role of the viscous boundary layer on the production and decay of integral quantities. The no-slip wall produces vortices which are injected into the bulk flow and tend to compensate the enstrophy dissipation. The self-organization of the flow is reflected by the transition of the initially Gaussian vorticity probability density function (PDF) towards a distribution with exponential tails. Because of the presence of coherent vortices, the pressure PDF becomes strongly skewed, with exponential tails for negative values.

  1. Structural and optical properties of Dy3+ doped Aluminofluoroborophosphate glasses for white light applications

    NASA Astrophysics Data System (ADS)

    Vijayakumar, M.; Mahesvaran, K.; Patel, Dinesh K.; Arunkumar, S.; Marimuthu, K.

    2014-11-01

    Dy3+ doped Aluminofluoroborophosphate glasses (BPAxD) have been prepared following the conventional melt quenching technique and their structural and optical properties were explored through XRD, FTIR, optical absorption, excitation, emission and decay measurements. The coexistence of BO3 groups in the borate-rich domain and BO4 groups in the phosphate-rich domain has been confirmed through vibrational energy analysis. Negative bonding parameter (δ) values indicate that the metal-ligand environment in the prepared glasses is ionic in nature. The oscillator strength and the luminescence intensity Ωλ (λ = 2, 4 and 6) parameters are calculated using Judd-Ofelt theory. The radiative properties such as transition probability (A), stimulated emission cross-section (σpE) and branching ratios (β) have been calculated using the JO intensity parameters and compared with those of reported Dy3+ doped glasses. The concentration effect on Y/B intensity ratios and the CIE chromaticity coordinates were calculated for the generation of white light from the luminescence spectra. The color purity and the correlated color temperature were also calculated and the results are discussed in the present work. The decay of the 4F9/2 excited level is found to be single exponential at lower concentrations and becomes non-exponential at higher concentrations. The non-exponential behavior arises due to efficient energy transfer between the Dy3+ ions through various non-radiative relaxation channels, and the decay of the 4F9/2 excited level has been analyzed with the IH model. Among the prepared glasses, the BPA0.5D glass exhibits higher σpE, βR, σpE×Δλeff and η values for the 6H13/2 emission band, which in turn indicates its suitability for white LEDs, laser applications and optical amplifiers.

  2. Estimation of renal allograft half-life: fact or fiction?

    PubMed

    Azancot, M Antonieta; Cantarell, Carme; Perelló, Manel; Torres, Irina B; Serón, Daniel; Moreso, Francesc; Arias, Manuel; Campistol, Josep M; Curto, Jordi; Hernandez, Domingo; Morales, José M; Sanchez-Fructuoso, Ana; Abraira, Victor

    2011-09-01

    Renal allograft half-life time (t½) is the most straightforward representation of long-term graft survival. Since some statistical models overestimate this parameter, we compare different approaches to evaluate t½. Patients with a 1-year functioning graft transplanted in Spain during 1990, 1994, 1998 and 2002 were included. Exponential, Weibull, gamma, lognormal and log-logistic models censoring the last year of follow-up were evaluated. The goodness of fit of these models was evaluated according to the Cox-Snell residuals and the Akaike's information criterion (AIC) was employed to compare these models. We included 4842 patients. Real t½ in 1990 was 14.2 years. Median t½ (95% confidence interval) in 1990 and 2002 was 15.8 (14.2-17.5) versus 52.6 (35.6-69.5) according to the exponential model (P < 0.001). No differences between 1990 and 2002 were observed when t½ was estimated with the other models. In 1990 and 2002, t½ was 14.0 (13.1-15.0) versus 18.0 (13.7-22.4) according to Weibull, 15.5 (13.9-17.1) versus 19.1 (15.6-22.6) according to gamma, 14.4 (13.3-15.6) versus 18.3 (14.2-22.3) according to the log-logistic and 15.2 (13.8-16.6) versus 18.8 (15.3-22.3) according to the lognormal models. The AIC confirmed that the exponential model had the lowest goodness of fit, while the other models yielded a similar result. The exponential model overestimates t½, especially in cohorts of patients with a short follow-up, while any of the other studied models allow a better estimation even in cohorts with short follow-up.
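The overestimation reported in this record follows directly from the closed-form medians of the fitted survival curves. A minimal sketch, with illustrative parameter values rather than the study's fitted estimates: under an exponential model the median is ln(2)/rate, while a Weibull model with shape < 1 (decreasing hazard, typical of graft loss after the first year) yields a shorter median for comparable data.

```python
import math

# Median survival ("half-life") under two parametric survival models.
# Parameter values are illustrative, not the paper's estimates.

def t_half_exponential(rate):
    # S(t) = exp(-rate * t)  =>  S(t_half) = 0.5  =>  t_half = ln(2) / rate
    return math.log(2) / rate

def t_half_weibull(shape, scale):
    # S(t) = exp(-(t / scale)**shape)  =>  t_half = scale * ln(2)**(1/shape)
    return scale * math.log(2) ** (1.0 / shape)

rate = 1 / 25.0                       # exponential fit: mean lifetime 25 years
print(t_half_exponential(rate))       # ~17.3 years
print(t_half_weibull(0.8, 25.0))      # ~15.8 years with the same scale
```

With a shape parameter below 1 the Weibull median sits below the exponential one, which is the direction of the bias the study describes.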

  3. Intrinsic Work Value-Reward Dissonance and Work Satisfaction during Young Adulthood

    PubMed Central

    Porfeli, Erik J.; Mortimer, Jeylan T.

    2010-01-01

    Previous research suggests that discrepancies between work values and rewards are indicators of dissonance that induce change in both to reduce such dissonance over time. The present study elaborates this model to suggest parallels with the first phase of the extension-and-strain curve. Small discrepancies or small increases in extension are presumed to be almost unnoticeable, while increasingly large discrepancies are thought to yield exponentially increasing strain. Work satisfaction is a principal outcome of dissonance; hence, work value-reward discrepancies are predicted to diminish work satisfaction in an exponential fashion. Findings from the work and family literature, however, lead to the prediction that this curvilinear association will be moderated by gender and family roles. Using longitudinal data spanning the third decade of life, the results suggest that intrinsic work value-reward discrepancies, as predicted, are increasingly associated, in a negative curvilinear fashion, with work satisfaction. This pattern, however, differs as a function of gender and family roles. Females who established family roles exhibited the expected pattern while other gender by family status groups did not. The results suggest that gender and family roles moderate the association between intrinsic work value-reward dissonance and satisfaction. In addition, women who remained unmarried and childless exhibited the strongest associations between occupational rewards and satisfaction. PMID:20526434

  4. Intrinsic Work Value-Reward Dissonance and Work Satisfaction during Young Adulthood.

    PubMed

    Porfeli, Erik J; Mortimer, Jeylan T

    2010-06-01

    Previous research suggests that discrepancies between work values and rewards are indicators of dissonance that induce change in both to reduce such dissonance over time. The present study elaborates this model to suggest parallels with the first phase of the extension-and-strain curve. Small discrepancies or small increases in extension are presumed to be almost unnoticeable, while increasingly large discrepancies are thought to yield exponentially increasing strain. Work satisfaction is a principal outcome of dissonance; hence, work value-reward discrepancies are predicted to diminish work satisfaction in an exponential fashion. Findings from the work and family literature, however, lead to the prediction that this curvilinear association will be moderated by gender and family roles. Using longitudinal data spanning the third decade of life, the results suggest that intrinsic work value-reward discrepancies, as predicted, are increasingly associated, in a negative curvilinear fashion, with work satisfaction. This pattern, however, differs as a function of gender and family roles. Females who established family roles exhibited the expected pattern while other gender by family status groups did not. The results suggest that gender and family roles moderate the association between intrinsic work value-reward dissonance and satisfaction. In addition, women who remained unmarried and childless exhibited the strongest associations between occupational rewards and satisfaction.

  5. Modeling the Role of Dislocation Substructure During Class M and Exponential Creep. Revised

    NASA Technical Reports Server (NTRS)

    Raj, S. V.; Iskovitz, Ilana Seiden; Freed, A. D.

    1995-01-01

    The different substructures that form in the power-law and exponential creep regimes for single phase crystalline materials under various conditions of stress, temperature and strain are reviewed. The microstructure is correlated both qualitatively and quantitatively with power-law and exponential creep as well as with steady state and non-steady state deformation behavior. These observations suggest that creep is influenced by a complex interaction between several elements of the microstructure, such as dislocations, cells and subgrains. The stability of the creep substructure is examined in both of these creep regimes during stress and temperature change experiments. These observations are rationalized on the basis of a phenomenological model, where normal primary creep is interpreted as a series of constant structure exponential creep rate-stress relationships. The implications of this viewpoint on the magnitude of the stress exponent and steady state behavior are discussed. A theory is developed to predict the macroscopic creep behavior of a single phase material using quantitative microstructural data. In this technique the thermally activated deformation mechanisms proposed by dislocation physics are interlinked with a previously developed multiphase, three-dimensional dislocation substructure creep model. This procedure leads to several coupled differential equations interrelating macroscopic creep plasticity with microstructural evolution.

  6. Diffusion-weighted MR imaging of pancreatic cancer: A comparison of mono-exponential, bi-exponential and non-Gaussian kurtosis models.

    PubMed

    Kartalis, Nikolaos; Manikis, Georgios C; Loizou, Louiza; Albiin, Nils; Zöllner, Frank G; Del Chiaro, Marco; Marias, Kostas; Papanikolaou, Nikolaos

    2016-01-01

    To compare two Gaussian diffusion-weighted MRI (DWI) models, mono-exponential and bi-exponential, with the non-Gaussian kurtosis model in patients with pancreatic ductal adenocarcinoma. After written informed consent, 15 consecutive patients with pancreatic ductal adenocarcinoma underwent free-breathing DWI (1.5T, b-values: 0, 50, 150, 200, 300, 600 and 1000 s/mm^2). Mean values of the DWI-derived metrics ADC, D, D*, f, K and D_K were calculated from multiple regions of interest in all tumours and non-tumorous parenchyma and compared. The area under the curve was determined for all metrics. Mean ADC and D_K showed significant differences between tumours and non-tumorous parenchyma (both P < 0.001). The area under the curve for ADC, D, D*, f, K, and D_K was 0.77, 0.52, 0.53, 0.62, 0.42, and 0.84, respectively. ADC and D_K could differentiate tumours from non-tumorous parenchyma, with the latter showing a higher diagnostic accuracy. Correction for kurtosis effects has the potential to increase the diagnostic accuracy of DWI in patients with pancreatic ductal adenocarcinoma.
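The mono-exponential and kurtosis signal models from this record can be compared on synthetic data. The equations below are the standard forms (S = S0·exp(-b·ADC) and the usual DKI expansion); the tissue parameter values are illustrative assumptions, not patient data from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

b = np.array([0, 50, 150, 200, 300, 600, 1000], float)  # b-values, s/mm^2

def mono(b, s0, adc):
    # Gaussian mono-exponential decay: S(b) = S0 * exp(-b * ADC)
    return s0 * np.exp(-b * adc)

def kurtosis(b, s0, dk, k):
    # Non-Gaussian kurtosis model: S(b) = S0 * exp(-b*DK + b^2 * DK^2 * K / 6)
    return s0 * np.exp(-b * dk + (b ** 2) * (dk ** 2) * k / 6.0)

# Synthetic tissue signal with non-Gaussian diffusion
# (illustrative values: DK = 1.5e-3 mm^2/s, K = 0.9).
s_true = kurtosis(b, 1.0, 1.5e-3, 0.9)

p_mono, _ = curve_fit(mono, b, s_true, p0=[1.0, 1e-3])
p_kurt, _ = curve_fit(kurtosis, b, s_true, p0=[1.0, 1e-3, 0.5])
print(p_mono[1])   # apparent ADC from the mono-exponential fit
print(p_kurt[1:])  # recovered DK and K from the kurtosis fit
```

For positive K the kurtosis term slows the apparent decay at high b, so the mono-exponential ADC comes out smaller than D_K, which is one reason the two metrics separate tissue differently.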

  7. Bi-periodicity evoked by periodic external inputs in delayed Cohen-Grossberg-type bidirectional associative memory networks

    NASA Astrophysics Data System (ADS)

    Cao, Jinde; Wang, Yanyan

    2010-05-01

    In this paper, the bi-periodicity issue is discussed for Cohen-Grossberg-type (CG-type) bidirectional associative memory (BAM) neural networks (NNs) with time-varying delays and standard activation functions. It is shown that the model considered in this paper has two periodic orbits located in saturation regions and they are locally exponentially stable. Meanwhile, some conditions are derived to ensure that, in any designated region, the model has a locally exponentially stable or globally exponentially attractive periodic orbit located in it. As a special case of bi-periodicity, some results are also presented for the system with constant external inputs. Finally, four examples are given to illustrate the effectiveness of the obtained results.

  8. Global exponential stability of bidirectional associative memory neural networks with distributed delays

    NASA Astrophysics Data System (ADS)

    Song, Qiankun; Cao, Jinde

    2007-05-01

    A bidirectional associative memory neural network model with distributed delays is considered. By constructing a new Lyapunov functional and employing homeomorphism theory, M-matrix theory and an elementary inequality (with a ≥ 0, b_k ≥ 0, q_k > 0 and r > 1), a sufficient condition is obtained to ensure the existence, uniqueness and global exponential stability of the equilibrium point for the model. Moreover, the exponential convergence rate index is estimated, which depends on the delay kernel functions and the system parameters. The results generalize and improve earlier publications, and remove the usual assumption that the activation functions are bounded. Two numerical examples are given to show the effectiveness of the obtained results.

  9. Analysis of volumetric response of pituitary adenomas receiving adjuvant CyberKnife stereotactic radiosurgery with the application of an exponential fitting model.

    PubMed

    Yu, Yi-Lin; Yang, Yun-Ju; Lin, Chin; Hsieh, Chih-Chuan; Li, Chiao-Zhu; Feng, Shao-Wei; Tang, Chi-Tun; Chung, Tzu-Tsao; Ma, Hsin-I; Chen, Yuan-Hao; Ju, Da-Tong; Hueng, Dueng-Yuan

    2017-01-01

    Tumor control rates of pituitary adenomas (PAs) receiving adjuvant CyberKnife stereotactic radiosurgery (CK SRS) are high. However, there is currently no uniform way to estimate the time course of the disease. The aim of this study was to analyze the volumetric responses of PAs after CK SRS and investigate the application of an exponential decay model in calculating an accurate time course and estimation of the eventual outcome. A retrospective review of 34 patients with PAs who received adjuvant CK SRS between 2006 and 2013 was performed. Tumor volume was calculated using the planimetric method. The percent change in tumor volume and the tumor volume rate of change were compared at median 4-, 10-, 20-, and 36-month intervals. Tumor responses were classified as: progression for >15% volume increase, regression for >15% decrease, and stabilization for within ±15% of the baseline volume at the time of last follow-up. For each patient, the volumetric change versus time was fitted with an exponential model. The overall tumor control rate was 94.1% in the 36-month (range 18-87 months) follow-up period (mean volume change of -43.3%). Volume regression (mean decrease of -50.5%) was demonstrated in 27 (79%) patients, tumor stabilization (mean change of -3.7%) in 5 (15%) patients, and tumor progression (mean increase of 28.1%) in 2 (6%) patients (P = 0.001). Tumors that eventually regressed or stabilized had a temporary volume increase of 1.07% and 41.5% at 4 months after CK SRS, respectively (P = 0.017). The tumor volume estimated using the exponential fitting equation demonstrated a high positive correlation with the actual volume calculated by magnetic resonance imaging (MRI), as tested by the Pearson correlation coefficient (0.9). Transient progression of PAs post-CK SRS was seen in 62.5% of the patients, and it was not predictive of eventual volume regression or progression. A three-point exponential model is of potential predictive value according to relative distribution. An exponential decay model can be used to calculate the time course of tumors that are ultimately controlled.

  10. Optimal pricing and replenishment policies for instantaneous deteriorating items with backlogging and trade credit under inflation

    NASA Astrophysics Data System (ADS)

    Sundara Rajan, R.; Uthayakumar, R.

    2017-12-01

    In this paper we develop an economic order quantity model to investigate the optimal replenishment policies for instantaneous deteriorating items under inflation and trade credit. The demand rate is a linear function of the selling price and decreases negative-exponentially with time over a finite planning horizon. Shortages are allowed and partially backlogged. Under these conditions, we model the retailer's inventory system as a profit maximization problem to determine the optimal selling price, optimal order quantity and optimal replenishment time. An easy-to-use algorithm is developed to determine the optimal replenishment policies for the retailer. We also provide the optimal present value of profit when shortages are completely backlogged as a special case. Numerical examples are presented to illustrate the algorithm and the optimal profit obtained, and managerial implications are drawn from these examples to substantiate the model. The results show that there is an improvement in total profit from complete backlogging rather than partial backlogging.

  11. A dynamic evolution model of human opinion as affected by advertising

    NASA Astrophysics Data System (ADS)

    Luo, Gui-Xun; Liu, Yun; Zeng, Qing-An; Diao, Su-Meng; Xiong, Fei

    2014-11-01

    We propose a new model to investigate the dynamics of human opinion as affected by advertising, based on the main idea of the CODA model and taking into account two practical factors: one is that the marginal influence of an additional friend decreases with an increasing number of friends; the other is the decline of memory over time. Simulations yield several significant conclusions for both advertising agencies and the general public. A small difference in advertising's influence on individuals or in advertising coverage will result in significantly different advertising effectiveness within a certain interval of values. Compared to the value of advertising's influence on individuals, the advertising coverage plays a more important role due to the exponential decay of memory. Meanwhile, some of the obtained results are in accordance with people's everyday intuitions about advertising. The real key factor in determining the success of advertising is the intensity of exchanging opinions, and people's external actions always follow their internal opinions. Negative opinions also play an important role.

  12. Kinetics of alkali-based photocathode degradation

    DOE PAGES

    Pavlenko, Vitaly; Liu, Fangze; Hoffbauer, Mark A.; ...

    2016-11-02

    Here, we report on a kinetic model that describes the degradation of the quantum efficiency (QE) of Cs 3Sb and negative electron affinity (NEA) GaAs photocathodes under UHV conditions. In addition to the generally accepted irreversible chemical change of a photocathode's surface due to reactions with residual gases, such as O 2, CO 2, and H 2O, the model incorporates an intermediate reversible physisorption step, similar to Langmuir adsorption. This intermediate step is needed to satisfactorily describe the strongly non-exponential QE degradation curves for two distinctly different classes of photocathodes, surface-activated and "bulk," indicating that in both systems the QE degradation results from surface damage. The recovery of the QE upon improvement of vacuum conditions is also accurately predicted by this model, whose three parameters (rates of gas adsorption, desorption, and irreversible chemical reaction with the surface) comprise metrics to better characterize the lifetime of the cathodes, instead of time-pressure exposure expressed in Langmuir units.
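One plausible realization of such a two-step scheme (reversible physisorption followed by irreversible surface chemistry) can be integrated as a small ODE system. The rate equations and rate constants below are assumptions for illustration; the paper's exact model and fitted parameters are not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed Langmuir-type two-step kinetics: residual gas first physisorbs
# reversibly (coverage p), then reacts irreversibly (coverage c).
k_ads, k_des, k_chem = 0.5, 0.2, 0.05   # illustrative rates, 1/hour

def rhs(t, y):
    p, c = y
    dp = k_ads * (1 - p - c) - k_des * p - k_chem * p  # adsorb / desorb / react
    dc = k_chem * p                                    # irreversible damage
    return [dp, dc]

sol = solve_ivp(rhs, (0, 100), [0.0, 0.0], dense_output=True)
t = np.linspace(0, 100, 200)
p, c = sol.sol(t)

# Assume QE scales with the undamaged surface fraction. The resulting decay
# is non-exponential: a fast physisorption transient, then slow chemical loss.
qe = 1 - c
print(qe[0], qe[-1])
```

Setting the adsorption rate to zero (improved vacuum) after some exposure lets the physisorbed coverage desorb again, which is the kind of partial QE recovery the model is said to predict.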

  13. Fourier Transforms of Pulses Containing Exponential Leading and Trailing Profiles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warshaw, S I

    2001-07-15

    In this monograph we discuss a class of pulse shapes that have exponential rise and fall profiles, and evaluate their Fourier transforms. Such pulses can be used as models for time-varying processes that produce an initial exponential rise and end with the exponential decay of a specified physical quantity. Unipolar examples of such processes include the voltage record of an increasingly rapid charge followed by a damped discharge of a capacitor bank, and the amplitude of an electromagnetic pulse produced by a nuclear explosion. Bipolar examples include acoustic N waves propagating for long distances in the atmosphere that have resulted from explosions in the air, and sonic booms generated by supersonic aircraft. These bipolar pulses have leading and trailing edges that appear to be exponential in character. To the author's knowledge the Fourier transforms of such pulses are not generally well-known or tabulated in Fourier transform compendia, and it is the purpose of this monograph to derive and present them. These Fourier transforms are related to a definite integral of a ratio of exponential functions, whose evaluation we carry out in considerable detail. From this result we derive the Fourier transforms of other related functions. In all figures showing plots of calculated curves, the numbers used for the function parameter values and dependent variables are arbitrary and non-dimensional, and are not identified with any particular physical phenomenon or model.
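For the simplest member of this class, a unipolar pulse rising as e^(a·t) for t < 0 and falling as e^(-b·t) for t ≥ 0, the transform can be written down directly and checked numerically. This is a hedged sketch of that textbook special case, not the monograph's more general ratio-of-exponentials result.

```python
import numpy as np
from scipy.integrate import quad

a, b = 2.0, 0.5   # rise and fall rates (arbitrary, non-dimensional)

def pulse(t):
    # exponential rise for t < 0, exponential fall for t >= 0
    return np.exp(a * t) if t < 0 else np.exp(-b * t)

def ft_analytic(w):
    # F(w) = int f(t) e^{-i w t} dt = 1/(a - i w) + 1/(b + i w)
    return 1 / (a - 1j * w) + 1 / (b + 1j * w)

def ft_numeric(w):
    # integrate each half separately; the pulse has a kink at t = 0
    halves = ((-np.inf, 0.0), (0.0, np.inf))
    re = sum(quad(lambda t: pulse(t) * np.cos(w * t), lo, hi)[0] for lo, hi in halves)
    im = sum(quad(lambda t: -pulse(t) * np.sin(w * t), lo, hi)[0] for lo, hi in halves)
    return re + 1j * im

for w in (0.0, 1.0, 3.0):
    print(w, ft_analytic(w), ft_numeric(w))
```

At w = 0 the transform reduces to the pulse area 1/a + 1/b, a quick sanity check on both routes.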

  14. A New Methodology for Open Pit Slope Design in Karst-Prone Ground Conditions Based on Integrated Stochastic-Limit Equilibrium Analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Ke; Cao, Ping; Ma, Guowei; Fan, Wenchen; Meng, Jingjing; Li, Kaihui

    2016-07-01

    Using the Chengmenshan Copper Mine as a case study, a new methodology for open pit slope design in karst-prone ground conditions is presented based on integrated stochastic-limit equilibrium analysis. The numerical modeling and optimization design procedure comprises the collection of drill core data, karst cave stochastic model generation, SLIDE simulation and bisection-method optimization. Borehole investigations were performed, and the statistical results show that the length of the karst caves fits a negative exponential distribution model, but the length of carbonatite does not exactly follow any standard distribution. The inverse transform method and the acceptance-rejection method are used to reproduce the lengths of the karst caves and carbonatite, respectively. A code for karst cave stochastic model generation, named KCSMG, is developed. The stability of the rock slope with the karst cave stochastic model is analyzed by combining the KCSMG code and the SLIDE program. This approach is then applied to study the effect of karst caves on the stability of the open pit slope, and a procedure to optimize the open pit slope angle is presented.
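The two sampling schemes named in this record are standard Monte Carlo techniques and can be sketched briefly. The mean cave length and the triangular stand-in density for carbonatite below are hypothetical; the mine's fitted parameters are not given here.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_exponential_inverse(mean_length, n):
    """Inverse-transform sampling: if U ~ Uniform(0,1), then
    -mean * ln(1 - U) follows a negative exponential distribution."""
    u = rng.random(n)
    return -mean_length * np.log(1.0 - u)

def sample_acceptance_rejection(pdf, x_max, pdf_max, n):
    """Acceptance-rejection sampling for a bounded, non-standard
    density on [0, x_max] that has no closed-form inverse CDF."""
    out = []
    while len(out) < n:
        x = rng.uniform(0, x_max)
        if rng.uniform(0, pdf_max) <= pdf(x):
            out.append(x)
    return np.array(out)

# Negative exponential cave lengths (hypothetical mean of 2.5 m):
caves = sample_exponential_inverse(mean_length=2.5, n=100_000)
print(caves.mean())   # ~2.5

# Hypothetical empirical carbonatite-length density (triangular on [0, 10] m):
tri = lambda x: (10 - x) / 50.0
rock = sample_acceptance_rejection(tri, 10.0, 0.2, 10_000)
print(rock.mean())
```

Inverse transform needs an invertible CDF, which the negative exponential has; acceptance-rejection trades that requirement for some rejected draws, which is why it suits the non-standard carbonatite distribution.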

  15. A comparison of modelling techniques used to characterise oxygen uptake kinetics during the on-transient of exercise.

    PubMed

    Bell, C; Paterson, D H; Kowalchuk, J M; Padilla, J; Cunningham, D A

    2001-09-01

    We compared estimates for the phase 2 time constant (tau) of oxygen uptake (VO2) during moderate- and heavy-intensity exercise, and the slow component of VO2 during heavy-intensity exercise, using previously published exponential models. Estimates for tau and the slow component were different (P < 0.05) among models. For moderate-intensity exercise, a two-component exponential model, or a mono-exponential model fitted from 20 s to 3 min, were best. For heavy-intensity exercise, a three-component model fitted throughout the entire 6 min bout of exercise, or a two-component model fitted from 20 s, were best. When the time delays for the two- and three-component models were equal the best statistical fit was obtained; however, this model produced an inappropriately low DeltaVO2/DeltaWR (WR, work rate) for the projected phase 2 steady state, and the estimate of the phase 2 tau was shortened compared with other models. The slow component was quantified as the difference between VO2 at end-exercise (6 min) and at 3 min (DeltaVO2(6-3 min); 259 ml x min(-1)), and also using the phase 3 amplitude terms (truncated to end-exercise) from exponential fits (409-833 ml x min(-1)). Onset of the slow component was identified by the phase 3 time delay parameter as being delayed approximately 2 min (vs. the arbitrary 3 min). Using this delay, DeltaVO2(6-2 min) was approximately 400 ml x min(-1). Use of valid, consistent methods to estimate tau and the slow component in exercise is needed to advance physiological understanding.

  16. A study of physician collaborations through social network and exponential random graph

    PubMed Central

    2013-01-01

    Background Physician collaboration, which evolves among physicians during the course of providing healthcare services to hospitalised patients, has been seen as crucial to effective patient outcomes in healthcare organisations and hospitals. This study aims to explore physician collaborations using measures of social network analysis (SNA) and the exponential random graph (ERG) model. Methods Based on the underlying assumption that collaborations evolve among physicians when they visit a common hospitalised patient, this study first proposes an approach to map the collaboration network among physicians from the details of their visits to patients. This paper terms this network the physician collaboration network (PCN). Second, SNA measures of degree centralisation, betweenness centralisation and density are used to examine the impact of SNA measures on hospitalisation cost and readmission rate. As a control variable, the impact of patient age on the relation between network measures (i.e. degree centralisation, betweenness centralisation and density) and hospital outcome variables (i.e. hospitalisation cost and readmission rate) is also explored. Finally, ERG models are developed to identify micro-level structural properties of (i) high-cost versus low-cost PCN; and (ii) high-readmission rate versus low-readmission rate PCN. An electronic health insurance claim dataset of a very large Australian health insurance organisation is utilised to construct and explore PCN in this study. Results It is revealed that the density of PCN is positively correlated with hospitalisation cost and readmission rate. In contrast, betweenness centralisation is found negatively correlated with hospitalisation cost and readmission rate. Degree centralisation shows a negative correlation with readmission rate, but does not show any correlation with hospitalisation cost. Patient age does not have any impact on the relation of SNA measures with hospitalisation cost and hospital readmission rate.
The 2-star parameter of the ERG model has a significant impact on hospitalisation cost. Furthermore, it is found that the alternative-k-star and alternative-k-two-path parameters of the ERG model have an impact on readmission rate. Conclusions Collaboration structures among physicians affect hospitalisation cost and hospital readmission rate. The implications of these findings for developing guidelines to improve the performance of collaborative environments among healthcare professionals within healthcare organisations are discussed in this paper. PMID:23803165
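
    The network measures named in this record can be illustrated with a small sketch. This is not the study's code or data: the toy graph is invented, and only the standard formulas for density and Freeman degree centralisation are shown.

    ```python
    # Toy physician collaboration network (PCN): an edge (a, b) means
    # physicians a and b visited a common hospitalised patient.
    # Graph and values are illustrative only, not from the study.

    def density(n, edges):
        """Fraction of possible undirected edges that are present."""
        return 2 * len(edges) / (n * (n - 1))

    def degree_centralisation(n, edges):
        """Freeman degree centralisation: sum of (max degree - degree)
        over nodes, normalised by the star-graph maximum (n-1)(n-2)."""
        deg = [0] * n
        for a, b in edges:
            deg[a] += 1
            deg[b] += 1
        dmax = max(deg)
        return sum(dmax - d for d in deg) / ((n - 1) * (n - 2))

    # 4 physicians; physician 0 shared a patient with each of the others.
    edges = [(0, 1), (0, 2), (0, 3)]
    print(density(4, edges))                # 0.5
    print(degree_centralisation(4, edges))  # 1.0 (a perfect star)
    ```

    Betweenness centralisation, the third measure in the study, follows the same normalisation pattern with shortest-path betweenness in place of degree.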

  17. A model of canopy photosynthesis incorporating protein distribution through the canopy and its acclimation to light, temperature and CO2

    PubMed Central

    Johnson, Ian R.; Thornley, John H. M.; Frantz, Jonathan M.; Bugbee, Bruce

    2010-01-01

    Background and Aims The distribution of photosynthetic enzymes, or nitrogen, through the canopy affects canopy photosynthesis, as well as plant quality and nitrogen demand. Most canopy photosynthesis models assume an exponential distribution of nitrogen, or protein, through the canopy, although this is rarely consistent with experimental observation. Previous optimization schemes to derive the nitrogen distribution through the canopy generally focus on the distribution of a fixed amount of total nitrogen, which fails to account for the variation in both the actual quantity of nitrogen in response to environmental conditions and the interaction of photosynthesis and respiration at similar levels of complexity. Model A model of canopy photosynthesis is presented for C3 and C4 canopies that considers a balanced approach between photosynthesis and respiration as well as plant carbon partitioning. Protein distribution is related to irradiance in the canopy by a flexible equation for which the exponential distribution is a special case. The model is designed to be simple to parameterize for crop, pasture and ecosystem studies. The amount and distribution of protein that maximizes canopy net photosynthesis is calculated. Key Results The optimum protein distribution is not exponential, but is quite linear near the top of the canopy, which is consistent with experimental observations. The overall concentration within the canopy is dependent on environmental conditions, including the distribution of direct and diffuse components of irradiance. Conclusions The widely used exponential distribution of nitrogen or protein through the canopy is generally inappropriate. The model derives the optimum distribution with characteristics that are consistent with observation, so overcoming limitations of using the exponential distribution. 
Although canopies may not always operate at an optimum, optimization analysis provides valuable insight into plant acclimation to environmental conditions. Protein distribution has implications for the prediction of carbon assimilation, plant quality and nitrogen demand. PMID:20861273
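
    The exponential nitrogen profile that this record argues is generally inappropriate can be written down in a few lines. The parameter values below (N0, kn) are arbitrary illustrations, not values from the paper.

    ```python
    # The commonly assumed exponential nitrogen profile through a canopy:
    # N(L) = N0 * exp(-kn * L), with L the cumulative leaf area index
    # measured from the top of the canopy. Parameter values are invented.
    import math

    def nitrogen_exponential(L, N0=2.0, kn=0.5):
        """Leaf nitrogen per unit leaf area (g m^-2) at canopy depth L."""
        return N0 * math.exp(-kn * L)

    def canopy_total_nitrogen(LAI, N0=2.0, kn=0.5):
        """Analytic integral of the profile from L = 0 to L = LAI."""
        return N0 / kn * (1 - math.exp(-kn * LAI))

    print(round(nitrogen_exponential(0.0), 3))   # 2.0 at the canopy top
    print(round(nitrogen_exponential(3.0), 3))   # 0.446 deep in the canopy
    print(round(canopy_total_nitrogen(3.0), 3))  # 3.107 g m^-2 total
    ```

    The paper's optimal profile is instead roughly linear near the top of the canopy; the exponential form above is the special case its flexible equation generalises.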

  18. Paving the Road to Success: A Framework for Implementing the "Success Tutoring" Approach

    ERIC Educational Resources Information Center

    Spark, Linda; De Klerk, Danie; Maleswena, Tshepiso; Jones, Andrew

    2017-01-01

    The exponential growth of higher education enrolment in South Africa has resulted in increased diversity of the student body, leading to a proliferation of factors that affect student performance and success. Various initiatives have been adopted by tertiary institutions to mitigate the negative impact these factors may have on student success,…

  19. Decomposition rates for hand-piled fuels

    Treesearch

    Clinton S. Wright; Alexander M. Evans; Joseph C. Restaino

    2017-01-01

    Hand-constructed piles in eastern Washington and north-central New Mexico were weighed periodically between October 2011 and June 2015 to develop decay-rate constants that are useful for estimating the rate of piled biomass loss over time. Decay-rate constants (k) were determined by fitting negative exponential curves to time series of pile weight for each site. Piles...
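
    The curve-fitting step described above (fitting a negative exponential W(t) = W0 * exp(-k t) to a pile-weight time series) can be sketched with a log-linear least-squares fit. The data below are synthetic, not the study's measurements.

    ```python
    # Estimating a decay-rate constant k from W(t) = W0 * exp(-k * t):
    # regress ln(W) on t; the slope is -k. Synthetic, noise-free data.
    import math

    def fit_decay_constant(times, weights):
        """Least-squares slope of ln(W) vs t gives -k."""
        logs = [math.log(w) for w in weights]
        n = len(times)
        tbar = sum(times) / n
        lbar = sum(logs) / n
        slope = (sum((t - tbar) * (l - lbar) for t, l in zip(times, logs))
                 / sum((t - tbar) ** 2 for t in times))
        return -slope  # k, per unit time

    # Synthetic series: W0 = 100 kg, true k = 0.2 yr^-1.
    times = [0.0, 1.0, 2.0, 3.0, 4.0]
    weights = [100.0 * math.exp(-0.2 * t) for t in times]
    print(round(fit_decay_constant(times, weights), 6))  # 0.2
    ```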

  20. Robust Bayesian Fluorescence Lifetime Estimation, Decay Model Selection and Instrument Response Determination for Low-Intensity FLIM Imaging

    PubMed Central

    Rowley, Mark I.; Coolen, Anthonius C. C.; Vojnovic, Borivoj; Barber, Paul R.

    2016-01-01

    We present novel Bayesian methods for the analysis of exponential decay data that exploit the evidence carried by every detected decay event and enable robust extension to advanced processing. Our algorithms are presented in the context of fluorescence lifetime imaging microscopy (FLIM), and particular attention has been paid to modelling the time-domain system (based on time-correlated single photon counting) with unprecedented accuracy. We present estimates of decay parameters for mono- and bi-exponential systems, offering up to a factor of two improvement in accuracy compared to previous popular techniques. Results of the analysis of synthetic and experimental data are presented, and areas where the superior precision of our techniques can be exploited in Förster Resonance Energy Transfer (FRET) experiments are described. Furthermore, we demonstrate two advanced processing methods: decay model selection to choose between differing models such as mono- and bi-exponential, and the simultaneous estimation of instrument and decay parameters. PMID:27355322
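
    A drastically simplified sketch, not the paper's Bayesian method: for an ideal mono-exponential decay with no instrument response and an unbounded measurement window, the maximum-likelihood estimate of the lifetime tau is simply the mean photon arrival time. Real FLIM analysis (finite window, IRF, background) needs the kind of modelling the paper describes; the lifetime value below is an arbitrary example.

    ```python
    # MLE of a mono-exponential fluorescence lifetime under idealised
    # assumptions: arrival times ~ Exp(1/tau), so tau_hat = sample mean.
    import random

    random.seed(0)
    tau_true = 2.5  # ns, arbitrary example value
    arrivals = [random.expovariate(1.0 / tau_true) for _ in range(100000)]
    tau_hat = sum(arrivals) / len(arrivals)
    print(round(tau_hat, 2))  # close to 2.5 ns
    ```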

  1. Exponential inflation with F(R) gravity

    NASA Astrophysics Data System (ADS)

    Oikonomou, V. K.

    2018-03-01

    In this paper, we shall consider an exponential inflationary model in the context of vacuum F(R) gravity. By using well-known reconstruction techniques, we shall investigate which F(R) gravity can realize the exponential inflation scenario at leading order in terms of the scalar curvature, and we shall calculate the slow-roll indices and the corresponding observational indices in the context of slow-roll inflation. We also provide some general formulas for the slow-roll and the corresponding observational indices in terms of the e-foldings number. In addition, for the calculation of the slow-roll and observational indices, we shall consider quite general formulas that do not require all the slow-roll indices to be much smaller than unity. Finally, we investigate the phenomenological viability of the model by comparing it with the latest Planck and BICEP2/Keck-Array observational data. As we demonstrate, the model is compatible with the current observational data for a wide range of the free parameters of the model.

  2. Geometry of the q-exponential distribution with dependent competing risks and accelerated life testing

    NASA Astrophysics Data System (ADS)

    Zhang, Fode; Shi, Yimin; Wang, Ruibing

    2017-02-01

    In the information geometry suggested by Amari (1985) and Amari et al. (1987), a parametric statistical model can be regarded as a differentiable manifold with the parameter space as a coordinate system. Noting that the q-exponential distribution plays an important role in Tsallis statistics (see Tsallis, 2009), this paper investigates the geometry of the q-exponential distribution with dependent competing risks and accelerated life testing (ALT). A copula function based on the q-exponential function, which can be considered as the generalized Gumbel copula, is discussed to illustrate the structure of the dependent random variable. Employing two iterative algorithms, simulation results are given to compare the performance of estimations and levels of association under different hybrid progressively censoring schemes (HPCSs).
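
    For readers unfamiliar with Tsallis statistics, the q-exponential function underlying this record is easy to state: exp_q(x) = [1 + (1-q)x]_+^(1/(1-q)), recovering the ordinary exponential as q -> 1. The evaluation points below are illustrative.

    ```python
    # The q-exponential of Tsallis statistics, with the usual cut-off
    # convention [.]_+ (the value is 0 where the base is non-positive).
    import math

    def exp_q(x, q):
        if q == 1.0:
            return math.exp(x)  # the q -> 1 limit
        base = 1.0 + (1.0 - q) * x
        if base <= 0.0:
            return 0.0
        return base ** (1.0 / (1.0 - q))

    print(round(exp_q(1.0, 1.0), 6))  # 2.718282, the ordinary exp
    print(round(exp_q(0.5, 2.0), 6))  # 2.0, since exp_2(x) = 1/(1 - x)
    ```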

  3. Hypersurface Homogeneous Cosmological Model in Modified Theory of Gravitation

    NASA Astrophysics Data System (ADS)

    Katore, S. D.; Hatkar, S. P.; Baxi, R. J.

    2016-12-01

    We study a hypersurface homogeneous space-time in the framework of the f (R, T) theory of gravitation in the presence of a perfect fluid. Exact solutions of field equations are obtained for exponential and power law volumetric expansions. We also solve the field equations by assuming the proportionality relation between the shear scalar (σ ) and the expansion scalar (θ ). It is observed that in the exponential model, the universe approaches isotropy at large time (late universe). The investigated model is notably accelerating and expanding. The physical and geometrical properties of the investigated model are also discussed.

  4. Performance and state-space analyses of systems using Petri nets

    NASA Technical Reports Server (NTRS)

    Watson, James Francis, III

    1992-01-01

    The goal of any modeling methodology is to develop a mathematical description of a system that is accurate in its representation and also permits analysis of structural and/or performance properties. Inherently, trade-offs exist between the level of detail in the model and the ease with which analysis can be performed. Petri nets (PN's), a highly graphical modeling methodology for Discrete Event Dynamic Systems, permit representation of shared resources, finite capacities, conflict, synchronization, concurrency, and timing between state changes. By restricting the state transition time delays to the family of exponential density functions, Markov chain analysis of performance problems is possible. One major drawback of PN's is the tendency for the state-space to grow rapidly (exponential complexity) compared to increases in the PN constructs. It is the state space, or the Markov chain obtained from it, that is needed in the solution of many problems. The theory of state-space size estimation for PN's is introduced. The problem of state-space size estimation is defined, its complexities are examined, and estimation algorithms are developed. Both top-down and bottom-up approaches are pursued, and the advantages and disadvantages of each are described. Additionally, the author's research in non-exponential transition modeling for PN's is discussed. An algorithm for approximating non-exponential transitions is developed. Since only basic PN constructs are used in the approximation, theory already developed for PN's remains applicable. Comparison to results from entropy theory shows the transition performance is close to the theoretical optimum. Inclusion of non-exponential transition approximations improves performance results at the expense of increased state-space size. The state-space size estimation theory provides insight and algorithms for evaluating this trade-off.

  5. Model evaluation of plant metal content and biomass yield for the phytoextraction of heavy metals by switchgrass.

    PubMed

    Chen, Bo-Ching; Lai, Hung-Yu; Juang, Kai-Wei

    2012-06-01

    To better understand the ability of switchgrass (Panicum virgatum L.), a perennial grass often relegated to marginal agricultural areas with minimal inputs, to remove cadmium, chromium, and zinc by phytoextraction from contaminated sites, the relationship between plant metal content and biomass yield is expressed in different models to predict the amount of metals switchgrass can extract. These models are reliable in assessing the use of switchgrass for phytoremediation of heavy-metal-contaminated sites. In the present study, linear and exponential decay models are more suitable for presenting the relationship between plant cadmium and dry weight. The maximum extractions of cadmium using switchgrass, as predicted by the linear and exponential decay models, approached 40 and 34 μg pot(-1), respectively. The log normal model was superior in predicting the relationship between plant chromium and dry weight. The predicted maximum extraction of chromium by switchgrass was about 56 μg pot(-1). In addition, the exponential decay and log normal models were better than the linear model in predicting the relationship between plant zinc and dry weight. The maximum extractions of zinc by switchgrass, as predicted by the exponential decay and log normal models, were about 358 and 254 μg pot(-1), respectively. To meet the maximum removal of Cd, Cr, and Zn, one can adopt the optimal timing of harvest as plant Cd, Cr, and Zn approach 450 and 526 mg kg(-1), 266 mg kg(-1), and 3022 and 5000 mg kg(-1), respectively. Due to the well-known agronomic characteristics of cultivation and the high biomass production of switchgrass, it is practicable to use switchgrass for the phytoextraction of heavy metals in situ. Copyright © 2012 Elsevier Inc. All rights reserved.
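
    The exponential decay model in this record has a simple analytic optimum worth making explicit. If dry weight declines exponentially with tissue metal content, DW(C) = a * exp(-b * C), then total extraction E(C) = C * DW(C) is maximised at C* = 1/b. The parameter values below are invented for illustration, not the study's estimates.

    ```python
    # Optimal harvest content under an exponential-decay biomass model.
    # a, b are hypothetical; with C in mg kg^-1 and DW in g pot^-1,
    # E = C * DW comes out in ug pot^-1.
    import math

    a = 10.0   # hypothetical dry weight at zero tissue metal (g pot^-1)
    b = 0.002  # hypothetical decline parameter (kg mg^-1)

    def extraction(C):
        """Metal extracted (ug pot^-1) at tissue content C (mg kg^-1)."""
        return C * a * math.exp(-b * C)

    C_star = 1.0 / b  # dE/dC = 0 at C* = 1/b
    print(C_star)                     # 500.0 mg kg^-1
    print(round(extraction(C_star)))  # 1839 ug pot^-1 at the optimum
    ```

    This is the same "optimal timing of harvest" logic the abstract describes: harvest when tissue content reaches the model-specific optimum.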

  6. Single-arm phase II trial design under parametric cure models.

    PubMed

    Wu, Jianrong

    2015-01-01

    The current practice of designing single-arm phase II survival trials is limited under the exponential model. Trial design under the exponential model may not be appropriate when a portion of patients are cured. There is no literature available for designing single-arm phase II trials under the parametric cure model. In this paper, a test statistic is proposed, and a sample size formula is derived for designing single-arm phase II trials under a class of parametric cure models. Extensive simulations showed that the proposed test and sample size formula perform very well under different scenarios. Copyright © 2015 John Wiley & Sons, Ltd.

  7. Diffusion weighted imaging in patients with rectal cancer: Comparison between Gaussian and non-Gaussian models

    PubMed Central

    Marias, Kostas; Lambregts, Doenja M. J.; Nikiforaki, Katerina; van Heeswijk, Miriam M.; Bakers, Frans C. H.; Beets-Tan, Regina G. H.

    2017-01-01

    Purpose The purpose of this study was to compare the performance of four diffusion models, including mono- and bi-exponential models, both Gaussian and non-Gaussian, in diffusion weighted imaging of rectal cancer. Material and methods Nineteen patients with rectal adenocarcinoma underwent MRI examination of the rectum before chemoradiation therapy including a 7 b-value diffusion sequence (0, 25, 50, 100, 500, 1000 and 2000 s/mm2) at a 1.5T scanner. Four different diffusion models, including mono- and bi-exponential Gaussian (MG and BG) and non-Gaussian (MNG and BNG), were applied on whole tumor volumes of interest. Two different statistical criteria were used to assess their fitting performance: the adjusted R2 and the Root Mean Square Error (RMSE). To decide which model better characterizes rectal cancer, model selection relied on the Akaike Information Criterion (AIC) and the F-ratio. Results All candidate models achieved a good fitting performance, with the two most complex models, the BG and the BNG, exhibiting the best fitting performance. However, both criteria for model selection indicated that the MG model performed better than any other model. In particular, using AIC weights and the F-ratio, the pixel-based analysis demonstrated that tumor areas were better described by the simplest MG model over an average area of 53% and 33%, respectively. Non-Gaussian behavior was observed over an average area of 37% according to the F-ratio, and 7% using AIC weights. However, the distributions of the pixels best fitted by each of the four models suggest that the MG model did not perform better than the other models in all patients or over the overall tumor area. Conclusion No single diffusion model evaluated herein could accurately describe rectal tumours. 
These findings can probably be explained on the basis of increased tumour heterogeneity, where areas with high vascularity could be fitted better with bi-exponential models, and areas with necrosis would mostly follow mono-exponential behavior. PMID:28863161
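
    The AIC-weight comparison used for model selection here is easy to sketch. The residual sums of squares below are hypothetical, chosen only to show how a simpler model can win despite fitting slightly worse.

    ```python
    # Akaike weights from least-squares fits of two nested diffusion
    # models: mono-exponential Gaussian (2 parameters) vs bi-exponential
    # (4 parameters). RSS values are invented for illustration.
    import math

    def aic(n, rss, k):
        """AIC for least-squares fits: n * ln(RSS / n) + 2k."""
        return n * math.log(rss / n) + 2 * k

    n = 7                                # one point per b-value in the protocol
    aic_mono = aic(n, rss=0.010, k=2)    # hypothetical residuals
    aic_bi = aic(n, rss=0.008, k=4)
    deltas = [v - min(aic_mono, aic_bi) for v in (aic_mono, aic_bi)]
    raw = [math.exp(-d / 2) for d in deltas]
    weights = [r / sum(raw) for r in raw]
    print([round(w, 2) for w in weights])  # [0.77, 0.23]: mono favoured
    ```

    The extra parameters of the bi-exponential model are penalised, so the mono-exponential model gets the larger weight even with a higher RSS, mirroring the pixel-level result reported above.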

  8. Diffusion weighted imaging in patients with rectal cancer: Comparison between Gaussian and non-Gaussian models.

    PubMed

    Manikis, Georgios C; Marias, Kostas; Lambregts, Doenja M J; Nikiforaki, Katerina; van Heeswijk, Miriam M; Bakers, Frans C H; Beets-Tan, Regina G H; Papanikolaou, Nikolaos

    2017-01-01

    The purpose of this study was to compare the performance of four diffusion models, including mono- and bi-exponential models, both Gaussian and non-Gaussian, in diffusion weighted imaging of rectal cancer. Nineteen patients with rectal adenocarcinoma underwent MRI examination of the rectum before chemoradiation therapy including a 7 b-value diffusion sequence (0, 25, 50, 100, 500, 1000 and 2000 s/mm2) at a 1.5T scanner. Four different diffusion models, including mono- and bi-exponential Gaussian (MG and BG) and non-Gaussian (MNG and BNG), were applied on whole tumor volumes of interest. Two different statistical criteria were used to assess their fitting performance: the adjusted R2 and the Root Mean Square Error (RMSE). To decide which model better characterizes rectal cancer, model selection relied on the Akaike Information Criterion (AIC) and the F-ratio. All candidate models achieved a good fitting performance, with the two most complex models, the BG and the BNG, exhibiting the best fitting performance. However, both criteria for model selection indicated that the MG model performed better than any other model. In particular, using AIC weights and the F-ratio, the pixel-based analysis demonstrated that tumor areas were better described by the simplest MG model over an average area of 53% and 33%, respectively. Non-Gaussian behavior was observed over an average area of 37% according to the F-ratio, and 7% using AIC weights. However, the distributions of the pixels best fitted by each of the four models suggest that the MG model did not perform better than the other models in all patients or over the overall tumor area. No single diffusion model evaluated herein could accurately describe rectal tumours. These findings can probably be explained on the basis of increased tumour heterogeneity, where areas with high vascularity could be fitted better with bi-exponential models, and areas with necrosis would mostly follow mono-exponential behavior.

  9. Hydrogeomorphic and ecological control on carbonate dissolution in a patterned landscape in South Florida

    NASA Astrophysics Data System (ADS)

    Dong, X.; Heffernan, J. B.; Murray, A. B.; Cohen, M. J.; Martin, J. B.

    2016-12-01

    The evolution of the critical zone both shapes and reflects hydrologic, geochemical, and ecological processes. These interactions are poorly understood in karst landscapes with highly soluble bedrock. In this study, we used the regularly dispersed wetland basins of Big Cypress National Preserve in Florida as a focal case to model the hydrologic, geochemical, and biological mechanisms that affect soil development in karst landscapes. We addressed two questions: (1) What is the minimum timescale for wetland basin development, and (2) do changes in soil depth feed back on dissolution processes and if so by what mechanism? We developed an atmosphere-water-soil model with coupled water-solute transport, incorporating major ion equilibria and kinetic non-equilibrium chemistry, and biogenic acid production via roots distributed through the soil horizon. Under current Florida climate, weathering to a depth of 2 m (a typical depth of wetland basins) would take 4000-6000 yr, suggesting that landscape pattern could have origins as recent as the most recent stabilization of sea level. Our model further illustrates that interactions between ecological and hydrologic processes influence the rate and depth-dependence of weathering. Absent inundation, dissolution rate decreased exponentially with distance from the bedrock to the groundwater table. Inundation generally increased bedrock dissolution, but surface water chemistry and residence time produced complex and non-linear effects on dissolution rate. Biogenic acidity accelerated the dissolution rate by 50 and 1,000 times in inundated and exposed soils, respectively. Phase portrait analysis indicated that exponential decreases in bedrock dissolution rate with soil depth could produce stable basin depths. Negative feedback between hydro-period and total basin volume could stabilize the basin radius, but the lesser strength of this mechanism may explain the coalescence of wetland basins observed in some parts of the Big Cypress Landscape.

  10. Observational constraints on varying neutrino-mass cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geng, Chao-Qiang; Lee, Chung-Chi; Myrzakulov, R.

    We consider generic models of quintessence and investigate the influence of massive neutrino matter with field-dependent masses on the matter power spectrum. In the case of minimally coupled neutrino matter, we examine the effect in tracker models with inverse power-law and double exponential potentials. We present detailed investigations for the scaling field with a steep exponential potential, non-minimally coupled to massive neutrino matter, and we derive constraints on field-dependent neutrino masses from the observational data.

  11. Modeling the lag period and exponential growth of Listeria monocytogenes under conditions of fluctuating temperature and water activity values.

    PubMed

    Muñoz-Cuevas, Marina; Fernández, Pablo S; George, Susan; Pin, Carmen

    2010-05-01

    The dynamic model for the growth of a bacterial population described by Baranyi and Roberts (J. Baranyi and T. A. Roberts, Int. J. Food Microbiol. 23:277-294, 1994) was applied to model the lag period and exponential growth of Listeria monocytogenes under conditions of fluctuating temperature and water activity (a(w)) values. To model the duration of the lag phase, the dependence of the parameter h(0), which quantifies the amount of work done during the lag period, on the previous and current environmental conditions was determined experimentally. This parameter depended not only on the magnitude of the change between the previous and current environmental conditions but also on the current growth conditions. In an exponentially growing population, any change in the environment requiring a certain amount of work to adapt to the new conditions initiated a lag period that lasted until that work was finished. Observations for several scenarios in which exponential growth was halted by a sudden change in the temperature and/or a(w) were in good agreement with predictions. When a population already in a lag period was subjected to environmental fluctuations, the system was reset with a new lag phase. The work to be done during the new lag phase was estimated to be the workload due to the environmental change plus the unfinished workload from the uncompleted previous lag phase.
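
    The key relation behind the "work to be done" parameter h(0) in the Baranyi-Roberts framework can be made concrete: lag = h0 / mu_max, with h0 = ln(1 + 1/q0) for initial physiological state q0. The numbers below are illustrative, not the paper's estimates.

    ```python
    # Lag duration implied by the Baranyi-Roberts "work" parameter h0.
    # q0 (initial physiological state) and mu_max are invented values.
    import math

    def lag_time(h0, mu_max):
        """Lag (h) for adaptation work h0 and specific growth rate mu_max (1/h)."""
        return h0 / mu_max

    h0 = math.log(1 + 1 / 0.05)                # q0 = 0.05
    print(round(h0, 2))                        # 3.04
    print(round(lag_time(h0, mu_max=0.8), 2))  # 3.81 h
    ```

    This makes the abstract's point mechanical: an environmental shift that adds unfinished work raises h0, and the same h0 produces a longer lag when the new conditions support only a smaller mu_max.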

  12. A generalized exponential link function to map a conflict indicator into severity index within safety continuum framework.

    PubMed

    Zheng, Lai; Ismail, Karim

    2017-05-01

    Traffic conflict indicators measure the temporal and spatial proximity of conflict-involved road users. These indicators can reflect the severity of traffic conflicts to a reliable extent. Instead of using the indicator value directly as a severity index, many link functions have been developed to map the conflict indicator to a severity index. However, little information is available about the choice of a particular link function. To guard against link misspecification or subjectivity, a generalized exponential link function was developed. The severity index generated by this link was introduced to a parametric safety continuum model which objectively models the centre and tail regions. An empirical method, together with a full Bayesian estimation method, was adopted to estimate model parameters. The safety implication of the return level was calculated based on the model parameters. The proposed approach was applied to the conflict and crash data collected from 21 segments of three freeways located in Guangdong province, China. Pearson's correlation test between return levels and observed crashes showed that a θ value of 1.2 was the best choice of the generalized parameter for the current data set. This provides statistical support for using the generalized exponential link function. With the determined generalized exponential link function, the visualization of the parametric safety continuum was found to be a gyroscope-shaped hierarchy. Copyright © 2017 Elsevier Ltd. All rights reserved.
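
    The abstract does not reproduce the paper's functional form, so the sketch below is only a plausible shape for a generalized exponential link: SI(x) = exp(-(x/beta)^theta), mapping a proximity indicator x (e.g. time-to-collision in seconds) to a severity index in (0, 1]. theta = 1.2 mirrors the value selected in the study; beta is an invented scale parameter.

    ```python
    # Hypothetical generalized exponential link (form and beta assumed,
    # not taken from the paper): severity approaches 1 as the conflict
    # indicator approaches 0, and decays toward 0 for safe encounters.
    import math

    def severity_index(x, theta=1.2, beta=1.5):
        return math.exp(-((x / beta) ** theta))

    for ttc in (0.5, 1.5, 3.0):
        print(round(severity_index(ttc), 3))
    # smaller TTC (closer to collision) -> severity index nearer 1
    ```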

  13. Scalar field and time varying cosmological constant in f(R,T) gravity for Bianchi type-I universe

    NASA Astrophysics Data System (ADS)

    Singh, G. P.; Bishi, Binaya K.; Sahoo, P. K.

    2016-04-01

    In this article, we have analysed the behaviour of scalar field and cosmological constant in $f(R,T)$ theory of gravity. Here, we have considered the simplest form of $f(R,T)$ i.e. $f(R,T)=R+2f(T)$, where $R$ is the Ricci scalar and $T$ is the trace of the energy momentum tensor and explored the spatially homogeneous and anisotropic Locally Rotationally Symmetric (LRS) Bianchi type-I cosmological model. It is assumed that the Universe is filled with two non-interacting matter sources namely scalar field (normal or phantom) with scalar potential and matter contribution due to $f(R,T)$ action. We have discussed two cosmological models according to power law and exponential law of the volume expansion along with constant and exponential scalar potential as sub models. Power law models are compatible with normal (quintessence) and phantom scalar field whereas exponential volume expansion models are compatible with only normal (quintessence) scalar field. The values of cosmological constant in our models are in agreement with the observational results. Finally, we have discussed some physical and kinematical properties of both the models.

  14. Remarks on the general solution for the flat Friedmann universe with exponential scalar-field potential and dust

    NASA Astrophysics Data System (ADS)

    Andrianov, A. A.; Cannata, F.; Kamenshchik, A. Yu.

    2012-11-01

    We show that the simple extension of the method of obtaining the general exact solution for the cosmological model with the exponential scalar-field potential to the case when the dust is present fails, and we discuss the reasons of this puzzling phenomenon.

  15. Looking for Connections between Linear and Exponential Functions

    ERIC Educational Resources Information Center

    Lo, Jane-Jane; Kratky, James L.

    2012-01-01

    Students frequently have difficulty determining whether a given real-life situation is best modeled as a linear relationship or as an exponential relationship. One root of such difficulty is the lack of deep understanding of the very concept of "rate of change." The authors will provide a lesson that allows students to reveal their misconceptions…

  16. A Parametric Model for Barred Equilibrium Beach Profiles

    DTIC Science & Technology

    2014-05-10

    to shallow water. Bodge (1992) and Komar and McDougal (1994) suggested an exponential form as a preferred solution that exhibited finite slope at the...applications. J. Coast. Res. 7, 53–84. Komar, P.D., McDougal, W.G., 1994. The analysis of beach profiles and nearshore processes using the exponential beach

  17. Linear prediction and single-channel recording.

    PubMed

    Carter, A A; Oswald, R E

    1995-08-01

    The measurement of individual single-channel events arising from the gating of ion channels provides a detailed data set from which the kinetic mechanism of a channel can be deduced. In many cases, the pattern of dwells in the open and closed states is very complex, and the kinetic mechanism and parameters are not easily determined. Assuming a Markov model for channel kinetics, the probability density function for open and closed time dwells should consist of a sum of decaying exponentials. One method of approaching the kinetic analysis of such a system is to determine the number of exponentials and the corresponding parameters which comprise the open and closed dwell time distributions. These can then be compared to the relaxations predicted from the kinetic model to determine, where possible, the kinetic constants. We report here the use of a linear technique, linear prediction/singular value decomposition, to determine the number of exponentials and the exponential parameters. Using simulated distributions and comparing with standard maximum-likelihood analysis, the singular value decomposition techniques provide advantages in some situations and are a useful adjunct to other single-channel analysis techniques.
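
    The core linear-algebra fact behind the linear prediction/SVD approach can be demonstrated directly: uniform samples of a sum of p decaying exponentials form a Hankel matrix of rank p. To stay dependency-free, the sketch below counts exponentials with a tolerance-based Gaussian-elimination rank on noise-free synthetic data; real dwell-time distributions would need the SVD-based treatment the abstract describes.

    ```python
    # Counting exponential components via the numerical rank of a Hankel
    # matrix built from uniform samples. Synthetic, noise-free data only.
    import math

    def hankel_rank(samples, rows, tol=1e-8):
        cols = len(samples) - rows + 1
        m = [[samples[i + j] for j in range(cols)] for i in range(rows)]
        rank = 0
        for col in range(cols):
            # partial pivoting on the current column
            pivot = max(range(rank, rows), key=lambda r: abs(m[r][col]))
            if abs(m[pivot][col]) < tol:
                continue
            m[rank], m[pivot] = m[pivot], m[rank]
            for r in range(rank + 1, rows):
                f = m[r][col] / m[rank][col]
                for c in range(col, cols):
                    m[r][c] -= f * m[rank][c]
            rank += 1
            if rank == rows:
                break
        return rank

    # Two-component "dwell-time density" sampled at t = 0, 0.1, ..., 1.9
    samples = [0.7 * math.exp(-3 * t) + 0.3 * math.exp(-0.5 * t)
               for t in [0.1 * k for k in range(20)]]
    print(hankel_rank(samples, rows=5))  # 2 exponential components
    ```

    With noisy data the hard cutoff becomes a judgment about which singular values are significant, which is exactly where the SVD formulation of linear prediction earns its keep.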

  18. Local perturbations perturb—exponentially-locally

    NASA Astrophysics Data System (ADS)

    De Roeck, W.; Schütz, M.

    2015-06-01

    We elaborate on the principle that for gapped quantum spin systems with local interaction, "local perturbations [in the Hamiltonian] perturb locally [the groundstate]." This principle was established by Bachmann et al. [Commun. Math. Phys. 309, 835-871 (2012)], relying on the "spectral flow technique" or "quasi-adiabatic continuation" [M. B. Hastings, Phys. Rev. B 69, 104431 (2004)] to obtain locality estimates with sub-exponential decay in the distance to the spatial support of the perturbation. We use ideas of Hamza et al. [J. Math. Phys. 50, 095213 (2009)] to obtain similarly a transformation between gapped eigenvectors and their perturbations that is local with exponential decay. This allows us to improve locality bounds on the effect of perturbations on the low lying states in certain gapped models with a unique "bulk ground state" or "topological quantum order." We also give some estimate on the exponential decay of correlations in models with impurities where some relevant correlations decay faster than one would naively infer from the global gap of the system, as one also expects in disordered systems with a localized groundstate.

  19. A mutant (‘lab strain’) of the hyperthermophilic archaeon Pyrococcus furiosus, lacking flagella, has unusual growth physiology

    DOE PAGES

    Lewis, Derrick L.; Notey, Jaspreet S.; Chandrayan, Sanjeev K.; ...

    2014-12-04

    In this paper, a mutant (‘lab strain’) of the hyperthermophilic archaeon Pyrococcus furiosus DSM3638 exhibited an extended exponential phase and atypical cell aggregation behavior. Genomic DNA from the mutant culture was sequenced and compared to wild-type (WT) DSM3638, revealing 145 genes with one or more insertions, deletions, or substitutions (12 silent, 33 amino acid substitutions, and 100 frame shifts). Approximately half of the mutated genes were transposases or hypothetical proteins. The WT transcriptome revealed numerous changes in amino acid and pyrimidine biosynthesis pathways coincidental with growth phase transitions, unlike the mutant whose transcriptome reflected the observed prolonged exponential phase. Targeted gene deletions, based on frame-shifted ORFs in the mutant genome, in a genetically tractable strain of P. furiosus (COM1) could not generate the extended exponential phase behavior observed for the mutant. For example, a putative radical SAM family protein (PF2064) was the most highly up-regulated ORF (>25-fold) in the WT between exponential and stationary phase, although this ORF was unresponsive in the mutant; deletion of this gene in P. furiosus COM1 resulted in no apparent phenotype. On the other hand, frame-shifting mutations in the mutant genome negatively impacted transcription of a flagellar biosynthesis operon (PF0329-PF0338). Consequently, cells in the mutant culture lacked flagella and, unlike the WT, showed minimal evidence of exopolysaccharide-based cell aggregation in post-exponential phase. Finally, electron microscopy of PF0331-PF0337 deletions in P. furiosus COM1 showed that absence of flagella impacted normal cell aggregation behavior and, furthermore, indicated that flagella play a key role, beyond motility, in the growth physiology of P. furiosus.

  20. A mutant (‘lab strain’) of the hyperthermophilic archaeon Pyrococcus furiosus, lacking flagella, has unusual growth physiology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, Derrick L.; Notey, Jaspreet S.; Chandrayan, Sanjeev K.

    In this paper, a mutant (‘lab strain’) of the hyperthermophilic archaeon Pyrococcus furiosus DSM3638 exhibited an extended exponential phase and atypical cell aggregation behavior. Genomic DNA from the mutant culture was sequenced and compared to wild-type (WT) DSM3638, revealing 145 genes with one or more insertions, deletions, or substitutions (12 silent, 33 amino acid substitutions, and 100 frame shifts). Approximately half of the mutated genes were transposases or hypothetical proteins. The WT transcriptome revealed numerous changes in amino acid and pyrimidine biosynthesis pathways coincident with growth phase transitions, unlike the mutant, whose transcriptome reflected the observed prolonged exponential phase. Targeted gene deletions, based on frame-shifted ORFs in the mutant genome, in a genetically tractable strain of P. furiosus (COM1) could not reproduce the extended exponential phase behavior observed for the mutant. For example, a putative radical SAM family protein (PF2064) was the most highly up-regulated ORF (>25-fold) in the WT between exponential and stationary phase, although this ORF was unresponsive in the mutant; deletion of this gene in P. furiosus COM1 resulted in no apparent phenotype. On the other hand, frame-shifting mutations in the mutant genome negatively impacted transcription of a flagellar biosynthesis operon (PF0329-PF0338). Consequently, cells in the mutant culture lacked flagella and, unlike the WT, showed minimal evidence of exopolysaccharide-based cell aggregation in post-exponential phase. Finally, electron microscopy of PF0331-PF0337 deletions in P. furiosus COM1 showed that absence of flagella impacted normal cell aggregation behavior and, furthermore, indicated that flagella play a key role, beyond motility, in the growth physiology of P. furiosus.

  1. Quantifying Uncertainties in N2O Emission Due to N Fertilizer Application in Cultivated Areas

    PubMed Central

    Philibert, Aurore; Loyce, Chantal; Makowski, David

    2012-01-01

    Nitrous oxide (N2O) is a greenhouse gas with a global warming potential approximately 298 times greater than that of CO2. In 2006, the Intergovernmental Panel on Climate Change (IPCC) estimated N2O emission due to synthetic and organic nitrogen (N) fertilization at 1% of applied N. We investigated the uncertainty on this estimated value, by fitting 13 different models to a published dataset including 985 N2O measurements. These models were characterized by (i) the presence or absence of the explanatory variable “applied N”, (ii) the function relating N2O emission to applied N (exponential or linear function), (iii) fixed or random background (i.e. in the absence of N application) N2O emission and (iv) fixed or random applied N effect. We calculated ranges of uncertainty on N2O emissions from a subset of these models, and compared them with the uncertainty ranges currently used in the IPCC-Tier 1 method. The exponential models outperformed the linear models, and models including one or two random effects outperformed those including fixed effects only. The use of an exponential function rather than a linear function has an important practical consequence: the emission factor is not constant and increases as a function of applied N. Emission factors estimated using the exponential function were lower than 1% when the amount of N applied was below 160 kg N ha−1. Our uncertainty analysis shows that the uncertainty range currently used by the IPCC-Tier 1 method could be reduced. PMID:23226430
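
    Under the exponential form, emission E(N) = exp(a + b·N) implies an emission factor EF(N) = (E(N) − E(0))/N that increases with the amount of applied N, unlike the constant 1% of the IPCC Tier 1 method. A minimal sketch of this consequence; the coefficients a and b below are purely illustrative placeholders, not the values fitted in the paper:

```python
import math

def n2o_emission(n_applied, a=-1.0, b=0.0061):
    """Exponential model: N2O emission as exp(a + b*N).
    a and b are illustrative placeholders, not fitted values."""
    return math.exp(a + b * n_applied)

def emission_factor(n_applied, a=-1.0, b=0.0061):
    """Fraction of applied N emitted as N2O, net of the background
    emission E(0) observed in the absence of N application."""
    background = n2o_emission(0.0, a, b)
    return (n2o_emission(n_applied, a, b) - background) / n_applied
```

    With any b > 0 the factor grows with the dose, which is the practical point the abstract makes: a single fixed emission factor over- or under-estimates depending on the N rate.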

  2. The size distribution of Pacific Seamounts

    NASA Astrophysics Data System (ADS)

    Smith, Deborah K.; Jordan, Thomas H.

    1987-11-01

    An analysis of wide-beam, Sea Beam and map-count data in the eastern and southern Pacific confirms the hypothesis that the average number of "ordinary" seamounts with summit heights h ≥ H can be approximated by the exponential frequency-size distribution v(H) = v₀ e^(−βH). The exponential model, characterized by the single scale parameter β⁻¹, is found to be superior to a power-law (self-similar) model. The exponential model provides a good first-order description of the summit-height distribution over a very broad spectrum of seamount sizes, from small cones (h < 300 m) to tall composite volcanoes (h > 3500 m). The distribution parameters obtained from 157,000 km of wide-beam profiles in the eastern and southern Pacific Ocean are v₀ = (5.4 ± 0.65) × 10⁻⁹ m⁻² and β = (3.5 ± 0.21) × 10⁻³ m⁻¹, yielding an average of 5400 ± 650 seamounts per million square kilometers, of which 170 ± 17 are greater than one kilometer in height. The exponential distribution provides a reference for investigating the populations of not-so-ordinary seamounts, such as those on hotspot swells and near fracture zones, and seamounts in other ocean basins. If we assume that volcano height is determined by a hydraulic head proportional to the source depth of the magma column, then our observations imply an approximately exponential distribution of source depths. For reasonable values of magma and crustal densities, a volcano with the characteristic height β⁻¹ = 285 m has an apparent source depth on the order of the crustal thickness.
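
    The quoted parameters can be plugged directly into the exponential frequency-size law to recover the stated abundances. A minimal sketch using the paper's fitted values (plain-Python arithmetic, no fitting):

```python
import math

NU0 = 5.4e-9   # m^-2, areal density scale v0 from the paper
BETA = 3.5e-3  # m^-1, inverse of the characteristic height beta^-1 ~ 286 m

def seamounts_per_million_km2(h_min):
    """Expected number of seamounts with summit height >= h_min (in meters)
    per 10^6 km^2 (= 10^12 m^2), under v(H) = v0 * exp(-beta * H)."""
    return NU0 * math.exp(-BETA * h_min) * 1e12
```

    Evaluating at H = 0 reproduces the quoted 5400 seamounts per million square kilometers, and at H = 1000 m it gives roughly 160, consistent with the stated 170 ± 17 within the parameter uncertainties.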

  3. Parameter estimation for the exponential-normal convolution model for background correction of affymetrix GeneChip data.

    PubMed

    McGee, Monnie; Chen, Zhongxue

    2006-01-01

    There are many methods of correcting microarray data for non-biological sources of error. Authors routinely supply software or code so that interested analysts can implement their methods. Even with a thorough reading of associated references, it is not always clear how requisite parts of the method are calculated in the software packages. However, it is important to have an understanding of such details, as this understanding is necessary for proper use of the output, or for implementing extensions to the model. In this paper, the calculation of parameter estimates used in Robust Multichip Average (RMA), a popular preprocessing algorithm for Affymetrix GeneChip brand microarrays, is elucidated. The background correction method for RMA assumes that the perfect match (PM) intensities observed result from a convolution of the true signal, assumed to be exponentially distributed, and a background noise component, assumed to have a normal distribution. A conditional expectation is calculated to estimate signal. Estimates of the mean and variance of the normal distribution and the rate parameter of the exponential distribution are needed to calculate this expectation. Simulation studies show that the current estimates are flawed; therefore, new ones are suggested. We examine the performance of preprocessing under the exponential-normal convolution model using several different methods to estimate the parameters.
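
    The conditional-expectation background adjustment described above has a closed form under the exponential-normal convolution. The sketch below implements the textbook expression for E[S | O = o] with O = S + B, S ~ Exp(α), B ~ N(μ, σ²); it is a generic illustration of the model, not the specific estimator implementation critiqued in the paper:

```python
import math

def _phi(x):
    """Standard normal pdf."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def _Phi(x):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def rma_background_adjust(o, mu, sigma, alpha):
    """E[signal | observed intensity o] for O = S + B,
    S ~ Exponential(rate alpha), B ~ Normal(mu, sigma^2)."""
    a = o - mu - sigma * sigma * alpha
    num = _phi(a / sigma) - _phi((o - a) / sigma)
    den = _Phi(a / sigma) + _Phi((o - a) / sigma) - 1.0
    return a + sigma * num / den
```

    For observed intensities well above the background mean, the adjusted signal approaches the dominant term a = o − μ − σ²α, so accurate estimates of μ, σ and α directly control the quality of the correction.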

  4. Exponential gain of randomness certified by quantum contextuality

    NASA Astrophysics Data System (ADS)

    Um, Mark; Zhang, Junhua; Wang, Ye; Wang, Pengfei; Kim, Kihwan

    2017-04-01

    We demonstrate the protocol of exponential gain of randomness certified by quantum contextuality in a trapped ion system. The genuine randomness can be produced by quantum principles and certified by quantum inequalities. Recently, randomness expansion protocols based on Bell-type inequalities and the Kochen-Specker (KS) theorem have been demonstrated. These schemes have been theoretically refined to exponentially expand the randomness and to amplify the randomness from a weak initial random seed. Here, we report experimental evidence of such exponential expansion of randomness. In the experiment, we use three states of a 138Ba+ ion, comprising a ground state and two quadrupole states. In the 138Ba+ ion system there is no detection loophole, and we apply a method to rule out certain hidden variable models that obey a kind of extended noncontextuality.

  5. What Is a Mild Winter? Regional Differences in Within-Species Responses to Climate Change.

    PubMed

    Vetter, Sebastian G; Ruf, Thomas; Bieber, Claudia; Arnold, Walter

    2015-01-01

    Climate change is known to affect ecosystems globally, but our knowledge of its impact on large and widespread mammals, and possibly population-specific responses is still sparse. We investigated large-scale and long-term effects of climate change on local population dynamics using the wild boar (Sus scrofa L.) as a model species. Our results show that population increases across Europe are strongly associated with increasingly mild winters, yet with region-specific threshold temperatures for the onset of exponential growth. Additionally, we found that abundant availability of critical food resources, e.g. beech nuts, can outweigh the negative effects of cold winters on population growth of wild boar. Availability of beech nuts is highly variable and highest in years of beech mast which increased in frequency since 1980, according to our data. We conclude that climate change drives population growth of wild boar directly by relaxing the negative effect of cold winters on survival and reproduction, and indirectly by increasing food availability. However, region-specific responses need to be considered in order to fully understand a species' demographic response to climate change.

  6. Fourier transform inequalities for phylogenetic trees.

    PubMed

    Matsen, Frederick A

    2009-01-01

    Phylogenetic invariants are not the only constraints on site-pattern frequency vectors for phylogenetic trees. A mutation matrix, by its definition, is the exponential of a matrix with non-negative off-diagonal entries; this positivity requirement implies non-trivial constraints on the site-pattern frequency vectors. We call these additional constraints "edge-parameter inequalities". In this paper, we first motivate the edge-parameter inequalities by considering a pathological site-pattern frequency vector corresponding to a quartet tree with a negative internal edge. This site-pattern frequency vector nevertheless satisfies all of the constraints described up to now in the literature. We next describe two complete sets of edge-parameter inequalities for the group-based models; these constraints are square-free monomial inequalities in the Fourier transformed coordinates. These inequalities, along with the phylogenetic invariants, form a complete description of the set of site-pattern frequency vectors corresponding to bona fide trees. Said in mathematical language, this paper explicitly presents two finite lists of inequalities in Fourier coordinates of the form "monomial < or = 1", each list characterizing the phylogenetically relevant semialgebraic subsets of the phylogenetic varieties.
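
    The positivity requirement is easiest to see in the simplest case, a two-state symmetric model (an illustrative example, not taken from the paper): P(t) = exp(Qt) for the rate matrix Q = [[−1, 1], [1, −1]] has off-diagonal entries (1 − e^(−2t))/2, which turn negative exactly when the edge length t is negative, a pathology that the phylogenetic invariants alone do not detect:

```python
import math

def two_state_transition(t):
    """P(t) = exp(Q t) for the 2-state symmetric rate matrix
    Q = [[-1, 1], [1, -1]]; off-diagonal entry is (1 - exp(-2t))/2.
    For t < 0 the off-diagonal 'probability' becomes negative,
    so the matrix is no longer a valid mutation matrix."""
    off = (1.0 - math.exp(-2.0 * t)) / 2.0
    return [[1.0 - off, off], [off, 1.0 - off]]
```

    Rows always sum to one, so linear invariants are blind to the sign of the entries; only an inequality constraint of the kind described above excludes the negative-edge case.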

  7. Global exponential stability of positive periodic solution of the n-species impulsive Gilpin-Ayala competition model with discrete and distributed time delays.

    PubMed

    Zhao, Kaihong

    2018-12-01

    In this paper, we study the n-species impulsive Gilpin-Ayala competition model with discrete and distributed time delays. The existence of positive periodic solution is proved by employing the fixed point theorem on cones. By constructing appropriate Lyapunov functional, we also obtain the global exponential stability of the positive periodic solution of this system. As an application, an interesting example is provided to illustrate the validity of our main results.

  8. A mechanical model of bacteriophage DNA ejection

    NASA Astrophysics Data System (ADS)

    Arun, Rahul; Ghosal, Sandip

    2017-08-01

    Single molecule experiments on bacteriophages show an exponential scaling for the dependence of mobility on the length of DNA within the capsid. It has been suggested that this could be due to the "capstan mechanism" - the exponential amplification of friction forces that results when a rope is wound around a cylinder, as in a ship's capstan. Here we describe a desktop experiment that illustrates the effect. Though our model phage is a million times larger, it exhibits the same scaling observed in single molecule experiments.
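
    The capstan (Euler-Eytelwein) relation behind the suggested mechanism states that tension decays exponentially with wrap angle, T_hold = T_load · e^(−μφ). A one-line sketch (friction coefficient and wrap angles illustrative):

```python
import math

def capstan_tension_ratio(mu, wrap_angle_rad):
    """Euler's capstan equation: ratio of holding to loading tension,
    T_hold / T_load = exp(-mu * phi), for friction coefficient mu
    and total wrap angle phi in radians."""
    return math.exp(-mu * wrap_angle_rad)
```

    Each additional full turn multiplies the tension ratio by the same constant factor, which is exactly the exponential amplification with wound length that the abstract invokes for DNA coiled inside the capsid.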

  9. Determination of Noncovalent Binding Using a Continuous Stirred Tank Reactor as a Flow Injection Device Coupled to Electrospray Ionization Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Santos, Inês C.; Waybright, Veronica B.; Fan, Hui; Ramirez, Sabra; Mesquita, Raquel B. R.; Rangel, António O. S. S.; Fryčák, Petr; Schug, Kevin A.

    2015-07-01

    Described is a new method based on the concept of controlled band dispersion, achieved by hyphenating flow injection analysis with ESI-MS for noncovalent binding determinations. A continuous stirred tank reactor (CSTR) was used as a FIA device for exponential dilution of an equimolar host-guest solution over time. The data obtained was treated for the noncovalent binding determination using an equimolar binding model. Dissociation constants between vancomycin and Ac-Lys(Ac)-Ala-Ala-OH peptide stereoisomers were determined using both the positive and negative ionization modes. The results obtained for Ac- L-Lys(Ac)- D-Ala- D-Ala (a model for a Gram-positive bacterial cell wall) binding were in reasonable agreement with literature values made by other mass spectrometry binding determination techniques. Also, the developed method allowed the determination of dissociation constants for vancomycin with Ac- L-Lys(Ac)- D-Ala- L-Ala, Ac- L-Lys(Ac)- L-Ala- D-Ala, and Ac- L-Lys(Ac)- L-Ala- L-Ala. Although some differences in measured binding affinities were noted using different ionization modes, the results of each determination were generally consistent. Differences are likely attributable to the influence of a pseudo-physiological ammonium acetate buffer solution on the formation of positively- and negatively-charged ionic complexes.

  10. A new approach to the extraction of single exponential diode model parameters

    NASA Astrophysics Data System (ADS)

    Ortiz-Conde, Adelmo; García-Sánchez, Francisco J.

    2018-06-01

    A new integration method is presented for the extraction of the parameters of a single exponential diode model with series resistance from the measured forward I-V characteristics. The extraction is performed using auxiliary functions based on the integration of the data, which make it possible to isolate the effects of each of the model parameters. A differentiation method is also presented for data with a low level of experimental noise. Measured and simulated data are used to verify the applicability of both proposed methods. Physical insight into the validity of the model is also obtained by using the proposed graphical determinations of the parameters.
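
    The paper's integration-based auxiliary functions are not reproduced here, but the companion differentiation method it mentions follows directly from the model: for I = Is·(exp((V − I·Rs)/(n·Vt)) − 1), the derivative dV/d(ln I) = n·Vt + Rs·I is linear in I, so a straight-line fit of the numerical derivative against I yields Rs (slope) and n (intercept/Vt). A self-contained sketch on synthetic data, with all parameter values illustrative:

```python
import math

def synth_iv(i_values, Is=1e-9, n=1.8, Vt=0.02585, Rs=10.0):
    """Forward I-V of a single-exponential diode with series resistance:
    V = n*Vt*ln(1 + I/Is) + I*Rs (exact inversion of the diode equation)."""
    return [n * Vt * math.log(1.0 + i / Is) + i * Rs for i in i_values]

def extract_n_rs(i_values, v_values, Vt=0.02585):
    """Differentiation method: dV/d(ln I) = n*Vt + Rs*I, so a least-squares
    line through the numerical derivative vs. I gives Rs and n."""
    x, y = [], []
    for k in range(len(i_values) - 1):
        dlni = math.log(i_values[k + 1]) - math.log(i_values[k])
        x.append(math.sqrt(i_values[k] * i_values[k + 1]))  # geometric midpoint
        y.append((v_values[k + 1] - v_values[k]) / dlni)
    m = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    slope = (m * sxy - sx * sy) / (m * sxx - sx * sx)
    intercept = (sy - slope * sx) / m
    return intercept / Vt, slope  # (ideality factor n, series resistance Rs)

# 200 log-spaced currents from 1 uA to 10 mA
currents = [10 ** (-6 + 4 * k / 199) for k in range(200)]
n_fit, rs_fit = extract_n_rs(currents, synth_iv(currents))
```

    On this noise-free synthetic curve the fit recovers Rs and n to well under 1%; with real measurements the paper's integration approach is preferable, since numerical differentiation amplifies experimental noise.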

  11. In Situ Dynamics of F-Specific RNA Bacteriophages in a Small River: New Way to Assess Viral Propagation in Water Quality Studies.

    PubMed

    Fauvel, Blandine; Gantzer, Christophe; Cauchie, Henry-Michel; Ogorzaly, Leslie

    2017-03-01

    The occurrence and propagation of enteric viruses in rivers constitute a major public health issue. However, little information is available on the in situ transport and spread of viruses in surface water. In this study, an original in situ experimental approach using the residence time of the river water mass was developed to accurately follow the propagation of F-specific RNA bacteriophages (FRNAPHs) along a 3-km studied river. Surface water and sediment of 9 sampling campaigns were collected and analyzed using both infectivity and RT-qPCR assays. In parallel, some physico-chemical variables such as flow rate, water temperature, conductivity and total suspended solids were measured to investigate the impact of environmental conditions on phage propagation. For campaigns with low flow rate and high temperature, the results highlight a decrease of infectious phage concentration along the river, which was successfully modelled according to a first-order negative exponential decay. The monitoring of infectious FRNAPHs belonging mainly to the genogroup II was confirmed with direct phage genotyping and total phage particle quantification. Reported k decay coefficients according to exponential models allowed for the determination of the actual in situ distance and time necessary for removing 90 % of infectious phage particles. This present work provides a new way to assess the true in situ viral propagation along a small river. These findings can be highly useful in water quality and risk assessment studies to determine the viral contamination spread from a point contamination source to the nearest recreational areas.
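
    For a first-order negative exponential decay C(t) = C₀·e^(−kt), the time for 90% removal of infectious particles, and (given a mean stream velocity) the corresponding in-situ distance, follow directly from the fitted k. A minimal sketch (function names and the velocity value are illustrative):

```python
import math

def t90(k_decay):
    """Time (in units of 1/k) to remove 90% of infectious phage under
    first-order decay C(t) = C0 * exp(-k t): solves exp(-k t) = 0.1."""
    return math.log(10.0) / k_decay

def distance_90(k_decay, flow_velocity):
    """In-situ distance for 90% removal, given a mean stream velocity
    in consistent units (e.g. km per unit of 1/k)."""
    return flow_velocity * t90(k_decay)
```

    Since the surviving fraction at t90 is 10⁻¹ by construction, each further multiple of t90 removes another decade, which is how the reported k coefficients translate into removal distances along the river.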

  12. Gaussian quadrature exponential sum modeling of near infrared methane laboratory spectra obtained at temperatures from 106 to 297 K

    NASA Technical Reports Server (NTRS)

    Giver, Lawrence P.; Benner, D. C.; Tomasko, M. G.; Fink, U.; Kerola, D.

    1990-01-01

    Transmission measurements made on near-infrared laboratory methane spectra have previously been fit using a Malkmus band model. The laboratory spectra were obtained in three groups at temperatures averaging 112, 188, and 295 K; band model fitting was done separately for each temperature group. These band model parameters cannot be used directly in scattering atmosphere model computations, so an exponential sum model is being developed which includes pressure and temperature fitting parameters. The goal is to obtain model parameters by least square fits at 10/cm intervals from 3800 to 9100/cm. These results will be useful in the interpretation of current planetary spectra and also NIMS spectra of Jupiter anticipated from the Galileo mission.

  13. Modeling the pressure inactivation of Escherichia coli and Salmonella typhimurium in sapote mamey ( Pouteria sapota (Jacq.) H.E. Moore & Stearn) pulp.

    PubMed

    Saucedo-Reyes, Daniela; Carrillo-Salazar, José A; Román-Padilla, Lizbeth; Saucedo-Veloz, Crescenciano; Reyes-Santamaría, María I; Ramírez-Gilly, Mariana; Tecante, Alberto

    2018-03-01

    High hydrostatic pressure inactivation kinetics of Escherichia coli ATCC 25922 and Salmonella enterica subsp. enterica serovar Typhimurium ATCC 14028 (S. typhimurium) in a low acid mamey pulp were obtained at four pressure levels (300, 350, 400, and 450 MPa), different exposure times (0-8 min), and a temperature of 25 ± 2℃. Survival curves showed deviations from linearity in the form of a tail (upward concavity). The primary models tested were the Weibull model, the modified Gompertz equation, and the biphasic model. The Weibull model gave the best goodness of fit (R²adj > 0.956, root mean square error < 0.290) and the lowest Akaike information criterion value. Exponential-logistic and exponential decay models, and a Bigelow-type model and an empirical model for the b'(P) and n(P) parameters, respectively, were tested as alternative secondary models. The process validation considered two- and one-step nonlinear regressions for making predictions of the survival fraction; both regression types provided an adequate goodness of fit, and the one-step nonlinear regression clearly reduced fitting errors. The best candidate model according to Akaike information theory, with better accuracy and more reliable predictions, was the Weibull model integrated with the exponential-logistic and exponential decay secondary models as a function of time and pressure (two-step procedure) or incorporated as one equation (one-step procedure). Both mathematical expressions were used to determine the t_d parameter, the time to reach d log₁₀ reductions; the desired 5-log₁₀ reductions (5D, taking d = 5, i.e. t₅) in both microorganisms are attainable at 400 MPa in 5.487 ± 0.488 or 5.950 ± 0.329 min, respectively, for the one- or two-step nonlinear procedure.
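
    Under the Weibull primary model, log₁₀(N/N₀) = −b·tⁿ, the time t_d to achieve d decimal reductions has the closed form t_d = (d/b)^(1/n). A short sketch with illustrative parameter values (the paper's fitted pressure-dependent b'(P) and n(P) are not reproduced here):

```python
def weibull_log_survival(t, b, n):
    """Weibull primary model: log10(N/N0) = -b * t**n.
    n < 1 gives the tailing (upward concavity) seen in the survival curves."""
    return -b * t ** n

def t_d(d, b, n):
    """Time to reach d decimal (log10) reductions: solve b * t**n = d."""
    return (d / b) ** (1.0 / n)
```

    With n < 1 each additional log reduction takes disproportionately longer, which is why a 5D criterion must be computed from the fitted curve rather than extrapolated linearly from the first decade.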

  14. Exponential Stellar Disks in Low Surface Brightness Galaxies: A Critical Test of Viscous Evolution

    NASA Astrophysics Data System (ADS)

    Bell, Eric F.

    2002-12-01

    Viscous redistribution of mass in Milky Way-type galactic disks is an appealing way of generating an exponential stellar profile over many scale lengths, almost independent of initial conditions, requiring only that the viscous timescale and star formation timescale are approximately equal. However, galaxies with solid-body rotation curves cannot undergo viscous evolution. Low surface brightness (LSB) galaxies have exponential surface brightness profiles, yet have slowly rising, nearly solid-body rotation curves. Because of this, viscous evolution may be inefficient in LSB galaxies: the exponential profiles, instead, would give important insight into initial conditions for galaxy disk formation. Using star formation laws from the literature and tuning the efficiency of viscous processes to reproduce an exponential stellar profile in Milky Way-type galaxies, I test the role of viscous evolution in LSB galaxies. Under the conservative and not unreasonable condition that LSB galaxies are gravitationally unstable for at least a part of their lives, I find that it is impossible to rule out a significant role for viscous evolution. This type of model still offers an attractive way of producing exponential disks, even in LSB galaxies with slowly rising rotation curves.

  15. Spacetime dynamics of a Higgs vacuum instability during inflation

    DOE PAGES

    East, William E.; Kearney, John; Shakya, Bibhushan; ...

    2017-01-31

    A remarkable prediction of the Standard Model is that, in the absence of corrections lifting the energy density, the Higgs potential becomes negative at large field values. If the Higgs field samples this part of the potential during inflation, the negative energy density may locally destabilize the spacetime. Here, we use numerical simulations of the Einstein equations to study the evolution of inflation-induced Higgs fluctuations as they grow towards the true (negative-energy) minimum. Our simulations show that forming a single patch of true vacuum in our past light cone during inflation is incompatible with the existence of our Universe; the boundary of the true vacuum region grows outward in a causally disconnected manner from the crunching interior, which forms a black hole. We also find that these black hole horizons may be arbitrarily elongated, even forming black strings, in violation of the hoop conjecture. Furthermore, by extending the numerical solution of the Fokker-Planck equation to the exponentially suppressed tails of the field distribution at large field values, we derive a rigorous correlation between a future measurement of the tensor-to-scalar ratio and the scale at which the Higgs potential must receive stabilizing corrections in order for the Universe to have survived inflation until today.

  16. Channel length dependence of negative-bias-illumination-stress in amorphous-indium-gallium-zinc-oxide thin-film transistors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Um, Jae Gwang; Mativenga, Mallory; Jang, Jin, E-mail: jjang@khu.ac.kr

    2015-06-21

    We have investigated the dependence of negative-bias-illumination-stress (NBIS) upon channel length in amorphous-indium-gallium-zinc-oxide (a-IGZO) thin-film transistors (TFTs). The negative shift of the transfer characteristic associated with NBIS decreases with increasing channel length and is practically suppressed in devices with L = 100 μm. The effect is consistent with the creation of donor defects, mainly in the channel regions adjacent to the source and drain contacts. Excellent agreement with experiment has been obtained by an analytical treatment approximating the distribution of donors in the active layer by a double exponential with characteristic length L_D ∼ L_n ∼ 10 μm, the latter being the electron diffusion length. The model also shows that a device with a non-uniform doping distribution along the active layer is, at low drain voltages, equivalent in all respects to a device with the same doping averaged over the active layer length. These results highlight a new aspect of the NBIS mechanism, namely the dependence of the effect upon the relative magnitude of photogenerated holes and electrons, which is controlled by the device potential/band profile. They may also provide the basis for device design solutions to minimize NBIS.

  17. Exponential Speedup of Quantum Annealing by Inhomogeneous Driving of the Transverse Field

    NASA Astrophysics Data System (ADS)

    Susa, Yuki; Yamashiro, Yu; Yamamoto, Masayuki; Nishimori, Hidetoshi

    2018-02-01

    We show, for quantum annealing, that a certain type of inhomogeneous driving of the transverse field erases first-order quantum phase transitions in the p-body interacting mean-field-type model with and without longitudinal random field. Since a first-order phase transition poses a serious difficulty for quantum annealing (adiabatic quantum computing) due to the exponentially small energy gap, the removal of first-order transitions means an exponential speedup of the annealing process. The present method may serve as a simple protocol for the performance enhancement of quantum annealing, complementary to non-stoquastic Hamiltonians.

  18. Observational constraints on tachyonic chameleon dark energy model

    NASA Astrophysics Data System (ADS)

    Banijamali, A.; Bellucci, S.; Fazlpour, B.; Solbi, M.

    2018-03-01

    It has been recently shown that tachyonic chameleon model of dark energy in which tachyon scalar field non-minimally coupled to the matter admits stable scaling attractor solution that could give rise to the late-time accelerated expansion of the universe and hence alleviate the coincidence problem. In the present work, we use data from Type Ia supernova (SN Ia) and Baryon Acoustic oscillations to place constraints on the model parameters. In our analysis we consider in general exponential and non-exponential forms for the non-minimal coupling function and tachyonic potential and show that the scenario is compatible with observations.

  19. Cosmological models with a hybrid scale factor in an extended gravity theory

    NASA Astrophysics Data System (ADS)

    Mishra, B.; Tripathy, S. K.; Tarai, Sankarsan

    2018-03-01

    A general formalism to investigate Bianchi type V Ih universes is developed in an extended theory of gravity. A minimally coupled geometry and matter field is considered with a rescaled function of f(R,T) substituted in place of the Ricci scalar R in the geometrical action. Dynamical aspects of the models are discussed by using a hybrid scale factor (HSF) that behaves as power law in an initial epoch and as an exponential form at late epoch. The power law behavior and the exponential behavior appear as two extreme cases of the present model.

  20. Analysis of volumetric response of pituitary adenomas receiving adjuvant CyberKnife stereotactic radiosurgery with the application of an exponential fitting model

    PubMed Central

    Yu, Yi-Lin; Yang, Yun-Ju; Lin, Chin; Hsieh, Chih-Chuan; Li, Chiao-Zhu; Feng, Shao-Wei; Tang, Chi-Tun; Chung, Tzu-Tsao; Ma, Hsin-I; Chen, Yuan-Hao; Ju, Da-Tong; Hueng, Dueng-Yuan

    2017-01-01

    Abstract Tumor control rates of pituitary adenomas (PAs) receiving adjuvant CyberKnife stereotactic radiosurgery (CK SRS) are high. However, there is currently no uniform way to estimate the time course of the disease. The aim of this study was to analyze the volumetric responses of PAs after CK SRS and investigate the application of an exponential decay model in calculating an accurate time course and estimation of the eventual outcome. A retrospective review of 34 patients with PAs who received adjuvant CK SRS between 2006 and 2013 was performed. Tumor volume was calculated using the planimetric method. The percent change in tumor volume and the tumor volume rate of change were compared at median 4-, 10-, 20-, and 36-month intervals. Tumor responses were classified as progression for >15% volume increase, regression for ≤15% decrease, and stabilization for ±15% of the baseline volume at the time of last follow-up. For each patient, the volumetric change versus time was fitted with an exponential model. The overall tumor control rate was 94.1% in the 36-month (range 18–87 months) follow-up period (mean volume change of −43.3%). Volume regression (mean decrease of −50.5%) was demonstrated in 27 (79%) patients, tumor stabilization (mean change of −3.7%) in 5 (15%) patients, and tumor progression (mean increase of 28.1%) in 2 (6%) patients (P = 0.001). Tumors that eventually regressed or stabilized had a temporary volume increase of 1.07% and 41.5%, respectively, at 4 months after CK SRS (P = 0.017). The tumor volume estimated using the exponential fitting equation demonstrated a high positive correlation with the actual volume calculated by magnetic resonance imaging (MRI), as tested by the Pearson correlation coefficient (0.9). Transient progression of PAs post-CK SRS was seen in 62.5% of the patients, and it was not predictive of eventual volume regression or progression. A three-point exponential model is of potential predictive value according to the relative distribution. An exponential decay model can be used to calculate the time course of tumors that are ultimately controlled. PMID:28121913
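
    A bounded exponential decay V(t) = V∞ + (V₀ − V∞)·e^(−kt) can be pinned down from three equally spaced volume measurements, which is one way to read the "three-point exponential model" mentioned above. The algebra below is a generic sketch under that assumption, not the authors' actual fitting procedure:

```python
import math

def fit_exponential_decay(v0, v1, v2, dt):
    """Fit V(t) = Vinf + (V0 - Vinf) * exp(-k*t) to three volumes measured
    at times 0, dt, 2*dt. Assumes a monotone approach to the plateau
    (successive differences of the same sign). Returns (Vinf, k)."""
    r = (v1 - v2) / (v0 - v1)  # ratio of successive differences = exp(-k*dt)
    vinf = (v0 * v2 - v1 * v1) / (v0 - 2.0 * v1 + v2)
    k = -math.log(r) / dt
    return vinf, k
```

    The ratio of successive differences equals e^(−k·Δt), from which the plateau volume V∞ and rate constant k follow algebraically; V∞ then estimates the eventual controlled tumor volume.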

  1. Temperature dependence of negative bias under illumination stress and recovery in amorphous indium gallium zinc oxide thin film transistors

    NASA Astrophysics Data System (ADS)

    Hossain Chowdhury, Md Delwar; Migliorato, Piero; Jang, Jin

    2013-04-01

    We have investigated the temperature dependence of negative bias under illumination stress and recovery. The transfer characteristics exhibit a non-rigid shift towards negative gate voltages. For both stress and recovery, the voltage shift in deep depletion is twice that in accumulation. The results support the mechanism we previously proposed, which is the creation and annealing of a double donor, likely to be an oxygen vacancy. The time dependence of stress and recovery can be fitted to stretched exponentials. Both processes are thermally activated, with activation energies of 1.06 eV and 1.25 eV for stress and recovery, respectively. A potential energy diagram is proposed to explain the results.
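
    Stress/recovery kinetics of this kind are commonly written as a stretched exponential with a thermally activated characteristic time. A sketch of that standard form; τ₀, β and the temperatures below are illustrative, and only the 1.06 eV activation energy comes from the abstract:

```python
import math

KB_EV = 8.617e-5  # Boltzmann constant in eV/K

def stretched_exponential_shift(t, dv_max, tau, beta):
    """Stretched-exponential voltage shift during stress:
    dV(t) = dV_max * (1 - exp(-(t/tau)**beta))."""
    return dv_max * (1.0 - math.exp(-(t / tau) ** beta))

def relaxation_time(tau0, ea_ev, temp_k):
    """Thermally activated characteristic time, tau = tau0 * exp(Ea / (kB*T))."""
    return tau0 * math.exp(ea_ev / (KB_EV * temp_k))
```

    Because τ shrinks exponentially as temperature rises, the shift saturates faster at higher temperature, which is the signature of a thermally activated defect creation/annealing process.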

  2. AN EMPIRICAL FORMULA FOR THE DISTRIBUTION FUNCTION OF A THIN EXPONENTIAL DISC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Sanjib; Bland-Hawthorn, Joss

    2013-08-20

    An empirical formula for a Shu distribution function that reproduces a thin disc with exponential surface density to good accuracy is presented. The formula has two free parameters that specify the functional form of the velocity dispersion. Conventionally, this requires the use of an iterative algorithm to produce the correct solution, which is computationally taxing for applications like Markov Chain Monte Carlo model fitting. The formula has been shown to work for flat, rising, and falling rotation curves. Application of this methodology to one of the Dehnen distribution functions is also shown. Finally, an extension of this formula to reproduce velocity dispersion profiles that are an exponential function of radius is also presented. Our empirical formula should greatly aid the efficient comparison of disc models with large stellar surveys or N-body simulations.

  3. Locality of the Thomas-Fermi-von Weizsäcker Equations

    NASA Astrophysics Data System (ADS)

    Nazar, F. Q.; Ortner, C.

    2017-06-01

    We establish a pointwise stability estimate for the Thomas-Fermi-von Weizsäcker (TFW) model, which demonstrates that a local perturbation of a nuclear arrangement results also in a local response in the electron density and electrostatic potential. The proof adapts the arguments for existence and uniqueness of solutions to the TFW equations in the thermodynamic limit by Catto et al. (The mathematical theory of thermodynamic limits: Thomas-Fermi type models. Oxford mathematical monographs. The Clarendon Press, Oxford University Press, New York, 1998). To demonstrate the utility of this combined locality and stability result we derive several consequences, including an exponential convergence rate for the thermodynamic limit, partition of total energy into exponentially localised site energies (and consequently, exponential locality of forces), and generalised and strengthened results on the charge neutrality of local defects.

  4. Exponential Potential versus Dark Matter

    DTIC Science & Technology

    1993-10-15

    A two parameter exponential potential explains the anomalous kinematics of galaxies and galaxy clusters without need for the myriad ad hoc dark matter models currently in vogue. It also explains much about the scales and structures of galaxies and galaxy clusters while being quite negligible on the scale of the solar system. Keywords: Galaxy, Dark matter, Galaxy cluster, Gravitation, Quantum gravity.

  5. Smooth centile curves for skew and kurtotic data modelled using the Box-Cox power exponential distribution.

    PubMed

    Rigby, Robert A; Stasinopoulos, D Mikis

    2004-10-15

    The Box-Cox power exponential (BCPE) distribution, developed in this paper, provides a model for a dependent variable Y exhibiting both skewness and kurtosis (leptokurtosis or platykurtosis). The distribution is defined by a power transformation Y^nu having a shifted and scaled (truncated) standard power exponential distribution with parameter tau. The distribution has four parameters and is denoted BCPE(mu, sigma, nu, tau). The parameters mu, sigma, nu and tau may be interpreted as relating to location (median), scale (approximate coefficient of variation), skewness (transformation to symmetry) and kurtosis (power exponential parameter), respectively. Smooth centile curves are obtained by modelling each of the four parameters of the distribution as a smooth non-parametric function of an explanatory variable. A Fisher scoring algorithm is used to fit the non-parametric model by maximizing a penalized likelihood. The first and expected second and cross derivatives of the likelihood, with respect to mu, sigma, nu and tau, required for the algorithm, are provided. The centiles of the BCPE distribution are easy to calculate, so it is highly suited to centile estimation. This application of the BCPE distribution to smooth centile estimation provides a generalization of the LMS method of centile estimation to data exhibiting kurtosis (as well as skewness) different from that of a normal distribution and is named here the LMSP method of centile estimation. The LMSP method of centile estimation is applied to modelling the body mass index of Dutch males against age. 2004 John Wiley & Sons, Ltd.
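    As a rough sketch of how such a centile might be evaluated, the snippet below computes a BCPE-style centile using SciPy's generalized normal (power exponential) quantiles rescaled to unit variance; the truncation in the exact BCPE definition is ignored here, and all parameter values are illustrative rather than taken from the paper.

```python
import math
from scipy.stats import gennorm  # power exponential (generalized normal) family

def bcpe_centile(p, mu, sigma, nu, tau):
    # z is the p-quantile of a power exponential with shape tau, rescaled
    # to unit variance; the truncation used by the exact BCPE definition
    # is ignored (a simplifying assumption).
    z = gennorm.ppf(p, tau) / gennorm.std(tau)
    if nu == 0:
        return mu * math.exp(sigma * z)  # limiting case of the power transform
    return mu * (1.0 + nu * sigma * z) ** (1.0 / nu)
```

    With tau = 2 the quantiles reduce to normal-theory values, recovering an LMS-style centile; the median (p = 0.5) reduces to mu, since the standardized distribution is symmetric about zero.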

  6. Molecular sensing using monolayer floating gate, fully depleted SOI MOSFET acting as an exponential transducer.

    PubMed

    Takulapalli, Bharath R

    2010-02-23

    Field-effect transistor-based chemical sensors fall into two broad categories based on the principle of signal transduction-chemiresistor or Schottky-type devices and MOSFET or inversion-type devices. In this paper, we report a new inversion-type device concept-fully depleted exponentially coupled (FDEC) sensor, using molecular monolayer floating gate fully depleted silicon on insulator (SOI) MOSFET. Molecular binding at the chemical-sensitive surface lowers the threshold voltage of the device inversion channel due to a unique capacitive charge-coupling mechanism involving interface defect states, causing an exponential increase in the inversion channel current. This response of the device is in opposite direction when compared to typical MOSFET-type sensors, wherein inversion current decreases in a conventional n-channel sensor device upon addition of negative charge to the chemical-sensitive device surface. The new sensor architecture enables ultrahigh sensitivity along with extraordinary selectivity. We propose the new sensor concept with the aid of analytical equations and present results from our experiments in liquid phase and gas phase to demonstrate the new principle of signal transduction. We present data from numerical simulations to further support our theory.

  7. Single impacts of keV fullerene ions on free standing graphene: Emission of ions and electrons from confined volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verkhoturov, Stanislav V.; Geng, Sheng; Schweikert, Emile A., E-mail: schweikert@chem.tamu.edu

    We present the first data from individual C60 impacting one to four layer graphene at 25 and 50 keV. Negative secondary ions and electrons emitted in transmission were recorded separately from each impact. The yields for Cn− clusters are above 10% for n ≤ 4; they oscillate with electron affinities and decrease exponentially with n. The result can be explained with the aid of MD simulation as a post-collision process where sufficient vibrational energy is accumulated around the rim of the impact hole for sputtering of carbon clusters. The ionization probability can be estimated by comparing experimental yields of Cn− with those of Cn0 from MD simulation, where it increases exponentially with n. The ionization probability can be approximated with ejecta from a thermally excited (3700 K) rim damped by cluster fragmentation and electron detachment. The experimental electron probability distributions are Poisson-like. On average, three electrons of thermal energies are emitted per impact. The thermal excitation model invoked for Cn− emission can also explain the emission of electrons. The interaction of C60 with graphene is fundamentally different from impacts on 3D targets. A key characteristic is the high degree of ionization of the ejecta.

  8. The impact of accelerating faster than exponential population growth on genetic variation.

    PubMed

    Reppell, Mark; Boehnke, Michael; Zöllner, Sebastian

    2014-03-01

    Current human sequencing projects observe an abundance of extremely rare genetic variation, suggesting recent acceleration of population growth. To better understand the impact of such accelerating growth on the quantity and nature of genetic variation, we present a new class of models capable of incorporating faster than exponential growth in a coalescent framework. Our work shows that such accelerated growth affects only the population size in the recent past and thus large samples are required to detect the models' effects on patterns of variation. When we compare models with fixed initial growth rate, models with accelerating growth achieve very large current population sizes and large samples from these populations contain more variation than samples from populations with constant growth. This increase is driven almost entirely by an increase in singleton variation. Moreover, linkage disequilibrium decays faster in populations with accelerating growth. When we instead condition on current population size, models with accelerating growth result in less overall variation and slower linkage disequilibrium decay compared to models with exponential growth. We also find that pairwise linkage disequilibrium of very rare variants contains information about growth rates in the recent past. Finally, we demonstrate that models of accelerating growth may substantially change estimates of present-day effective population sizes and growth times.

  9. Ammonium Removal from Aqueous Solutions by Clinoptilolite: Determination of Isotherm and Thermodynamic Parameters and Comparison of Kinetics by the Double Exponential Model and Conventional Kinetic Models

    PubMed Central

    Tosun, İsmail

    2012-01-01

    The adsorption isotherm, the adsorption kinetics, and the thermodynamic parameters of ammonium removal from aqueous solution by clinoptilolite were investigated in this study. Experimental data obtained from batch equilibrium tests have been analyzed by four two-parameter (Freundlich, Langmuir, Temkin and Dubinin-Radushkevich (D-R)) and four three-parameter (Redlich-Peterson (R-P), Sips, Toth and Khan) isotherm models. D-R and R-P isotherms were the models that best fitted the experimental data over the other two- and three-parameter models applied. The adsorption energy (E) from the D-R isotherm was found to be approximately 7 kJ/mol for the ammonium-clinoptilolite system, thereby indicating that ammonium is adsorbed on clinoptilolite by physisorption. Kinetic parameters were determined by analyzing the nth-order kinetic model, the modified second-order model and the double exponential model, and each model resulted in a coefficient of determination (R2) of above 0.989 with an average relative error lower than 5%. The double exponential model (DEM) showed that the adsorption process develops in two stages, a rapid and a slow phase. Changes in standard free energy (∆G°), enthalpy (∆H°) and entropy (∆S°) of the ammonium-clinoptilolite system were estimated by using the thermodynamic equilibrium coefficients. PMID:22690177
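    A minimal sketch of the double exponential form described above, with a fast and a slow first-order term relaxing toward the equilibrium uptake; the parameter names and values are illustrative, not those fitted in the paper.

```python
import math

def double_exponential_uptake(t, qe, d1, k1, d2, k2):
    # q(t) rises toward the equilibrium uptake qe as the rapid phase
    # (rate k1) and the slow phase (rate k2) decay away.
    return qe - d1 * math.exp(-k1 * t) - d2 * math.exp(-k2 * t)
```

    Early times are dominated by the fast term, late times by the slow one, which is how the model captures the two-stage behaviour reported here.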

  10. Ammonium removal from aqueous solutions by clinoptilolite: determination of isotherm and thermodynamic parameters and comparison of kinetics by the double exponential model and conventional kinetic models.

    PubMed

    Tosun, Ismail

    2012-03-01

    The adsorption isotherm, the adsorption kinetics, and the thermodynamic parameters of ammonium removal from aqueous solution by clinoptilolite were investigated in this study. Experimental data obtained from batch equilibrium tests have been analyzed by four two-parameter (Freundlich, Langmuir, Temkin and Dubinin-Radushkevich (D-R)) and four three-parameter (Redlich-Peterson (R-P), Sips, Toth and Khan) isotherm models. D-R and R-P isotherms were the models that best fitted the experimental data over the other two- and three-parameter models applied. The adsorption energy (E) from the D-R isotherm was found to be approximately 7 kJ/mol for the ammonium-clinoptilolite system, thereby indicating that ammonium is adsorbed on clinoptilolite by physisorption. Kinetic parameters were determined by analyzing the nth-order kinetic model, the modified second-order model and the double exponential model, and each model resulted in a coefficient of determination (R2) of above 0.989 with an average relative error lower than 5%. The double exponential model (DEM) showed that the adsorption process develops in two stages, a rapid and a slow phase. Changes in standard free energy (∆G°), enthalpy (∆H°) and entropy (∆S°) of the ammonium-clinoptilolite system were estimated by using the thermodynamic equilibrium coefficients.

  11. Sorption isotherm characteristics of aonla flakes.

    PubMed

    Alam, Md Shafiq; Singh, Amarjit

    2011-06-01

    The equilibrium moisture content was determined for un-osmosed and osmosed (salt osmosed and sugar osmosed) aonla flakes using the static method at temperatures of 25, 40, 50, 60 and 70 °C over a range of relative humidities from 20 to 90%. The sorption capacity of aonla decreased with an increase in temperature at constant water activity. The sorption isotherms exhibited hysteresis, in which the equilibrium moisture content at a particular equilibrium relative humidity was higher for the desorption curve than for adsorption. The hysteresis effect was more pronounced for un-osmosed and salt osmosed samples in comparison to sugar osmosed samples. Five models, namely the modified Chung-Pfost, modified Halsey, modified Henderson, modified Exponential and Guggenheim-Anderson-de Boer (GAB), were evaluated to determine the best fit for the experimental data. For both the adsorption and desorption processes of aonla fruit, the equilibrium moisture content of un-osmosed and osmosed aonla samples can be predicted well by the GAB model as well as the modified Exponential model. Moreover, the modified Exponential model was found to be the best for describing the sorption behaviour of un-osmosed and salt osmosed samples, while the GAB model was best for sugar osmosed aonla samples.

  12. Risky future for Mediterranean forests unless they undergo extreme carbon fertilization.

    PubMed

    Gea-Izquierdo, Guillermo; Nicault, Antoine; Battipaglia, Giovanna; Dorado-Liñán, Isabel; Gutiérrez, Emilia; Ribas, Montserrat; Guiot, Joel

    2017-07-01

    Forest performance is challenged by climate change, but higher atmospheric [CO2] (ca) could help trees mitigate the negative effect of enhanced water stress. Forest projections using data assimilation with mechanistic models are a valuable tool to assess forest performance. Firstly, we used dendrochronological data from 12 Mediterranean tree species (six conifers and six broadleaves) to calibrate a process-based vegetation model at 77 sites. Secondly, we conducted simulations of gross primary production (GPP) and radial growth using an ensemble of climate projections for the period 2010-2100 for the high-emission RCP8.5 and low-emission RCP2.6 scenarios. GPP and growth projections were simulated using climatic data from the two RCPs combined with (i) expected ca; (ii) constant ca = 390 ppm, to test a purely climate-driven performance excluding compensation from carbon fertilization. The model accurately mimicked the growth trends since the 1950s when, despite increasing ca, enhanced evaporative demands precluded a global net positive effect on growth. Modeled annual growth and GPP showed similar long-term trends. Under RCP2.6 (i.e., temperatures below +2 °C with respect to preindustrial values), the forests showed resistance to future climate (as expressed by non-negative trends in growth and GPP) except for some coniferous sites. Using exponentially growing ca and climate as from RCP8.5, carbon fertilization overrode the negative effect of the highly constraining climatic conditions under that scenario. This effect was particularly evident above 500 ppm (which is already over +2 °C), which seems unrealistic and likely reflects model mis-performance at high ca above the calibration range. Thus, forest projections under RCP8.5 preventing carbon fertilization displayed very negative forest performance at the regional scale. This suggests that most of western Mediterranean forests would successfully acclimate to the coldest climate change scenario but be vulnerable to a climate warmer than +2 °C unless the trees developed an exaggerated fertilization response to [CO2]. © 2017 John Wiley & Sons Ltd.

  13. Pore‐Scale Hydrodynamics in a Progressively Bioclogged Three‐Dimensional Porous Medium: 3‐D Particle Tracking Experiments and Stochastic Transport Modeling

    PubMed Central

    Carrel, M.; Dentz, M.; Derlon, N.; Morgenroth, E.

    2018-01-01

    Biofilms are ubiquitous bacterial communities that grow in various porous media including soils, trickling, and sand filters. In these environments, they play a central role in services ranging from degradation of pollutants to water purification. Biofilms dynamically change the pore structure of the medium through selective clogging of pores, a process known as bioclogging. This affects how solutes are transported and spread through the porous matrix, but the temporal changes to transport behavior during bioclogging are not well understood. To address this uncertainty, we experimentally study the hydrodynamic changes of a transparent 3-D porous medium as it experiences progressive bioclogging. Statistical analyses of the system's hydrodynamics at four time points of bioclogging (0, 24, 36, and 48 h in the exponential growth phase) reveal exponential increases in both average and variance of the flow velocity, as well as its correlation length. Measurements for spreading, as mean-squared displacements, are found to be non-Fickian and more intensely superdiffusive with progressive bioclogging, indicating the formation of preferential flow pathways and stagnation zones. A gamma distribution describes well the Lagrangian velocity distributions and provides parameters that quantify changes to the flow, which evolves from a parallel pore arrangement under unclogged conditions, toward a more serial arrangement with increasing clogging. Exponentially evolving hydrodynamic metrics agree with an exponential bacterial growth phase and are used to parameterize a correlated continuous time random walk model with a stochastic velocity relaxation. The model accurately reproduces transport observations and can be used to resolve transport behavior at intermediate time points within the exponential growth phase considered. PMID:29780184

  14. Pore-Scale Hydrodynamics in a Progressively Bioclogged Three-Dimensional Porous Medium: 3-D Particle Tracking Experiments and Stochastic Transport Modeling

    NASA Astrophysics Data System (ADS)

    Carrel, M.; Morales, V. L.; Dentz, M.; Derlon, N.; Morgenroth, E.; Holzner, M.

    2018-03-01

    Biofilms are ubiquitous bacterial communities that grow in various porous media including soils, trickling, and sand filters. In these environments, they play a central role in services ranging from degradation of pollutants to water purification. Biofilms dynamically change the pore structure of the medium through selective clogging of pores, a process known as bioclogging. This affects how solutes are transported and spread through the porous matrix, but the temporal changes to transport behavior during bioclogging are not well understood. To address this uncertainty, we experimentally study the hydrodynamic changes of a transparent 3-D porous medium as it experiences progressive bioclogging. Statistical analyses of the system's hydrodynamics at four time points of bioclogging (0, 24, 36, and 48 h in the exponential growth phase) reveal exponential increases in both average and variance of the flow velocity, as well as its correlation length. Measurements for spreading, as mean-squared displacements, are found to be non-Fickian and more intensely superdiffusive with progressive bioclogging, indicating the formation of preferential flow pathways and stagnation zones. A gamma distribution describes well the Lagrangian velocity distributions and provides parameters that quantify changes to the flow, which evolves from a parallel pore arrangement under unclogged conditions, toward a more serial arrangement with increasing clogging. Exponentially evolving hydrodynamic metrics agree with an exponential bacterial growth phase and are used to parameterize a correlated continuous time random walk model with a stochastic velocity relaxation. The model accurately reproduces transport observations and can be used to resolve transport behavior at intermediate time points within the exponential growth phase considered.

  15. Removing the tree-ring width biological trend using expected basal area increment

    Treesearch

    Franco Biondi; Fares Qeadan

    2008-01-01

    One of the main elements of dendrochronological standardization is the removal of the biological trend, i.e., the progressive decline of ring width along a cross-sectional radius that is mostly caused by the corresponding increase in stem diameter over time. A very common option for removing this biological trend is to fit a modified negative exponential curve to the...
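    A minimal sketch of this detrending step: fit a modified negative exponential (a·exp(−b·t) + k) to ring widths against cambial age and divide by the fitted curve to obtain a dimensionless index. The data and parameter values below are synthetic and noise-free for illustration; this is not the authors' implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def mod_neg_exp(t, a, b, k):
    # Modified negative exponential: ring width declines from a + k toward
    # the asymptote k as cambial age t increases.
    return a * np.exp(-b * t) + k

# Synthetic ring-width series generated from known parameters (illustrative).
t = np.arange(200, dtype=float)
rw = mod_neg_exp(t, 1.5, 0.03, 0.4)

params, _ = curve_fit(mod_neg_exp, t, rw, p0=(1.0, 0.01, 0.1))
rwi = rw / mod_neg_exp(t, *params)  # dimensionless ring-width index
```

    Dividing by the fitted curve removes the age-related decline, so in real series the index fluctuates around 1 and retains the climate signal.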

  16. Modelling the Effects of Ageing Time of Starch on the Enzymatic Activity of Three Amylolytic Enzymes

    PubMed Central

    Guerra, Nelson P.; Pastrana Castro, Lorenzo

    2012-01-01

    The effect of increasing ageing time (t) of starch on the activity of three amylolytic enzymes (Termamyl, San Super, and BAN) was investigated. Although all the enzymatic reactions follow Michaelian kinetics, vmax decreased significantly (P < 0.05) and KM increased (although not always significantly) with the increase in t. The conformational changes produced in the starch chains as a consequence of the ageing seemed to negatively affect the diffusivity of the starch to the active site of the enzymes and the release of the reaction products to the medium. A similar effect was observed when the enzymatic reactions were carried out with unaged starches supplemented with different concentrations of gelatine [G]. The inhibition in the amylolytic activities was best mathematically described by using three modified forms of the Michaelis-Menten model, which included a term to consider, respectively, the linear, exponential, and hyperbolic inhibitory effects of t and [G]. PMID:22666116

  17. Statistical power for detecting trends with applications to seabird monitoring

    USGS Publications Warehouse

    Hatch, Shyla A.

    2003-01-01

    Power analysis is helpful in defining goals for ecological monitoring and evaluating the performance of ongoing efforts. I examined detection standards proposed for population monitoring of seabirds using two programs (MONITOR and TRENDS) specially designed for power analysis of trend data. Neither program models within- and among-years components of variance explicitly and independently, thus an error term that incorporates both components is an essential input. Residual variation in seabird counts consisted of day-to-day variation within years and unexplained variation among years in approximately equal parts. The appropriate measure of error for power analysis is the standard error of estimation (S.E.est) from a regression of annual means against year. Replicate counts within years are helpful in minimizing S.E.est but should not be treated as independent samples for estimating power to detect trends. Other issues include a choice of assumptions about variance structure and selection of an exponential or linear model of population change. Seabird count data are characterized by strong correlations between S.D. and mean, thus a constant CV model is appropriate for power calculations. Time series were fit about equally well with exponential or linear models, but log transformation ensures equal variances over time, a basic assumption of regression analysis. Using sample data from seabird monitoring in Alaska, I computed the number of years required (with annual censusing) to detect trends of -1.4% per year (50% decline in 50 years) and -2.7% per year (50% decline in 25 years). At α = 0.05 and a desired power of 0.9, estimated study intervals ranged from 11 to 69 years depending on species, trend, software, and study design. Power to detect a negative trend of 6.7% per year (50% decline in 10 years) is suggested as an alternative standard for seabird monitoring that achieves a reasonable match between statistical and biological significance.
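    The constant-CV, log-linear setup described here can be approximated by a small simulation: draw lognormal counts around an exponential trend, regress log counts on year, and count significant slopes. This is a hedged sketch, not the MONITOR or TRENDS algorithm; the baseline count, CV, and replicate number are arbitrary choices.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)

def power_to_detect(trend, years, cv, nsim=500, alpha=0.05):
    # Constant-CV model: counts are lognormal around an exponential trend,
    # tested by linear regression on log counts (one census per year).
    t = np.arange(years)
    mean_log = np.log(1000.0) + np.log(1.0 + trend) * t
    sd_log = np.sqrt(np.log(1.0 + cv**2))  # lognormal sigma for a given CV
    hits = 0
    for _ in range(nsim):
        y = mean_log + rng.normal(0.0, sd_log, size=years)
        if linregress(t, y).pvalue < alpha:
            hits += 1
    return hits / nsim
```

    Log transformation keeps the variance constant over time, matching the regression assumption noted in the abstract.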

  18. Exponential stabilization and synchronization for fuzzy model of memristive neural networks by periodically intermittent control.

    PubMed

    Yang, Shiju; Li, Chuandong; Huang, Tingwen

    2016-03-01

    The problem of exponential stabilization and synchronization for a fuzzy model of memristive neural networks (MNNs) is investigated by using periodically intermittent control in this paper. Based on the knowledge of memristor and recurrent neural network, the model of MNNs is formulated. Some novel and useful stabilization criteria and synchronization conditions are then derived by using the Lyapunov functional and differential inequality techniques. It is worth noting that the methods used in this paper also apply to fuzzy models of complex networks and general neural networks. Numerical simulations are also provided to verify the effectiveness of theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Analysis of Dibenzothiophene Desulfurization in a Recombinant Pseudomonas putida Strain▿

    PubMed Central

    Calzada, Javier; Zamarro, María T.; Alcón, Almudena; Santos, Victoria E.; Díaz, Eduardo; García, José L.; Garcia-Ochoa, Felix

    2009-01-01

    Biodesulfurization was monitored in a recombinant Pseudomonas putida CECT5279 strain. DszB desulfinase activity reached a sharp maximum at the early exponential phase, but it rapidly decreased at later growth phases. A model two-step resting-cell process combining sequentially P. putida cells from the late and early exponential growth phases was designed to significantly increase biodesulfurization. PMID:19047400

  20. How does temperature affect forest "fungus breath"? Diurnal non-exponential temperature-respiration relationship, and possible longer-term acclimation in fungal sporocarps

    Treesearch

    Erik A. Lilleskov

    2017-01-01

    Fungal respiration contributes substantially to ecosystem respiration, yet its field temperature response is poorly characterized. I hypothesized that at diurnal time scales, temperature-respiration relationships would be better described by unimodal than exponential models, and at longer time scales both Q10 and mass-specific respiration at 10 °...

  1. Well hydraulics in pumping tests with exponentially decayed rates of abstraction in confined aquifers

    NASA Astrophysics Data System (ADS)

    Wen, Zhang; Zhan, Hongbin; Wang, Quanrong; Liang, Xing; Ma, Teng; Chen, Chen

    2017-05-01

    Actual field pumping tests often involve variable pumping rates which cannot be handled by the classical constant-rate or constant-head test models, and often require a convolution process to interpret the test data. In this study, we proposed a semi-analytical model considering an exponentially decreasing pumping rate that starts at a certain (higher) rate and eventually stabilizes at a certain (lower) rate, for cases with or without wellbore storage. A striking new feature of the pumping test with an exponentially decayed rate is that the drawdowns will decrease over a certain period of time during the intermediate pumping stage, which has never been seen in constant-rate or constant-head pumping tests. It was found that the drawdown-time curve associated with an exponentially decayed pumping rate function was bounded by two asymptotic curves of the constant-rate tests with rates equal to the starting and stabilizing rates, respectively. The wellbore storage must be considered for a pumping test without an observation well (single-well test). Based on such characteristics of the time-drawdown curve, we developed a new method to estimate the aquifer parameters by using the genetic algorithm.
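    One plausible parameterization of such a decaying rate, starting at q_start and stabilizing at q_stab, is sketched below; the symbol names and the specific exponential form are ours for illustration, not necessarily the paper's.

```python
import math

def pumping_rate(t, q_start, q_stab, lam):
    # Exponentially decayed rate: begins at q_start at t = 0 and relaxes
    # toward q_stab; lam sets how quickly the decay occurs.
    return q_stab + (q_start - q_stab) * math.exp(-lam * t)
```

    The rate interpolates monotonically between the starting and stabilized values, consistent with the drawdown curve being bounded by the two constant-rate asymptotes.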

  2. Autoregressive processes with exponentially decaying probability distribution functions: applications to daily variations of a stock market index.

    PubMed

    Porto, Markus; Roman, H Eduardo

    2002-04-01

    We consider autoregressive conditional heteroskedasticity (ARCH) processes in which the variance σ²(y) depends linearly on the absolute value of the random variable y as σ²(y) = a + b|y|. While for the standard model, where σ²(y) = a + b y², the corresponding probability distribution function (PDF) P(y) decays as a power law for |y| → ∞, in the linear case it decays exponentially as P(y) ~ exp(−α|y|), with α = 2/b. We extend these results to the more general case σ²(y) = a + b|y|^q, with 0 < q < 2. We find stretched exponential decay for 1 < q < 2 and stretched Gaussian behavior for 0 < q < 1. As an application, we consider the case q = 1 as our starting scheme for modeling the PDF of daily (logarithmic) variations in the Dow Jones stock market index. When the history of the ARCH process is taken into account, the resulting PDF becomes a stretched exponential even for q = 1, with a stretched exponent β = 2/3, in much better agreement with the empirical data.
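    A minimal simulation sketch of an ARCH process whose variance is linear in |y|, as described above; the coefficients are arbitrary and Gaussian innovations are assumed.

```python
import math
import random

random.seed(1)

def simulate_linear_arch(a, b, n):
    # y_t = sigma_t * eps_t with sigma_t**2 = a + b*|y_{t-1}|, eps ~ N(0, 1).
    y, series = 0.0, []
    for _ in range(n):
        sigma = math.sqrt(a + b * abs(y))
        y = sigma * random.gauss(0.0, 1.0)
        series.append(y)
    return series
```

    A histogram of a long run of this series would show the exponential (rather than power-law) tails discussed in the abstract.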

  3. Statistical modeling of storm-level Kp occurrences

    USGS Publications Warehouse

    Remick, K.J.; Love, J.J.

    2006-01-01

    We consider the statistical modeling of the occurrence in time of large Kp magnetic storms as a Poisson process, testing whether or not relatively rare, large Kp events can be considered to arise from a stochastic, sequential, and memoryless process. For a Poisson process, the wait times between successive events occur statistically with an exponential density function. Fitting an exponential function to the durations between successive large Kp events forms the basis of our analysis. Defining these wait times by calculating the differences between times when Kp exceeds a certain value, such as Kp ≥ 5, we find the wait-time distribution is not exponential. Because large storms often have several periods with large Kp values, their occurrence in time is not memoryless; short duration wait times are not independent of each other and are often clumped together in time. If we remove same-storm large Kp occurrences, the resulting wait times are very nearly exponentially distributed and the storm arrival process can be characterized as Poisson. Fittings are performed on wait time data for Kp ≥ 5, 6, 7, and 8. The mean wait times between storms exceeding such Kp thresholds are 7.12, 16.55, 42.22, and 121.40 days respectively.
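    The same-storm filtering step can be sketched as follows: keep only events separated by at least a minimum gap, then difference the kept times. The two-day gap below is an arbitrary illustrative threshold, not the criterion used in the study.

```python
def declustered_wait_times(event_days, min_gap=2.0):
    # Drop same-storm recurrences: keep an event only if it occurs at least
    # min_gap days after the previously kept one, then return the wait
    # times between the kept events.
    kept = [event_days[0]]
    for t in event_days[1:]:
        if t - kept[-1] >= min_gap:
            kept.append(t)
    return [b - a for a, b in zip(kept, kept[1:])]
```

    After declustering, the wait times can be fitted with an exponential density to test the Poisson hypothesis described above.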

  4. An efficient and accurate technique to compute the absorption, emission, and transmission of radiation by the Martian atmosphere

    NASA Technical Reports Server (NTRS)

    Lindner, Bernhard Lee; Ackerman, Thomas P.; Pollack, James B.

    1990-01-01

    CO2 comprises 95% of the composition of the Martian atmosphere. However, the Martian atmosphere also has a high aerosol content. Dust particles vary from less than 0.2 to greater than 3.0. CO2 is an active absorber and emitter in near IR and IR wavelengths; the near IR absorption bands of CO2 provide significant heating of the atmosphere, and the 15 micron band provides rapid cooling. Including both CO2 and aerosol radiative transfer simultaneously in a model is difficult. Aerosol radiative transfer requires a multiple scattering code, while CO2 radiative transfer must deal with complex wavelength structure. As an alternative to the pure atmosphere treatment in most models, which causes inaccuracies, a treatment was developed called the exponential sum or k distribution approximation. The chief advantage of the exponential sum approach is that the integration over k space of f(k) can be computed more quickly than the integration of k_ν over frequency. The exponential sum approach is superior to the photon path distribution and emissivity techniques for dusty conditions. This study was the first application of the exponential sum approach to Martian conditions.
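    The exponential-sum idea can be illustrated in a few lines: band transmittance is approximated by a weighted sum of exponentials in the absorber amount u, which is cheap to evaluate compared with a line-by-line frequency integration. The weights and k values below are invented for illustration, not fitted to CO2 data.

```python
import math

def esft_transmittance(u, weights, ks):
    # Exponential-sum fit: T(u) ≈ sum_i w_i * exp(-k_i * u), with the
    # weights summing to one over the k distribution.
    return sum(w * math.exp(-k * u) for w, k in zip(weights, ks))

w = [0.4, 0.35, 0.25]   # hypothetical quadrature weights (sum to 1)
k = [0.01, 0.5, 10.0]   # hypothetical absorption coefficients
```

    Because each term is a simple exponential in u, the sum is trivially compatible with a multiple-scattering aerosol code, which is the advantage the abstract points to.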

  5. Critical Mutation Rate Has an Exponential Dependence on Population Size in Haploid and Diploid Populations

    PubMed Central

    Aston, Elizabeth; Channon, Alastair; Day, Charles; Knight, Christopher G.

    2013-01-01

    Understanding the effect of population size on the key parameters of evolution is particularly important for populations nearing extinction. There are evolutionary pressures to evolve sequences that are both fit and robust. At high mutation rates, individuals with greater mutational robustness can outcompete those with higher fitness. This is survival-of-the-flattest, and has been observed in digital organisms, theoretically, in simulated RNA evolution, and in RNA viruses. We introduce an algorithmic method capable of determining the relationship between population size, the critical mutation rate at which individuals with greater robustness to mutation are favoured over individuals with greater fitness, and the error threshold. Verification for this method is provided against analytical models for the error threshold. We show that the critical mutation rate for increasing haploid population sizes can be approximated by an exponential function, with much lower mutation rates tolerated by small populations. This is in contrast to previous studies which identified that critical mutation rate was independent of population size. The algorithm is extended to diploid populations in a system modelled on the biological process of meiosis. The results confirm that the relationship remains exponential, but show that both the critical mutation rate and error threshold are lower for diploids, rather than higher as might have been expected. Analyzing the transition from critical mutation rate to error threshold provides an improved definition of critical mutation rate. Natural populations with their numbers in decline can be expected to lose genetic material in line with the exponential model, accelerating and potentially irreversibly advancing their decline, and this could potentially affect extinction, recovery and population management strategy. 
The effect of population size is particularly strong in small populations with 100 individuals or fewer; the exponential model has significant potential in aiding population management to prevent local (and global) extinction events. PMID:24386200
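    The saturating exponential relationship reported above can be illustrated with a small fitting sketch. Everything below is hypothetical: the functional form u_c(N) = u_max - b*exp(-c*N) and all numeric values are our illustrative assumptions, not the paper's fitted model or data.

```python
import math

def ols(xs, ys):
    """Ordinary least squares: (slope, intercept) of y = slope*x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

# Synthetic data from u_c(N) = u_max - b*exp(-c*N): small populations tolerate
# much lower critical mutation rates, saturating for large N (illustrative values).
u_max, b, c = 0.30, 0.25, 0.01
pops = [50, 100, 200, 400, 800]
ucrit = [u_max - b * math.exp(-c * N) for N in pops]

# Linearize: ln(u_max - u_c) = ln(b) - c*N, then recover c and b from the fit.
slope, intercept = ols(pops, [math.log(u_max - u) for u in ucrit])
c_hat, b_hat = -slope, math.exp(intercept)
```

    Because the synthetic data follow the assumed form exactly, the log-linear fit recovers the parameters exactly; with real critical-mutation-rate estimates one would use nonlinear least squares instead.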

  6. Using phenomenological models for forecasting the 2015 Ebola challenge.

    PubMed

    Pell, Bruce; Kuang, Yang; Viboud, Cecile; Chowell, Gerardo

    2018-03-01

    The rising number of novel pathogens threatening the human population has motivated the application of mathematical modeling for forecasting the trajectory and size of epidemics. We summarize the real-time forecasting results of the logistic equation during the 2015 Ebola challenge, which focused on predicting synthetic data derived from a detailed individual-based model of Ebola transmission dynamics and control. We also carry out a post-challenge comparison of two simple phenomenological models. In particular, we systematically compare the logistic growth model and a recently introduced generalized Richards model (GRM) that captures a range of early epidemic growth profiles, from sub-exponential to exponential growth. Specifically, we assess the performance of each model for estimating the reproduction number, generate short-term forecasts of the epidemic trajectory, and predict the final epidemic size. During the challenge the logistic equation consistently underestimated the final epidemic size, peak timing and the number of cases at peak timing, with average mean absolute percentage errors (MAPE) of 0.49, 0.36 and 0.40, respectively. Post-challenge, the GRM, which has the flexibility to reproduce a range of epidemic growth profiles from early sub-exponential to exponential growth dynamics, outperformed the logistic growth model in ascertaining the final epidemic size as more incidence data was made available, while the logistic model underestimated the final epidemic size even with an increasing amount of data on the evolving epidemic. Incidence forecasts provided by the generalized Richards model performed better across all scenarios and time points than the logistic growth model, with mean RMS decreasing from 78.00 (logistic) to 60.80 (GRM).
Both models provided reasonable predictions of the effective reproduction number, but the GRM slightly outperformed the logistic growth model with a MAPE of 0.08 compared to 0.10, averaged across all scenarios and time points. Our findings further support the consideration of transmission models that incorporate flexible early epidemic growth profiles in the forecasting toolkit. Such models are particularly useful for quickly evaluating a developing infectious disease outbreak using only case incidence time series of the early phase of an infectious disease outbreak. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
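    A minimal sketch of the two phenomenological models compared above, integrated with forward Euler. The growth rate r, final size K, and the GRM deceleration exponent p are arbitrary illustrative values, not the challenge's calibrated parameters.

```python
def simulate(f, c0, dt, steps):
    """Forward-Euler integration of dC/dt = f(C); returns the trajectory."""
    traj = [c0]
    for _ in range(steps):
        traj.append(traj[-1] + dt * f(traj[-1]))
    return traj

r, K = 0.2, 10000.0  # illustrative growth rate and final epidemic size
# Logistic: dC/dt = r*C*(1 - C/K) -- exponential early growth.
logistic = simulate(lambda C: r * C * (1.0 - C / K), 1.0, 0.1, 2000)
# GRM: dC/dt = r*C**p*(1 - (C/K)**a); p < 1 gives sub-exponential early growth.
p, a = 0.8, 1.0
grm = simulate(lambda C: r * C ** p * (1.0 - (C / K) ** a), 1.0, 0.1, 2000)
```

    With p = 1 and a = 1 the GRM reduces to the logistic model, which is why the GRM can only match or improve on the logistic fit as data accumulate.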

  7. Enhancement of Markov chain model by integrating exponential smoothing: A case study on Muslims marriage and divorce

    NASA Astrophysics Data System (ADS)

    Jamaluddin, Fadhilah; Rahim, Rahela Abdul

    2015-12-01

    Markov Chains have been used since 1913 to study the flow of data across consecutive years and for forecasting. The important feature in a Markov Chain is obtaining an accurate Transition Probability Matrix (TPM). However, obtaining a suitable TPM is hard, especially in long-term modeling, due to the unavailability of data. This paper aims to enhance the classical Markov Chain by introducing the Exponential Smoothing technique in developing the appropriate TPM.
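    A minimal sketch of the idea, under the assumption (ours, not spelled out in the abstract) that yearly transition-probability matrices are combined by simple exponential smoothing, S_t = alpha*M_t + (1-alpha)*S_{t-1}, with rows renormalized to remain stochastic:

```python
def smooth_tpm(matrices, alpha):
    """Simple exponential smoothing of a sequence of yearly transition-probability
    matrices; rows are renormalized so each remains a probability distribution."""
    s = [row[:] for row in matrices[0]]
    for m in matrices[1:]:
        s = [[alpha * mij + (1 - alpha) * sij for mij, sij in zip(mr, sr)]
             for mr, sr in zip(m, s)]
    return [[v / sum(row) for v in row] for row in s]

# Hypothetical 2-state (e.g. married/divorced) matrices for three years.
yearly = [
    [[0.9, 0.1], [0.2, 0.8]],
    [[0.85, 0.15], [0.25, 0.75]],
    [[0.8, 0.2], [0.3, 0.7]],
]
tpm = smooth_tpm(yearly, alpha=0.5)
```

    A convex combination of stochastic rows is already stochastic, so the renormalization is only a numerical guard.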

  8. Ocean Chlorophyll Studies from a U-2 Aircraft Platform

    NASA Technical Reports Server (NTRS)

    Kim, H. H.; Mcclain, C. R.; Blaine, L. R.; Hart, W. D.; Atkinson, L. P.; Yoder, J. A.

    1979-01-01

    Chlorophyll gradient maps of large ocean areas were generated from U-2 ocean color scanner data obtained over test sites in the Pacific and Atlantic Oceans. The delineation of oceanic features using the upward radiant intensity relies on an analysis method which presupposes that radiation backscattered from the atmosphere and ocean surface can be properly modeled using a measurement made at 778 nm. An estimation of the chlorophyll concentration was performed by properly ratioing radiances measured at 472 nm and 548 nm after removing the atmospheric effects. The correlation between the remotely sensed data and in-situ surface chlorophyll measurements was validated in two sets of data. The results show that the correlation between the in-situ measured chlorophyll and the derived quantity is a negative exponential function and the correlation coefficient was calculated to be -0.965.

  9. Exponential approximation for daily average solar heating or photolysis. [of stratospheric ozone layer

    NASA Technical Reports Server (NTRS)

    Cogley, A. C.; Borucki, W. J.

    1976-01-01

    When incorporating formulations of instantaneous solar heating or photolytic rates as functions of altitude and sun angle into long range forecasting models, it may be desirable to replace the time integrals by daily average rates that are simple functions of latitude and season. This replacement is accomplished by approximating the integral over the solar day by a pure exponential. This gives a daily average rate as a multiplication factor times the instantaneous rate evaluated at an appropriate sun angle. The accuracy of the exponential approximation is investigated by a sample calculation using an instantaneous ozone heating formulation available in the literature.

  10. Count distribution for mixture of two exponentials as renewal process duration with applications

    NASA Astrophysics Data System (ADS)

    Low, Yeh Ching; Ong, Seng Huat

    2016-06-01

    A count distribution is presented by considering a renewal process where the distribution of the duration is a finite mixture of exponential distributions. This distribution is able to model overdispersion, a feature often found in observed count data. The computation of the probabilities and the renewal function (expected number of renewals) is examined. Parameter estimation by the method of maximum likelihood is considered, with applications of the count distribution to real frequency count data exhibiting overdispersion. It is shown that the mixture-of-exponentials count distribution fits overdispersed data better than the Poisson process and serves as an alternative to the gamma count distribution.
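    The overdispersion claim can be checked by simulation. The sketch below draws renewal durations from a two-component exponential mixture (hyperexponential) with illustrative parameters and compares the variance of the count in [0, t] with its mean; a Poisson process would give variance equal to the mean.

```python
import random

def mixture_exp(p, l1, l2, rng):
    """Draw one duration from a two-component exponential mixture."""
    lam = l1 if rng.random() < p else l2
    return rng.expovariate(lam)

def renewal_count(t, p, l1, l2, rng):
    """Number of renewals in [0, t] with mixture-of-exponentials durations."""
    n, s = 0, mixture_exp(p, l1, l2, rng)
    while s <= t:
        n += 1
        s += mixture_exp(p, l1, l2, rng)
    return n

rng = random.Random(42)
# Illustrative rates: a slow component (0.2) and a fast component (5.0).
counts = [renewal_count(10.0, 0.5, 0.2, 5.0, rng) for _ in range(5000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
```

    The squared coefficient of variation of this mixture exceeds 1, which is what produces the overdispersed (variance > mean) counts.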

  11. Three-Dimensional Flow of Nanofluid Induced by an Exponentially Stretching Sheet: An Application to Solar Energy

    PubMed Central

    Khan, Junaid Ahmad; Mustafa, M.; Hayat, T.; Sheikholeslami, M.; Alsaedi, A.

    2015-01-01

    This work deals with the three-dimensional flow of nanofluid over a bi-directional exponentially stretching sheet. The effects of Brownian motion and thermophoretic diffusion of nanoparticles are considered in the mathematical model. The temperature and nanoparticle volume fraction at the sheet are also distributed exponentially. Local similarity solutions are obtained by an implicit finite difference scheme known as the Keller-box method. The results are compared with existing studies in some limiting cases and found to be in good agreement. The results reveal the existence of interesting Sparrow-Gregg-type hills for temperature distribution corresponding to some range of parametric values. PMID:25785857

  12. Determination of the functioning parameters in asymmetrical flow field-flow fractionation with an exponential channel.

    PubMed

    Déjardin, P

    2013-08-30

    The flow conditions in normal mode asymmetric flow field-flow fractionation are determined to approach the high retention limit with the requirement d≪l≪w, where d is the particle diameter, l the characteristic length of the sample exponential distribution and w the channel height. The optimal entrance velocity is determined from the solute characteristics, the channel geometry (exponential to rectangular) and the membrane properties, according to a model providing the velocity fields all over the cell length. In addition, a method is proposed for in situ determination of the channel height. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. Quantitative proteomic analysis reveals a simple strategy of global resource allocation in bacteria

    PubMed Central

    Hui, Sheng; Silverman, Josh M; Chen, Stephen S; Erickson, David W; Basan, Markus; Wang, Jilong; Hwa, Terence; Williamson, James R

    2015-01-01

    A central aim of cell biology is to understand the strategy of gene expression in response to the environment. Here, we study gene expression response to metabolic challenges in exponentially growing Escherichia coli using mass spectrometry. Despite enormous complexity in the details of the underlying regulatory network, we find that the proteome partitions into several coarse-grained sectors, with each sector's total mass abundance exhibiting positive or negative linear relations with the growth rate. The growth rate-dependent components of the proteome fractions comprise about half of the proteome by mass, and their mutual dependencies can be characterized by a simple flux model involving only two effective parameters. The success and apparent generality of this model arises from tight coordination between proteome partition and metabolism, suggesting a principle for resource allocation in proteome economy of the cell. This strategy of global gene regulation should serve as a basis for future studies on gene expression and constructing synthetic biological circuits. Coarse graining may be an effective approach to derive predictive phenomenological models for other ‘omics’ studies. PMID:25678603

  14. Too Big to Be Real? No Depleted Core in Holm 15A

    NASA Astrophysics Data System (ADS)

    Bonfini, Paolo; Dullo, Bililign T.; Graham, Alister W.

    2015-07-01

    Partially depleted cores, as measured by core-Sérsic model “break radii,” are typically tens to a few hundred parsecs in size. Here we investigate the unusually large (R_{γ′=0.5} = 4.57 kpc) depleted core recently reported for Holm 15A, the brightest cluster galaxy of Abell 85. We model the one-dimensional (1D) light profile, and also the two-dimensional (2D) image (using Galfit-Corsair, a tool for fitting the core-Sérsic model in 2D). We find good agreement between the 1D and 2D analyses, with minor discrepancies attributable to intrinsic ellipticity gradients. We show that a simple Sérsic profile (with a low index n and no depleted core) plus the known outer exponential “halo” provide a good description of the stellar distribution. We caution that while almost every galaxy light profile will have a radius where the negative logarithmic slope of the intensity profile γ′ equals 0.5, this alone does not imply the presence of a partially depleted core within this radius.
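    The quantity γ′, the negative logarithmic slope of the intensity profile, can be computed directly from a Sérsic profile I(R) = I_e exp(-b_n[(R/R_e)^(1/n) - 1]). The sketch below uses the common approximation b_n ≈ 2n - 1/3 and finds the radius where γ′ = 0.5 by bisection; parameter values are illustrative, not fits to Holm 15A.

```python
import math

def sersic_intensity(R, Ie, Re, n):
    bn = 2 * n - 1 / 3  # common approximation to the Sersic b_n constant
    return Ie * math.exp(-bn * ((R / Re) ** (1 / n) - 1))

def neg_log_slope(R, Ie, Re, n, h=1e-6):
    """gamma'(R) = -d ln I / d ln R, by central finite difference in ln R."""
    lnR = math.log(R)
    f = lambda x: math.log(sersic_intensity(math.exp(x), Ie, Re, n))
    return -(f(lnR + h) - f(lnR - h)) / (2 * h)

def radius_at_slope(target, Ie, Re, n, lo=1e-3, hi=1e3):
    """Bisection (in log radius) for the radius where gamma' = target;
    gamma' increases monotonically with R for a Sersic profile."""
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if neg_log_slope(mid, Ie, Re, n) < target:
            lo = mid
        else:
            hi = mid
    return mid
```

    Analytically γ′(R) = (b_n/n)(R/R_e)^(1/n), so the γ′ = 0.5 radius is R_e (n/(2 b_n))^n, which the bisection should reproduce.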

  15. Exponential growth for self-reproduction in a catalytic reaction network: relevance of a minority molecular species and crowdedness

    NASA Astrophysics Data System (ADS)

    Kamimura, Atsushi; Kaneko, Kunihiko

    2018-03-01

    Explanation of exponential growth in self-reproduction is an important step toward elucidation of the origins of life because optimization of the growth potential across rounds of selection is necessary for Darwinian evolution. To produce another copy with approximately the same composition, the exponential growth rates for all components have to be equal. How such balanced growth is achieved, however, is not a trivial question, because this kind of growth requires orchestrated replication of the components in stochastic and nonlinear catalytic reactions. By considering a mutually catalyzing reaction in two- and three-dimensional lattices, as represented by a cellular automaton model, we show that self-reproduction with exponential growth is possible only when the replication and degradation of one molecular species is much slower than those of the others, i.e., when there is a minority molecule. Here, the synergetic effect of molecular discreteness and crowding is necessary to produce the exponential growth. Otherwise, the growth curves show superexponential growth because of nonlinearity of the catalytic reactions or subexponential growth due to replication inhibition by overcrowding of molecules. Our study emphasizes that the minority molecular species in a catalytic reaction network is necessary for exponential growth at the primitive stage of life.

  16. Prediction of Unsteady Aerodynamic Coefficients at High Angles of Attack

    NASA Technical Reports Server (NTRS)

    Pamadi, Bandu N.; Murphy, Patrick C.; Klein, Vladislav; Brandon, Jay M.

    2001-01-01

    The nonlinear indicial response method is used to model the unsteady aerodynamic coefficients in the low speed longitudinal oscillatory wind tunnel test data of the 0.1 scale model of the F-16XL aircraft. Exponential functions are used to approximate the deficiency function in the indicial response. Using one set of oscillatory wind tunnel data and parameter identification method, the unknown parameters in the exponential functions are estimated. The genetic algorithm is used as a least square minimizing algorithm. The assumed model structures and parameter estimates are validated by comparing the predictions with other sets of available oscillatory wind tunnel test data.

  17. Exponential stabilization of magnetoelastic waves in a Mindlin-Timoshenko plate by localized internal damping

    NASA Astrophysics Data System (ADS)

    Grobbelaar-Van Dalsen, Marié

    2015-08-01

    This article is a continuation of our earlier work in Grobbelaar-Van Dalsen (Z Angew Math Phys 63:1047-1065, 2012) on the polynomial stabilization of a linear model for the magnetoelastic interactions in a two-dimensional electrically conducting Mindlin-Timoshenko plate. We introduce nonlinear damping that is effective only in a small portion of the interior of the plate. It turns out that the model is uniformly exponentially stable when the function that represents the locally distributed damping behaves linearly near the origin. However, the use of Mindlin-Timoshenko plate theory in the model enforces a restriction on the region occupied by the plate.

  18. Crime prediction modeling

    NASA Technical Reports Server (NTRS)

    1971-01-01

    A study of techniques for the prediction of crime in the City of Los Angeles was conducted. Alternative approaches to crime prediction (causal, quasicausal, associative, extrapolative, and pattern-recognition models) are discussed, as is the environment within which predictions were desired for the immediate application. The decision was made to use time series (extrapolative) models to produce the desired predictions. The characteristics of the data and the procedure used to choose equations for the extrapolations are discussed. The usefulness of different functional forms (constant, quadratic, and exponential forms) and of different parameter estimation techniques (multiple regression and multiple exponential smoothing) are compared, and the quality of the resultant predictions is assessed.

  19. Estimating piecewise exponential frailty model with changing prior for baseline hazard function

    NASA Astrophysics Data System (ADS)

    Thamrin, Sri Astuti; Lawi, Armin

    2016-02-01

    Piecewise exponential models provide a very flexible framework for modelling univariate survival data. It can be used to estimate the effects of different covariates which are influenced by the survival data. Although in a strict sense it is a parametric model, a piecewise exponential hazard can approximate any shape of a parametric baseline hazard. In the parametric baseline hazard, the hazard function for each individual may depend on a set of risk factors or explanatory variables. However, it usually does not explain all such variables which are known or measurable, and these variables become interesting to be considered. This unknown and unobservable risk factor of the hazard function is often termed the individual's heterogeneity or frailty. This paper analyses the effects of unobserved population heterogeneity in patients' survival times. The issue of model choice through variable selection is also considered. A sensitivity analysis is conducted to assess the influence of the prior for each parameter. We used the Markov Chain Monte Carlo method in computing the Bayesian estimator on kidney infection data. The results obtained show that sex and frailty are substantially associated with survival in this study and that the models are quite sensitive to the choice of two different priors.
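    A piecewise exponential model has a piecewise-constant hazard, so the survival function is S(t) = exp(-H(t)) with H(t) the accumulated hazard. A minimal sketch (cut points and hazard levels are illustrative, not the kidney-infection estimates):

```python
import math

def survival(t, cuts, hazards):
    """S(t) = exp(-H(t)) for a piecewise-constant hazard.
    cuts = interval boundaries [t1, t2, ...]; hazards has len(cuts)+1 entries,
    one per interval (0, t1], (t1, t2], ..., (t_last, inf)."""
    H, prev = 0.0, 0.0
    for edge, h in zip(cuts, hazards):
        if t <= edge:
            return math.exp(-(H + h * (t - prev)))
        H += h * (edge - prev)  # accumulate hazard over the full interval
        prev = edge
    return math.exp(-(H + hazards[-1] * (t - prev)))
```

    A multiplicative frailty z simply scales the cumulative hazard, so the conditional survival for an individual with frailty z is S(t)**z.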

  20. The Impact of Accelerating Faster than Exponential Population Growth on Genetic Variation

    PubMed Central

    Reppell, Mark; Boehnke, Michael; Zöllner, Sebastian

    2014-01-01

    Current human sequencing projects observe an abundance of extremely rare genetic variation, suggesting recent acceleration of population growth. To better understand the impact of such accelerating growth on the quantity and nature of genetic variation, we present a new class of models capable of incorporating faster than exponential growth in a coalescent framework. Our work shows that such accelerated growth affects only the population size in the recent past and thus large samples are required to detect the models’ effects on patterns of variation. When we compare models with fixed initial growth rate, models with accelerating growth achieve very large current population sizes and large samples from these populations contain more variation than samples from populations with constant growth. This increase is driven almost entirely by an increase in singleton variation. Moreover, linkage disequilibrium decays faster in populations with accelerating growth. When we instead condition on current population size, models with accelerating growth result in less overall variation and slower linkage disequilibrium decay compared to models with exponential growth. We also find that pairwise linkage disequilibrium of very rare variants contains information about growth rates in the recent past. Finally, we demonstrate that models of accelerating growth may substantially change estimates of present-day effective population sizes and growth times. PMID:24381333

  1. Holographic insulator/superconductor transition with exponential nonlinear electrodynamics probed by entanglement entropy

    NASA Astrophysics Data System (ADS)

    Yao, Weiping; Yang, Chaohui; Jing, Jiliang

    2018-05-01

    From the viewpoint of holography, we study the behaviors of the entanglement entropy in insulator/superconductor transition with exponential nonlinear electrodynamics (ENE). We find that the entanglement entropy is a good probe to the properties of the holographic phase transition. Both in the half space and the belt space, the non-monotonic behavior of the entanglement entropy in superconducting phase versus the chemical potential is general in this model. Furthermore, the behavior of the entanglement entropy for the strip geometry shows that the confinement/deconfinement phase transition appears in both insulator and superconductor phases. And the critical width of the confinement/deconfinement phase transition depends on the chemical potential and the exponential coupling term. More interestingly, the behaviors of the entanglement entropy in their corresponding insulator phases are independent of the exponential coupling factor but depend on the width of the subsystem A.

  2. Negative-Mass Instability of the Spin and Motion of an Atomic Gas Driven by Optical Cavity Backaction

    NASA Astrophysics Data System (ADS)

    Kohler, Jonathan; Gerber, Justin A.; Dowd, Emma; Stamper-Kurn, Dan M.

    2018-01-01

    We realize a spin-orbit interaction between the collective spin precession and center-of-mass motion of a trapped ultracold atomic gas, mediated by spin- and position-dependent dispersive coupling to a driven optical cavity. The collective spin, precessing near its highest-energy state in an applied magnetic field, can be approximated as a negative-mass harmonic oscillator. When the Larmor precession and mechanical motion are nearly resonant, cavity mediated coupling leads to a negative-mass instability, driving exponential growth of a correlated mode of the hybrid system. We observe this growth imprinted on modulations of the cavity field and estimate the full covariance of the resulting two-mode state by observing its transient decay during subsequent free evolution.

  3. Computational materials design of attractive Fermion system with large negative effective Ueff in the hole-doped Delafossite of CuAlO2, AgAlO2 and AuAlO2: Charge-excitation induced Ueff < 0

    NASA Astrophysics Data System (ADS)

    Nakanishi, A.; Fukushima, T.; Uede, H.; Katayama-Yoshida, H.

    2015-12-01

    On the basis of general design rules for negative effective U (U_eff) systems by controlling purely-electronic and attractive Fermion mechanisms, we perform computational materials design (CMD®) for the negative U_eff system in hole-doped two-dimensional (2D) Delafossite CuAlO2, AgAlO2 and AuAlO2 by ab initio calculations with the local density approximation (LDA) and self-interaction-corrected LDA (SIC-LDA). We find a large negative U_eff in the hole-doped attractive Fermion systems for CuAlO2 (U_eff^LDA = -4.53 eV and U_eff^SIC-LDA = -4.20 eV), AgAlO2 (U_eff^LDA = -4.88 eV and U_eff^SIC-LDA = -4.55 eV) and AuAlO2 (U_eff^LDA = -4.14 eV and U_eff^SIC-LDA = -3.55 eV). These values are 10 times larger than that in hole-doped three-dimensional (3D) CuFeS2 (U_eff = -0.44 eV). For future calculations of Tc and the phase diagram by quantum Monte Carlo simulations, we propose the negative-U_eff Hubbard model with the anti-bonding single π-band model for CuAlO2, AgAlO2 and AuAlO2, using the mapped parameters obtained from ab initio electronic structure calculations. Based on the theory of the negative-U_eff Hubbard model (Nozières and Schmitt-Rink, 1985), we discuss the |U_eff| dependence of the superconducting critical temperature (Tc) in the 2D Delafossite of CuAlO2, AgAlO2 and AuAlO2 and the 3D Chalcopyrite of CuFeS2, which shows an interesting chemical trend: Tc increases exponentially (Tc ∝ exp[-1/|U_eff|]) in the weak-coupling regime |U_eff| (0.44 eV) < W (∼2 eV) (where W is the band width of the negative-U_eff Hubbard model) for hole-doped CuFeS2; Tc goes through a maximum when |U_eff| (4.88 eV, 4.14 eV) ∼ W (2.8 eV, 3.5 eV) for hole-doped AgAlO2 and AuAlO2; and finally Tc decreases with increasing |U_eff| in the strong-coupling regime, where |U_eff| (4.53 eV) > W (1.7 eV), for hole-doped CuAlO2.

  4. Mathematical modeling of ethanol production in solid-state fermentation based on solid medium's dry weight variation.

    PubMed

    Mazaheri, Davood; Shojaosadati, Seyed Abbas; Zamir, Seyed Morteza; Mousavi, Seyyed Mohammad

    2018-04-21

    In this work, mathematical modeling of ethanol production in solid-state fermentation (SSF) has been done based on the variation in the dry weight of the solid medium. This method was previously used for mathematical modeling of enzyme production; however, the model should be modified to predict the production of a volatile compound like ethanol. The experimental results of bioethanol production from a mixture of carob pods and wheat bran by Zymomonas mobilis in SSF were used for model validation. Exponential and logistic kinetic models were used for modeling the growth of the microorganism. In both cases, the model predictions matched well with the experimental results during the exponential growth phase, indicating the good ability of the solid-medium weight variation method to model volatile product formation in solid-state fermentation. In addition, the logistic model gave better predictions.

  5. Exponential integration algorithms applied to viscoplasticity

    NASA Technical Reports Server (NTRS)

    Freed, Alan D.; Walker, Kevin P.

    1991-01-01

    Four linear exponential integration algorithms (two implicit, one explicit, and one predictor/corrector) are applied to a viscoplastic model to assess their capabilities. Viscoplasticity comprises a system of coupled, nonlinear, stiff, first-order, ordinary differential equations which are a challenge to integrate by any means. Two of the algorithms (the predictor/corrector and one of the implicits) give outstanding results, even for very large time steps.
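    The abstract does not list the four algorithms, but the family it describes can be illustrated with the simplest member: an exponential (integrating-factor) Euler step for y′ = -λy + f(t), which integrates the stiff linear part exactly and therefore remains stable at step sizes where explicit Euler blows up. Parameter values are illustrative.

```python
import math

def exp_euler(lam, f, y0, h, steps):
    """Exponential Euler for y' = -lam*y + f(t): the linear stiff part is
    integrated exactly; f is frozen at the start of each step."""
    y, t, E = y0, 0.0, math.exp(-lam * h)
    for _ in range(steps):
        y = E * y + (1 - E) / lam * f(t)
        t += h
    return y

def explicit_euler(lam, f, y0, h, steps):
    y, t = y0, 0.0
    for _ in range(steps):
        y = y + h * (-lam * y + f(t))
        t += h
    return y

lam, h, steps = 100.0, 0.05, 100     # lam*h = 5: far beyond explicit stability
y_exp = exp_euler(lam, lambda t: 0.0, 1.0, h, steps)
y_fwd = explicit_euler(lam, lambda t: 0.0, 1.0, h, steps)
y_exact = math.exp(-lam * h * steps)  # exact decay for f = 0
```

    With f = 0 the exponential step is exact regardless of h, while explicit Euler diverges whenever lam*h > 2.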

  6. Exponential parameter and tracking error convergence guarantees for adaptive controllers without persistency of excitation

    NASA Astrophysics Data System (ADS)

    Chowdhary, Girish; Mühlegg, Maximilian; Johnson, Eric

    2014-08-01

    In model reference adaptive control (MRAC) the modelling uncertainty is often assumed to be parameterised with time-invariant unknown ideal parameters. The convergence of parameters of the adaptive element to these ideal parameters is beneficial, as it guarantees exponential stability, and makes an online learned model of the system available. Most MRAC methods, however, require persistent excitation (PE) of the states to guarantee that the adaptive parameters converge to the ideal values. Enforcing PE may be resource intensive and often infeasible in practice. This paper presents theoretical analysis and illustrative examples of an adaptive control method that leverages the increasing ability to record and process data online by using specifically selected and online recorded data concurrently with instantaneous data for adaptation. It is shown that when the system uncertainty can be modelled as a combination of known nonlinear bases, simultaneous exponential tracking and parameter error convergence can be guaranteed if the system states are exciting over finite intervals such that rich data can be recorded online; PE is not required. Furthermore, the rate of convergence is directly proportional to the minimum singular value of the matrix containing online recorded data. Consequently, an online algorithm to record and forget data is presented and its effects on the resulting switched closed-loop dynamics are analysed. It is also shown that when radial basis function neural networks (NNs) are used as adaptive elements, the method guarantees exponential convergence of the NN parameters to a compact neighbourhood of their ideal values without requiring PE. Flight test results on a fixed-wing unmanned aerial vehicle demonstrate the effectiveness of the method.
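    The core idea, adapting on recorded data concurrently with instantaneous data, can be sketched in a discrete-time linear-regression caricature (our simplification, not the paper's MRAC formulation): if the stored regressors span the parameter space, the parameter error contracts exponentially even though no signal is persistently exciting.

```python
def concurrent_learning(history, theta0, gain, steps):
    """Gradient update on recorded data only:
    theta <- theta - gain * sum_k phi_k * (phi_k . theta - y_k).
    The error contracts geometrically when sum_k phi_k phi_k^T is full rank."""
    theta = list(theta0)
    for _ in range(steps):
        grad = [0.0, 0.0]
        for phi, y in history:
            err = phi[0] * theta[0] + phi[1] * theta[1] - y
            grad[0] += phi[0] * err
            grad[1] += phi[1] * err
        theta = [theta[0] - gain * grad[0], theta[1] - gain * grad[1]]
    return theta

true_theta = (2.0, -1.0)                       # unknown ideal parameters
phis = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]    # recorded regressors spanning R^2
history = [(p, p[0] * true_theta[0] + p[1] * true_theta[1]) for p in phis]
theta_hat = concurrent_learning(history, (0.0, 0.0), gain=0.2, steps=200)
```

    Here the contraction factor is set by the eigenvalues of the recorded-data Gram matrix, mirroring the paper's statement that the convergence rate scales with the minimum singular value of the recorded-data matrix.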

  7. Adaptive exponential integrate-and-fire model as an effective description of neuronal activity.

    PubMed

    Brette, Romain; Gerstner, Wulfram

    2005-11-01

    We introduce a two-dimensional integrate-and-fire model that combines an exponential spike mechanism with an adaptation equation, based on recent theoretical findings. We describe a systematic method to estimate its parameters with simple electrophysiological protocols (current-clamp injection of pulses and ramps) and apply it to a detailed conductance-based model of a regular spiking neuron. Our simple model predicts correctly the timing of 96% of the spikes (+/-2 ms) of the detailed model in response to injection of noisy synaptic conductances. The model is especially reliable in high-conductance states, typical of cortical activity in vivo, in which intrinsic conductances were found to have a reduced role in shaping spike trains. These results are promising because this simple model has enough expressive power to reproduce qualitatively several electrophysiological classes described in vitro.
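    The model described above is the adaptive exponential integrate-and-fire (AdEx) neuron. The sketch below integrates its two equations with forward Euler, using typical parameter values from the literature rather than values fitted to the conductance-based model in the paper.

```python
import math

def adex(I, t_max=500.0, dt=0.05):
    """Forward-Euler simulation of the adaptive exponential integrate-and-fire
    neuron; returns spike times (ms). Currents in pA, conductances in nS,
    voltages in mV, capacitance in pF, times in ms."""
    C, gL, EL = 281.0, 30.0, -70.6      # membrane capacitance, leak, rest
    VT, DT = -50.4, 2.0                 # threshold and slope of the exp term
    tauw, a, b = 144.0, 4.0, 80.5       # adaptation time constant and couplings
    Vr, Vpeak = EL, 0.0                 # reset and spike-detection voltages
    V, w, spikes = EL, 0.0, []
    for k in range(int(t_max / dt)):
        dV = (-gL * (V - EL) + gL * DT * math.exp((V - VT) / DT) - w + I) / C
        dw = (a * (V - EL) - w) / tauw
        V, w = V + dt * dV, w + dt * dw
        if V >= Vpeak:                  # spike: reset V, increment adaptation
            spikes.append(k * dt)
            V, w = Vr, w + b
    return spikes

spikes = adex(I=1000.0)  # 1 nA step current elicits repetitive, adapting firing
```

    The adaptation variable w grows by b at each spike, so successive interspike intervals lengthen, the spike-frequency adaptation the two-variable model is designed to capture.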

  8. Hole transport characteristics in phosphorescent dye-doped NPB films by admittance spectroscopy

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Chen, Jiangshan; Huang, Jinying; Dai, Yanfeng; Zhang, Zhiqiang; Liu, Su; Ma, Dongge

    2014-05-01

    Admittance spectroscopy is a powerful tool to determine the carrier mobility. The carrier mobility is a significant parameter for understanding the behavior or optimizing organic light-emitting diodes and other organic semiconductor devices. Hole transport in the phosphorescent dye bis[2-(9,9-diethyl-9H-fluoren-2-yl)-1-phenyl-1H-benzoimidazol-N,C3]iridium(acetylacetonate) [(fbi)2Ir(acac)] doped into N,N-diphenyl-N,N-bis(1-naphthylphenyl)-1,1-biphenyl-4,4-diamine (NPB) films was investigated by admittance spectroscopy. The results show that doped (fbi)2Ir(acac) molecules behave as hole traps in NPB and lower the hole mobility. For thicker films (≳300 nm), the electric field dependence of the hole mobility is, as expected, positive, i.e., the mobility increases exponentially with the electric field. However, for thinner films (≲300 nm), the electric field dependence of the hole mobility is negative, i.e., the hole mobility decreases exponentially with the electric field. Physical mechanisms behind the negative field dependence of the hole mobility are discussed. In addition, three frequency regions were identified to analyze the behavior of the capacitance in the hole-only device, and the physical mechanism was explained by trap theory and the parasitic capacitance effect.

  9. Weblog patterns and human dynamics with decreasing interest

    NASA Astrophysics Data System (ADS)

    Guo, J.-L.; Fan, C.; Guo, Z.-H.

    2011-06-01

    In order to describe the phenomenon that people's interest in doing something is high at the beginning and gradually decreases until reaching a balance, a model which describes the attenuation of interest is proposed to reflect the fact that people's interest becomes more stable after a long time. We give a rigorous analysis of this model by non-homogeneous Poisson processes. Our analysis indicates that the interval distribution of arrival times is a mixed distribution with exponential and power-law features, that is, a power law with an exponential cutoff. After that, we collect blogs on ScienceNet.cn and carry out an empirical study of the interarrival time distribution. The empirical results agree well with the theoretical analysis, obeying a special power law with an exponential cutoff, that is, a special kind of Gamma distribution. These empirical results verify the model by providing evidence for a new class of phenomena in human dynamics. It can be concluded that besides power-law distributions, there are other distributions in human dynamics. These findings demonstrate the variety of human behavior dynamics.
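    A Gamma distribution with shape k < 1 is exactly a power law t^(k-1) with an exponential cutoff. The sketch below draws interevent intervals from such a Gamma and from an exponential of equal mean, and compares their tail probabilities; parameters are illustrative, not fitted to the blog data.

```python
import random

rng = random.Random(7)
shape, scale = 0.5, 2.0  # Gamma(k<1): density ~ t**(shape-1) * exp(-t/scale)
mean = shape * scale     # = 1.0, matched by the exponential below

gamma_draws = [rng.gammavariate(shape, scale) for _ in range(20000)]
expo_draws = [rng.expovariate(1.0 / mean) for _ in range(20000)]

# Heavy tail: far more long waiting times than a pure exponential of equal mean.
tail = 5.0 * mean
p_gamma = sum(x > tail for x in gamma_draws) / len(gamma_draws)
p_expo = sum(x > tail for x in expo_draws) / len(expo_draws)
```

    The same shape parameter also puts more mass at very short intervals, the bursty pattern typical of human activity records.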

  10. Generalization of the event-based Carnevale-Hines integration scheme for integrate-and-fire models.

    PubMed

    van Elburg, Ronald A J; van Ooyen, Arjen

    2009-07-01

    An event-based integration scheme for an integrate-and-fire neuron model with exponentially decaying excitatory synaptic currents and double exponential inhibitory synaptic currents has been introduced by Carnevale and Hines. However, the integration scheme imposes nonphysiological constraints on the time constants of the synaptic currents, which hamper its general applicability. This letter addresses this problem in two ways. First, we provide physical arguments demonstrating why these constraints on the time constants can be relaxed. Second, we give a formal proof showing which constraints can be abolished. As part of our formal proof, we introduce the generalized Carnevale-Hines lemma, a new tool for comparing double exponentials as they naturally occur in many cascaded decay systems, including receptor-neurotransmitter dissociation followed by channel closing. Through repeated application of the generalized lemma, we lift most of the original constraints on the time constants. Thus, we show that the Carnevale-Hines integration scheme for the integrate-and-fire model can be employed for simulating a much wider range of neuron and synapse types than was previously thought.

  11. Exponentiated power Lindley distribution.

    PubMed

    Ashour, Samir K; Eltehiwy, Mahmoud A

    2015-11-01

    A generalization of the Lindley distribution was recently proposed by Ghitany et al. [1], called the power Lindley distribution. Another generalization was introduced by Nadarajah et al. [2], named the generalized Lindley distribution. This paper proposes a further generalization of the Lindley distribution that subsumes both. We refer to this new generalization as the exponentiated power Lindley distribution. The new distribution is important because it contains as special sub-models several well-known distributions in addition to the above two, such as the Lindley distribution, among many others. It also provides more flexibility for analyzing complex real data sets. We study some statistical properties of the new distribution. We discuss maximum likelihood estimation of the distribution parameters. Least squares estimation is also used to estimate the parameters. Three algorithms are proposed for generating random data from the proposed distribution. An application of the model to a real data set shows that the exponentiated power Lindley distribution can be used quite effectively in analyzing real lifetime data.

  12. Voter model with non-Poissonian interevent intervals

    NASA Astrophysics Data System (ADS)

    Takaguchi, Taro; Masuda, Naoki

    2011-09-01

    Recent analysis of social communications among humans has revealed that the interval between interactions for a pair of individuals and for an individual often follows a long-tail distribution. We investigate the effect of such a non-Poissonian nature of human behavior on dynamics of opinion formation. We use a variant of the voter model and numerically compare the time to consensus of all the voters with different distributions of interevent intervals and different networks. Compared with the exponential distribution of interevent intervals (i.e., the standard voter model), the power-law distribution of interevent intervals slows down consensus on the ring. This is because of the memory effect; in the power-law case, the expected time until the next update event on a link is large if the link has not had an update event for a long time. On the complete graph, the consensus time in the power-law case is close to that in the exponential case. Regular graphs bridge these two results such that the slowing down of the consensus in the power-law case as compared to the exponential case is less pronounced as the degree increases.

  13. A Fast Segmentation Algorithm for C-V Model Based on Exponential Image Sequence Generation

    NASA Astrophysics Data System (ADS)

    Hu, J.; Lu, L.; Xu, J.; Zhang, J.

    2017-09-01

    For island coastline segmentation, a fast segmentation algorithm for the C-V model based on exponential image sequence generation is proposed in this paper. An exponential multi-scale C-V model with level-set inheritance and boundary inheritance is developed. The main research contributions are as follows: 1) the problems of "holes" and "gaps" in coastline extraction are solved through small-scale region shrinkage, low-pass filtering, and area sorting of regions; 2) the initial values of the SDF (signed distance function) and the level set are given by Otsu segmentation, exploiting the difference in SAR reflection between land and sea, which places them close to the coastline; 3) the computational complexity of the continuous transition between different scales is reduced by inheriting the SDF and the level set. Experimental results show that the method accelerates the formation of the initial level set and shortens the time of coastline extraction, while removing non-coastline bodies and improving the identification precision of the main coastline, thereby automating the process of coastline segmentation.

  14. Quasiprobability behind the out-of-time-ordered correlator

    NASA Astrophysics Data System (ADS)

    Yunger Halpern, Nicole; Swingle, Brian; Dressel, Justin

    2018-04-01

    Two topics, evolving rapidly in separate fields, were combined recently: the out-of-time-ordered correlator (OTOC) signals quantum-information scrambling in many-body systems. The Kirkwood-Dirac (KD) quasiprobability represents operators in quantum optics. The OTOC was shown to equal a moment of a summed quasiprobability [Yunger Halpern, Phys. Rev. A 95, 012120 (2017), 10.1103/PhysRevA.95.012120]. That quasiprobability, we argue, is an extension of the KD distribution. We explore the quasiprobability's structure from experimental, numerical, and theoretical perspectives. First, we simplify and analyze Yunger Halpern's weak-measurement and interference protocols for measuring the OTOC and its quasiprobability. We decrease, exponentially in system size, the number of trials required to infer the OTOC from weak measurements. We also construct a circuit for implementing the weak-measurement scheme. Next, we calculate the quasiprobability (after coarse graining) numerically and analytically: we simulate a transverse-field Ising model first. Then, we calculate the quasiprobability averaged over random circuits, which model chaotic dynamics. The quasiprobability, we find, distinguishes chaotic from integrable regimes. We observe nonclassical behaviors: the quasiprobability typically has negative components. It becomes nonreal in some regimes. The onset of scrambling breaks a symmetry that bifurcates the quasiprobability, as in classical-chaos pitchforks. Finally, we present mathematical properties. We define an extended KD quasiprobability that generalizes the KD distribution. The quasiprobability obeys a Bayes-type theorem, for example, that exponentially decreases the memory required to calculate weak values, in certain cases. A time-ordered correlator analogous to the OTOC, insensitive to quantum-information scrambling, depends on a quasiprobability closer to a classical probability. 
This work not only illuminates the OTOC's underpinnings, but also generalizes quasiprobability theory and motivates immediate-future weak-measurement challenges.

  15. Kinetic and Stochastic Models of 1D yeast ``prions"

    NASA Astrophysics Data System (ADS)

    Kunes, Kay

    2005-03-01

    Mammalian prion proteins (PrP) are of public health interest because of mad cow and chronic wasting diseases. Yeasts have proteins that can undergo reconformation and aggregation processes similar to those of PrP; yeast ``prions" are simpler to study experimentally and to model. Recent in vitro studies of the SUP35 protein (1) showed long aggregates and pure exponential growth of the misfolded form. To explain these data, we have extended a previous model of aggregation kinetics along with our own stochastic approach (2). Both models assume reconformation only upon aggregation, and include aggregate fissioning and an initial nucleation barrier. We find that, for sufficiently small nucleation rates or for seeding by small dimer concentrations, we can achieve the requisite exponential growth and long aggregates.

  16. Unstable Mode Solutions to the Klein-Gordon Equation in Kerr-anti-de Sitter Spacetimes

    NASA Astrophysics Data System (ADS)

    Dold, Dominic

    2017-03-01

    For any cosmological constant Λ = -3/ℓ² < 0 and any α < 9/4, we find a Kerr-AdS spacetime (M, g_KAdS), in which the Klein-Gordon equation □_{g_KAdS}ψ + (α/ℓ²)ψ = 0 has an exponentially growing mode solution satisfying a Dirichlet boundary condition at infinity. The spacetime violates the Hawking-Reall bound r₊² > |a|ℓ. We obtain an analogous result for Neumann boundary conditions if 5/4 < α < 9/4. Moreover, in the Dirichlet case, one can prove that, for any Kerr-AdS spacetime violating the Hawking-Reall bound, there exists an open family of masses α such that the corresponding Klein-Gordon equation permits exponentially growing mode solutions. Our result adopts methods of Shlapentokh-Rothman developed in (Commun. Math. Phys. 329:859-891, 2014) and provides the first rigorous construction of a superradiant instability for negative cosmological constant.

  17. Pendulum Mass Affects the Measurement of Articular Friction Coefficient

    PubMed Central

    Akelman, Matthew R.; Teeple, Erin; Machan, Jason T.; Crisco, Joseph J.; Jay, Gregory D.; Fleming, Braden C.

    2012-01-01

    Friction measurements of articular cartilage are important to determine the relative tribologic contributions made by synovial fluid or cartilage, and to assess the efficacy of therapies for preventing the development of post-traumatic osteoarthritis. Stanton’s equation is the most frequently used formula for estimating the whole joint friction coefficient (μ) of an articular pendulum, and assumes pendulum energy loss through a mass-independent mechanism. This study examines if articular pendulum energy loss is indeed mass independent, and compares Stanton’s model to an alternative model, which incorporates viscous damping, for calculating μ. Ten loads (25-100% body weight) were applied in a random order to an articular pendulum using the knees of adult male Hartley guinea pigs (n = 4) as the fulcrum. Motion of the decaying pendulum was recorded and μ was estimated using two models: Stanton’s equation, and an exponential decay function incorporating a viscous damping coefficient. μ estimates decreased as mass increased for both models. Exponential decay model fit error values were 82% less than those of the Stanton model. These results indicate that μ decreases with increasing mass, and that an exponential decay model provides a better fit for articular pendulum data at all mass values. In conclusion, inter-study comparisons of articular pendulum μ values should not be made without recognizing the loads used, as μ values are mass dependent. PMID:23122223

  18. Pendulum mass affects the measurement of articular friction coefficient.

    PubMed

    Akelman, Matthew R; Teeple, Erin; Machan, Jason T; Crisco, Joseph J; Jay, Gregory D; Fleming, Braden C

    2013-02-01

    Friction measurements of articular cartilage are important to determine the relative tribologic contributions made by synovial fluid or cartilage, and to assess the efficacy of therapies for preventing the development of post-traumatic osteoarthritis. Stanton's equation is the most frequently used formula for estimating the whole joint friction coefficient (μ) of an articular pendulum, and assumes pendulum energy loss through a mass-independent mechanism. This study examines if articular pendulum energy loss is indeed mass independent, and compares Stanton's model to an alternative model, which incorporates viscous damping, for calculating μ. Ten loads (25-100% body weight) were applied in a random order to an articular pendulum using the knees of adult male Hartley guinea pigs (n=4) as the fulcrum. Motion of the decaying pendulum was recorded and μ was estimated using two models: Stanton's equation, and an exponential decay function incorporating a viscous damping coefficient. μ estimates decreased as mass increased for both models. Exponential decay model fit error values were 82% less than those of the Stanton model. These results indicate that μ decreases with increasing mass, and that an exponential decay model provides a better fit for articular pendulum data at all mass values. In conclusion, inter-study comparisons of articular pendulum μ values should not be made without recognizing the loads used, as μ values are mass dependent. Copyright © 2012 Elsevier Ltd. All rights reserved.
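    The exponential (viscous) decay model compared against Stanton's equation in the two records above can be sketched as a log-linear least-squares fit of pendulum amplitude. The amplitude series and damping coefficient below are synthetic, not the guinea-pig measurements:

    ```python
    import math

    # Synthetic noise-free amplitude decay A(t) = A0 * exp(-b * t).
    A0, b = 20.0, 0.15            # initial amplitude (deg), damping coefficient
    t = [i * 0.5 for i in range(40)]
    amp = [A0 * math.exp(-b * ti) for ti in t]

    # Ordinary least squares on (t, log A) recovers the viscous damping
    # coefficient b as the negative slope.
    n = len(t)
    y = [math.log(a) for a in amp]
    sx, sy = sum(t), sum(y)
    sxx = sum(xi * xi for xi in t)
    sxy = sum(xi * yi for xi, yi in zip(t, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b_fit = -slope                # recovers b = 0.15 on noise-free data
    ```
    
    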

  19. The multiple complex exponential model and its application to EEG analysis

    NASA Astrophysics Data System (ADS)

    Chen, Dao-Mu; Petzold, J.

    The paper presents a novel approach to the analysis of the EEG signal, which is based on a multiple complex exponential (MCE) model. Parameters of the model are estimated using a nonharmonic Fourier expansion algorithm. The central idea of the algorithm is outlined, and the results, estimated on the basis of simulated data, are presented and compared with those obtained by the conventional methods of signal analysis. Preliminary work on various application possibilities of the MCE model in EEG data analysis is described. It is shown that the parameters of the MCE model reflect the essential information contained in an EEG segment. These parameters characterize the EEG signal in a more objective way because they are closer to the recent supposition of the nonlinear character of the brain's dynamic behavior.

  20. Disentangling inhibition-based and retrieval-based aftereffects of distractors: Cognitive versus motor processes.

    PubMed

    Singh, Tarini; Laub, Ruth; Burgard, Jan Pablo; Frings, Christian

    2018-05-01

    Selective attention refers to the ability to selectively act upon relevant information at the expense of irrelevant information. Yet, in many experimental tasks, what happens to the representation of the irrelevant information is still debated. Typically, 2 approaches to distractor processing have been suggested, namely distractor inhibition and distractor-based retrieval. However, it is also typical that both processes are hard to disentangle. For instance, in the negative priming literature (for a review Frings, Schneider, & Fox, 2015) this has been a continuous debate since the early 1980s. In the present study, we attempted to prove that both processes exist, but that they reflect distractor processing at different levels of representation. Distractor inhibition impacts stimulus representation, whereas distractor-based retrieval impacts mainly motor processes. We investigated both processes in a distractor-priming task, which enables an independent measurement of both processes. For our argument that both processes impact different levels of distractor representation, we estimated the exponential parameter (τ) and Gaussian components (μ, σ) of the exponential Gaussian reaction-time (RT) distribution, which have previously been used to independently test the effects of cognitive and motor processes (e.g., Moutsopoulou & Waszak, 2012). The distractor-based retrieval effect was evident for the Gaussian component, which is typically discussed as reflecting motor processes, but not for the exponential parameter, whereas the inhibition component was evident for the exponential parameter, which is typically discussed as reflecting cognitive processes, but not for the Gaussian parameter. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
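    The ex-Gaussian decomposition used above models a reaction time as the sum of a Gaussian component (μ, σ) and an exponential component (τ), which is trivial to simulate. The parameter values here are hypothetical, purely for illustration:

    ```python
    import random

    # An ex-Gaussian RT is a Gaussian draw plus an exponential draw.
    # mu/sigma are often read as motor/peripheral processes and tau as
    # the slow cognitive tail; values below are made up.
    rng = random.Random(7)
    mu, sigma, tau = 450.0, 40.0, 100.0     # milliseconds, hypothetical
    rts = [rng.gauss(mu, sigma) + rng.expovariate(1.0 / tau)
           for _ in range(50_000)]

    mean_rt = sum(rts) / len(rts)
    # The ex-Gaussian mean is mu + tau = 550 ms.
    ```
    
    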

  1. Quantitative differentiation of breast lesions at 3T diffusion-weighted imaging (DWI) using the ratio of distributed diffusion coefficient (DDC).

    PubMed

    Ertas, Gokhan; Onaygil, Can; Akin, Yasin; Kaya, Handan; Aribal, Erkin

    2016-12-01

    To investigate the accuracy of diffusion coefficients and diffusion coefficient ratios of breast lesions and of glandular breast tissue from mono- and stretched-exponential models for quantitative diagnosis in diffusion-weighted magnetic resonance imaging (MRI). We analyzed pathologically confirmed 170 lesions (85 benign and 85 malignant) imaged using a 3.0T MR scanner. Small regions of interest (ROIs) focusing on the highest signal intensity for lesions and also for glandular tissue of the contralateral breast were obtained. Apparent diffusion coefficient (ADC) and distributed diffusion coefficient (DDC) were estimated by performing nonlinear fittings using mono- and stretched-exponential models, respectively. Coefficient ratios were calculated by dividing the lesion coefficient by the glandular tissue coefficient. The stretched exponential model provides significantly better fits than the monoexponential model (P < 0.001): 65% of the better fits for glandular tissue and 71% of the better fits for lesion. High correlation was found in diffusion coefficients (0.99-0.81) and coefficient ratios (0.94) between the models. The highest diagnostic accuracy was found by the DDC ratio (area under the curve [AUC] = 0.93) when compared with lesion DDC, ADC ratio, and lesion ADC (AUC = 0.91, 0.90, 0.90) but with no statistically significant difference (P > 0.05). At optimal thresholds, the DDC ratio achieves 93% sensitivity, 80% specificity, and 87% overall diagnostic accuracy, while the ADC ratio leads to 89% sensitivity, 78% specificity, and 83% overall diagnostic accuracy. The stretched exponential model fits better with signal intensity measurements from both lesion and glandular tissue ROIs. Although the DDC ratio estimated by using the model shows a higher diagnostic accuracy than the ADC ratio, lesion DDC, and ADC, it is not statistically significant. J. Magn. Reson. Imaging 2016;44:1633-1641. © 2016 International Society for Magnetic Resonance in Medicine.
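    The two signal models compared in the abstract take standard DWI forms: monoexponential S(b) = S0·exp(−b·ADC) and stretched-exponential S(b) = S0·exp(−(b·DDC)^α). A minimal sketch with illustrative parameter values (not the study's fitted coefficients):

    ```python
    import math

    def mono(b, S0, ADC):
        # Monoexponential DWI signal model.
        return S0 * math.exp(-b * ADC)

    def stretched(b, S0, DDC, alpha):
        # Stretched-exponential DWI signal model.
        return S0 * math.exp(-((b * DDC) ** alpha))

    # With alpha = 1 the stretched model reduces to the monoexponential one.
    same = abs(stretched(800, 1.0, 1.2e-3, 1.0) - mono(800, 1.0, 1.2e-3))

    # The diagnostic quantity in the study is a ratio: lesion DDC divided
    # by the contralateral glandular-tissue DDC (values here are made up).
    ddc_ratio = 0.9e-3 / 1.8e-3   # -> 0.5
    ```
    
    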

  2. Early detection of emerging forest disease using dispersal estimation and ecological niche modeling.

    PubMed

    Meentemeyer, Ross K; Anacker, Brian L; Mark, Walter; Rizzo, David M

    2008-03-01

    Distinguishing the manner in which dispersal limitation and niche requirements control the spread of invasive pathogens is important for prediction and early detection of disease outbreaks. Here, we use niche modeling augmented by dispersal estimation to examine the degree to which local habitat conditions vs. force of infection predict invasion of Phytophthora ramorum, the causal agent of the emerging infectious tree disease sudden oak death. We sampled 890 field plots for the presence of P. ramorum over a three-year period (2003-2005) across a range of host and abiotic conditions with variable proximities to known infections in California, USA. We developed and validated generalized linear models of invasion probability to analyze the relative predictive power of 12 niche variables and a negative exponential dispersal kernel estimated by likelihood profiling. Models were developed incrementally each year (2003, 2003-2004, 2003-2005) to examine annual variability in model parameters and to create realistic scenarios for using models to predict future infections and to guide early-detection sampling. Overall, 78 new infections were observed up to 33.5 km from the nearest known site of infection, with slightly increasing rates of prevalence across time windows (2003, 6.5%; 2003-2004, 7.1%; 2003-2005, 9.6%). The pathogen was not detected in many field plots that contained susceptible host vegetation. The generalized linear modeling indicated that the probability of invasion is limited by both dispersal and niche constraints. Probability of invasion was positively related to precipitation and temperature in the wet season and the presence of the inoculum-producing foliar host Umbellularia californica and decreased exponentially with distance to inoculum sources. 
Models that incorporated niche and dispersal parameters best predicted the locations of new infections, with accuracies ranging from 0.86 to 0.90, suggesting that the modeling approach can be used to forecast locations of disease spread. Application of the combined niche plus dispersal models in a geographic information system predicted the presence of P. ramorum across approximately 8228 km2 of California's 84785 km2 (9.7%) of land area with susceptible host species. This research illustrates how probabilistic modeling can be used to analyze the relative roles of niche and dispersal limitation in controlling the distribution of invasive pathogens.
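    The model structure described above, a logistic (generalized linear) model on niche covariates whose linear predictor also carries a negative exponential dispersal kernel, can be sketched as follows. All coefficient values and the kernel range are hypothetical placeholders, not the paper's fitted estimates:

    ```python
    import math

    def invasion_probability(precip, temp, host_present, dist_km,
                             b0=-4.0, b_p=0.002, b_t=0.05, b_h=1.5,
                             b_d=3.0, alpha=5.0):
        # Negative exponential dispersal kernel: force of infection
        # decays as exp(-d / alpha) with distance d to known inoculum.
        force = math.exp(-dist_km / alpha)
        eta = b0 + b_p * precip + b_t * temp + b_h * host_present + b_d * force
        return 1.0 / (1.0 + math.exp(-eta))   # logistic link

    near = invasion_probability(800, 12, 1, 0.5)    # plot near an infection
    far = invasion_probability(800, 12, 1, 30.0)    # same niche, far away
    ```
    
    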

  3. Accounting for inherent variability of growth in microbial risk assessment.

    PubMed

    Marks, H M; Coleman, M E

    2005-04-15

    Risk assessments of pathogens need to account for the growth of small numbers of cells under varying conditions. In order to determine the possible risks when cell numbers are small, stochastic models of growth are needed that capture the distribution of the number of cells over replicate trials of the same scenario or environmental conditions. This paper provides a simple stochastic growth model, accounting only for inherent cell-growth variability and assuming constant growth kinetic parameters, for an initially small number of cells assumed to be transforming from a stationary to an exponential phase. Two basic sets of microbial assumptions are considered: serial, where cells are assumed to transform through a lag phase before entering the exponential phase of growth; and parallel, where lag and exponential phases are assumed to develop in parallel. The model is based on first determining the distribution of the time when growth commences, and then modelling the conditional distribution of the number of cells. For the latter, a Weibull distribution is found to provide a simple approximation to the conditional distribution of relative growth, so the model developed in this paper can easily be implemented in risk assessments using commercial software packages.
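    Since the abstract's conditional distribution of relative growth is approximated by a Weibull, it can be simulated directly with the standard library. The scale and shape parameters below are hypothetical, purely for illustration:

    ```python
    import math
    import random

    # Sample relative growth from a Weibull(scale, shape) distribution,
    # as suggested by the abstract's approximation. Parameters are made up.
    rng = random.Random(42)
    scale, shape = 2.0, 1.5
    relative_growth = [rng.weibullvariate(scale, shape) for _ in range(20_000)]

    sample_mean = sum(relative_growth) / len(relative_growth)
    weibull_mean = scale * math.gamma(1 + 1 / shape)   # theoretical mean
    ```
    
    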

  4. Infinite-disorder critical points of models with stretched exponential interactions

    NASA Astrophysics Data System (ADS)

    Juhász, Róbert

    2014-09-01

    We show that an interaction decaying as a stretched exponential function of distance, J(l) ~ e^(-c l^a), is able to alter the universality class of short-range systems having an infinite-disorder critical point. To do so, we study the low-energy properties of the random transverse-field Ising chain with the above form of interaction by a strong-disorder renormalization group (SDRG) approach. We find that the critical behavior of the model is controlled by infinite-disorder fixed points different from those of the short-range model if 0 < a < 1/2. In this range, the critical exponents calculated analytically by a simplified SDRG scheme are found to vary with a, while, for a > 1/2, the model belongs to the same universality class as its short-range variant. The entanglement entropy of a block of size L increases logarithmically with L at the critical point but, unlike the short-range model, the prefactor is dependent on disorder in the range 0 < a < 1/2. Numerical results obtained by an improved SDRG scheme are found to be in agreement with the analytical predictions. The same fixed points are expected to describe the critical behavior of, among others, the random contact process with stretched exponentially decaying activation rates.

  5. Global exponential stability for switched memristive neural networks with time-varying delays.

    PubMed

    Xin, Youming; Li, Yuxia; Cheng, Zunshui; Huang, Xia

    2016-08-01

    This paper considers the problem of exponential stability for switched memristive neural networks (MNNs) with time-varying delays. Different from most of the existing papers, we model a memristor as a continuous system, and view switched MNNs as switched neural networks with uncertain time-varying parameters. Based on average dwell time technique, mode-dependent average dwell time technique and multiple Lyapunov-Krasovskii functional approach, two conditions are derived to design the switching signal and guarantee the exponential stability of the considered neural networks, which are delay-dependent and formulated by linear matrix inequalities (LMIs). Finally, the effectiveness of the theoretical results is demonstrated by two numerical examples. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. An optimized Nash nonlinear grey Bernoulli model based on particle swarm optimization and its application in prediction for the incidence of Hepatitis B in Xinjiang, China.

    PubMed

    Zhang, Liping; Zheng, Yanling; Wang, Kai; Zhang, Xueliang; Zheng, Yujian

    2014-06-01

    In this paper, by using a particle swarm optimization algorithm to solve the optimal parameter estimation problem, an improved Nash nonlinear grey Bernoulli model termed PSO-NNGBM(1,1) is proposed. To test the forecasting performance, the optimized model is applied for forecasting the incidence of hepatitis B in Xinjiang, China. Four models, traditional GM(1,1), grey Verhulst model (GVM), original nonlinear grey Bernoulli model (NGBM(1,1)) and Holt-Winters exponential smoothing method, are also established for comparison with the proposed model under the criteria of mean absolute percentage error and root mean square percent error. The prediction results show that the optimized NNGBM(1,1) model is more accurate and performs better than the traditional GM(1,1), GVM, NGBM(1,1) and Holt-Winters exponential smoothing method. Copyright © 2014. Published by Elsevier Ltd.

  7. Forecasting Inflow and Outflow of Money Currency in East Java Using a Hybrid Exponential Smoothing and Calendar Variation Model

    NASA Astrophysics Data System (ADS)

    Susanti, Ana; Suhartono; Jati Setyadi, Hario; Taruk, Medi; Haviluddin; Pamilih Widagdo, Putut

    2018-03-01

    Money currency availability in Bank Indonesia can be examined through the inflow and outflow of money currency. The objective of this research is to forecast the inflow and outflow of money currency in each Representative Office (RO) of BI in East Java by using a hybrid of exponential smoothing, based on the state-space approach, and a calendar variation model. The hybrid model is expected to generate more accurate forecasts. Two studies are discussed in this research. The first studies the hybrid model using simulated data containing trend, seasonal, and calendar variation patterns. The second applies the hybrid model to forecasting the inflow and outflow of money currency in each RO of BI in East Java. The first results indicate that the exponential smoothing model cannot capture the calendar variation pattern, yielding RMSE values ten times the standard deviation of the errors. The second results indicate that the hybrid model can capture the trend, seasonal, and calendar variation patterns, yielding RMSE values approaching the standard deviation of the errors. In the applied study, the hybrid model gives more accurate forecasts for five variables: the inflow of money currency in Surabaya, Malang, and Jember, and the outflow of money currency in Surabaya and Kediri. The time series regression model performs better for three variables: the outflow of money currency in Malang and Jember, and the inflow of money currency in Kediri.
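    Simple exponential smoothing, the basic building block of the state-space smoothing models discussed above (the calendar-variation component is omitted here), can be sketched in a few lines. The series and smoothing constant are synthetic examples:

    ```python
    # Simple exponential smoothing: the level is an exponentially
    # weighted average of past observations.
    def exp_smooth(series, alpha=0.3):
        level = series[0]
        fitted = [level]
        for y in series[1:]:
            level = alpha * y + (1 - alpha) * level
            fitted.append(level)
        return fitted

    series = [10, 12, 11, 13, 15, 14, 16]   # made-up data
    smoothed = exp_smooth(series)
    final_level = smoothed[-1]
    ```
    
    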

  8. Exponentially growing tearing modes in Rijnhuizen Tokamak Project plasmas.

    PubMed

    Salzedas, F; Schüller, F C; Oomens, A A M

    2002-02-18

    The local measurement of the island width w, around the resonant surface, allowed a direct test of the extended Rutherford model [P. H. Rutherford, PPPL Report-2277 (1985)], describing the evolution of radiation-induced tearing modes prior to disruptions of tokamak plasmas. It is found that this model accounts very well for the observed exponential growth and supports radiation losses as being the main driving mechanism. The model implies that the effective perpendicular electron heat conductivity in the island is smaller than the global one. Comparison of the local measurements of w with the perturbed magnetic field B showed that w ∝ B^(1/2) was valid for widths up to 18% of the minor radius.

  9. Exponential vanishing of the ground-state gap of the quantum random energy model via adiabatic quantum computing

    NASA Astrophysics Data System (ADS)

    Adame, J.; Warzel, S.

    2015-11-01

    In this note, we use ideas of Farhi et al. [Int. J. Quantum. Inf. 6, 503 (2008) and Quantum Inf. Comput. 11, 840 (2011)] who link a lower bound on the run time of their quantum adiabatic search algorithm to an upper bound on the energy gap above the ground-state of the generators of this algorithm. We apply these ideas to the quantum random energy model (QREM). Our main result is a simple proof of the conjectured exponential vanishing of the energy gap of the QREM.

  10. Exponential vanishing of the ground-state gap of the quantum random energy model via adiabatic quantum computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adame, J.; Warzel, S., E-mail: warzel@ma.tum.de

    In this note, we use ideas of Farhi et al. [Int. J. Quantum. Inf. 6, 503 (2008) and Quantum Inf. Comput. 11, 840 (2011)] who link a lower bound on the run time of their quantum adiabatic search algorithm to an upper bound on the energy gap above the ground-state of the generators of this algorithm. We apply these ideas to the quantum random energy model (QREM). Our main result is a simple proof of the conjectured exponential vanishing of the energy gap of the QREM.

  11. Disentangling the f(R)-duality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Broy, Benedict J.; Pedro, Francisco G.; Westphal, Alexander

    2015-03-16

    Motivated by UV realisations of Starobinsky-like inflation models, we study generic exponential plateau-like potentials to understand whether an exact f(R)-formulation may still be obtained when the asymptotic shift-symmetry of the potential is broken for larger field values. Potentials which break the shift symmetry with rising exponentials at large field values only allow for corresponding f(R)-descriptions with a leading order term R^n with 1

  12. Disentangling the f(R)-duality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Broy, Benedict J.; Westphal, Alexander; Pedro, Francisco G., E-mail: benedict.broy@desy.de, E-mail: francisco.pedro@desy.de, E-mail: alexander.westphal@desy.de

    2015-03-01

    Motivated by UV realisations of Starobinsky-like inflation models, we study generic exponential plateau-like potentials to understand whether an exact f(R)-formulation may still be obtained when the asymptotic shift-symmetry of the potential is broken for larger field values. Potentials which break the shift symmetry with rising exponentials at large field values only allow for corresponding f(R)-descriptions with a leading order term R^n with 1

  13. Exponentially Stabilizing Robot Control Laws

    NASA Technical Reports Server (NTRS)

    Wen, John T.; Bayard, David S.

    1990-01-01

    New class of exponentially stabilizing laws for joint-level control of robotic manipulators introduced. In case of set-point control, approach offers simplicity of proportional/derivative control architecture. In case of tracking control, approach provides several important alternatives to computed-torque method in terms of computational requirements and convergence. New control laws modified in simple fashion to obtain asymptotically stable adaptive control when robot model and/or payload mass properties unknown.

  14. Testing predictions of the quantum landscape multiverse 2: the exponential inflationary potential

    NASA Astrophysics Data System (ADS)

    Di Valentino, Eleonora; Mersini-Houghton, Laura

    2017-03-01

    The 2015 Planck data release tightened the region of the allowed inflationary models. Inflationary models with convex potentials have now been ruled out since they produce a large tensor to scalar ratio. Meanwhile the same data offers interesting hints on possible deviations from the standard picture of CMB perturbations. Here we revisit the predictions of the theory of the origin of the universe from the landscape multiverse for the case of exponential inflation, for two reasons: firstly to check the status of the anomalies associated with this theory, in the light of the recent Planck data; secondly, to search for a counterexample whereby new physics modifications may bring convex inflationary potentials, thought to have been ruled out, back into the region of potentials allowed by data. Using the exponential inflation as an example of convex potentials, we find that the answer to both tests is positive: modifications to the perturbation spectrum and to the Newtonian potential of the universe originating from the quantum entanglement, bring the exponential potential, back within the allowed region of current data; and, the series of anomalies previously predicted in this theory, is still in good agreement with current data. Hence our finding for this convex potential comes at the price of allowing for additional thermal relic particles, equivalently dark radiation, in the early universe.

  15. Persistence of exponential bed thickness distributions in the stratigraphic record: Experiments and theory

    NASA Astrophysics Data System (ADS)

    Straub, K. M.; Ganti, V. K.; Paola, C.; Foufoula-Georgiou, E.

    2010-12-01

    Stratigraphy preserved in alluvial basins houses the most complete record of information necessary to reconstruct past environmental conditions. Indeed, the character of the sedimentary record is inextricably related to the surface processes that formed it. In this presentation we explore how the signals of surface processes are recorded in stratigraphy through the use of physical and numerical experiments. We focus on linking surface processes to stratigraphy in 1D by relating the probability distributions of the processes that govern the evolution of depositional systems to the probability distribution of preserved bed thicknesses. In this study we define a bed as a package of sediment bounded above and below by erosional surfaces. In a companion presentation we document heavy-tailed statistics of erosion and deposition from high-resolution temporal elevation data recorded during a controlled physical experiment. However, the heavy tails in the magnitudes of erosional and depositional events are not preserved in the experimental stratigraphy. Similar to many bed thickness distributions reported in field studies, we find that an exponential distribution adequately describes the thicknesses of beds preserved in our experiment. We explore the generation of exponential bed thickness distributions from heavy-tailed surface statistics using 1D numerical models. These models indicate that when the full distribution of elevation fluctuations (both erosional and depositional events) is symmetrical, the resulting distribution of bed thicknesses is exponential in form. Finally, we illustrate that a predictable relationship exists between the coefficient of variation of surface elevation fluctuations and the scale parameter of the resulting exponential distribution of bed thicknesses.
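
The 1D mechanism described above (symmetric elevation fluctuations plus stratigraphic filtering yielding exponential-like bed thicknesses) can be sketched numerically. This is a minimal, illustrative simulation, not the authors' model: the drift, fluctuation scale, and bed-extraction rule are all assumptions.

```python
import random

random.seed(42)

def preserved_bed_thicknesses(n_steps=20000, drift=0.05, scale=1.0):
    """Random-walk surface elevation with symmetric (Laplace-like)
    fluctuations plus slow net aggradation, then a stratigraphic filter:
    a horizon survives only if never eroded below later."""
    elev = [0.0]
    for _ in range(n_steps):
        step = drift + random.choice((1, -1)) * random.expovariate(1.0 / scale)
        elev.append(elev[-1] + step)
    # preserved column = running minimum taken from the end of the record
    preserved = elev[:]
    for i in range(len(preserved) - 2, -1, -1):
        preserved[i] = min(preserved[i], preserved[i + 1])
    # beds = contiguous preserved packages separated by hiatuses (flat spans)
    beds, current = [], 0.0
    for lo, hi in zip(preserved, preserved[1:]):
        inc = hi - lo
        if inc > 0:
            current += inc
        elif current > 0:
            beds.append(current)
            current = 0.0
    if current > 0:
        beds.append(current)
    return beds

beds = preserved_bed_thicknesses()
mean_bed = sum(beds) / len(beds)
cv = (sum((b - mean_bed) ** 2 for b in beds) / len(beds)) ** 0.5 / mean_bed
```

For symmetric fluctuations the coefficient of variation of bed thickness should be near 1, the signature of an exponential distribution.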

  16. Kinetics of lipid-nanoparticle-mediated intracellular mRNA delivery and function

    NASA Astrophysics Data System (ADS)

    Zhdanov, Vladimir P.

    2017-10-01

    mRNA delivery into cells forms the basis for one of the new and promising ways to treat various diseases. Among suitable carriers, lipid nanoparticles (LNPs) with a size of about 100 nm are now often employed. Despite high current interest in this area, the understanding of the basic details of LNP-mediated mRNA delivery and function is limited. To clarify the kinetics of mRNA release from LNPs, the author uses three generic models implying (i) exponential, (ii) diffusion-controlled, and (iii) detachment-controlled kinetic regimes, respectively. Despite the distinct differences in these kinetics, the associated transient kinetics of mRNA translation to the corresponding protein and its degradation are shown to be not too sensitive to the details of the mRNA delivery by LNPs (or other nanocarriers). In addition, the author illustrates how this protein may temporarily influence the expression of one gene or a few equivalent genes. The analysis includes positive or negative regulation of the gene transcription via the attachment of the protein without or with positive or negative feedback in the gene expression. Stable, bistable, and oscillatory schemes have been scrutinized in this context.
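
The exponential-release regime (i) and its downstream translation/degradation kinetics can be illustrated with a simple Euler integration. All rate constants below are hypothetical, chosen only to show the transient protein response the abstract describes, not values from the paper.

```python
import math

def protein_course(k_rel=0.5, k_tl=2.0, k_m=0.3, k_p=0.1,
                   m_lnp0=1.0, dt=0.01, n_steps=4000):
    """Euler integration of a minimal LNP-delivery cascade: LNP-borne mRNA
    is released exponentially (regime (i)), free mRNA is translated and
    degraded, and the protein is produced and degraded in turn."""
    m_free, protein = 0.0, 0.0
    trace = [protein]
    for step in range(n_steps):
        t = step * dt
        m_lnp = m_lnp0 * math.exp(-k_rel * t)            # exponential release
        m_free += (k_rel * m_lnp - k_m * m_free) * dt    # free cytosolic mRNA
        protein += (k_tl * m_free - k_p * protein) * dt  # translation - decay
        trace.append(protein)
    return trace

trace = protein_course()
peak = max(trace)
```

The protein level rises, peaks, and decays, the transient behavior whose insensitivity to release-regime details the paper analyzes.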

  17. Probing Gamma-ray Emission of Geminga and Vela with Non-stationary Models

    NASA Astrophysics Data System (ADS)

    Chai, Yating; Cheng, Kwong-Sang; Takata, Jumpei

    2016-06-01

    It is generally believed that the high-energy emission from isolated pulsars is produced by relativistic electrons/positrons accelerated in outer magnetospheric accelerators (outergaps) via the curvature radiation mechanism, which has a simple exponential cut-off spectrum. However, many gamma-ray pulsars detected by the Fermi LAT (Large Area Telescope) cannot be fitted by a simple exponential cut-off spectrum; instead, a sub-exponential cut-off is more appropriate. It is proposed that realistic outergaps are non-stationary, and that the observed spectrum is a superposition of different stationary states that are controlled by the currents injected from the inner and outer boundaries. The Vela and Geminga pulsars have the largest fluxes among all observed targets, which allows us to carry out very detailed phase-resolved spectral analysis. We divided the Vela and Geminga pulsars into 19 (the off-pulse of Vela was not included) and 33 phase bins, respectively. We find that most phase-resolved spectra still cannot be fitted by a simple exponential spectrum: in fact, a sub-exponential spectrum is necessary. We conclude that non-stationary states exist even down to the very fine phase bins.
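
The distinction between a simple exponential and a sub-exponential cut-off can be made concrete with the standard power-law-with-cutoff form dN/dE ∝ E^(-Γ) exp(-(E/Ec)^b), where b = 1 is the simple cut-off and b < 1 is sub-exponential. The parameter values below are illustrative, not fitted Vela or Geminga values.

```python
import math

def cutoff_spectrum(e, e_cut=3.0, gamma=1.5, b=1.0):
    """Power law with a (sub-)exponential cutoff:
    dN/dE proportional to E**(-gamma) * exp(-(E/e_cut)**b).
    b = 1 gives the simple exponential cutoff; b < 1 rolls off more
    gently, the sub-exponential shape favored for many LAT pulsars."""
    return e ** (-gamma) * math.exp(-(e / e_cut) ** b)

# Well above the cutoff, the sub-exponential form (b = 0.5) retains far
# more flux than the simple exponential (b = 1):
ratio = cutoff_spectrum(30.0, b=0.5) / cutoff_spectrum(30.0, b=1.0)
```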

  18. Anomalous T2 relaxation in normal and degraded cartilage.

    PubMed

    Reiter, David A; Magin, Richard L; Li, Weiguo; Trujillo, Juan J; Pilar Velasco, M; Spencer, Richard G

    2016-09-01

    To compare the ordinary monoexponential model with three anomalous relaxation models-the stretched Mittag-Leffler, stretched exponential, and biexponential functions-using both simulated and experimental cartilage relaxation data. Monte Carlo simulations were used to examine both the ability to identify a given model under high signal-to-noise ratio (SNR) conditions and the accuracy and precision of parameter estimates under the more modest SNR that would be encountered clinically. Experimental transverse relaxation data were analyzed from normal and enzymatically degraded cartilage samples under high SNR and rapid echo sampling to compare each model. Both simulation and experimental results showed improvement in signal representation with the anomalous relaxation models. The stretched exponential model consistently showed the lowest mean squared error in experimental data and closely represents the signal decay over multiple decades of the decay time (e.g., 1-10 ms, 10-100 ms, and >100 ms). The stretched exponential parameter αse showed an inverse correlation with biochemically derived cartilage proteoglycan content. Experimental results obtained at high field suggest potential application of αse as a measure of matrix integrity. Simulations reflecting more clinical imaging conditions indicate the ability to robustly estimate αse and distinguish between normal and degraded tissue, highlighting its potential as a biomarker for human studies. Magn Reson Med 76:953-962, 2016. © 2015 Wiley Periodicals, Inc.
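
A sketch of the stretched exponential signal model referenced above, S(TE) = S0 · exp(-(TE/T2)^α), where α = 1 recovers the ordinary monoexponential. The parameter values are illustrative, not the paper's fitted cartilage values.

```python
import math

def stretched_exp(te, s0, t2, alpha):
    """Stretched-exponential transverse relaxation:
    S(TE) = s0 * exp(-(TE/t2)**alpha). alpha < 1 broadens the decay
    across multiple time decades; alpha = 1 is monoexponential."""
    return s0 * math.exp(-((te / t2) ** alpha))

# With alpha < 1 the signal retains more amplitude at long TE than a
# monoexponential with the same nominal T2:
long_te_stretched = stretched_exp(100.0, 1.0, 20.0, 0.7)
long_te_mono = stretched_exp(100.0, 1.0, 20.0, 1.0)
```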

  19. Modeling the degradation kinetics of ascorbic acid.

    PubMed

    Peleg, Micha; Normand, Mark D; Dixon, William R; Goulette, Timothy R

    2018-06-13

    Most published reports on ascorbic acid (AA) degradation during food storage and heat preservation suggest that it follows first-order kinetics. Deviations from this pattern include Weibullian decay and an exponential drop approaching finite nonzero retention. Almost invariably, the degradation rate constant's temperature dependence followed the Arrhenius equation, and hence the simpler exponential model too. A formula and a freely downloadable interactive Wolfram Demonstration to convert the Arrhenius model's energy of activation, Ea, to the exponential model's c parameter, or vice versa, are provided. The AA's isothermal and non-isothermal degradation can be simulated with freely downloadable interactive Wolfram Demonstrations in which the model's parameters can be entered and modified by moving sliders on the screen. Where the degradation is known a priori to follow first- or other fixed-order kinetics, one can use the endpoints method, and in principle the successive-points method too, to estimate the reaction's kinetic parameters from considerably fewer AA concentration determinations than in the traditional manner. Freeware to do the calculations by either method has recently been made available on the Internet. Once obtained in this way, the kinetic parameters can be used to reconstruct the entire degradation curves and predict those at different temperature profiles, isothermal or dynamic. Comparison of the predicted concentration ratios with experimental ones offers a way to validate or refute the kinetic model and the assumptions on which it is based.
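
The Arrhenius-to-exponential-model conversion mentioned above can be sketched as follows. The expression c = Ea/(R·T_ref²) is the standard result of matching d(ln k)/dT of the two models at a reference temperature; the paper's exact formula may differ, and all numbers here are illustrative.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_k(t_k, a, ea):
    """Arrhenius rate constant k(T) = A * exp(-Ea / (R*T))."""
    return a * math.exp(-ea / (R * t_k))

def c_from_ea(ea, t_ref):
    """Convert Arrhenius Ea to the exponential model's c parameter by
    matching the slope of ln k at T_ref: c = Ea / (R * T_ref**2)."""
    return ea / (R * t_ref ** 2)

# Illustrative numbers: Ea = 80 kJ/mol near T_ref = 350 K.
c = c_from_ea(80_000.0, 350.0)
k_ref = arrhenius_k(350.0, 1e9, 80_000.0)
# Exponential model k = k_ref * exp(c*(T - T_ref)) tracks Arrhenius
# closely over a narrow temperature range:
k_exp = k_ref * math.exp(c * (360.0 - 350.0))
k_arr = arrhenius_k(360.0, 1e9, 80_000.0)
```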

  20. Probabilistic properties of injection induced seismicity - implications for the seismic hazard analysis

    NASA Astrophysics Data System (ADS)

    Lasocki, Stanislaw; Urban, Pawel; Kwiatek, Grzegorz; Martinez-Garzón, Patricia

    2017-04-01

    Injection induced seismicity (IIS) is an undesired dynamic rockmass response to massive fluid injections. This includes reactions, among others, to hydro-fracturing for shale gas exploitation. Complexity and changeability of the technological factors that induce IIS may result in significant deviations of the observed distributions of seismic process parameters from the models which perform well for natural, tectonic seismic processes. Classic formulations of probabilistic seismic hazard analysis in natural seismicity assume the seismic marked point process to be a stationary Poisson process whose marks, the magnitudes, are governed by the exponential distribution born of the Gutenberg-Richter relation. It is well known that the use of an inappropriate earthquake occurrence model and/or an inappropriate magnitude distribution model leads to significant systematic errors in hazard estimates. It is therefore of paramount importance to check whether these assumptions on the seismic process, commonly used for natural seismicity, can be safely used in IIS hazard problems. Seismicity accompanying shale gas operations is widely studied in the framework of the project "Shale Gas Exploration and Exploitation Induced Risks" (SHEER). Here we present results of SHEER project investigations of such seismicity from Oklahoma and of a proxy of such seismicity - IIS data from The Geysers geothermal field. We attempt to answer the following questions: • Do IIS earthquakes follow the Gutenberg-Richter distribution law, so that the magnitude distribution can be modelled by an exponential distribution? • Is the occurrence process of IIS earthquakes Poissonian? Is it segmentally Poissonian? If yes, how are these segments linked to cycles of technological operations? Statistical tests indicate that the Gutenberg-Richter exponential distribution model for magnitude is, in general, inappropriate. The magnitude distribution can be complex, multimodal, with no ready-to-use functional model. In this connection, we recommend using non-parametric kernel estimators of the magnitude distribution in hazard analyses. The earthquake occurrence process of IIS is not a Poisson process. When earthquake occurrences are influenced by a multitude of inducing factors, the interevent time distribution can be modelled by the Weibull distribution, supporting a negative ageing property of the process. When earthquake occurrences are due to a specific injection activity, the earthquake rate directly depends on the injection rate and responds immediately to changes of the injection rate. Furthermore, this response is not limited to correlated variations of the seismic activity; it also concerns significant changes of the shape of the interevent time distribution. Unlike the event rate, the shape of the magnitude distribution does not exhibit correlation with the injection rate. This work was supported within SHEER: "Shale Gas Exploration and Exploitation Induced Risks" project funded from the Horizon 2020 - R&I Framework Programme, call H2020-LCE 16-2014-1, and within statutory activities No3841/E-41/S/2016 of the Ministry of Science and Higher Education of Poland.
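
Whether magnitudes follow the Gutenberg-Richter (exponential) distribution is usually examined through the b value. A hedged sketch using Aki's maximum-likelihood estimator on a synthetic exponential catalog; the catalog parameters are illustrative, not Oklahoma or Geysers values.

```python
import math
import random

random.seed(7)

def aki_b_value(mags, m_c):
    """Aki (1965) maximum-likelihood b value: if magnitudes above the
    completeness level m_c are exponentially distributed,
    b = log10(e) / (mean(M) - m_c)."""
    excess = [m - m_c for m in mags if m >= m_c]
    return math.log10(math.e) / (sum(excess) / len(excess))

# Synthetic Gutenberg-Richter catalog: M - m_c ~ Exponential(beta),
# with beta = b * ln(10).
b_true, m_c = 1.0, 1.5
beta = b_true * math.log(10)
catalog = [m_c + random.expovariate(beta) for _ in range(20000)]
b_hat = aki_b_value(catalog, m_c)
```

For real IIS catalogs the abstract's point is precisely that this exponential assumption can fail, which is what goodness-of-fit testing against such a model would reveal.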

  1. Comparative Analyses of Creep Models of a Solid Propellant

    NASA Astrophysics Data System (ADS)

    Zhang, J. B.; Lu, B. J.; Gong, S. F.; Zhao, S. P.

    2018-05-01

    Creep experiments on solid propellant samples under five different stresses were carried out at 293.15 K and 323.15 K. To express the creep properties of this solid propellant, five viscoelastic models were considered: the three-parameter solid, three-parameter fluid, four-parameter solid, four-parameter fluid, and exponential models. On the basis of least-squares fitting of all model parameters at the different stresses, a nonlinear fitting procedure was used to analyze the creep properties. The study shows that the four-parameter solid model best expresses the creep behavior of the propellant samples. However, the three-parameter solid and exponential models cannot reproduce the initial value of the creep process well, while the modified four-parameter models are found to agree well with the acceleration characteristics of the creep process.
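
As an illustration of this model class, the four-parameter (Burgers) fluid has the closed-form creep compliance J(t) = 1/E1 + t/η1 + (1/E2)(1 - exp(-E2·t/η2)); the four-parameter solid the paper prefers is analogous but with bounded long-time compliance. Parameter values below are illustrative only, not fitted propellant values.

```python
import math

def burgers_creep(t, e1, eta1, e2, eta2):
    """Creep compliance of the four-parameter (Burgers) fluid under
    constant stress: instantaneous elasticity (1/e1), steady viscous
    flow (t/eta1), and delayed Kelvin-element elasticity."""
    return 1.0 / e1 + t / eta1 + (1.0 / e2) * (1.0 - math.exp(-e2 * t / eta2))

j0 = burgers_creep(0.0, 10.0, 500.0, 5.0, 50.0)      # instantaneous compliance
j_late = burgers_creep(100.0, 10.0, 500.0, 5.0, 50.0)  # delayed + viscous creep
```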

  2. Simulation and prediction of the thuringiensin abiotic degradation processes in aqueous solution by a radial basis function neural network model.

    PubMed

    Zhou, Jingwen; Xu, Zhenghong; Chen, Shouwen

    2013-04-01

    The thuringiensin abiotic degradation processes in aqueous solution under different conditions, with a pH range of 5.0-9.0 and a temperature range of 10-40°C, were systematically investigated using an exponential decay model and a radial basis function (RBF) neural network model, respectively. The half-lives of thuringiensin calculated with the exponential decay model ranged from 2.72 d to 16.19 d under the different conditions mentioned above. Furthermore, an RBF model with an accuracy goal of 0.1 and a SPREAD value of 5 was employed to model the degradation processes. The results showed that the model could simulate and predict the degradation processes well. Both the half-lives and the prediction data showed that thuringiensin is an easily degradable antibiotic, which could be an important factor in the evaluation of its safety. Copyright © 2012 Elsevier Ltd. All rights reserved.
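
The half-life/rate-constant relationship for the exponential decay model used above is t½ = ln 2 / k. A small sketch using the half-life range reported in the abstract:

```python
import math

def half_life(k):
    """Half-life of first-order (exponential) decay C(t) = C0*exp(-k*t)."""
    return math.log(2) / k

def rate_from_half_life(t_half):
    """Inverse: first-order rate constant implied by an observed half-life."""
    return math.log(2) / t_half

# The reported thuringiensin half-lives of 2.72 d and 16.19 d correspond
# to first-order rate constants of roughly 0.255/d and 0.043/d.
k_fast = rate_from_half_life(2.72)
k_slow = rate_from_half_life(16.19)
```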

  3. New and practical mathematical model of membrane fouling in an aerobic submerged membrane bioreactor.

    PubMed

    Zuthi, Mst Fazana Rahman; Guo, Wenshan; Ngo, Huu Hao; Nghiem, Duc Long; Hai, Faisal I; Xia, Siqing; Li, Jianxin; Li, Jixiang; Liu, Yi

    2017-08-01

    This study aimed to develop a practical semi-empirical mathematical model of membrane fouling that accounts for cake formation on the membrane and its pore blocking as the major processes of membrane fouling. In the developed model, the concentration of mixed liquor suspended solid is used as a lumped parameter to describe the formation of cake layer including the biofilm. The new model considers the combined effect of aeration and backwash on the foulants' detachment from the membrane. New exponential coefficients are also included in the model to describe the exponential increase of transmembrane pressure that typically occurs after the initial stage of an MBR operation. The model was validated using experimental data obtained from a lab-scale aerobic sponge-submerged membrane bioreactor (MBR), and the simulation of the model agreed well with the experimental findings. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Sodium-22 (22Na+) washout from cultured rat cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kino, M.; Nakamura, A.; Hopp, L.

    1986-10-01

    The washout of Na+ isotopes from tissues and cells is quite complex and not well defined. To gain further insight into this process, we have studied 22Na+ washout from cultured Wistar rat skin fibroblasts and vascular smooth muscle cells (VSMCs). In these preparations, 22Na+ washout is described by a general three-exponential function. The exponential factor of the fastest component (k1) and the initial exchange rate constant (kie) of cultured fibroblasts decrease in magnitude in response to incubation in K+-deficient medium or in the presence of ouabain, and increase in magnitude when the cells are incubated in a Ca++-deficient medium. As the magnitude of the kie declines (in the presence of ouabain) to the level of the exponential factor of the middle component (k2), 22Na+ washout is adequately described by a two-exponential function. When the kie is further diminished (in the presence of both ouabain and phloretin) to the range of the exponential factor of the slowest component (k3), the washout of 22Na+ is apparently monoexponential. Calculations of the cellular Na+ concentrations, based on the 22Na+ activity in the cells at the initiation of the washout experiments and the medium specific activity, agree with atomic absorption spectrometry measurements of the cellular concentration of this ion. Thus, all three components of 22Na+ washout from cultured rat cells are of cellular origin. Using the exponential parameters, compartmental analyses of two models (in parallel and in series) with three cellular Na+ pools were performed. The results indicate that, independent of the model chosen, the relative size of the largest Na+ pool is 92-93% in fibroblasts and approximately 96% in VSMCs. This pool is most likely to represent the cytosol.

  5. Time prediction of failure of a type of lamp by using a general composite hazard rate model

    NASA Astrophysics Data System (ADS)

    Riaman; Lesmana, E.; Subartini, B.; Supian, S.

    2018-03-01

    This paper discusses basic survival model estimates to obtain the average predicted value of lamp failure time. The estimate is for a parametric model, the general composite hazard rate model. The random-time-variable model used as the basis is the exponential distribution model, which has a constant hazard function. In this case, we discuss an example of survival model estimation for a composite hazard function, using an exponential model as its basis. The model is estimated by estimating its parameters through the construction of the survival function and the empirical cumulative function. The model obtained is then used to predict the average failure time for this type of lamp. By grouping the data into several intervals with the average value of failure at each interval, and then calculating the average failure time of the model on each interval, the p value obtained from the test result is 0.3296.
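
The exponential (constant-hazard) base model can be fitted to failure times by maximum likelihood, where the rate is simply the reciprocal of the mean failure time. A minimal sketch with hypothetical lamp data, not the paper's dataset:

```python
def fit_exponential_survival(failure_times):
    """Maximum-likelihood fit of the constant-hazard survival model
    S(t) = exp(-lam*t): lam = n / sum(t). Returns the hazard rate and
    the implied mean failure time 1/lam."""
    n = len(failure_times)
    lam = n / sum(failure_times)
    return lam, 1.0 / lam

# Hypothetical lamp failure times (hours):
times = [120.0, 340.0, 95.0, 410.0, 250.0]
lam, mean_time = fit_exponential_survival(times)
```

Because the hazard is constant, the fitted mean failure time equals the sample mean of the observed failure times.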

  6. Exchange bias training relaxation in spin glass/ferromagnet bilayers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chi, Xiaodan; Du, An; Rui, Wenbin

    2016-04-25

    A canonical spin glass (SG) FeAu layer is fabricated to couple to a soft ferromagnet (FM) FeNi layer. Below the SG freezing temperature, exchange bias (EB) and training are observed. Training in SG/FM bilayers is insensitive to cooling field and may suppress the EB or change the sign of the EB field from negative to positive at specific temperatures, deviating from the simple power law or the single exponential function derived from antiferromagnet-based systems. In view of the SG nature, we employ a double decay model to distinguish the contributions to training from the SG bulk and the SG/FM interface. Dynamical properties during training under different cooling fields and at different temperatures are discussed, and the nonzero shifting coefficient in the time index, a signature of slowing-down decay for SG-based systems, is interpreted by means of a modified Monte Carlo Metropolis algorithm.

  7. Inflation with a graceful exit in a random landscape

    NASA Astrophysics Data System (ADS)

    Pedro, F. G.; Westphal, A.

    2017-03-01

    We develop a stochastic description of small-field inflationary histories with a graceful exit in a random potential whose Hessian is a Gaussian random matrix as a model of the unstructured part of the string landscape. The dynamical evolution in such a random potential from a small-field inflation region towards a viable late-time de Sitter (dS) minimum maps to the dynamics of Dyson Brownian motion describing the relaxation of non-equilibrium eigenvalue spectra in random matrix theory. We analytically compute the relaxation probability in a saddle point approximation of the partition function of the eigenvalue distribution of the Wigner ensemble describing the mass matrices of the critical points. When applied to small-field inflation in the landscape, this leads to an exponentially strong bias against small-field ranges and an upper bound N ≪ 10 on the number of light fields N participating during inflation from the non-observation of negative spatial curvature.

  8. A comparative study of mixed exponential and Weibull distributions in a stochastic model replicating a tropical rainfall process

    NASA Astrophysics Data System (ADS)

    Abas, Norzaida; Daud, Zalina M.; Yusof, Fadhilah

    2014-11-01

    A stochastic rainfall model is presented for the generation of hourly rainfall data in an urban area in Malaysia. In view of the high temporal and spatial variability of rainfall within the tropical rain belt, the Spatial-Temporal Neyman-Scott Rectangular Pulse model was used. The model, which is governed by the Neyman-Scott process, employs a reasonable number of parameters to represent the physical attributes of rainfall. A common approach is to attach each attribute to a mathematical distribution. With respect to rain cell intensity, this study proposes the use of a mixed exponential distribution. The performance of the proposed model was compared to a model that employs the Weibull distribution. Hourly and daily rainfall data from four stations in the Damansara River basin in Malaysia were used as input to the models, and simulations of hourly series were performed for an independent site within the basin. The performance of the models was assessed based on how closely the statistical characteristics of the simulated series resembled the statistics of the observed series. The findings obtained based on graphical representation revealed that the statistical characteristics of the simulated series for both models compared reasonably well with the observed series. However, a further assessment using the AIC, BIC and RMSE showed that the proposed model yields better results. The results of this study indicate that for tropical climates, the proposed model, using a mixed exponential distribution, is the best choice for generation of synthetic data for ungauged sites or for sites with insufficient data within the limit of the fitted region.
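
Sampling rain-cell intensities from a two-component mixed exponential distribution, the distribution this study proposes, can be sketched as follows. The mixing weight and component means are illustrative, not fitted Damansara-basin values.

```python
import random

random.seed(1)

def sample_mixed_exponential(p, mu1, mu2):
    """Draw one rain-cell intensity from a two-component mixed
    exponential: with probability p the cell comes from the 'heavy'
    component (mean mu1), otherwise from the 'light' one (mean mu2)."""
    mu = mu1 if random.random() < p else mu2
    return random.expovariate(1.0 / mu)

p, mu1, mu2 = 0.3, 8.0, 1.5
draws = [sample_mixed_exponential(p, mu1, mu2) for _ in range(50000)]
sample_mean = sum(draws) / len(draws)
theory_mean = p * mu1 + (1 - p) * mu2  # mixture mean
```

The heavier tail contributed by the high-mean component is what lets the mixed exponential outperform a single distribution for convective tropical rainfall.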

  9. A numerical scheme for singularly perturbed reaction-diffusion problems with a negative shift via Numerov method

    NASA Astrophysics Data System (ADS)

    Dinesh Kumar, S.; Nageshwar Rao, R.; Pramod Chakravarthy, P.

    2017-11-01

    In this paper, we consider a boundary value problem for a singularly perturbed delay differential equation of reaction-diffusion type. We construct an exponentially fitted numerical method using Numerov finite difference scheme, which resolves not only the boundary layers but also the interior layers arising from the delay term. An extensive amount of computational work has been carried out to demonstrate the applicability of the proposed method.

  10. Three-dimensional reconstruction of the crystalline lens gradient index distribution from OCT imaging.

    PubMed

    de Castro, Alberto; Ortiz, Sergio; Gambra, Enrique; Siedlecki, Damian; Marcos, Susana

    2010-10-11

    We present an optimization method to retrieve the gradient index (GRIN) distribution of the in-vitro crystalline lens from optical path difference data extracted from OCT images. Three-dimensional OCT images of the crystalline lens are obtained in two orientations (with the anterior surface up and posterior surface up), allowing the lens geometry to be obtained. The GRIN reconstruction method is based on a genetic algorithm that searches for the parameters of a 4-variable GRIN model that best fits the distorted posterior surface of the lens. Computer simulations showed that, for noise of 5 μm in the surface elevations, the GRIN is recovered with an accuracy of 0.003 and 0.010 in the refractive indices of the nucleus and surface of the lens, respectively. The method was applied to retrieve three-dimensionally the GRIN of a porcine crystalline lens in vitro. We found a refractive index ranging from 1.362 in the surface to 1.443 in the nucleus of the lens, an axial exponential decay of the GRIN profile of 2.62 and a meridional exponential decay ranging from 3.56 to 5.18. The effect of GRIN on the aberrations of the lens was also studied. The estimated spherical aberration of the measured porcine lens was 2.87 μm assuming a homogeneous equivalent refractive index, and the presence of GRIN shifted the spherical aberration toward negative values (-0.97 μm), for a 6-mm pupil.

  11. Environmental factors controlling spatial variation in sediment yield in a central Andean mountain area

    NASA Astrophysics Data System (ADS)

    Molina, Armando; Govers, Gerard; Poesen, Jean; Van Hemelryck, Hendrik; De Bièvre, Bert; Vanacker, Veerle

    2008-06-01

    A large spatial variability in sediment yield was observed from small streams in the Ecuadorian Andes. The objective of this study was to analyze the environmental factors controlling these variations in sediment yield in the Paute basin, Ecuador. Sediment yield data were calculated based on sediment volumes accumulated behind checkdams for 37 small catchments. Mean annual specific sediment yield (SSY) shows a large spatial variability and ranges between 26 and 15,100 Mg km^-2 year^-1. Mean vegetation cover (C, fraction) in the catchment, i.e. the plant cover at or near the surface, exerts a first-order control on sediment yield. The fractional vegetation cover alone explains 57% of the observed variance in ln(SSY). The negative exponential relation (SSY = a × e^(-bC)) which was found between vegetation cover and sediment yield at the catchment scale (10^3-10^9 m^2) is very similar to the equations derived from splash, interrill and rill erosion experiments at the plot scale (1-10^3 m^2). This affirms the general character of an exponential decrease of sediment yield with increasing vegetation cover across a wide range of spatial scales, provided the distribution of cover can be considered to be essentially random. Lithology also significantly affects the sediment yield, and explains an additional 23% of the observed variance in ln(SSY). Based on these two catchment parameters, a multiple regression model was built. This empirical regression model already explains more than 75% of the total variance in the mean annual sediment yield. These results highlight the large potential of revegetation programs for controlling sediment yield. They show that a slight increase in the overall fractional vegetation cover of degraded land is likely to have a large effect on sediment production and delivery. Moreover, they point to the importance of detailed surface vegetation data for predicting and modeling sediment production rates.
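
The negative exponential cover-erosion relation SSY = a·e^(-bC) can be evaluated directly; a and b below are illustrative, not the fitted Paute-basin coefficients.

```python
import math

def specific_sediment_yield(c, a, b):
    """Negative exponential cover-erosion relation SSY = a * exp(-b*C),
    with C the fractional vegetation cover (0 to 1)."""
    return a * math.exp(-b * c)

a, b = 15000.0, 5.0  # illustrative coefficients
bare = specific_sediment_yield(0.0, a, b)  # unvegetated catchment
half = specific_sediment_yield(0.5, a, b)
full = specific_sediment_yield(1.0, a, b)  # fully vegetated catchment
```

The steep drop between the bare and half-covered cases illustrates the abstract's point that even modest revegetation of degraded land strongly reduces sediment yield.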

  12. Determining the turnover time of groundwater systems with the aid of environmental tracers. 1. Models and their applicability

    NASA Astrophysics Data System (ADS)

    Małoszewski, P.; Zuber, A.

    1982-06-01

    Three new lumped-parameter models have been developed for the interpretation of environmental radioisotope data in groundwater systems. Two of these models combine other simpler models, i.e. the piston flow model is combined either with the exponential model (exponential distribution of transit times) or with the linear model (linear distribution of transit times). The third model is based on a new solution to the dispersion equation which more adequately represents real systems than the conventional solution generally applied so far. The applicability of the models was tested by the reinterpretation of several known case studies (Modry Dul, Cheju Island, Rasche Spring and Grafendorf). It has been shown that two of these models, i.e. the exponential-piston flow model and the dispersive model, give better fitting than other simpler models. Thus, the obtained values of turnover times are more reliable, whereas the additional fitting parameter gives some information about the structure of the system. In the examples considered, in spite of a lower number of fitting parameters, the new models gave practically the same fitting as the multiparameter finite-state mixing-cell models. It has been shown that in the case of a constant tracer input, prior physical knowledge of the groundwater system is indispensable for determining the turnover time. The piston flow model commonly used for age determinations by the 14C method is an approximation applicable only in cases of low dispersion. In some cases the stable-isotope method aids in the interpretation of systems containing mixed waters of different ages. However, when the 14C method is used for mixed-water systems, a serious mistake may arise by neglecting the different bicarbonate contents in particular water components.
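
The simplest of the lumped-parameter models discussed, the exponential model, treats the output tracer concentration as the convolution of the input history with the transit-time distribution g(τ) = (1/T)·exp(-τ/T). A minimal discrete sketch (the exponential-piston-flow variant would add a delay before the exponential tail); the turnover time is illustrative.

```python
import math

def exponential_model_response(input_conc, turnover_time, dt=1.0):
    """Discrete convolution of a tracer input history with the
    exponential transit-time distribution g(tau) = exp(-tau/T)/T.
    Units of dt and turnover_time must match (e.g. years)."""
    n = len(input_conc)
    g = [math.exp(-i * dt / turnover_time) / turnover_time for i in range(n)]
    out = []
    for t in range(n):
        out.append(sum(input_conc[t - i] * g[i] * dt for i in range(t + 1)))
    return out

# A unit pulse of tracer washes out exponentially with the turnover time T:
pulse = [1.0] + [0.0] * 99
response = exponential_model_response(pulse, turnover_time=20.0)
```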

  13. Evidence for a scale-limited low-frequency earthquake source process

    NASA Astrophysics Data System (ADS)

    Chestler, S. R.; Creager, K. C.

    2017-04-01

    We calculate the seismic moments for 34,264 low-frequency earthquakes (LFEs) beneath the Olympic Peninsula, Washington. LFE moments range from 1.4 × 10^10 to 1.9 × 10^12 N m (Mw = 0.7-2.1). While regular earthquakes follow a power law moment-frequency distribution with a b value near 1 (the number of events increases by a factor of 10 for each unit increase in Mw), we find that the b value is 6 for large LFEs and <1 for small LFEs. The magnitude-frequency distribution for all LFEs is best fit by an exponential distribution with a mean seismic moment (characteristic moment) of 2.0 × 10^11 N m. The moment-frequency distributions for each of the 43 LFE families, or spots on the plate interface where LFEs repeat, can also be fit by exponential distributions. An exponential moment-frequency distribution implies a scale-limited source process. We consider two end-member models where LFE moment is limited by (1) the amount of slip or (2) slip area. We favor the area-limited model. Based on the observed exponential distribution of LFE moment and geodetically observed total slip, we estimate that the total area that slips within an LFE family has a diameter of 300 m. Assuming an area-limited model, we estimate the slips, subpatch diameters, stress drops, and slip rates for LFEs during episodic tremor and slip events. We allow for LFEs to rupture smaller subpatches within the LFE family patch. Models with 1-10 subpatches produce slips of 0.1-1 mm, subpatch diameters of 80-275 m, and stress drops of 30-1000 kPa. While one subpatch is often assumed, we believe 3-10 subpatches are more likely.
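
The contrast between exponential and power-law moment-frequency behavior can be sketched numerically: with the paper's characteristic moment of 2.0 × 10^11 N m, the exponential tail suppresses large events far more strongly than a Gutenberg-Richter power law would. The power-law comparison exponent (2b/3 with b ≈ 1, the standard magnitude-to-moment conversion) is an assumption for illustration.

```python
import math

def characteristic_moment(moments):
    """MLE scale of an exponential moment-frequency distribution
    P(M0 > m) = exp(-m / m_char): simply the sample mean moment."""
    return sum(moments) / len(moments)

def exceedance_exponential(m, m_char):
    """Fraction of events with moment greater than m under the
    exponential model."""
    return math.exp(-m / m_char)

m_char = 2.0e11  # the paper's characteristic moment, N m
# Events ten times the characteristic moment are exponentially rare:
tail_exp = exceedance_exponential(10 * m_char, m_char)
# A b ~ 1 power law in moment (exponent 2b/3) would predict far more:
tail_power = 10.0 ** (-2.0 / 3.0)
```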

  14. Water diffusion in silicate glasses: the effect of glass structure

    NASA Astrophysics Data System (ADS)

    Kuroda, M.; Tachibana, S.

    2016-12-01

    Water diffusion in silicate melts (glasses) is one of the main factors controlling magmatism in a volcanic system. Water diffusivity in silicate glasses depends on its own concentration; however, the mechanism causing this dependence has not been fully understood. In order to construct a general model for water diffusion in various silicate glasses, we performed water diffusion experiments in silica glass and proposed a new water diffusion model [Kuroda et al., 2015]. In the model, water diffusivity is controlled by the concentration of both the main diffusing species (i.e. molecular water) and diffusion pathways, which are determined by the concentrations of hydroxyl groups and network-modifier cations. The model explains well the water diffusivity in various silicate glasses from silica glass to basalt glass. However, pre-exponential factors of water diffusivity in various glasses show five orders of magnitude of variation, although the pre-exponential factor should ideally represent the jump frequency and the jump distance of molecular water and show a much smaller variation. Here, we attribute the large variation of pre-exponential factors to a glass-structure dependence of the activation energy for molecular water diffusion. It has been known that the activation energy depends on the water concentration [Nowak and Behrens, 1997]. The concentration of hydroxyls, which cut the Si-O-Si network in the glass structure, increases with water concentration, lowering the activation energy for water diffusion, probably due to a more fragmented structure. Network-modifier cations are likely to play the same role as water. Taking the effect of glass structure into account, we found that the variation of pre-exponential factors of water diffusivity in silicate glasses can be much smaller than five orders of magnitude, implying that the diffusion of molecular water in silicate glasses is controlled by the same atomic process.

  15. Mathematical modeling of drying of pretreated and untreated pumpkin.

    PubMed

    Tunde-Akintunde, T Y; Ogunlakin, G O

    2013-08-01

    In this study, the drying characteristics of pretreated and untreated pumpkin were examined in a hot-air dryer at air temperatures in the range 40-80 °C and a constant air velocity of 1.5 m/s. The drying was observed to be in the falling-rate period, so liquid diffusion is the main mechanism of moisture movement from the internal regions to the product surface. The experimental drying data for the pumpkin fruits were fitted with the Exponential, General exponential, Logarithmic, Page, Midilli-Kucuk, and Parabolic models, and the statistical validity of the fitted models was assessed by non-linear regression analysis. The Parabolic model had the highest R(2) and the lowest χ(2) and RMSE values, indicating that the Parabolic model is appropriate for describing the dehydration behavior of pumpkin.
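
    As an illustration of the model-screening step, the sketch below fits the Parabolic model MR = a + b·t + c·t² to made-up moisture-ratio data (the paper's experimental values are not reproduced here) and computes the R², χ², and RMSE statistics used to rank the models:

```python
import numpy as np

# Hypothetical moisture-ratio data (dimensionless) versus drying time (h);
# the actual values would come from the hot-air drying runs.
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
mr = np.array([1.00, 0.82, 0.66, 0.53, 0.42, 0.33, 0.26, 0.21, 0.17])

# The Parabolic model MR = a + b*t + c*t^2 is linear in its coefficients,
# so ordinary least squares (polyfit) yields the best-fit parameters directly.
c, b, a = np.polyfit(t, mr, 2)
mr_fit = a + b * t + c * t**2

# Goodness-of-fit statistics of the kind used in the study.
resid = mr - mr_fit
ss_res = float(np.sum(resid**2))
ss_tot = float(np.sum((mr - mr.mean())**2))
r2 = 1.0 - ss_res / ss_tot
n_obs, n_par = len(t), 3
chi2 = ss_res / (n_obs - n_par)          # reduced chi-square
rmse = (ss_res / n_obs) ** 0.5

print(f"MR = {a:.3f} + {b:.3f} t + {c:.3f} t^2")
print(f"R^2 = {r2:.4f}, chi^2 = {chi2:.2e}, RMSE = {rmse:.4f}")
```

    The non-linear models (Page, Midilli-Kucuk, etc.) would need iterative least squares, but the same three statistics apply.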

  16. Cosmological models constructed by van der Waals fluid approximation and volumetric expansion

    NASA Astrophysics Data System (ADS)

    Samanta, G. C.; Myrzakulov, R.

    The universe is modeled with the van der Waals fluid approximation, whose equation of state contains a single parameter ωv. Analytical solutions to Einstein's field equations are obtained by assuming that the mean scale factor of the metric follows volumetric exponential and power-law expansions. The model describes a rapid expansion in which the acceleration grows exponentially and the van der Waals fluid behaves like an inflaton during the initial epoch of the universe. The model also describes a later stage in which the acceleration remains positive but decreases to zero, and the van der Waals fluid approximation reproduces the present accelerated phase of the universe. Finally, it is observed that the model contains a type-III future singularity for volumetric power-law expansion.

  17. Modeling steady-state dynamics of macromolecules in exponential-stretching flow using multiscale molecular-dynamics-multiparticle-collision simulations.

    PubMed

    Ghatage, Dhairyasheel; Chatterji, Apratim

    2013-10-01

    We introduce a method to obtain steady-state uniaxial exponential-stretching flow of a fluid (akin to extensional flow) in the incompressible limit, which enables us to study the response of suspended macromolecules to the flow by computer simulations. The flow field is defined by v(x) = εx, where v(x) is the velocity of the fluid and ε is the stretch flow gradient. To eliminate the effect of confining boundaries, we produce the flow in a channel of uniform square cross section with periodic boundary conditions in the directions perpendicular to the flow, while simultaneously maintaining a uniform fluid density along the length of the tube. In experiments, a perfect elongational flow is obtained only along the axis of symmetry in a four-roll geometry or a filament-stretching rheometer. We can reproduce flow conditions very similar to extensional flow near the axis of symmetry by exponential-stretching flow; we do this by adding the right amounts of fluid along the length of the flow in our simulations. The fluid particles added along the length of the tube are the same fluid particles that exit the channel due to the flow; thus mass conservation is maintained in our model by default. We also suggest a scheme for a possible realization of exponential-stretching flow in experiments. To establish our method as a useful tool for studying soft matter systems in extensional flow, we embed (i) spherical colloids with excluded-volume interactions (modeled by the Weeks-Chandler-Andersen potential) as well as (ii) a bead-spring model of star polymers in the fluid, study their responses to the exponential-stretching flow, and show that the responses of macromolecules in the two flows are very similar. We demonstrate that the variation of the number density of the suspended colloids along the direction of flow matches our expectations.
We also conclude from our study of the deformation of star polymers with different numbers of arms f that the critical flow gradient ε(c) at which the star undergoes the coil-to-stretch transition is independent of f for f = 2, 5, 10, and 20.

  18. Evidence of the Exponential Decay Emission in the Swift Gamma-ray Bursts

    NASA Technical Reports Server (NTRS)

    Sakamoto, T.; Sato, G.; Hill, J.E.; Krimm, H.A.; Yamazaki, R.; Takami, K.; Swindell, S.; Osborne, J.P.

    2007-01-01

    We present a systematic study of the steep decay emission of gamma-ray bursts (GRBs) observed by the Swift X-Ray Telescope (XRT). In contrast to the analysis in recent literature, instead of extrapolating the Burst Alert Telescope (BAT) data down into the XRT energy range, we extrapolated the XRT data up to the BAT energy range, 15-25 keV, to produce a composite BAT-XRT light curve. Based on fits to these composite light curves, we have confirmed the existence of an exponential decay component that smoothly connects the BAT prompt data to the XRT steep decay for several GRBs. We also find that the XRT steep decay of some bursts can be well fitted by a combination of a power law and an exponential decay. We discuss the possibility that this exponential component is emission from an external shock and a sign of the deceleration of the outflow during the prompt phase.

  19. Fast and accurate fitting and filtering of noisy exponentials in Legendre space.

    PubMed

    Bao, Guobin; Schild, Detlev

    2014-01-01

    The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the sum of squared differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method in which the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants that are, on average, more precise than least-squares fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic of conventional lowpass filters.
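
    A minimal sketch of the Legendre-domain filtering idea, using NumPy's Legendre routines on a synthetic noisy exponential (illustrative signal and noise level, not the authors' data or algorithm):

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(0)

# Noisy single exponential sampled on [-1, 1], the natural Legendre interval
# (a time window [0, T] would first be mapped onto it linearly).
x = np.linspace(-1.0, 1.0, 400)
clean = np.exp(-2.0 * (x + 1.0))
noisy = clean + rng.normal(0.0, 0.05, x.size)

# Project onto a low-dimensional Legendre subspace: legfit solves a linear
# least-squares problem, so it is fast and has no convergence issues.
coef = L.legfit(x, noisy, deg=10)
smoothed = L.legval(x, coef)

rms_before = float(np.sqrt(np.mean((noisy - clean) ** 2)))
rms_after = float(np.sqrt(np.mean((smoothed - clean) ** 2)))
print(f"RMS error vs. true signal, raw:      {rms_before:.4f}")
print(f"RMS error vs. true signal, filtered: {rms_after:.4f}")
```

    Because the projection is linear and zero-phase, the smoothed curve does not acquire the lag a causal lowpass filter would introduce.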

  20. Quantum Loop Expansion to High Orders, Extended Borel Summation, and Comparison with Exact Results

    NASA Astrophysics Data System (ADS)

    Noreen, Amna; Olaussen, Kåre

    2013-07-01

    We compare predictions of the quantum loop expansion to (essentially) infinite orders with (essentially) exact results in a simple quantum mechanical model. We find that there are exponentially small corrections to the loop expansion, which cannot be explained by any obvious “instanton”-type corrections. It is not the mathematical occurrence of exponential corrections but their seeming lack of any physical origin which we find surprising and puzzling.

  1. Non-exponential kinetics of unfolding under a constant force.

    PubMed

    Bell, Samuel; Terentjev, Eugene M

    2016-11-14

    We examine the population dynamics of naturally folded globular polymers, with a super-hydrophobic "core" inserted at a prescribed point in the polymer chain, unfolding under application of an external force, as in AFM force-clamp spectroscopy. This acts as a crude model for a large class of folded biomolecules with hydrophobic or hydrogen-bonded cores. We find that the introduction of super-hydrophobic units leads to a stochastic variation in the unfolding rate, even when the positions of the added monomers are fixed. This leads to non-exponential average population dynamics, which is consistent with a variety of experimental data and does not require the intrinsic quenched disorder that was traditionally thought to be at the origin of non-exponential relaxation laws.

  2. Non-exponential kinetics of unfolding under a constant force

    NASA Astrophysics Data System (ADS)

    Bell, Samuel; Terentjev, Eugene M.

    2016-11-01

    We examine the population dynamics of naturally folded globular polymers, with a super-hydrophobic "core" inserted at a prescribed point in the polymer chain, unfolding under application of an external force, as in AFM force-clamp spectroscopy. This acts as a crude model for a large class of folded biomolecules with hydrophobic or hydrogen-bonded cores. We find that the introduction of super-hydrophobic units leads to a stochastic variation in the unfolding rate, even when the positions of the added monomers are fixed. This leads to non-exponential average population dynamics, which is consistent with a variety of experimental data and does not require the intrinsic quenched disorder that was traditionally thought to be at the origin of non-exponential relaxation laws.

  3. Spacecraft Solar Particle Event (SPE) Shielding: Shielding Effectiveness as a Function of SPE model as Determined with the FLUKA Radiation Transport Code

    NASA Technical Reports Server (NTRS)

    Koontz, Steve; Atwell, William; Reddell, Brandon; Rojdev, Kristina

    2010-01-01

    Analysis of both satellite and surface neutron monitor data demonstrates that the widely utilized Exponential model of solar particle event (SPE) proton kinetic energy spectra can seriously underestimate SPE proton flux, especially at the highest kinetic energies. The more recently developed Band model produces better agreement with neutron monitor data for ground level events (GLEs) and is believed to be considerably more accurate at high kinetic energies. Here, we report the results of modeling and simulation studies in which the radiation transport code FLUKA (FLUktuierende KAskade) is used to determine the changes in total ionizing dose (TID) and single-event environments (SEE) behind aluminum, polyethylene, carbon, and titanium shielding masses when the assumed form (i.e., Band or Exponential) of the SPE kinetic energy spectrum is changed. The FLUKA simulations are fully three-dimensional, with an isotropic particle flux incident on a concentric spherical-shell shielding mass and detector structure. The effects are reported for both energetic primary protons penetrating the shield mass and secondary particle showers caused by energetic primary protons colliding with shielding mass nuclei. Our results, in agreement with previous studies, show that use of the Exponential form of the event

  4. Arima model and exponential smoothing method: A comparison

    NASA Astrophysics Data System (ADS)

    Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri

    2013-04-01

    This study compares the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making predictions. The comparison focuses on the ability of both methods to make forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, data on The Price of Crude Palm Oil (RM/tonne), the Exchange Rate of the Ringgit Malaysia (RM) against the Great Britain Pound (GBP), and The Price of SMR 20 Rubber Type (cents/kg), with three different time series, are used in the comparison. The forecasting accuracy of each model is then measured by examining the prediction errors produced, using the Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model can produce a better prediction for long-term forecasting with limited data sources, but cannot produce a better prediction for time series with a narrow range from one point to another, as in the Exchange Rate series. Conversely, the Exponential Smoothing Method produces better forecasts for the Exchange Rate series, which has a narrow range from one point to another, but cannot produce a better prediction over a longer forecasting period.
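
    A rough sketch of the exponential-smoothing side of such a comparison, with an invented price-like series standing in for the real data, computing the same MSE, MAPE, and MAD error measures:

```python
import numpy as np

def ses_forecasts(y, alpha):
    """One-step-ahead forecasts from simple exponential smoothing."""
    f = np.empty(len(y))
    f[0] = y[0]                          # initialize with the first observation
    for i in range(1, len(y)):
        f[i] = alpha * y[i - 1] + (1 - alpha) * f[i - 1]
    return f

# Illustrative price-like series (not the actual palm-oil or rubber data).
y = np.array([2300.0, 2350.0, 2320.0, 2400.0, 2380.0, 2450.0, 2430.0, 2500.0])
f = ses_forecasts(y, alpha=0.5)

err = y[1:] - f[1:]                      # skip the initialization point
mse = float(np.mean(err**2))
mape = float(np.mean(np.abs(err) / y[1:])) * 100.0
mad = float(np.mean(np.abs(err)))
print(f"MSE = {mse:.1f}, MAPE = {mape:.2f}%, MAD = {mad:.1f}")
```

    In practice the smoothing constant alpha would itself be chosen to minimize one of these error measures on a training window.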

  5. A Parametric Study of Fine-scale Turbulence Mixing Noise

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Bridges, James; Freund, Jonathan B.

    2002-01-01

    The present paper is a study of aerodynamic noise spectra from model functions that describe the source. The study is motivated by the need to improve the spectral shape of the MGBK jet noise prediction methodology at high frequency. The predicted spectral shape usually appears less broadband than measurements and decays faster at high frequency. The theoretical representation of the source is based on Lilley's equation. Numerical simulations of high-speed subsonic jets, as well as some recent turbulence measurements, reveal a number of interesting statistical properties of turbulence correlation functions that may have a bearing on radiated noise. These studies indicate that an exponential spatial function may be a more appropriate representation of a two-point correlation than its Gaussian counterpart. The effect of source non-compactness on spectral shape is discussed. It is shown that source non-compactness could well be the differentiating factor between the Gaussian and exponential model functions. In particular, the fall-off of the noise spectra at high frequency is studied, and it is shown that a non-compact source with an exponential model function results in a broader spectrum and better agreement with data. An alternate source model that represents the source as a covariance of the convective derivative of the fine-scale turbulence kinetic energy is also examined.
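
    The different high-frequency fall-off of the two model functions can be checked directly from the closed-form Fourier transforms of the two correlation shapes (a schematic comparison, not the MGBK formulation):

```python
import numpy as np

tau0 = 1.0                            # correlation time scale (arbitrary units)
w = np.linspace(0.0, 10.0, 201)       # angular frequency axis

# Closed-form Fourier transforms of the two candidate two-point correlations,
# both normalized to unity at zero frequency.
S_exp = 1.0 / (1.0 + (w * tau0) ** 2)          # exp(-|tau|/tau0) -> Lorentzian
S_gauss = np.exp(-((w * tau0) ** 2) / 4.0)     # exp(-tau^2/tau0^2) -> Gaussian

# The Lorentzian falls off algebraically (~ w**-2), the Gaussian spectrum
# super-exponentially, so the exponential correlation keeps far more
# high-frequency content -- a broader predicted spectrum.
ratio = float(S_exp[-1] / S_gauss[-1])
print(f"S_exp / S_gauss at w*tau0 = 10: {ratio:.3e}")
```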

  6. Model application niche analysis: Assessing the transferability and generalizability of ecological models

    EPA Science Inventory

    The use of models by ecologists and environmental managers, to inform environmental management and decision-making, has grown exponentially in the past 50 years. Due to logistical, economical and theoretical benefits, model users are frequently transferring preexisting models to n...

  7. Quantitative MRI for hepatic fat fraction and T2* measurement in pediatric patients with non-alcoholic fatty liver disease.

    PubMed

    Deng, Jie; Fishbein, Mark H; Rigsby, Cynthia K; Zhang, Gang; Schoeneman, Samantha E; Donaldson, James S

    2014-11-01

    Non-alcoholic fatty liver disease (NAFLD) is the most common cause of chronic liver disease in children. The gold standard for diagnosis is liver biopsy. MRI is a non-invasive imaging method that provides quantitative measurement of hepatic fat content, and it is particularly appealing for the pediatric population because it is rapid and radiation-free. The aim was to develop a multi-point Dixon MRI method with multi-interference models (multi-fat-peak modeling and bi-exponential T2* correction) for accurate hepatic fat fraction (FF) and T2* measurements in pediatric patients with NAFLD. A phantom study was first performed to validate the accuracy of the MRI fat fraction measurement by comparing it with the chemical fat composition of an ex-vivo pork liver-fat homogenate. The most accurate model determined from the phantom study was used for fat fraction and T2* measurements in 52 children and young adults referred from the pediatric hepatology clinic with suspected or identified NAFLD. Separate T2* values of the water (T2*W) and fat (T2*F) components derived from the bi-exponential fitting were evaluated and plotted as a function of fat fraction. In ten patients undergoing liver biopsy, we compared histological analysis of liver fat fraction with MRI fat fraction. In the phantom study, the 6-point Dixon with 5-fat-peak, bi-exponential T2* modeling demonstrated the best precision and accuracy in fat fraction measurements compared with the other methods. This model was further calibrated with the chemical fat fraction and applied in patients, where, as in the phantom study, the conventional 2-point and 3-point Dixon methods underestimated fat fraction compared with the calibrated 6-point 5-fat-peak bi-exponential model (P < 0.0001). With increasing fat fraction, T2*W (27.9 ± 3.5 ms) decreased whereas T2*F (20.3 ± 5.5 ms) increased, and T2*W and T2*F became increasingly similar when the fat fraction exceeded 15-20%.
Histological fat fraction measurements in ten patients were highly correlated with calibrated MRI fat fraction measurements (Pearson correlation coefficient r = 0.90 with P = 0.0004). Liver MRI using multi-point Dixon with multi-fat-peak and bi-exponential T2* modeling provided accurate fat quantification in children and young adults with non-alcoholic fatty liver disease and may be used to screen at-risk or affected individuals and to monitor disease progress noninvasively.

  8. The dynamics of charge transfer with and without a barrier: A very simplified model of cyclic voltammetry.

    PubMed

    Ouyang, Wenjun; Subotnik, Joseph E

    2017-05-07

    Using the Anderson-Holstein model, we investigate charge transfer dynamics between a molecule and a metal surface for two extreme cases. (i) With a large barrier, we show that the dynamics follow a single exponential decay as expected; (ii) without any barrier, we show that the dynamics are more complicated. On the one hand, if the metal-molecule coupling is small, single exponential dynamics persist. On the other hand, when the coupling between the metal and the molecule is large, the dynamics follow a biexponential decay. We analyze the dynamics using the Smoluchowski equation, develop a simple model, and explore the consequences of biexponential dynamics for a hypothetical cyclic voltammetry experiment.

  9. First off-time treatment prostate-specific antigen kinetics predicts survival in intermittent androgen deprivation for prostate cancer.

    PubMed

    Sanchez-Salas, Rafael; Olivier, Fabien; Prapotnich, Dominique; Dancausa, José; Fhima, Mehdi; David, Stéphane; Secin, Fernando P; Ingels, Alexandre; Barret, Eric; Galiano, Marc; Rozet, François; Cathelineau, Xavier

    2016-01-01

    Prostate-specific antigen (PSA) doubling time relies on an exponential kinetic pattern, which has never been validated in the setting of intermittent androgen deprivation (IAD). The objective is to analyze the prognostic significance for PCa of recurrent patterns in PSA kinetics in patients undergoing IAD. A retrospective study was conducted on 377 patients treated with IAD. The on-treatment period (ONTP) consisted of gonadotropin-releasing hormone agonist injections combined with an oral androgen receptor antagonist. The off-treatment period (OFTP) began when PSA was lower than 4 ng/ml; the ONTP resumed when PSA was higher than 20 ng/ml. PSA values of each OFTP were fitted with three basic patterns: exponential (PSA(t) = λ·e^(αt)), linear (PSA(t) = a·t), and power law (PSA(t) = a·t^c). Univariate and multivariate Cox regression models were used to analyze predictive factors for oncologic outcomes. Only 45% of the analyzed OFTPs were exponential; linear and power-law PSA kinetics represented 7.5% and 7.7%, respectively, and the remaining 40% of the analyzed OFTPs exhibited complex kinetics. Exponential PSA kinetics during the first OFTP was significantly associated with worse oncologic outcome. The estimated 10-year cancer-specific survival (CSS) was 46% for exponential versus 80% for non-exponential PSA kinetic patterns. The corresponding 10-year probability of castration-resistant prostate cancer (CRPC) was 69% and 31%, respectively. Limitations include the retrospective design and mixed indications for IAD. PSA kinetics fitted an exponential pattern in approximately half of the OFTPs. An exponential PSA kinetic in the first OFTP was associated with a shorter time to CRPC and worse CSS. © 2015 Wiley Periodicals, Inc.
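
    A sketch of how the three candidate patterns (exponential, linear, power law) could be fitted and compared on a single off-treatment PSA series; all three are linear after suitable transformations, so plain least squares suffices (synthetic data, not the study's):

```python
import numpy as np

# Hypothetical off-treatment PSA series (ng/ml) at monthly follow-ups,
# generated here from an exact exponential so the pattern comparison is clear.
t = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
psa = 0.5 * np.exp(0.45 * t)

def r2(y, yhat):
    return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

# Exponential PSA(t) = lam * exp(alpha*t): linear in log space.
alpha, log_lam = np.polyfit(t, np.log(psa), 1)
fit_exp = np.exp(log_lam + alpha * t)

# Linear PSA(t) = a*t: one-parameter least squares.
a_lin = np.sum(t * psa) / np.sum(t * t)
fit_lin = a_lin * t

# Power law PSA(t) = a * t**c: linear in log-log space (requires t > 0).
c_pow, log_a = np.polyfit(np.log(t), np.log(psa), 1)
fit_pow = np.exp(log_a) * t ** c_pow

scores = {"exponential": float(r2(psa, fit_exp)),
          "linear": float(r2(psa, fit_lin)),
          "power law": float(r2(psa, fit_pow))}
for name, s in scores.items():
    print(f"{name:11s} R^2 = {s:.4f}")
```

    On this synthetic series the exponential pattern wins by construction; the study's point is that, on real OFTP data, it wins in only about 45% of cases.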

  10. On the performance of exponential integrators for problems in magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Einkemmer, Lukas; Tokman, Mayya; Loffeld, John

    2017-02-01

    Exponential integrators have been introduced as an efficient alternative to explicit and implicit methods for integrating large stiff systems of differential equations. Over the past decades these methods have been studied theoretically and their performance has been evaluated using a range of test problems. While the results of these investigations showed that exponential integrators can provide significant computational savings, research on validating this hypothesis for large-scale systems and on understanding which classes of problems can particularly benefit from the new techniques is in its initial stages. Resistive magnetohydrodynamic (MHD) modeling is widely used in studying the large-scale behavior of laboratory and astrophysical plasmas. In many problems, numerical solution of the MHD equations is a challenging task due to the temporal stiffness of this system in the parameter regimes of interest. In this paper we evaluate the performance of exponential integrators on large MHD problems and compare them to a state-of-the-art implicit time integrator. Both the variable and constant time step exponential methods of EPIRK type are used to simulate magnetic reconnection and the Kelvin-Helmholtz instability in plasma. The performance of these methods, which are part of the EPIC software package, is compared to the variable time step, variable order BDF scheme included in the CVODE (part of SUNDIALS) library. We study the performance of the methods on parallel architectures and with respect to the magnitudes of important parameters such as the Reynolds, Lundquist, and Prandtl numbers. We find that the exponential integrators provide superior or equal performance in most circumstances and conclude that further development of exponential methods for MHD problems is warranted and can lead to significant computational advantages for large-scale stiff systems of differential equations such as MHD.
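
    The basic advantage of exponential integrators on stiff problems can be seen in a minimal scalar example (this illustrates the idea only, not the EPIRK schemes benchmarked in the paper):

```python
import numpy as np

# Stiff linear test problem y' = lam*y with a step size far outside the
# explicit Euler stability limit |1 + h*lam| <= 1.
lam = -500.0
h = 0.01
steps = 100

y_euler = 1.0     # explicit Euler iterate
y_expint = 1.0    # exponential-integrator iterate (exact for linear problems)
for _ in range(steps):
    y_euler = (1.0 + h * lam) * y_euler       # amplification factor 1 + h*lam = -4
    y_expint = np.exp(h * lam) * y_expint     # exact propagator exp(h*lam)

exact = float(np.exp(lam * h * steps))
print(f"explicit Euler:         {y_euler:.3e}   (diverges)")
print(f"exponential integrator: {y_expint:.3e}  (exact: {exact:.3e})")
```

    For nonlinear systems such as MHD the propagator involves matrix-exponential-vector products (the phi functions), which is where methods like EPIRK invest their effort.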

  11. Essays on the statistical mechanics of the labor market and implications for the distribution of earned income

    NASA Astrophysics Data System (ADS)

    Schneider, Markus P. A.

    This dissertation contributes to two areas in economics: the understanding of the distribution of earned income and Bayesian analysis of distributional data. Recently, physicists claimed that the distribution of earned income is exponential (see Yakovenko, 2009). The first chapter explores the perspective that the economy is a statistical mechanical system, and the implication for labor market outcomes is considered critically. The robustness of the empirical results that led to the physicists' claims, the significance of the exponential distribution in statistical mechanics, and the case for a conservation law in economics are discussed. The conclusion reached is that the physicists' conception of the economy is too narrow even within their chosen framework, but that their overall approach is insightful. The dual labor market theory of segmented labor markets is invoked to understand why the observed distribution may be a mixture of distributional components, corresponding to the different generating mechanisms described in Reich et al. (1973). The application of informational entropy in chapter II connects this work to Bayesian analysis and maximum entropy econometrics. The analysis follows E. T. Jaynes's treatment of Wolf's dice data, but is applied to the distribution of earned income based on CPS data. The results are calibrated to account for rounded survey responses using a simple simulation, and address the graphical analyses by the physicists. The results indicate that neither the income distribution of all respondents nor that of the subpopulation used by the physicists appears to be exponential. The empirics do support the claim that a mixture with exponential and log-normal distributional components fits the data. In the final chapter, a log-linear model is used to fit the exponential to the earned income distribution.
Separating the CPS data by gender and marital status reveals that the exponential is only an appropriate model for a limited number of subpopulations, namely the never married and women. The estimated parameter for never-married men's incomes is significantly different from the parameter estimated for never-married women, implying that either the combined distribution is not exponential or that the individual distributions are not exponential. However, it substantiates the existence of a persistent gender income gap among the never-married. References: Reich, M., D. M. Gordon, and R. C. Edwards (1973). A Theory of Labor Market Segmentation. Quarterly Journal of Economics 63, 359-365. Yakovenko, V. M. (2009). Econophysics, Statistical Mechanics Approach to. In R. A. Meyers (Ed.), Encyclopedia of Complexity and System Science. Springer.
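
    A minimal sketch of one such distributional comparison: maximum-likelihood fits of an exponential and a lognormal to a synthetic income sample, compared by log-likelihood (illustrative parameters, not CPS data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "earned income" sample drawn from a lognormal, one of the
# candidate distributional components discussed in the chapter.
income = rng.lognormal(mean=10.5, sigma=0.6, size=5000)

# Exponential MLE: rate = 1 / sample mean.
lam = 1.0 / income.mean()
ll_exp = float(np.sum(np.log(lam) - lam * income))

# Lognormal MLE: sample mean and std of log-income.
logx = np.log(income)
mu, sig = logx.mean(), logx.std()
ll_logn = float(np.sum(-np.log(income * sig * np.sqrt(2.0 * np.pi))
                       - (logx - mu) ** 2 / (2.0 * sig ** 2)))

print(f"log-likelihood, exponential fit: {ll_exp:.1f}")
print(f"log-likelihood, lognormal fit:   {ll_logn:.1f}")
```

    A mixture model, as argued for in the dissertation, would combine both components with estimated weights rather than choosing one family outright.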

  12. Effective equilibrium picture in the x y model with exponentially correlated noise

    NASA Astrophysics Data System (ADS)

    Paoluzzi, Matteo; Marconi, Umberto Marini Bettolo; Maggi, Claudio

    2018-02-01

    We study the effect of exponentially correlated noise on the xy model in the limit of small correlation time, discussing the order-disorder transition in the mean field and the topological transition in two dimensions. We map the steady states of the nonequilibrium dynamics onto an effective equilibrium theory. In the mean field, the critical temperature increases with the noise correlation time τ, indicating that memory effects promote ordering; this finding is confirmed by numerical simulations. The topological transition temperature in two dimensions remains unaffected. However, finite-size effects induce a crossover in vortex proliferation that is confirmed by numerical simulations.

  13. Ultra-large distance modification of gravity from Lorentz symmetry breaking at the Planck scale

    NASA Astrophysics Data System (ADS)

    Gorbunov, Dmitry S.; Sibiryakov, Sergei M.

    2005-09-01

    We present an extension of the Randall-Sundrum model in which, due to spontaneous Lorentz symmetry breaking, the graviton mixes with bulk vector fields and becomes quasilocalized. The masses of the KK modes comprising the four-dimensional graviton are naturally exponentially small. This makes it possible to push the Lorentz-breaking scale as high as a few tenths of the Planck mass. The model contains no ghosts or tachyons and does not exhibit the van Dam-Veltman-Zakharov discontinuity. The gravitational attraction between static point masses gradually weakens with increasing separation and is replaced by repulsion (antigravity) at exponentially large distances.

  14. Effective equilibrium picture in the xy model with exponentially correlated noise.

    PubMed

    Paoluzzi, Matteo; Marconi, Umberto Marini Bettolo; Maggi, Claudio

    2018-02-01

    We study the effect of exponentially correlated noise on the xy model in the limit of small correlation time, discussing the order-disorder transition in the mean field and the topological transition in two dimensions. We map the steady states of the nonequilibrium dynamics onto an effective equilibrium theory. In the mean field, the critical temperature increases with the noise correlation time τ, indicating that memory effects promote ordering; this finding is confirmed by numerical simulations. The topological transition temperature in two dimensions remains unaffected. However, finite-size effects induce a crossover in vortex proliferation that is confirmed by numerical simulations.
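
    Exponentially correlated noise of the kind driving these dynamics is conventionally generated as an Ornstein-Uhlenbeck process; a small sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

tau, sigma, dt, n = 2.0, 1.0, 0.1, 200_000
rho = np.exp(-dt / tau)           # one-step correlation factor

# Exact discrete-time update of the Ornstein-Uhlenbeck process, the standard
# generator of stationary noise with autocorrelation sigma^2 * exp(-|t|/tau).
x = np.empty(n)
x[0] = rng.normal(0.0, sigma)
xi = rng.normal(0.0, 1.0, n - 1)
drive = sigma * np.sqrt(1.0 - rho ** 2)
for i in range(n - 1):
    x[i + 1] = rho * x[i] + drive * xi[i]

# Empirical autocorrelation at lag k*dt should follow exp(-k*dt/tau).
acf = {k: float(np.mean(x[: n - k] * x[k:])) / sigma ** 2 for k in (0, 10, 20)}
for k, val in acf.items():
    print(f"lag {k * dt:.1f}: empirical {val:.3f}, theory {np.exp(-k * dt / tau):.3f}")
```

    The small-correlation-time limit studied in the paper corresponds to tau much shorter than the spin relaxation time.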

  15. Novel model of negative secondary ion formation and its use to refine the electronegativity of almost fifty elements.

    PubMed

    Wittmaack, Klaus

    2014-06-17

    This study aimed to examine the recently proposed idea that the ionic contribution to atomic bonds is essential in determining the charge state of sputtered atoms. Use was made of negative secondary ion yields reported by Wilson for a large number of elements implanted in silicon and then sputter profiled by Cs bombardment. The derived normalized ion yields (or fractions) P vary by 6 orders of magnitude, but the expected exponential dependence on the electron affinity EA is evident only vaguely. Remarkably, a correlation of similar quality is observed if the data are presented as a function of the ionization potential IP. With IP being the dominant (if not sole) contributor to the electronegativity χ, one is led to assume that P depends on the sum χ + EA. About 72% of the "nonsaturated" ion yields are in accordance with a dependence of the form P ∝ exp[(χ + EA)/ε], with ε ≅ 0.2 eV, provided the appropriate value of χ is selected from the electronegativity tables of Pauling (read in eV), Mulliken, or Allen. However, each of the three sources contributes only about one-third of the favorable electronegativity data. This unsatisfactory situation motivated the idea of deriving the "true" electronegativity χSIMS from the measured ion yields P(χ + EA), verified for 48 elements. Significant negative deviations of χSIMS from a smooth increase with increasing atomic number are evident for elements with special outer-shell electron configurations such as (n-1)d^(g-1)ns^1 or (n-1)d^10ns^2np^1. The results strongly support the new model of secondary ion formation and provide a means of refining electronegativity data.

  16. Proportional Feedback Control of Energy Intake During Obesity Pharmacotherapy.

    PubMed

    Hall, Kevin D; Sanghvi, Arjun; Göbel, Britta

    2017-12-01

    Obesity pharmacotherapies result in an exponential time course for energy intake whereby large early decreases dissipate over time. This pattern of declining efficacy in decreasing energy intake results in a weight loss plateau within approximately 1 year. This study aimed to elucidate the physiology underlying the exponential decay of drug effects on energy intake. Placebo-subtracted energy intake time courses were examined during long-term obesity pharmacotherapy trials for 14 different drugs or drug combinations within the theoretical framework of a proportional feedback control system regulating human body weight. Assuming each obesity drug had a relatively constant effect on average energy intake and did not affect other model parameters, our model correctly predicted that long-term placebo-subtracted energy intake was linearly related to early reductions in energy intake according to a prespecified equation with no free parameters. The simple model explained about 70% of the variance between drug studies with respect to the long-term effects on energy intake, although a significant proportional bias was evident. The exponential decay over time of the ability of obesity pharmacotherapies to suppress energy intake can be interpreted as a relatively constant effect of each drug superimposed on a physiological feedback control system regulating body weight. © 2017 The Obesity Society.
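
    A toy simulation of the proportional-feedback picture: a constant drug effect on intake is progressively offset by appetite feedback as weight falls, so the placebo-subtracted intake decays exponentially to a smaller plateau. All parameter magnitudes below are illustrative (the drug effect d is hypothetical, and rho, eps, k are only typical energy-balance scales, not the study's fitted values):

```python
# Illustrative magnitudes, not fitted values.
rho = 7700.0   # kcal per kg of body-weight change
eps = 25.0     # kcal/day change in expenditure per kg of weight change
k = 95.0       # feedback gain: kcal/day of appetite increase per kg lost
d = 500.0      # constant drug effect on intake, kcal/day

dt, days = 1.0, 1000
lost = 0.0                      # kg lost relative to baseline
intake_drop = [-d]              # placebo-subtracted energy intake, kcal/day
for _ in range(days):
    delta_intake = -d + k * lost           # drug effect offset by appetite feedback
    balance = delta_intake + eps * lost    # intake minus expenditure change
    lost -= dt * balance / rho             # negative balance -> weight loss grows
    intake_drop.append(-d + k * lost)

plateau = -d * eps / (k + eps)  # analytic long-term intake suppression
print(f"initial suppression:      {intake_drop[0]:.0f} kcal/day")
print(f"suppression at day {days}: {intake_drop[-1]:.0f} kcal/day "
      f"(analytic plateau {plateau:.0f})")
print(f"steady-state weight loss: {d / (k + eps):.1f} kg")
```

    The time constant of the decay, rho/(k + eps), comes out at roughly two months for these magnitudes, consistent with suppression largely dissipating within a year.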

  17. The Mass-dependent Star Formation Histories of Disk Galaxies: Infall Model Versus Observations

    NASA Astrophysics Data System (ADS)

    Chang, R. X.; Hou, J. L.; Shen, S. Y.; Shu, C. G.

    2010-10-01

    We introduce a simple model to explore the star formation histories of disk galaxies. We assume that the disk originates and grows by continuous gas infall. The gas infall rate is parameterized by a Gaussian formula with one free parameter, the infall-peak time tp. The Kennicutt star formation law is adopted to describe how much cold gas turns into stars, and the gas outflow process is also considered in our model. We find that, at a given galactic stellar mass M*, a model adopting a late infall-peak time tp results in blue colors, low metallicity, high specific star formation rate (SFR), and high gas fraction, while the gas outflow rate mainly influences the gas-phase metallicity and the star formation efficiency mainly influences the gas fraction. Motivated by the local observed scaling relations, we "construct" a mass-dependent model by assuming that low-mass galaxies have a later infall-peak time tp and a larger gas outflow rate than massive systems. We show that this model agrees not only with the local observations but also with the observed correlations between specific SFR (SFR/M*) and galactic stellar mass M* at intermediate redshifts z < 1. A comparison between the Gaussian-infall model and the exponential-infall model is also presented: the exponential-infall model predicts a higher SFR at early stages and a lower SFR at later times than Gaussian infall. Our results suggest that the Gaussian infall rate may be more reasonable for describing the gas cooling process than the exponential infall rate, especially for low-mass systems.
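
    The qualitative difference between the two infall laws can be sketched by normalizing both to the same total accreted mass and comparing early- and late-time fractions (illustrative tp, sigma, tau values, not the paper's fitted parameters):

```python
import numpy as np

t = np.linspace(0.0, 13.5, 1000)   # cosmic time, Gyr
tp, sigma = 6.0, 2.5               # illustrative Gaussian infall-peak time/width
tau = 4.0                          # illustrative exponential infall timescale

gauss = np.exp(-((t - tp) ** 2) / (2.0 * sigma ** 2))
expo = np.exp(-t / tau)

# Normalize both laws to deliver the same total infalling mass on this grid.
gauss /= gauss.sum()
expo /= expo.sum()

early, late = t < 2.0, t > 8.0
e_early, g_early = float(expo[early].sum()), float(gauss[early].sum())
e_late, g_late = float(expo[late].sum()), float(gauss[late].sum())
print(f"mass accreted before 2 Gyr: exponential {e_early:.2f}, Gaussian {g_early:.2f}")
print(f"mass accreted after 8 Gyr:  exponential {e_late:.2f}, Gaussian {g_late:.2f}")
```

    The exponential law front-loads the accretion, which is why it drives a higher early SFR and a lower late SFR than the Gaussian law.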

  18. Exponential Modelling for Mutual-Cohering of Subband Radar Data

    NASA Astrophysics Data System (ADS)

    Siart, U.; Tejero, S.; Detlefsen, J.

    2005-05-01

    Increasing resolution and accuracy is an important issue in almost any type of radar sensor application. However, both resolution and accuracy are strongly related to the available signal bandwidth and energy that can be used. Nowadays, several sensors operating in different frequency bands often become available on a sensor platform. It is an attractive goal to use the potential of advanced signal modelling and optimization procedures by making proper use of information stemming from different frequency bands at the RF signal level. An important prerequisite for optimal use of signal energy is coherence between all contributing sensors. Coherent multi-sensor platforms are very expensive and are thus not generally available. This paper presents an approach for accurately estimating object radar responses using subband measurements at different RF frequencies. An exponential model approach makes it possible to compensate for the lack of mutual coherence between independently operating sensors. Mutual coherence is recovered from the a priori information that both sensors have common scattering centers in view. Minimizing the total squared deviation between measured data and a full-range exponential signal model leads to more accurate pole angles and pole magnitudes compared to single-band optimization. The model parameters (range and magnitude of point scatterers) after this full-range optimization process are also more accurate than the parameters obtained from a commonly used super-resolution procedure (root-MUSIC) applied to the non-coherent subband data.
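
The exponential signal model underlying this approach represents the frequency response of P point scatterers as a sum of complex exponentials whose pole angles encode range; subband measurements are just this model evaluated on disjoint frequency index sets. The sketch below generates such data; the scatterer amplitudes, ranges, and frequency step are invented for illustration.

```python
import cmath

def scatterer_response(n, scatterers, df=1e7, c=3e8):
    # Frequency-domain response at frequency sample n for point
    # scatterers given as (amplitude, range_m) pairs: a sum of
    # complex exponentials whose phase slope is set by range.
    return sum(a * cmath.exp(-1j * 4 * cmath.pi * df * n * r / c)
               for a, r in scatterers)

scatterers = [(1.0, 3.0), (0.5, 7.5)]   # two point scatterers
full_band = [scatterer_response(n, scatterers) for n in range(128)]
low_band  = full_band[:32]              # subband seen by sensor 1
high_band = full_band[96:]              # subband seen by sensor 2
# Fitting one full-range exponential model to both subbands jointly
# constrains the common poles (ranges), which is how mutual coherence
# between the sensors can be recovered.
```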

  19. On the nature of dissipative Timoshenko systems in light of the second spectrum of frequency

    NASA Astrophysics Data System (ADS)

    Almeida Júnior, D. S.; Ramos, A. J. A.

    2017-12-01

    In the present work, we prove that there exists a relation between a physical inconsistency known as the second spectrum of frequency, or non-physical spectrum, and the exponential decay of a dissipative Timoshenko system in which the damping mechanism acts on the angle of rotation. The so-called second spectrum is addressed in the stabilization scenario and, in particular, we show that the second spectrum of the classical Timoshenko model can be truncated by taking a damping mechanism. Also, we show that dissipative Timoshenko-type systems which are free of the second spectrum [based on important physical and historical observations made by Elishakoff (Advances mathematical modeling and experimental methods for materials and structures, solid mechanics and its applications, Springer, Berlin, pp 249-254, 2010), Elishakoff et al. (ASME Am Soc Mech Eng Appl Mech Rev 67(6):1-11, 2015) and Elishakoff et al. (Int J Solids Struct 109:143-151, 2017)] are exponentially stable for any values of the system coefficients. In this direction, we provide physical explanations of why weakly dissipative Timoshenko systems decay exponentially when the velocities of wave propagation are equal, as proved in the pioneering works by Soufyane (C R Acad Sci 328(8):731-734, 1999) and by Muñoz Rivera and Racke (Discrete Contin Dyn Syst B 9:1625-1639, 2003). Therefore, the second spectrum of the classical Timoshenko beam model plays an important role in explaining some results on exponential decay, and our investigations suggest paying attention to the eventual consequences of this spectrum in the stabilization setting for dissipative Timoshenko-type systems.

  20. Modelling the interactions between Pseudomonas putida and Escherichia coli O157:H7 in fish-burgers: use of the lag-exponential model and of a combined interaction index.

    PubMed

    Speranza, B; Bevilacqua, A; Mastromatteo, M; Sinigaglia, M; Corbo, M R

    2010-08-01

    The objective of the current study was to examine the interactions between Pseudomonas putida and Escherichia coli O157:H7 in coculture studies on fish-burgers packed in air and under different modified atmospheres (30:40:30 O2:CO2:N2, 5:95 O2:CO2 and 50:50 O2:CO2), throughout storage at 8 °C. The lag-exponential model was applied to describe the microbial growth. To give a quantitative measure of the occurring microbial interactions, two simple parameters were developed: the combined interaction index (CII) and the partial interaction index (PII). Under air, the interaction was significant (P < 0.05) only within the exponential growth phase (CII, 1.72), whereas under the modified atmospheres, the interactions were highly significant (P < 0.001) and occurred both in the exponential and in the stationary phase (CII ranged from 0.33 to 1.18). PII values for E. coli O157:H7 were lower than those calculated for Ps. putida. The interactions occurring in the system affected both the E. coli O157:H7 and the pseudomonad subpopulations. The packaging atmosphere proved to be a key element. The article provides some useful information on the interactions occurring between E. coli O157:H7 and Ps. putida on fish-burgers. The proposed index successfully describes the competitive growth of both micro-organisms, also giving a quantitative measure of a qualitative phenomenon.
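
A lag-exponential growth curve of the kind fitted here keeps log counts constant during the lag phase, then grows linearly in log units (i.e., exponentially in cell numbers) until a stationary maximum. The sketch below is a generic form with invented parameter values, not the fitted model from the study.

```python
def lag_exponential(t, n0=3.0, nmax=9.0, mu=0.5, lag=10.0):
    # n0, nmax: initial and maximum counts (log10 CFU/g)
    # mu: growth rate (log10 units per hour); lag: lag phase (hours)
    if t <= lag:
        return n0                       # lag phase: no net growth
    return min(nmax, n0 + mu * (t - lag))  # exponential, then stationary

curve = [lag_exponential(t) for t in range(48)]
# Interaction indices such as CII can then be built by comparing curves
# fitted to mono- and coculture data phase by phase.
```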

  1. Introducing correlations into carrier transport simulations of disordered materials through seeded nucleation: impact on density of states, carrier mobility, and carrier statistics

    NASA Astrophysics Data System (ADS)

    Brown, J. S.; Shaheen, S. E.

    2018-04-01

    Disorder in organic semiconductors has made it challenging to achieve performance gains; this is a result of the many competing and often nuanced mechanisms affecting charge transport. In this article, we attempt to illuminate one of these mechanisms in the hope of aiding experimentalists in exceeding current performance thresholds. Using a heuristic exponential function, energetic correlation has been added to the Gaussian disorder model (GDM). The new model is grounded in the concept that energetic correlations can arise in materials without strong dipoles or dopants, but may be a result of an incomplete crystal formation process. The proposed correlation has been used to explain the exponential tail states often observed in these materials; it is also better able to capture the carrier mobility field dependence, commonly known as the Poole-Frenkel dependence, when compared to the GDM. Investigation of simulated current transients shows that the exponential tail states do not necessitate Montroll and Scher fits. Montroll and Scher fits occur in the form of two distinct power law curves that share a common constant in their exponent; they are clearly observed as straight lines when the current transient is plotted on a log-log scale. Typically, these fits have been found appropriate for describing amorphous silicon and other disordered materials which display exponential tail states. Furthermore, we observe that the proposed correlation function leads to domains of energetically similar sites separated by boundaries where the site energies exhibit stochastic deviation. These boundary sites are found to be the source of the extended exponential tail states, and are responsible for high charge visitation frequency, which may be associated with the molecular turnover number and ultimately the material stability.

  2. Introducing correlations into carrier transport simulations of disordered materials through seeded nucleation: impact on density of states, carrier mobility, and carrier statistics.

    PubMed

    Brown, J S; Shaheen, S E

    2018-04-04

    Disorder in organic semiconductors has made it challenging to achieve performance gains; this is a result of the many competing and often nuanced mechanisms affecting charge transport. In this article, we attempt to illuminate one of these mechanisms in the hope of aiding experimentalists in exceeding current performance thresholds. Using a heuristic exponential function, energetic correlation has been added to the Gaussian disorder model (GDM). The new model is grounded in the concept that energetic correlations can arise in materials without strong dipoles or dopants, but may be a result of an incomplete crystal formation process. The proposed correlation has been used to explain the exponential tail states often observed in these materials; it is also better able to capture the carrier mobility field dependence, commonly known as the Poole-Frenkel dependence, when compared to the GDM. Investigation of simulated current transients shows that the exponential tail states do not necessitate Montroll and Scher fits. Montroll and Scher fits occur in the form of two distinct power law curves that share a common constant in their exponent; they are clearly observed as straight lines when the current transient is plotted on a log-log scale. Typically, these fits have been found appropriate for describing amorphous silicon and other disordered materials which display exponential tail states. Furthermore, we observe that the proposed correlation function leads to domains of energetically similar sites separated by boundaries where the site energies exhibit stochastic deviation. These boundary sites are found to be the source of the extended exponential tail states, and are responsible for high charge visitation frequency, which may be associated with the molecular turnover number and ultimately the material stability.

  3. Competitive regulation of plant allometry and a generalized model for the plant self-thinning process.

    PubMed

    Wang, Gang; Yuan, Jianli; Wang, Xizhi; Xiao, Sa; Huang, Wenbing

    2004-11-01

    Taking into account the individual growth form (allometry) in a plant population and the effects of intraspecific competition on allometry under the population self-thinning condition, and adopting Ogawa's allometric equation 1/y = 1/(ax^b) + 1/c as the expression of complex allometry, a generalized model describing the change mode of r (the self-thinning exponent in the self-thinning equation, log M = K + r log N, where M is mean plant mass, K is a constant, and N is population density) was constructed. Meanwhile, with reference to the changing process of population density to survival curve type B, the exponent, r, was calculated using the software MATHEMATICA 4.0. The results of the numerical simulation show that (1) the value of the self-thinning exponent, r, is mainly determined by the allometric parameters; of the three allometric parameters it is most sensitive to changes in b, with a and c taking second place; (2) the exponent, r, changes continuously from about -3 to the asymptote -1, and the slope of -3/2 is a transient value in the population self-thinning process; (3) it is not a 'law' that the slope of the self-thinning trajectory equals or approaches -3/2, and the long-running dispute in ecological research over whether or not the exponent, r, equals -3/2 is meaningless. So future studies on the plant self-thinning process should focus on investigating how plant neighbor competition affects the phenotypic plasticity of plant individuals, what the relationship is between the allometry mode and the self-thinning trajectory of a plant population and, in the light of evolution, how plants have adapted to competition pressure through plastic individual growth.
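
Ogawa's complex-allometry equation 1/y = 1/(ax^b) + 1/c behaves like the simple power law a·x^b for small x but saturates at the asymptote c for large x, which is what drives r away from a single fixed slope. The sketch below evaluates this equation with invented parameter values to show the two regimes.

```python
def ogawa(x, a=0.5, b=2.0, c=100.0):
    # Complex allometry: 1/y = 1/(a*x**b) + 1/c
    # -> y ~ a*x**b for small x, y -> c (asymptote) for large x
    return 1.0 / (1.0 / (a * x ** b) + 1.0 / c)

small = ogawa(0.1)     # power-law regime: close to a*x**b = 0.005
large = ogawa(1000.0)  # saturated regime: close to the asymptote c = 100
```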

  4. Feasibility of quasi-random band model in evaluating atmospheric radiance

    NASA Technical Reports Server (NTRS)

    Tiwari, S. N.; Mirakhur, N.

    1980-01-01

    The use of the quasi-random band model in evaluating upwelling atmospheric radiation is investigated. The spectral transmittance and total band absorptance are evaluated for selected molecular bands by using the line-by-line model, the quasi-random band model, the exponential sum fit method, and empirical correlations, and these are compared with the available experimental results. The atmospheric transmittance and upwelling radiance were calculated by using the line-by-line and quasi-random band models and were compared with the results of an existing program called LOWTRAN. The results obtained by the exponential sum fit and empirical relations were not in good agreement with experimental results, and their use cannot be justified for atmospheric studies. The line-by-line model was found to be the best model for atmospheric applications, but it is not practical because of high computational costs. The results of the quasi-random band model compare well with the line-by-line and experimental results. The use of the quasi-random band model is recommended for evaluation of the atmospheric radiation.

  5. Parameterization guidelines and considerations for hydrologic models

    Treesearch

    R.W. Malone; G. Yagow; C. Baffaut; M.W. Gitau; Z. Qi; Devendra Amatya; P.B. Parajuli; J.V. Bonta; T.R. Green

    2015-01-01

     Imparting knowledge of the physical processes of a system to a model and determining a set of parameter values for a hydrologic or water quality model application (i.e., parameterization) are important and difficult tasks. An exponential...

  6. Cellular automata model for use with real freeway data

    DOT National Transportation Integrated Search

    2002-01-01

    The exponential rate of increase in freeway traffic is expanding the need for accurate and : realistic methods to model and predict traffic flow. Traffic modeling and simulation facilitates an : examination of both microscopic and macroscopic views o...

  7. Negative-hydrogen-ion production from a nanoporous 12CaO • 7Al2O3 electride surface

    NASA Astrophysics Data System (ADS)

    Sasao, Mamiko; Moussaoui, Roba; Kogut, Dmitry; Ellis, James; Cartry, Gilles; Wada, Motoi; Tsumori, Katsuyoshi; Hosono, Hideo

    2018-06-01

    A high production rate of negative hydrogen ions (H−) was observed from a nanoporous 12CaO • 7Al2O3 (C12A7) electride surface immersed in hydrogen/deuterium low-pressure plasmas. The target was negatively biased at 20–130 V, and the target surface was bombarded by H3+ ions from the plasma. The production rate was compared with that from a clean molybdenum surface. Using the pseudo-exponential work-function dependence of the H− production rate, the total H− yield from the C12A7 electride surface bombarded at 80 V was evaluated to be 25% of that from a cesiated molybdenum surface with the lowest work function. The measured H− energy spectrum indicates that the major production mechanism is desorption by sputtering. This material has potential to be used as a production surface of cesium-free negative ion sources for accelerators, heating beams in nuclear fusion, and surface modification for industrial applications.

  8. A neural-network-based exponential H∞ synchronisation for chaotic secure communication via improved genetic algorithm

    NASA Astrophysics Data System (ADS)

    Hsiao, Feng-Hsiag

    2016-10-01

    In this study, a novel approach via improved genetic algorithm (IGA)-based fuzzy observer is proposed to realise exponential optimal H∞ synchronisation and secure communication in multiple time-delay chaotic (MTDC) systems. First, an original message is inserted into the MTDC system. Then, a neural-network (NN) model is employed to approximate the MTDC system. Next, a linear differential inclusion (LDI) state-space representation is established for the dynamics of the NN model. Based on this LDI state-space representation, this study proposes a delay-dependent exponential stability criterion derived in terms of Lyapunov's direct method, thus ensuring that the trajectories of the slave system approach those of the master system. Subsequently, the stability condition of this criterion is reformulated into a linear matrix inequality (LMI). Due to the GA's random global optimisation search capabilities, the lower and upper bounds of the search space can be set so that the GA will seek better fuzzy observer feedback gains, accelerating feedback gain-based synchronisation via the LMI-based approach. IGA, which exhibits better performance than traditional GA, is used to synthesise a fuzzy observer to not only realise the exponential synchronisation, but also achieve optimal H∞ performance by minimising the disturbance attenuation level and recovering the transmitted message. Finally, a numerical example with simulations is given in order to demonstrate the effectiveness of our approach.

  9. Growth rate for black hole instabilities

    NASA Astrophysics Data System (ADS)

    Prabhu, Kartik; Wald, Robert

    2015-04-01

    Hollands and Wald showed that dynamic stability of stationary axisymmetric black holes is equivalent to positivity of the canonical energy on a space of linearised axisymmetric perturbations satisfying certain boundary and gauge conditions. Using a reflection isometry of the background, we split the energy into kinetic and potential parts. We show that the kinetic energy is positive. In the case that the potential energy is negative, we show the existence of exponentially growing perturbations and further obtain a variational formula for the growth rate.

  10. Global effects of local human population density and distance to markets on the condition of coral reef fisheries.

    PubMed

    Cinner, Joshua E; Graham, Nicholas A J; Huchery, Cindy; Macneil, M Aaron

    2013-06-01

    Coral reef fisheries support the livelihoods of millions of people but have been severely and negatively affected by anthropogenic activities. We conducted a systematic review of published data on the biomass of coral reef fishes to explore how the condition of reef fisheries is related to the density of local human populations, proximity of the reef to markets, and key environmental variables (including broad geomorphologic reef type, reef area, and net productivity). When only population density and environmental covariates were considered, high variability in fisheries conditions at low human population densities resulted in relatively weak explanatory models. The presence or absence of human settlements, habitat type, and distance to fish markets provided a much stronger explanatory model for the condition of reef fisheries. Fish biomass remained relatively low within 14 km of markets, then biomass increased exponentially as distance from reefs to markets increased. Our results suggest the need for an increased science and policy focus on markets as both a key driver of the condition of reef fisheries and a potential source of solutions. © 2012 Society for Conservation Biology.
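
The market-distance relationship reported here (biomass held low within about 14 km of a market, then increasing exponentially with distance) can be written as a simple piecewise model. The coefficients below are invented for illustration; only the functional form follows the abstract.

```python
import math

def biomass(distance_km, b_near=200.0, k=0.05, threshold=14.0):
    # b_near: depressed biomass near markets (e.g., kg/ha, illustrative)
    # k: assumed exponential recovery rate with distance beyond threshold
    if distance_km <= threshold:
        return b_near                       # relatively low and flat
    return b_near * math.exp(k * (distance_km - threshold))

near = biomass(10.0)   # inside the ~14 km market footprint
far = biomass(100.0)   # remote reef: exponentially higher biomass
```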

  11. Physicochemical characterization of Capstone depleted uranium aerosols IV: in vitro solubility analysis.

    PubMed

    Guilmette, Raymond A; Cheng, Yung Sung

    2009-03-01

    As part of the Capstone Depleted Uranium (DU) Aerosol Study, the solubility of selected aerosol samples was measured using an accepted in vitro dissolution test system. This static system was employed along with a SUF (synthetic ultrafiltrate) solvent, which is designed to mimic the physiological chemistry of extracellular fluid. Using sequentially obtained solvent samples, the dissolution behavior over a 46-d test period was evaluated by fitting the measurement data to two- or three-component negative exponential functions. These functions were then compared with Type M and S absorption taken from the International Commission on Radiological Protection Publication 66 Human Respiratory Tract Model. The results indicated that there was a substantial variability in solubility of the aerosols, which in part depended on the type of armor being impacted by the DU penetrator and the particle size fraction being tested. Although some trends were suggested, the variability noted leads to uncertainties in predicting the solubility of other DU-based aerosols. Nevertheless, these data provide a useful experimental basis for modeling the intake-dose relationships for inhaled DU aerosols arising from penetrator impact on armored vehicles.
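
The multi-component negative exponential functions used to fit the dissolution data have the form f(t) = Σ f_i·exp(−λ_i·t), with each component representing a fraction of the material dissolving at its own rate. The sketch below uses a two-component form with invented fractions and rate constants, not the Capstone measurements.

```python
import math

def fraction_undissolved(t_days, f1=0.2, lam1=0.7, f2=0.8, lam2=0.005):
    # f1/lam1: fast-dissolving fraction and its rate constant (1/day)
    # f2/lam2: slow-dissolving fraction and its rate constant (1/day)
    return f1 * math.exp(-lam1 * t_days) + f2 * math.exp(-lam2 * t_days)

day0 = fraction_undissolved(0.0)    # all material present (f1 + f2 = 1)
day46 = fraction_undissolved(46.0)  # end of the 46-d test period
# Fitted parameters of this form can then be compared against the
# ICRP Type M and Type S absorption categories.
```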

  12. Efficiency Analysis of Waveform Shape for Electrical Excitation of Nerve Fibers

    PubMed Central

    Wongsarnpigoon, Amorn; Woock, John P.; Grill, Warren M.

    2011-01-01

    Stimulation efficiency is an important consideration in the stimulation parameters of implantable neural stimulators. The objective of this study was to analyze the effects of waveform shape and duration on the charge, power, and energy efficiency of neural stimulation. Using a population model of mammalian axons and in vivo experiments on cat sciatic nerve, we analyzed the stimulation efficiency of four waveform shapes: square, rising exponential, decaying exponential, and rising ramp. No waveform was simultaneously energy-, charge-, and power-optimal, and differences in efficiency among waveform shapes varied with pulse width (PW). For short PWs (≤0.1 ms), square waveforms were no less energy-efficient than exponential waveforms, and the most charge-efficient shape was the ramp. For long PWs (≥0.5 ms), the square was the least energy-efficient and charge-efficient shape, but across most PWs, the square was the most power-efficient shape. Rising exponentials provided no practical gains in efficiency over the other shapes, and our results refute previous claims that the rising exponential is the energy-optimal shape. An improved understanding of how stimulation parameters affect stimulation efficiency will help improve the design and programming of implantable stimulators to minimize tissue damage and extend battery life. PMID:20388602
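
The charge and energy delivered by the waveform shapes compared above are simply the integrals of i(t) and i(t)²R over the pulse. The sketch below integrates them numerically for equal-peak, equal-width pulses with invented amplitudes and load; the study's actual comparison is at equal activation threshold, so only the bookkeeping, not the conclusions, is reproduced here.

```python
import math

def pulse(shape, n=1000, peak=1.0, tau_frac=0.25):
    # Sampled current waveforms of equal peak amplitude and duration.
    if shape == "square":
        return [peak] * n
    if shape == "ramp":
        return [peak * i / (n - 1) for i in range(n)]
    if shape == "rising_exp":
        tau = tau_frac * n
        return [peak * math.exp((i - (n - 1)) / tau) for i in range(n)]
    raise ValueError(shape)

def charge(samples, dt=1.0):
    return sum(samples) * dt                       # integral of i dt

def energy(samples, dt=1.0, r_load=1.0):
    return sum(s * s for s in samples) * dt * r_load  # integral of i^2 R dt

shapes = ("square", "ramp", "rising_exp")
q = {s: charge(pulse(s)) for s in shapes}
e = {s: energy(pulse(s)) for s in shapes}
# At equal peak current, the square pulse delivers the most charge and
# energy; ramp and rising-exponential pulses deliver progressively less.
```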

  13. Radiometric Calibration of a Dual-Wavelength, Full-Waveform Terrestrial Lidar.

    PubMed

    Li, Zhan; Jupp, David L B; Strahler, Alan H; Schaaf, Crystal B; Howe, Glenn; Hewawasam, Kuravi; Douglas, Ewan S; Chakrabarti, Supriya; Cook, Timothy A; Paynter, Ian; Saenz, Edward J; Schaefer, Michael

    2016-03-02

    Radiometric calibration of the Dual-Wavelength Echidna® Lidar (DWEL), a full-waveform terrestrial laser scanner with two simultaneously-pulsing infrared lasers at 1064 nm and 1548 nm, provides accurate dual-wavelength apparent reflectance (ρapp), a physically-defined value that is related to the radiative and structural characteristics of scanned targets and independent of range and instrument optics and electronics. The errors of ρapp are 8.1% for 1064 nm and 6.4% for 1548 nm. A sensitivity analysis shows that ρapp error is dominated by range errors at near ranges, but by lidar intensity errors at far ranges. Our semi-empirical model for radiometric calibration combines a generalized logistic function to explicitly model telescopic effects due to defocusing of return signals at near range with a negative exponential function to model the fall-off of return intensity with range. Accurate values of ρapp from the radiometric calibration improve the quantification of vegetation structure, facilitate the comparison and coupling of lidar datasets from different instruments, campaigns or wavelengths and advance the utilization of bi- and multi-spectral information added to 3D scans by novel spectral lidars.
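
The described semi-empirical form (a logistic near-range factor for telescopic defocusing multiplied by a negative exponential fall-off with range) can be sketched as below. The coefficients are invented placeholders, not the DWEL calibration values, and the exact generalized-logistic parameterization in the paper may differ.

```python
import math

def return_intensity(r, c0=1000.0, growth=2.0, r0=2.0, k=0.05):
    # logistic factor: suppresses returns at near range (defocusing),
    # rising toward 1 beyond the crossover range r0 (m)
    logistic = 1.0 / (1.0 + math.exp(-growth * (r - r0)))
    # negative exponential fall-off of return intensity with range
    return c0 * logistic * math.exp(-k * r)

near = return_intensity(0.5)   # defocused: weak despite short range
mid = return_intensity(5.0)    # past the logistic rise: strongest
far = return_intensity(60.0)   # exponential fall-off dominates
```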

  14. Complexity analysis based on generalized deviation for financial markets

    NASA Astrophysics Data System (ADS)

    Li, Chao; Shang, Pengjian

    2018-03-01

    In this paper, a new modified method, the complexity analysis based on generalized deviation, is proposed as a measure to investigate the correlation between past price and future volatility for financial time series. In comparison with the earlier retarded volatility model, the new approach is both simple and computationally efficient. The method, based on the generalized deviation function, offers an exhaustive way of quantifying the rules of the financial market. Robustness of this method is verified by numerical experiments with both artificial and financial time series. Results show that the generalized deviation complexity analysis method not only identifies the volatility of financial time series, but also provides a comprehensive way of distinguishing the different characteristics of stock indices and individual stocks. Exponential functions can be used to successfully fit the volatility curves and quantify the changes of complexity for stock market data. We then study the influence of the negative domain of the deviation coefficient and the differences between volatile periods and calm periods. After analyzing the experimental model, we find that the generalized deviation model has definite advantages in exploring the relationship between historical returns and future volatility.

  15. Radiometric Calibration of a Dual-Wavelength, Full-Waveform Terrestrial Lidar

    PubMed Central

    Li, Zhan; Jupp, David L. B.; Strahler, Alan H.; Schaaf, Crystal B.; Howe, Glenn; Hewawasam, Kuravi; Douglas, Ewan S.; Chakrabarti, Supriya; Cook, Timothy A.; Paynter, Ian; Saenz, Edward J.; Schaefer, Michael

    2016-01-01

    Radiometric calibration of the Dual-Wavelength Echidna® Lidar (DWEL), a full-waveform terrestrial laser scanner with two simultaneously-pulsing infrared lasers at 1064 nm and 1548 nm, provides accurate dual-wavelength apparent reflectance (ρapp), a physically-defined value that is related to the radiative and structural characteristics of scanned targets and independent of range and instrument optics and electronics. The errors of ρapp are 8.1% for 1064 nm and 6.4% for 1548 nm. A sensitivity analysis shows that ρapp error is dominated by range errors at near ranges, but by lidar intensity errors at far ranges. Our semi-empirical model for radiometric calibration combines a generalized logistic function to explicitly model telescopic effects due to defocusing of return signals at near range with a negative exponential function to model the fall-off of return intensity with range. Accurate values of ρapp from the radiometric calibration improve the quantification of vegetation structure, facilitate the comparison and coupling of lidar datasets from different instruments, campaigns or wavelengths and advance the utilization of bi- and multi-spectral information added to 3D scans by novel spectral lidars. PMID:26950126

  16. Comparison of stretched-exponential and monoexponential model diffusion-weighted imaging in prostate cancer and normal tissues.

    PubMed

    Liu, Xiaohang; Zhou, Liangping; Peng, Weijun; Wang, He; Zhang, Yong

    2015-10-01

    To compare stretched-exponential and monoexponential model diffusion-weighted imaging (DWI) in prostate cancer and normal tissues. Twenty-seven patients with prostate cancer underwent a DWI exam using b-values of 0, 500, 1000, and 2000 s/mm². The distributed diffusion coefficients (DDC) and α values of prostate cancer and normal tissues were obtained with the stretched-exponential model, and apparent diffusion coefficient (ADC) values with the monoexponential model. The ADC, DDC (both in 10⁻³ mm²/s), and α values (range, 0-1) were compared among different prostate tissues. The ADC and DDC were also compared and correlated in each tissue, and the standardized differences between DDC and ADC were compared among different tissues. Data were obtained for 31 cancers, 36 normal peripheral zone (PZ) and 26 normal central gland (CG) tissues. The ADC (0.71 ± 0.12), DDC (0.60 ± 0.18), and α value (0.64 ± 0.05) of tumor were all significantly lower than those of the normal PZ (1.41 ± 0.22, 1.47 ± 0.20, and 0.85 ± 0.09) and CG (1.25 ± 0.14, 1.32 ± 0.13, and 0.82 ± 0.06) (all P < 0.05). ADC was significantly higher than DDC in cancer, but lower than DDC in the PZ and CG (all P < 0.05). The ADC and DDC were strongly correlated (R² = 0.99, 0.98, 0.99, respectively, all P < 0.05) in all the tissues, and the standardized difference between ADC and DDC in cancer was slight but significantly higher than that in normal tissue. The stretched-exponential model DWI provides more parameters for distinguishing prostate cancer from normal tissue and reveals slight differences between DDC and ADC values. © 2015 Wiley Periodicals, Inc.
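
The two signal models compared in this study are the monoexponential form S(b) = S0·exp(−b·ADC) and the stretched-exponential form S(b) = S0·exp(−(b·DDC)^α) with 0 < α ≤ 1. The sketch below evaluates both at the study's b-values, using coefficient values taken from the tumor results above for illustration.

```python
import math

def mono(b, s0=1.0, adc=0.71e-3):
    # monoexponential DWI signal; b in s/mm^2, adc in mm^2/s
    return s0 * math.exp(-b * adc)

def stretched(b, s0=1.0, ddc=0.60e-3, alpha=0.64):
    # stretched-exponential DWI signal; alpha stretches the decay
    return s0 * math.exp(-((b * ddc) ** alpha))

b_values = [0, 500, 1000, 2000]
mono_signal = [mono(b) for b in b_values]
str_signal = [stretched(b) for b in b_values]
# With alpha < 1, the stretched model decays faster at low b and more
# slowly at high b than a monoexponential with a comparable coefficient,
# which is why DDC and ADC differ slightly.
```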

  17. Rapid Global Fitting of Large Fluorescence Lifetime Imaging Microscopy Datasets

    PubMed Central

    Warren, Sean C.; Margineanu, Anca; Alibhai, Dominic; Kelly, Douglas J.; Talbot, Clifford; Alexandrov, Yuriy; Munro, Ian; Katan, Matilda

    2013-01-01

    Fluorescence lifetime imaging (FLIM) is widely applied to obtain quantitative information from fluorescence signals, particularly using Förster Resonant Energy Transfer (FRET) measurements to map, for example, protein-protein interactions. Extracting FRET efficiencies or population fractions typically entails fitting data to complex fluorescence decay models but such experiments are frequently photon constrained, particularly for live cell or in vivo imaging, and this leads to unacceptable errors when analysing data on a pixel-wise basis. Lifetimes and population fractions may, however, be more robustly extracted using global analysis to simultaneously fit the fluorescence decay data of all pixels in an image or dataset to a multi-exponential model under the assumption that the lifetime components are invariant across the image (dataset). This approach is often considered to be prohibitively slow and/or computationally expensive but we present here a computationally efficient global analysis algorithm for the analysis of time-correlated single photon counting (TCSPC) or time-gated FLIM data based on variable projection. It makes efficient use of both computer processor and memory resources, requiring less than a minute to analyse time series and multiwell plate datasets with hundreds of FLIM images on standard personal computers. This lifetime analysis takes account of repetitive excitation, including fluorescence photons excited by earlier pulses contributing to the fit, and is able to accommodate time-varying backgrounds and instrument response functions. 
We demonstrate that this global approach allows us to readily fit time-resolved fluorescence data to complex models including a four-exponential model of a FRET system, for which the FRET efficiencies of the two species of a bi-exponential donor are linked, and polarisation-resolved lifetime data, where a fluorescence intensity and bi-exponential anisotropy decay model is applied to the analysis of live cell homo-FRET data. A software package implementing this algorithm, FLIMfit, is available under an open source licence through the Open Microscopy Environment. PMID:23940626

  18. Reliability analysis using an exponential power model with bathtub-shaped failure rate function: a Bayes study.

    PubMed

    Shehla, Romana; Khan, Athar Ali

    2016-01-01

    Models with a bathtub-shaped hazard function have been widely accepted in the fields of reliability and medicine and are particularly useful in reliability-related decision making and cost analysis. In this paper, the exponential power model, capable of assuming an increasing as well as a bathtub shape, is studied. This article makes a Bayesian study of the same model and simultaneously shows how posterior simulations based on Markov chain Monte Carlo algorithms can be straightforward and routine in R. The study is carried out for complete as well as censored data, under the assumption of weakly informative priors for the parameters. In addition to this, inference interest focuses on the posterior distribution of non-linear functions of the parameters. Also, the model has been extended to include continuous explanatory variables, and the R codes are well illustrated. Two real data sets are considered for illustrative purposes.
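
In the exponential power (Smith-Bain) model the cumulative hazard is H(t) = exp((λt)^β) − 1, giving the hazard h(t) = β·λ·(λt)^(β−1)·exp((λt)^β), which is bathtub-shaped for β < 1: decreasing early, nearly flat in mid-life, then sharply increasing. The parameter values below are invented to display that shape.

```python
import math

def hazard(t, lam=0.01, beta=0.5):
    # hazard of the exponential power model:
    # h(t) = beta * lam * (lam*t)**(beta-1) * exp((lam*t)**beta)
    return beta * lam * (lam * t) ** (beta - 1) * math.exp((lam * t) ** beta)

h_early = hazard(0.1)      # high early ("infant mortality") hazard
h_mid = hazard(50.0)       # low, flat mid-life hazard
h_late = hazard(50000.0)   # steeply rising wear-out hazard
```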

  19. Proportional exponentiated link transformed hazards (ELTH) models for discrete time survival data with application

    PubMed Central

    Joeng, Hee-Koung; Chen, Ming-Hui; Kang, Sangwook

    2015-01-01

    Discrete survival data are routinely encountered in many fields of study including behavior science, economics, epidemiology, medicine, and social science. In this paper, we develop a class of proportional exponentiated link transformed hazards (ELTH) models. We carry out a detailed examination of the role of links in fitting discrete survival data and estimating regression coefficients. Several interesting results are established regarding the choice of links and baseline hazards. We also characterize the conditions for improper survival functions and the conditions for existence of the maximum likelihood estimates under the proposed ELTH models. An extensive simulation study is conducted to examine the empirical performance of the parameter estimates under the Cox proportional hazards model by treating discrete survival times as continuous survival times, and the model comparison criteria, AIC and BIC, in determining links and baseline hazards. A SEER breast cancer dataset is analyzed in details to further demonstrate the proposed methodology. PMID:25772374

  20. Clinical and MRI activity as determinants of sample size for pediatric multiple sclerosis trials

    PubMed Central

    Verhey, Leonard H.; Signori, Alessio; Arnold, Douglas L.; Bar-Or, Amit; Sadovnick, A. Dessa; Marrie, Ruth Ann; Banwell, Brenda

    2013-01-01

    Objective: To estimate sample sizes for pediatric multiple sclerosis (MS) trials using new T2 lesion count, annualized relapse rate (ARR), and time to first relapse (TTFR) endpoints. Methods: Poisson and negative binomial models were fit to new T2 lesion and relapse count data, and negative binomial time-to-event and exponential models were fit to TTFR data of 42 children with MS enrolled in a national prospective cohort study. Simulations were performed by resampling from the best-fitting model of new T2 lesion count, number of relapses, or TTFR, under various assumptions of the effect size, trial duration, and model parameters. Results: Assuming a 50% reduction in new T2 lesions over 6 months, 90 patients/arm are required, whereas 165 patients/arm are required for a 40% treatment effect. Sample sizes for 2-year trials using relapse-related endpoints are lower than those for 1-year trials. For 2-year trials and a conservative assumption of overdispersion (ϑ), sample sizes range from 70 patients/arm (using ARR) to 105 patients/arm (TTFR) for a 50% reduction in relapses, and 230 patients/arm (ARR) to 365 patients/arm (TTFR) for a 30% relapse reduction. Assuming a less conservative ϑ, 2-year trials using ARR require 45 patients/arm (60 patients/arm for TTFR) for a 50% reduction in relapses and 145 patients/arm (200 patients/arm for TTFR) for a 30% reduction. Conclusion: Six-month phase II trials using new T2 lesion count as an endpoint are feasible in the pediatric MS population; however, trials powered on ARR or TTFR will need to be 2 years in duration and will require multicentered collaboration. PMID:23966255

  1. Modeling T1 and T2 relaxation in bovine white matter

    NASA Astrophysics Data System (ADS)

    Barta, R.; Kalantari, S.; Laule, C.; Vavasour, I. M.; MacKay, A. L.; Michal, C. A.

    2015-10-01

    The fundamental basis of T1 and T2 contrast in brain MRI is not well understood; recent literature contains conflicting views on the nature of relaxation in white matter (WM). We investigated the effects of inversion pulse bandwidth on measurements of T1 and T2 in WM. Hybrid inversion-recovery/Carr-Purcell-Meiboom-Gill experiments with broad or narrow bandwidth inversion pulses were applied to bovine WM in vitro. Data were analysed with the commonly used 1D non-negative least squares (NNLS) algorithm, a 2D-NNLS algorithm, and a four-pool model based upon microscopically distinguishable WM compartments (myelin non-aqueous protons, myelin water, non-myelin non-aqueous protons and intra/extracellular water) that incorporated magnetization exchange between adjacent compartments. 1D-NNLS showed that different T2 components had different T1 behaviours and yielded dissimilar results for the two inversion conditions. 2D-NNLS revealed significantly more complicated T1/T2 distributions for narrow bandwidth than for broad bandwidth inversion pulses. The four-pool model fits allow physical interpretation of the parameters, fit better than the NNLS techniques, and fit the results from both inversion conditions using the same parameters. The results demonstrate that exchange cannot be neglected when analysing experimental inversion recovery data from WM, in part because it can introduce exponential components with negative amplitude coefficients that cannot be correctly modeled with non-negative fitting techniques. While assignment of an individual T1 to one particular pool is not possible, the results suggest that under carefully controlled experimental conditions the amplitude of an apparent short T1 component might be used to quantify myelin water.

  2. Theoretical and Experimental Study of Bacterial Colony Growth in 3D

    NASA Astrophysics Data System (ADS)

    Shao, Xinxian; Mugler, Andrew; Nemenman, Ilya

    2014-03-01

    Bacterial cells growing in liquid culture have been well studied and modeled. However, in nature, bacteria often grow as biofilms or colonies in physically structured habitats. A comprehensive model for population growth in such conditions has not yet been developed. Based on the well-established theory for bacterial growth in liquid culture, we develop a model for colony growth in 3D in which a homogeneous colony of cells locally consumes a diffusing nutrient. We predict that colony growth is initially exponential, as in liquid culture, but quickly slows to sub-exponential after nutrient is locally depleted. This prediction is consistent with our experiments performed with E. coli in soft agar. Our model provides a baseline to which studies of complex growth processes, such as spatially and phenotypically heterogeneous colonies, must be compared.
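
    The crossover from exponential to sub-exponential growth described above can be illustrated with a minimal nutrient-limited sketch (a Monod-type model; the function names, parameter names, and all values here are illustrative rather than taken from the study): a population N grows at a per-capita rate that saturates in the remaining nutrient S, so growth is exponential while nutrient is plentiful and slows once it is depleted.

```python
# Minimal sketch of nutrient-limited growth (illustrative parameters):
#   dN/dt = g(S) * N,   dS/dt = -g(S) * N / yield_c,
# with Monod kinetics g(S) = g_max * S / (K + S).

def simulate_colony(n0=1.0, s0=100.0, g_max=1.0, K=10.0, yield_c=1.0,
                    dt=0.001, t_end=20.0):
    """Forward-Euler integration; returns a list of (time, population) pairs."""
    n, s, t = n0, s0, 0.0
    history = [(t, n)]
    while t < t_end:
        growth = g_max * s / (K + s) * n      # total growth at this instant
        n += growth * dt
        s = max(s - growth * dt / yield_c, 0.0)
        t += dt
        history.append((t, n))
    return history

hist = simulate_colony()
```

    Early on, the trajectory is close to exp(g_max * s0 / (K + s0) * t); once S is exhausted, the population saturates near n0 + yield_c * s0.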

  3. Transient modeling in simulation of hospital operations for emergency response.

    PubMed

    Paul, Jomon Aliyas; George, Santhosh K; Yi, Pengfei; Lin, Li

    2006-01-01

    Rapid estimates of hospital capacity after an event that may cause a disaster can assist disaster-relief efforts. Because of the dynamics of hospitals following such an event, it is necessary to model the behavior of the system accurately. A transient modeling approach using simulation and exponential functions is presented, along with its application to an earthquake situation. The parameters of the exponential model are regressed using outputs from designed simulation experiments. The developed model is capable of representing transient patient waiting times during a disaster. Most importantly, the modeling approach allows real-time capacity estimation of hospitals of various sizes and capabilities. This research also analyzes the effects of priority-based routing of patients within the hospital on patient waiting times, determined using various patient mixes. The model guides patients based on the severity of their injuries and queues patients requiring critical care depending on their remaining survivability time. The model also accounts for the impact of prehospital transport time on patient waiting time.

  4. Determinants of self-rated health of Warsaw inhabitants.

    PubMed

    Supranowicz, Piotr; Wysocki, Mirosław J; Car, Justyna; Debska, Anna; Gebska-Kuczerowska, Anita

    2012-01-01

    Self-rated health is a one-point measure commonly used for assessing subjectively perceived health, covering a wide range of an individual's health aspects. The aim of our study was to examine the extent to which self-rated health reflects differences due to demographic characteristics; physical, psychical and social well-being; health disorders; occurrence of chronic disease; and negative life events in Polish social and cultural conditions. Data were collected by non-addressed questionnaire methods from 402 Warsaw inhabitants. The questionnaire contained questions concerning self-rated health, physical, psychical and social well-being, the use of health care services, occurrence of chronic disease and contact with negative life events. The analysis showed that worse self-rated health increased exponentially with age and, less sharply, with lower level of education. Pensioners were more likely to assess their own health worse than the employed or students; such a difference was not found for the unemployed. Compared with the married, the self-rated health of divorced or widowed respondents was lower. Gender did not differentiate self-rated health. With regard to well-being, self-rated health decreased linearly with physical well-being; for social and, especially, psychical well-being the differences were significant but more complicated. Hospitalisation, especially repeated hospitalisation, strongly determined worse self-rated health. In contrast, the relationships between self-rated health and sickness absence or frequency of contact with a physician were weaker. Chronic diseases substantially increased the risk of poorer self-rated health, and their co-morbidity increased the risk exponentially. Patients with cancer were the group in which the risk several times exceeded that reported for patients with other diseases. Regarding negative life events, only experience of violence and financial difficulties resulted in worse self-rated health. Our findings confirmed the usefulness of self-rated health for public health research.

  5. Integrated research in constitutive modelling at elevated temperatures, part 2

    NASA Technical Reports Server (NTRS)

    Haisler, W. E.; Allen, D. H.

    1986-01-01

    Four current viscoplastic models are compared experimentally with Inconel 718 at 1100 F. A series of tests were performed to create a sufficient data base from which to evaluate material constants. The models used include Bodner's anisotropic model; Krieg, Swearengen, and Rhode's model; Schmidt and Miller's model; and Walker's exponential model.

  6. Modeling Population Growth and Extinction

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    2009-01-01

    The exponential growth model and the logistic model typically introduced in the mathematics curriculum presume that a population grows exclusively. In reality, species can also die out and more sophisticated models that take the possibility of extinction into account are needed. In this article, two extensions of the logistic model are considered,…

  7. Exponential Thurston maps and limits of quadratic differentials

    NASA Astrophysics Data System (ADS)

    Hubbard, John; Schleicher, Dierk; Shishikura, Mitsuhiro

    2009-01-01

    We give a topological characterization of postsingularly finite topological exponential maps, i.e., universal covers g: ℂ → ℂ ∖ {0} such that 0 has a finite orbit. Such a map is either Thurston equivalent to a unique holomorphic exponential map λe^z or it has a topological obstruction called a degenerate Levy cycle. This is the first analog, for the case of infinite degree, of Thurston's topological characterization theorem of rational maps, as published by Douady and Hubbard. One main tool is a theorem about the distribution of mass of an integrable quadratic differential with a given number of poles, providing an almost compact space of models for the entire mass of quadratic differentials. This theorem is given for arbitrary Riemann surfaces of finite type in a uniform way.

  8. Photoluminescence study of MBE grown InGaN with intentional indium segregation

    NASA Astrophysics Data System (ADS)

    Cheung, Maurice C.; Namkoong, Gon; Chen, Fei; Furis, Madalina; Pudavar, Haridas E.; Cartwright, Alexander N.; Doolittle, W. Alan

    2005-05-01

    Proper control of MBE growth conditions has yielded an In0.13Ga0.87N thin film sample with emission consistent with In-segregation. The photoluminescence (PL) from this epilayer showed multiple emission components. Moreover, temperature and power dependent studies of the PL demonstrated that two of the components were excitonic in nature and consistent with indium phase separation. At 15 K, time-resolved PL showed a non-exponential PL decay that was well fitted with the stretched exponential solution expected for disordered systems. Consistent with the assumed carrier hopping mechanism of this model, the effective lifetime and the stretched exponential parameter decrease with increasing emission energy. Finally, room temperature micro-PL using a confocal microscope showed spatial clustering of low energy emission.
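
    The stretched-exponential form invoked above, I(t) = I0·exp(-(t/τ)^β), can be fitted in a simple way as a sketch; the crude grid-search fitter and all parameter values below are illustrative only and are not the analysis used in the study.

```python
import numpy as np

def stretched_exp(t, i0, tau, beta):
    """I(t) = i0 * exp(-(t / tau)**beta); beta = 1 recovers a plain exponential."""
    return i0 * np.exp(-(t / tau) ** beta)

def fit_grid(t, y, taus, betas):
    """Crude grid search over (tau, beta), with i0 pinned to y[0]."""
    best, best_err = None, np.inf
    for tau in taus:
        for beta in betas:
            err = np.sum((stretched_exp(t, y[0], tau, beta) - y) ** 2)
            if err < best_err:
                best, best_err = (tau, beta), err
    return best

t = np.linspace(0.0, 10.0, 200)
y = stretched_exp(t, 1.0, 2.0, 0.7)               # synthetic decay, no noise
tau_hat, beta_hat = fit_grid(t, y,
                             np.linspace(0.5, 4.0, 36),
                             np.linspace(0.3, 1.0, 15))
```

    On this noiseless synthetic decay the search recovers tau ≈ 2 and beta ≈ 0.7; a smaller beta stretches the decay over more decades of time, which is the signature of a disordered system.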

  9. The Extended Erlang-Truncated Exponential distribution: Properties and application to rainfall data.

    PubMed

    Okorie, I E; Akpanta, A C; Ohakwe, J; Chikezie, D C

    2017-06-01

    The Erlang-Truncated Exponential (ETE) distribution is modified and the new lifetime distribution is called the Extended Erlang-Truncated Exponential (EETE) distribution. Some statistical and reliability properties of the new distribution are given, and the method of maximum likelihood is proposed for estimating the model parameters. The usefulness and flexibility of the EETE distribution are illustrated with an uncensored data set, and its fit is compared with that of the ETE and three other three-parameter distributions. Results based on the minimized log-likelihood ([Formula: see text]), Akaike information criterion (AIC), Bayesian information criterion (BIC) and the generalized Cramér-von Mises [Formula: see text] statistic show that the EETE distribution provides a more reasonable fit than the other competing distributions.

  10. Chronology of Postglacial Eruptive Activity and Calculation of Eruption Probabilities for Medicine Lake Volcano, Northern California

    USGS Publications Warehouse

    Nathenson, Manuel; Donnelly-Nolan, Julie M.; Champion, Duane E.; Lowenstern, Jacob B.

    2007-01-01

    Medicine Lake volcano has had 4 eruptive episodes in its postglacial history (since 13,000 years ago) comprising 16 eruptions. Time intervals between events within the episodes are relatively short, whereas time intervals between the episodes are much longer. An updated radiocarbon chronology for these eruptions is presented that uses paleomagnetic data to constrain the choice of calibrated ages. This chronology is used with exponential, Weibull, and mixed-exponential probability distributions to model the data for time intervals between eruptions. The mixed exponential distribution is the best match to the data and provides estimates for the conditional probability of a future eruption given the time since the last eruption. The probability of an eruption at Medicine Lake volcano in the next year from today is 0.00028.
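
    How a mixed-exponential interevent model produces such conditional probabilities can be sketched as follows; the mixture weight and component means below are invented for illustration and are not the Medicine Lake estimates.

```python
import math

def survival(t, p=0.7, mu1=100.0, mu2=3000.0):
    """S(t) = P(interval > t) for a two-component exponential mixture:
    short within-episode gaps (mean mu1) and long between-episode gaps (mean mu2)."""
    return p * math.exp(-t / mu1) + (1.0 - p) * math.exp(-t / mu2)

def conditional_prob(t_since, window, **kw):
    """P(eruption within `window` years | quiet for `t_since` years)."""
    return 1.0 - survival(t_since + window, **kw) / survival(t_since, **kw)

p_early = conditional_prob(10.0, 1.0)     # shortly after the last event
p_late = conditional_prob(2000.0, 1.0)    # long after the last event
```

    Unlike a single exponential, whose hazard is constant, the mixture's conditional probability declines with time since the last event, because a long quiet spell makes the long-mean component increasingly likely.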

  11. Photocounting distributions for exponentially decaying sources.

    PubMed

    Teich, M C; Card, H C

    1979-05-01

    Exact photocounting distributions are obtained for a pulse of light whose intensity is exponentially decaying in time, when the underlying photon statistics are Poisson. It is assumed that the starting time for the sampling interval (which is of arbitrary duration) is uniformly distributed. The probability of registering n counts in the fixed time T is given in terms of the incomplete gamma function for n >/= 1 and in terms of the exponential integral for n = 0. Simple closed-form expressions are obtained for the count mean and variance. The results are expected to be of interest in certain studies involving spontaneous emission, radiation damage in solids, and nuclear counting. They will also be useful in neurobiology and psychophysics, since habituation and sensitization processes may sometimes be characterized by the same stochastic model.
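
    The setup can be checked by Monte Carlo (this sketch is not the paper's closed-form result, and all parameter values are illustrative): draw a uniformly distributed start time, integrate the decaying intensity over the window to obtain the Poisson mean for that window, and compare the simulated mean count with the analytic average.

```python
import math
import random

random.seed(1)

def window_mean(t0, T, i0=50.0, tau=1.0):
    """Integral of i0*exp(-t/tau) over [t0, t0+T]: the Poisson mean for that window."""
    return i0 * tau * math.exp(-t0 / tau) * (1.0 - math.exp(-T / tau))

def poisson(lam):
    """Knuth's multiplication algorithm; adequate for the modest means used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

T, t_max, n_trials = 0.5, 3.0, 20000
total = 0
for _ in range(n_trials):
    t0 = random.uniform(0.0, t_max)       # uniformly distributed start time
    total += poisson(window_mean(t0, T))

mc_mean = total / n_trials
# Averaging window_mean over t0 ~ Uniform(0, t_max), with tau = 1:
analytic = 50.0 * (1.0 - math.exp(-T)) * (1.0 - math.exp(-t_max)) / t_max
```

    With these values the simulated mean count agrees with the analytic average to within sampling error.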

  12. Scalar-fluid interacting dark energy: Cosmological dynamics beyond the exponential potential

    NASA Astrophysics Data System (ADS)

    Dutta, Jibitesh; Khyllep, Wompherdeiki; Tamanini, Nicola

    2017-01-01

    We extend the dynamical systems analysis of scalar-fluid interacting dark energy models performed in C. G. Boehmer et al., Phys. Rev. D 91, 123002 (2015), 10.1103/PhysRevD.91.123002 by considering scalar field potentials beyond the exponential type. The properties and stability of critical points are examined using a combination of linear analysis, computational methods and advanced mathematical techniques, such as center manifold theory. We show that the interesting results obtained with an exponential potential can generally be recovered also for more complicated scalar field potentials. In particular, employing power law and hyperbolic potentials as examples, we find late time accelerated attractors, transitions from dark matter to dark energy domination with specific distinguishing features, and accelerated scaling solutions capable of solving the cosmic coincidence problem.

  13. Discounting of reward sequences: a test of competing formal models of hyperbolic discounting

    PubMed Central

    Zarr, Noah; Alexander, William H.; Brown, Joshua W.

    2014-01-01

    Humans are known to discount future rewards hyperbolically in time. Nevertheless, a formal recursive model of hyperbolic discounting has been elusive until recently, with the introduction of the hyperbolically discounted temporal difference (HDTD) model. Prior to that, models of learning (especially reinforcement learning) have relied on exponential discounting, which generally provides poorer fits to behavioral data. Recently, it has been shown that hyperbolic discounting can also be approximated by a summed distribution of exponentially discounted values, instantiated in the μAgents model. The HDTD model and the μAgents model differ in one key respect, namely how they treat sequences of rewards. The μAgents model is a particular implementation of a Parallel discounting model, which values sequences based on the summed value of the individual rewards whereas the HDTD model contains a non-linear interaction. To discriminate among these models, we observed how subjects discounted a sequence of three rewards, and then we tested how well each candidate model fit the subject data. The results show that the Parallel model generally provides a better fit to the human data. PMID:24639662
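
    The contrast between the two discount schemes, and the fact that a hyperbolic curve can be built from exponentials, can be sketched numerically. This uses the standard identity 1/(1+d) = ∫₀^∞ e^{-r} e^{-rd} dr, not either paper's specific model.

```python
import math

def hyperbolic(d):
    """Hyperbolic discount factor 1 / (1 + k*d) with k = 1."""
    return 1.0 / (1.0 + d)

def exp_mixture(d, dr=0.01, r_max=20.0):
    """Midpoint-rule discretization of the exponentially weighted mixture of
    exponential discounters that reproduces the hyperbolic curve."""
    total, r = 0.0, dr / 2.0
    while r < r_max:
        total += math.exp(-r * (1.0 + d)) * dr
        r += dr
    return total
```

    At a delay of 10 time units the hyperbolic factor is 1/11 ≈ 0.09, while a single exponential with the same initial slope has fallen to e^{-10} ≈ 4.5e-5; this much steeper decay at long delays is one reason single-exponential discounting tends to fit behavioral data poorly.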

  14. When growth models are not universal: evidence from marine invertebrates

    PubMed Central

    Hirst, Andrew G.; Forster, Jack

    2013-01-01

    The accumulation of body mass, as growth, is fundamental to all organisms. Being able to understand which model(s) best describe this growth trajectory, both empirically and ultimately mechanistically, is an important challenge. A variety of equations have been proposed to describe growth during ontogeny. Recently, the West Brown Enquist (WBE) equation, formulated as part of the metabolic theory of ecology, has been proposed as a universal model of growth. This equation has the advantage of having a biological basis, but its ability to describe invertebrate growth patterns has not been well tested against other, more simple models. In this study, we collected data for 58 species of marine invertebrate from 15 different taxa. The data were fitted to three growth models (power, exponential and WBE), and their abilities were examined using an information theoretic approach. Using Akaike information criteria, we found changes in mass through time to fit an exponential equation form best (in approx. 73% of cases). The WBE model predominantly overestimates body size in early ontogeny and underestimates it in later ontogeny; it was the best fit in approximately 14% of cases. The exponential model described growth well in nine taxa, whereas the WBE described growth well in one of the 15 taxa, the Amphipoda. Although the WBE has the advantage of being developed with an underlying proximate mechanism, it provides a poor fit to the majority of marine invertebrates examined here, including species with determinate and indeterminate growth types. In the original formulation of the WBE model, it was tested almost exclusively against vertebrates, to which it fitted well; the model does not however appear to be universal given its poor ability to describe growth in benthic or pelagic marine invertebrates. PMID:23945691

  15. Large and small-scale structures and the dust energy balance problem in spiral galaxies

    NASA Astrophysics Data System (ADS)

    Saftly, W.; Baes, M.; De Geyter, G.; Camps, P.; Renaud, F.; Guedes, J.; De Looze, I.

    2015-04-01

    The interstellar dust content in galaxies can be traced in extinction at optical wavelengths, or in emission in the far-infrared. Several studies have found that radiative transfer models that successfully explain the optical extinction in edge-on spiral galaxies generally underestimate the observed FIR/submm fluxes by a factor of about three. In order to investigate this so-called dust energy balance problem, we use two Milky Way-like galaxies produced by high-resolution hydrodynamical simulations. We create mock optical edge-on views of these simulated galaxies (using the radiative transfer code SKIRT), and we then fit the parameters of a basic spiral galaxy model to these images (using the fitting code FitSKIRT). The basic model comprises smooth axisymmetric distributions: a Sérsic bulge and an exponential disc for the stars, and a second exponential disc for the dust. We find that the dust mass recovered by the fitted models is about three times smaller than the known dust mass of the hydrodynamical input models. This factor is in agreement with previous energy balance studies of real edge-on spiral galaxies. On the other hand, fitting the same basic model to less complex input models (e.g. a smooth exponential disc with a spiral perturbation or with random clumps) does recover the dust mass of the input model almost perfectly. It thus seems that the complex asymmetries and the inhomogeneous structure of real and hydrodynamically simulated galaxies are far more efficient at hiding dust than the rather contrived geometries in typical quasi-analytical models. This effect may help explain the discrepancy between the dust emission predicted by radiative transfer models and the observed emission in energy balance studies of edge-on spiral galaxies.

  16. Separability of spatiotemporal spectra of image sequences. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Eckert, Michael P.; Buchsbaum, Gershon; Watson, Andrew B.

    1992-01-01

    The spatiotemporal power spectra of 14 image sequences were calculated in order to determine the degree to which the spectra are separable in space and time, and to assess the validity of the commonly used exponential correlation model found in the literature. Each spectrum was expanded by a singular value decomposition into a sum of separable terms, and an index of spatiotemporal separability was defined as the fraction of the signal energy that can be represented by the first (largest) separable term. All spectra were found to be highly separable, with an index of separability above 0.98. The power spectra of the sequences were well fit by a separable model corresponding to a product of exponential autocorrelation functions separable in space and time.
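
    The separability index described above can be sketched directly with an SVD; the synthetic spectra below are illustrative. A genuinely separable spectrum is an outer product of a spatial and a temporal factor and scores exactly 1.

```python
import numpy as np

def separability_index(P):
    """Fraction of signal energy in the leading SVD term of a (space x time) spectrum."""
    s = np.linalg.svd(P, compute_uv=False)
    return (s[0] ** 2) / np.sum(s ** 2)

# A separable spectrum: outer product of a spatial and a temporal factor.
fs = np.exp(-np.linspace(0.0, 3.0, 32))        # spatial falloff
ft = np.exp(-np.linspace(0.0, 2.0, 16))        # temporal falloff
P_sep = np.outer(fs, ft)

# Adding non-separable structure pushes the index below 1.
rng = np.random.default_rng(0)
P_mixed = P_sep + 0.3 * rng.random((32, 16))
```

    The index equals the squared first singular value over the total squared singular values, i.e. the energy captured by the best rank-1 (separable) approximation.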

  17. Single-exponential activation behavior behind the super-Arrhenius relaxations in glass-forming liquids.

    PubMed

    Wang, Lianwen; Li, Jiangong; Fecht, Hans-Jörg

    2010-11-17

    The reported relaxation time for several typical glass-forming liquids was analyzed by using a kinetic model for liquids which invoked a new kind of atomic cooperativity--thermodynamic cooperativity. The broadly studied 'cooperative length' was recognized as the kinetic cooperativity. Both cooperativities were conveniently quantified from the measured relaxation data. A single-exponential activation behavior was uncovered behind the super-Arrhenius relaxations for the liquids investigated. Hence the mesostructure of these liquids and the atomic mechanism of the glass transition became clearer.

  18. Stretched exponentials and power laws in granular avalanching

    NASA Astrophysics Data System (ADS)

    Head, D. A.; Rodgers, G. J.

    1999-02-01

    We introduce a model for granular surface flow which exhibits both stretched exponential and power law avalanching over its parameter range. Two modes of transport are incorporated, a rolling layer consisting of individual particles and the overdamped, sliding motion of particle clusters. The crossover in behaviour observed in experiments on piles of rice is attributed to a change in the dominant mode of transport. We predict that power law avalanching will be observed whenever surface flow is dominated by clustered motion.

  19. Mechanism of light-induced domain nucleation in LiNbO3 crystals

    NASA Astrophysics Data System (ADS)

    Liu, De'an; Zhi, Ya'nan; Luan, Zhu; Yan, Aimin; Liu, Liren

    2007-09-01

    In this paper, within the spectral range from 351 nm to 799 nm, different reductions of the nucleation field induced by focused continuous irradiation at different light intensities are achieved in congruent LiNbO3 crystals. The reduction proportion increases exponentially as the irradiation wavelength decreases. Based on the photo-excitation effect, we propose a model to explain the mechanism of light-induced domain nucleation in congruent LiNbO3 crystals.

  20. Obstructions to the realization of distance graphs with large chromatic numbers on spheres of small radii

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kupavskii, A B; Raigorodskii, A M

    2013-10-31

    We investigate in detail some properties of distance graphs constructed on the integer lattice. Such graphs find wide applications in problems of combinatorial geometry; in particular, such graphs were employed to answer Borsuk's question in the negative and to obtain exponential estimates for the chromatic number of the space. This work is devoted to the study of the number of cliques and the chromatic number of such graphs under certain conditions. Constructions of sequences of distance graphs are given in which the graphs have unit length edges and contain a large number of triangles that lie on a sphere of radius 1/√3 (which is the minimum possible). At the same time, the chromatic numbers of the graphs depend exponentially on their dimension. The results of this work strengthen and generalize some of the results obtained in a series of papers devoted to related issues. Bibliography: 29 titles.

  1. Protein synthesis and the recovery of both survival and cytoplasmic "petite" mutation in ultraviolet-treated yeast cells. I. Nuclear-directed protein synthesis.

    PubMed

    Heude, M; Chanet, R; Moustacchi, E

    1975-04-01

    The contribution of nuclear-directed protein synthesis to the repair of lethal and mitochondrial genetic damage after UV-irradiation of exponential and stationary phase haploid yeast cells was examined. This was carried out using cycloheximide (CH), a specific inhibitor of nuclear protein synthesis. It appears that nuclear protein synthesis is required for the increase in survival seen after the liquid holding of cells at both stages, as well as for the "petite" recovery seen after the liquid holding of exponential phase cells. The characteristic negative liquid holding effect observed for the UV induction of "petites" in stationary phase cells (increase of the frequency of "petites" during storage) remained following all the treatments which inhibited nuclear protein synthesis. However, the application of photoreactivating light following dark holding with cycloheximide indicates that some steps of the repair of both nuclear and mitochondrial damage are performed in the absence of a synthesis of proteins.

  2. Convex foundations for generalized MaxEnt models

    NASA Astrophysics Data System (ADS)

    Frongillo, Rafael; Reid, Mark D.

    2014-12-01

    We present an approach to maximum entropy models that highlights the convex geometry and duality of generalized exponential families (GEFs) and their connection to Bregman divergences. Using our framework, we are able to resolve a puzzling aspect of the bijection of Banerjee and coauthors between classical exponential families and what they call regular Bregman divergences. Their regularity condition rules out all but Bregman divergences generated from log-convex generators. We recover their bijection and show that a much broader class of divergences correspond to GEFs via two key observations: 1) like classical exponential families, GEFs have a "cumulant" C whose subdifferential contains the mean: E_{o ~ p_θ}[φ(o)] ∈ ∂C(θ); 2) generalized relative entropy is a C-Bregman divergence between parameters: D_F(p_θ, p_θ') = D_C(θ, θ'), where D_F becomes the KL divergence for F = -H. We also show that every incomplete market with cost function C can be expressed as a complete market, where the prices are constrained to be a GEF with cumulant C. This provides an entirely new interpretation of prediction markets, relating their design back to the principle of maximum entropy.

  3. Analytic solutions to modelling exponential and harmonic functions using Chebyshev polynomials: fitting frequency-domain lifetime images with photobleaching.

    PubMed

    Malachowski, George C; Clegg, Robert M; Redford, Glen I

    2007-12-01

    A novel approach is introduced for modelling linear dynamic systems composed of exponentials and harmonics. The method improves the speed of current numerical techniques up to 1000-fold for problems that have solutions of multiple exponentials plus harmonics and decaying components. Such signals are common in fluorescence microscopy experiments. Selective constraints of the parameters being fitted are allowed. This method, using discrete Chebyshev transforms, will correctly fit large volumes of data using a noniterative, single-pass routine that is fast enough to analyse images in real time. The method is applied to fluorescence lifetime imaging data in the frequency domain with varying degrees of photobleaching over the time of total data acquisition. The accuracy of the Chebyshev method is compared to a simple rapid discrete Fourier transform (equivalent to least-squares fitting) that does not take the photobleaching into account. The method can be extended to other linear systems composed of different functions. Simulations are performed and applications are described showing the utility of the method, in particular in the area of fluorescence microscopy.

  4. Analysis of two production inventory systems with buffer, retrials and different production rates

    NASA Astrophysics Data System (ADS)

    Jose, K. P.; Nair, Salini S.

    2017-09-01

    This paper compares two (s, S) production inventory systems with retrials of unsatisfied customers. The time for producing and adding each item to the inventory is exponentially distributed with rate β; however, a higher production rate αβ (α > 1) is used at the beginning of production. The higher production rate reduces customer loss when the inventory level approaches zero. Demand from customers follows a Poisson process, and service times are exponentially distributed. Upon arrival, customers enter a buffer of finite capacity. An arriving customer who finds the buffer full moves to an orbit, from which retrials occur with exponentially distributed inter-retrial times. The two models differ in the capacity of the buffer. The aim is to find the minimum total cost by varying different parameters and to compare the efficiency of the models; the optimum value of α corresponding to the minimum total cost is a key quantity. The matrix analytic method is used to find an algorithmic solution to the problem. Several numerical and graphical illustrations are also provided.

  5. Scaling in the distribution of intertrade durations of Chinese stocks

    NASA Astrophysics Data System (ADS)

    Jiang, Zhi-Qiang; Chen, Wei; Zhou, Wei-Xing

    2008-10-01

    The distribution of intertrade durations, defined as the waiting times between two consecutive transactions, is investigated based upon the limit order book data of 23 liquid Chinese stocks listed on the Shenzhen Stock Exchange in the whole year 2003. A scaling pattern is observed in the distributions of intertrade durations, where the empirical density functions of the normalized intertrade durations of all 23 stocks collapse onto a single curve. The scaling pattern is also observed in the intertrade duration distributions for filled and partially filled trades and in the conditional distributions. The ensemble distributions for all stocks are modeled by the Weibull and the Tsallis q-exponential distributions. Maximum likelihood estimation shows that the Weibull distribution outperforms the q-exponential for not-too-large intertrade durations which account for more than 98.5% of the data. Alternatively, nonlinear least-squares estimation selects the q-exponential as a better model, in which the optimization is conducted on the distance between empirical and theoretical values of the logarithmic probability densities. The distribution of intertrade durations is Weibull followed by a power-law tail with an asymptotic tail exponent close to 3.
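
    As a sketch of the two candidate families (the density forms are standard, but the parameter values below are arbitrary rather than the fitted estimates for the Shenzhen data), together with a numerical check that each integrates to one:

```python
import math

# Illustrative densities for the two candidate models. The q-exponential is
# taken in the common form
#   f(t) = (2 - q) * lam * (1 + (q - 1) * lam * t)^(1 / (1 - q)),  1 < q < 2,
# whose density has a power-law tail with exponent 1 / (q - 1).

def weibull_pdf(t, k=1.5, lam=1.0):
    return (k / lam) * (t / lam) ** (k - 1) * math.exp(-(t / lam) ** k)

def qexp_pdf(t, q=1.3, lam=1.0):
    return (2.0 - q) * lam * (1.0 + (q - 1.0) * lam * t) ** (1.0 / (1.0 - q))

def integrate(pdf, t_max=100.0, dt=0.001):
    """Midpoint-rule check that a density integrates to ~1 over [0, t_max]."""
    total, t = 0.0, dt / 2.0
    while t < t_max:
        total += pdf(t) * dt
        t += dt
    return total
```

    The heavy power-law tail of the q-exponential is what lets it capture the largest intertrade durations, while the Weibull describes the bulk of the distribution.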

  6. Physical and numerical sources of computational inefficiency in integration of chemical kinetic rate equations: Etiology, treatment and prognosis

    NASA Technical Reports Server (NTRS)

    Pratt, D. T.; Radhakrishnan, K.

    1986-01-01

    The design of a very fast, automatic black-box code for homogeneous, gas-phase chemical kinetics problems requires an understanding of the physical and numerical sources of computational inefficiency. Some major sources reviewed in this report are stiffness of the governing ordinary differential equations (ODE's) and its detection, choice of appropriate method (i.e., integration algorithm plus step-size control strategy), nonphysical initial conditions, and too frequent evaluation of thermochemical and kinetic properties. Specific techniques are recommended (and some advised against) for improving or overcoming the identified problem areas. It is argued that, because reactive species increase exponentially with time during induction, and all species exhibit asymptotic, exponential decay with time during equilibration, exponential-fitted integration algorithms are inherently more accurate for kinetics modeling than classical, polynomial-interpolant methods for the same computational work. But current codes using the exponential-fitted method lack the sophisticated stepsize-control logic of existing black-box ODE solver codes, such as EPISODE and LSODE. The ultimate chemical kinetics code does not exist yet, but the general characteristics of such a code are becoming apparent.
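    The accuracy argument for exponential fitting can be illustrated on a single stiff linear ODE dy/dt = −λ(y − y_eq): an exponentially fitted step reproduces the exact decay for any step size, while a classical first-order (polynomial-type) explicit Euler step at the same step size is unstable. A minimal sketch, not the EPISODE/LSODE machinery:

```python
import math

lam, y_eq = 1000.0, 1.0        # stiff decay rate and equilibrium value
y0, h, n = 0.0, 0.01, 10       # step size h >> 1/lam on purpose

def euler_step(y):
    # Classical polynomial-interpolant (first-order explicit) step.
    return y + h * (-lam * (y - y_eq))

def expfit_step(y):
    # Exponential-fitted step: exact for linear decay toward y_eq.
    return y_eq + (y - y_eq) * math.exp(-lam * h)

y_euler, y_exp = y0, y0
for _ in range(n):
    y_euler = euler_step(y_euler)
    y_exp = expfit_step(y_exp)

y_true = y_eq + (y0 - y_eq) * math.exp(-lam * h * n)
print(y_exp, y_true, y_euler)   # exponential fit tracks the exact solution; Euler blows up
```

    With λh = 10, each Euler step multiplies the error by (1 − λh) = −9, so the polynomial method diverges where the exponential-fitted method is exact at the same cost per step.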

  7. Fast and Accurate Fitting and Filtering of Noisy Exponentials in Legendre Space

    PubMed Central

    Bao, Guobin; Schild, Detlev

    2014-01-01

    The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the mean squares sum of the differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on the average, more precise compared to least-squares-fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares-fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic for conventional lowpass filters. PMID:24603904
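    The core idea can be sketched in a few lines (an illustrative reconstruction, not the authors' published implementation): project the noisy trace onto a low-order Legendre basis and work with the coefficients; truncation alone already removes most of the noise.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 2000)
clean = 2.0 * np.exp(-t / 0.2) + 1.0 * np.exp(-t / 0.05)   # two exponentials
noisy = clean + rng.normal(0, 0.2, t.size)

# Map t to the Legendre domain [-1, 1] and fit a low-order expansion.
x = 2 * t - 1
deg = 12
coef = np.polynomial.legendre.legfit(x, noisy, deg)   # least squares in Legendre basis
smoothed = np.polynomial.legendre.legval(x, coef)     # reconstruction from coefficients

rms_noise = np.sqrt(np.mean((noisy - clean) ** 2))
rms_fit = np.sqrt(np.mean((smoothed - clean) ** 2))
print(rms_noise, rms_fit)
```

    Because the 2000 noisy samples are compressed into 13 coefficients, the noise power retained scales roughly with 13/2000, while smooth exponentials are represented almost exactly; the filtering also introduces no phase shift, unlike a causal lowpass filter.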

  8. Extended q-Gaussian and q-exponential distributions from gamma random variables

    NASA Astrophysics Data System (ADS)

    Budini, Adrián A.

    2015-05-01

    The family of q-Gaussian and q-exponential probability densities fit the statistical behavior of diverse complex self-similar nonequilibrium systems. These distributions, independently of the underlying dynamics, can rigorously be obtained by maximizing Tsallis "nonextensive" entropy under appropriate constraints, as well as from superstatistical models. In this paper we provide an alternative and complementary scheme for deriving these objects. We show that q-Gaussian and q-exponential random variables can always be expressed as a function of two statistically independent gamma random variables with the same scale parameter. Their shape index determines the complexity q parameter. This result also allows us to define an extended family of asymmetric q-Gaussian and modified q-exponential densities, which reduce to the standard ones when the shape parameters are the same. Furthermore, we demonstrate that a simple change of variables always allows relating any of these distributions with a beta stochastic variable. The extended distributions are applied in the statistical description of different complex dynamics such as log-return signals in financial markets and motion of point defects in a fluid flow.
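    One concrete instance of such a construction (a hedged sketch using the standard superstatistical route, not the paper's exact two-gamma representation): an exponential variable whose rate is itself gamma distributed is marginally Lomax, i.e. a q-exponential with q = (α + 2)/(α + 1).

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, beta = 3.0, 1.0          # gamma shape / rate parameter of the fluctuating rate

lam = rng.gamma(alpha, 1.0 / beta, size=200_000)   # rate drawn from a gamma density
x = rng.exponential(1.0 / lam)                     # exponential sample given each rate

# Marginally, x ~ Lomax(alpha, scale=beta): p(x) = (alpha/beta)(1 + x/beta)^-(alpha+1),
# a q-exponential with q = (alpha + 2)/(alpha + 1).
q = (alpha + 2) / (alpha + 1)
mean_theory = beta / (alpha - 1)                   # Lomax mean, valid for alpha > 1
print(q, x.mean(), mean_theory)
```

    The gamma shape index thus controls the complexity parameter q, with q → 1 (plain exponential) as α → ∞.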

  9. A modified exponential behavioral economic demand model to better describe consumption data.

    PubMed

    Koffarnus, Mikhail N; Franck, Christopher T; Stein, Jeffrey S; Bickel, Warren K

    2015-12-01

    Behavioral economic demand analyses that quantify the relationship between the consumption of a commodity and its price have proven useful in studying the reinforcing efficacy of many commodities, including drugs of abuse. An exponential equation proposed by Hursh and Silberberg (2008) has proven useful in quantifying the dissociable components of demand intensity and demand elasticity, but is limited as an analysis technique by the inability to correctly analyze consumption values of zero. We examined an exponentiated version of this equation that retains all the beneficial features of the original Hursh and Silberberg equation, but can accommodate consumption values of zero and improves its fit to the data. In Experiment 1, we compared the modified equation with the unmodified equation under different treatments of zero values in cigarette consumption data collected online from 272 participants. We found that the unmodified equation produces different results depending on how zeros are treated, while the exponentiated version incorporates zeros into the analysis, accounts for more variance, and is better able to estimate actual unconstrained consumption as reported by participants. In Experiment 2, we simulated 1,000 datasets with demand parameters known a priori and compared the equation fits. Results indicated that the exponentiated equation was better able to replicate the true values from which the test data were simulated. We conclude that an exponentiated version of the Hursh and Silberberg equation provides better fits to the data, is able to fit all consumption values including zero, and more accurately produces true parameter values. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
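    The exponentiated form discussed above, as reported by Koffarnus et al. (2015), is Q = Q0 · 10^{k(e^{−αQ0C} − 1)}; because consumption Q itself rather than log Q is modeled, zero-consumption data points are legal. A minimal fitting sketch on synthetic data (parameter values are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def demand(C, Q0, alpha, k=2.0):
    """Exponentiated demand: consumption Q at unit price C (k fixed here)."""
    return Q0 * 10.0 ** (k * (np.exp(-alpha * Q0 * C) - 1.0))

prices = np.array([0.0, 0.1, 0.5, 1.0, 2.0, 5.0, 10.0])
rng = np.random.default_rng(3)
true_Q0, true_alpha = 20.0, 0.01
consumption = np.clip(demand(prices, true_Q0, true_alpha)
                      + rng.normal(0, 0.3, prices.size), 0, None)  # zeros allowed

# p0 has two entries, so only Q0 (intensity) and alpha (elasticity) are fitted.
(Q0_hat, alpha_hat), _ = curve_fit(demand, prices, consumption,
                                   p0=[10.0, 0.05], maxfev=10_000)
print(Q0_hat, alpha_hat)
```

    Note that `demand` is finite at Q = 0, which is exactly the property that makes zero consumption values usable without ad hoc replacement.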

  10. Exploiting fast detectors to enter a new dimension in room-temperature crystallography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owen, Robin L., E-mail: robin.owen@diamond.ac.uk; Paterson, Neil; Axford, Danny

    2014-05-01

    A departure from a linear or an exponential decay in the diffracting power of macromolecular crystals is observed and accounted for through consideration of a multi-state sequential model. A departure from a linear or an exponential intensity decay in the diffracting power of protein crystals as a function of absorbed dose is reported. The observation of a lag phase raises the possibility of collecting significantly more data from crystals held at room temperature before an intolerable intensity decay is reached. A simple model accounting for the form of the intensity decay is reintroduced and is applied for the first time to high frame-rate room-temperature data collection.
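    A simple multi-state sequential ("n-hit") picture reproduces such a lag phase: if a unit cell must absorb n independent damage events (Poisson with mean D/D0) before it stops diffracting, the surviving intensity is the Poisson survival function, flat at low dose and decaying afterwards. This is an illustrative model with made-up parameters, not necessarily the authors' exact formulation:

```python
import math

def intensity(D, D0=10.0, n=3):
    """Fraction of unit cells still diffracting after dose D in an n-hit model."""
    mu = D / D0
    # P(fewer than n hits) for a Poisson count with mean mu.
    return math.exp(-mu) * sum(mu ** k / math.factorial(k) for k in range(n))

doses = [0, 1, 2, 5, 10, 20, 40]
print([round(intensity(D), 4) for D in doses])
```

    For n = 1 the curve reduces to a pure exponential; n ≥ 2 gives zero initial slope, i.e. the lag phase that permits extra data collection.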

  11. Deformed exponentials and portfolio selection

    NASA Astrophysics Data System (ADS)

    Rodrigues, Ana Flávia P.; Guerreiro, Igor M.; Cavalcante, Charles Casimiro

    In this paper, we present a method for portfolio selection based on deformed exponentials, generalizing methods that assume Gaussianity of the returns in a portfolio, such as the Markowitz model. The proposed method generalizes the idea of optimizing mean-variance and mean-divergence models and allows more accurate modeling in situations where heavy-tailed distributions are needed to describe the returns at a given time instant, such as those observed in economic crises. Numerical results show that the proposed method outperforms the Markowitz portfolio in cumulative returns, with good convergence of the asset weights, which are found by means of a natural gradient algorithm.
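    The deformed (Tsallis q-) exponential underlying such methods is exp_q(x) = [1 + (1 − q)x]_+^{1/(1−q)}, which recovers the ordinary exponential as q → 1. A definition sketch, not the paper's portfolio optimizer:

```python
import numpy as np

def exp_q(x, q):
    """Tsallis q-exponential; reduces to exp(x) as q -> 1, zero where the base <= 0."""
    x = np.asarray(x, dtype=float)
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    out = np.zeros_like(base)
    pos = base > 0
    out[pos] = base[pos] ** (1.0 / (1.0 - q))
    return out

x = np.linspace(-2, 2, 5)
print(exp_q(x, 1.0001))   # nearly identical to np.exp(x) near q = 1
print(exp_q(x, 1.5))      # q > 1: heavier (power-law) tail than exp
```

    For q > 1 the density decays as a power law rather than exponentially, which is what makes these functions suitable for heavy-tailed return distributions.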

  12. Modeling and Simulation of Capacitance-Voltage Characteristics of a Nitride GaAs Schottky Diode

    NASA Astrophysics Data System (ADS)

    Ziane, Abderrezzaq; Amrani, Mohammed; Benamara, Zineb; Rabehi, Abdelaziz

    2018-06-01

    A nitride GaAs Schottky diode has been fabricated by the nitridation of GaAs substrates using a radio frequency discharge nitrogen plasma source, forming a GaN layer approximately 0.7 nm thick. The capacitance-voltage (C-V) characteristics of the Au/GaN/GaAs structure were investigated at room temperature for different frequencies, ranging from 1 kHz to 1 MHz. The C-V measurements for the Au/GaN/GaAs Schottky diode were found to be strongly dependent on the bias voltage and the frequency. The capacitance curves depict an anomalous peak and a negative capacitance phenomenon, indicating the presence of a continuous interface state density. A numerical drift-diffusion model based on the Scharfetter-Gummel algorithm was elaborated to solve the coupled Poisson and continuity equations. In this model, we take into account the continuous interface state density, and we consider exponential and Gaussian distributions of trap states in the band gap. The effects of the GaAs doping concentration and the trap state density are discussed. We deduce the shape and values of the trap states, then validate the developed model by fitting the computed C-V curves to experimental measurements at low frequency.

  13. Is the Milky Way's hot halo convectively unstable?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Henley, David B.; Shelton, Robin L., E-mail: dbh@physast.uga.edu

    2014-03-20

    We investigate the convective stability of two popular types of model of the gas distribution in the hot Galactic halo. We first consider models in which the halo density and temperature decrease exponentially with height above the disk. These halo models were created to account for the fact that, on some sight lines, the halo's X-ray emission lines and absorption lines yield different temperatures, implying that the halo is non-isothermal. We show that the hot gas in these exponential models is convectively unstable if γ < 3/2, where γ is the ratio of the temperature and density scale heights. Using published measurements of γ and its uncertainty, we use Bayes' theorem to infer posterior probability distributions for γ, and hence the probability that the halo is convectively unstable for different sight lines. We find that, if these exponential models are good descriptions of the hot halo gas, at least in the first few kiloparsecs from the plane, the hot halo is reasonably likely to be convectively unstable on two of the three sight lines for which scale height information is available. We also consider more extended models of the halo. While isothermal halo models are convectively stable if the density decreases with distance from the Galaxy, a model of an extended adiabatic halo in hydrostatic equilibrium with the Galaxy's dark matter is on the boundary between stability and instability. However, we find that radiative cooling may perturb this model in the direction of convective instability. If the Galactic halo is indeed convectively unstable, this would argue in favor of supernova activity in the Galactic disk contributing to the heating of the hot halo gas.
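    The probability calculation reduces to evaluating the posterior mass below the critical ratio γ_c = 3/2. With a Gaussian posterior for γ (the mean and width here are hypothetical, not the paper's sight-line values):

```python
from scipy.stats import norm

gamma_c = 1.5                       # instability threshold: unstable if gamma < 3/2
gamma_mean, gamma_sigma = 1.3, 0.2  # hypothetical posterior mean and width for gamma

# Posterior probability that gamma < gamma_c, i.e. that the gas is unstable.
p_unstable = norm.cdf((gamma_c - gamma_mean) / gamma_sigma)
print(p_unstable)
```

    A measurement centered one standard deviation below the threshold, as here, gives an instability probability of about 84%.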

  14. Heterogeneous characters modeling of instant message services users’ online behavior

    PubMed Central

    Cui, Hongyan; Li, Ruibing; Fang, Yajun; Horn, Berthold; Welsch, Roy E

    2018-01-01

    Research on the temporal characteristics of human dynamics has attracted much attention for its contribution to various areas such as communication, medical treatment, and finance. Existing studies show that the time intervals between two consecutive events exhibit different non-Poisson characteristics, such as power-law, Pareto, bimodal power-law, exponential, and piecewise power-law distributions. As new services appear, new types of distributions may arise. In this paper, we study the distributions of the time intervals between two consecutive visits to QQ and WeChat, the two most popular instant messaging (IM) services in China, and present a new finding: when the statistical unit T is set to 0.001 s, the inter-event time distribution follows a piecewise combination of an exponential and a power law, indicating the heterogeneous character of IM services users' online behavior on different time scales. We infer that this heterogeneity is related to the communication mechanism of IM and to the habits of users. We then develop a model combining an exponential model and an interest model to characterize the heterogeneity. Furthermore, we find that the exponent of the inter-event time distribution of the same service differs between two cities, which is correlated with the popularity of the services. Our research is useful for applications in information diffusion, prediction of the economic development of cities, and so on. PMID:29734327
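    A piecewise exponential-plus-power-law density of the kind described can be written down directly; requiring continuity at the crossover τ_c and unit total probability fixes the two constants. This is an illustrative construction with made-up parameters, not the fitted QQ/WeChat values:

```python
import numpy as np
from scipy.integrate import quad

tau_c, tau0, alpha = 1.0, 0.4, 2.5   # crossover, exponential scale, tail exponent

# p(t) = A * exp(-t/tau0)   for t <= tau_c
#      = B * t**(-alpha)     for t >  tau_c, continuous at tau_c
Z_exp = tau0 * (1 - np.exp(-tau_c / tau0))            # integral of exp branch / A
Z_pow = np.exp(-tau_c / tau0) * tau_c / (alpha - 1)   # integral of power branch / A
A = 1.0 / (Z_exp + Z_pow)                             # normalization
B = A * np.exp(-tau_c / tau0) * tau_c ** alpha        # continuity at tau_c

def pdf(t):
    return A * np.exp(-t / tau0) if t <= tau_c else B * t ** (-alpha)

total = quad(pdf, 0, tau_c)[0] + quad(pdf, tau_c, np.inf)[0]
print(A, B, total)   # total should be ~1
```

    The exponential branch captures rapid back-and-forth messaging at short times, while the power-law branch models the long, interest-driven gaps between sessions.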

  16. [Risk assessment of carcinogenic and non-carcinogenic effects in the use of food].

    PubMed

    Frolova, O A; Karpova, M V

    2012-01-01

    Applying the methodology for assessing the risk of diseases associated with consumption of contaminated foods is aimed at predicting possible future changes and helps to create a framework for the prevention of negative effects on public health. The purpose of the study is to assess the health risks formed under the influence of chemical contaminants that pollute food. The exposure (average daily) doses of chemicals entering the body, as well as the non-carcinogenic and carcinogenic risks, were calculated.

  17. STOCK Mechanics:. a General Theory and Method of Energy Conservation with Applications on Djia

    NASA Astrophysics Data System (ADS)

    Tuncay, Çağlar

    A new method, based on an original theory of conservation of the sum of kinetic and potential energy defined for prices, is proposed and applied to the Dow Jones Industrial Average (DJIA). The general trends averaged over months or years gave a roughly conserved total energy, with three different potential energies: positive definite quadratic, negative definite quadratic, and linear, corresponding to exponential rises (and falls), sinusoidal oscillations, and parabolic trajectories, respectively. Corresponding expressions for the force (impact) are also given.

  18. Teaching Population Ecology Modeling by Means of the Hewlett-Packard 9100A.

    ERIC Educational Resources Information Center

    Tuinstra, Kenneth E.

    The incorporation of mathematical modeling experiences into an undergraduate biology course is described. Detailed expositions of three models used to teach concepts of population ecology are presented, including introductions to major concepts, user instructions, trial data and problem sets. The models described are: 1) an exponential/logistic…

  19. Stimulation Efficiency With Decaying Exponential Waveforms in a Wirelessly Powered Switched-Capacitor Discharge Stimulation System.

    PubMed

    Lee, Hyung-Min; Howell, Bryan; Grill, Warren M; Ghovanloo, Maysam

    2018-05-01

    The purpose of this study was to test the feasibility of using a switched-capacitor discharge stimulation (SCDS) system for electrical stimulation, and, subsequently, determine the overall energy saved compared to a conventional stimulator. We have constructed a computational model by pairing an image-based volume conductor model of the cat head with cable models of corticospinal tract (CST) axons and quantified the theoretical stimulation efficiency of rectangular and decaying exponential waveforms, produced by conventional and SCDS systems, respectively. Subsequently, the model predictions were tested in vivo by activating axons in the posterior internal capsule and recording evoked electromyography (EMG) in the contralateral upper arm muscles. Compared to rectangular waveforms, decaying exponential waveforms with time constants >500 μs were predicted to require 2%-4% less stimulus energy to activate directly models of CST axons and 0.4%-2% less stimulus energy to evoke EMG activity in vivo. Using the calculated wireless input energy of the stimulation system and the measured stimulus energies required to evoke EMG activity, we predict that an SCDS implantable pulse generator (IPG) will require 40% less input energy than a conventional IPG to activate target neural elements. A wireless SCDS IPG that is more energy efficient than a conventional IPG will reduce the size of an implant, require that less wireless energy be transmitted through the skin, and extend the lifetime of the battery in the external power transmitter.
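    The energy argument can be checked with elementary integrals: delivering the same charge Q through a resistance R, a rectangular pulse of width T dissipates Q²R/T, while a decaying exponential with time constant τ dissipates Q²R/(2τ). This is an idealized equal-charge, resistive-load sketch with made-up numbers; real electrode-tissue loads and activation thresholds (which the study models explicitly) make the actual savings much smaller:

```python
Q = 100e-9      # delivered charge, 100 nC (illustrative)
R = 1_000.0     # load resistance, ohms (illustrative)
T = 200e-6      # rectangular pulse width, s
tau = 500e-6    # exponential decay time constant, s

E_rect = Q**2 * R / T           # constant current I = Q/T over T: integral of I^2 R dt
E_exp = Q**2 * R / (2 * tau)    # i(t) = (Q/tau) e^{-t/tau}: integral over [0, inf)

print(E_rect, E_exp, 1 - E_exp / E_rect)   # fractional energy saved at equal charge
```

    At equal charge the energy ratio is simply T/(2τ), so exponentials with τ > T/2 dissipate less in the load; the system-level saving reported above is dominated by avoided conversion losses rather than this load-side effect.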

  20. Second cancer risk after 3D-CRT, IMRT and VMAT for breast cancer.

    PubMed

    Abo-Madyan, Yasser; Aziz, Muhammad Hammad; Aly, Moamen M O M; Schneider, Frank; Sperk, Elena; Clausen, Sven; Giordano, Frank A; Herskind, Carsten; Steil, Volker; Wenz, Frederik; Glatting, Gerhard

    2014-03-01

    Second cancer risk after breast conserving therapy is becoming more important due to improved long term survival rates. In this study, we estimate the risks for developing a solid second cancer after radiotherapy of breast cancer using the concept of organ equivalent dose (OED). Computer-tomography scans of 10 representative breast cancer patients were selected for this study. Three-dimensional conformal radiotherapy (3D-CRT), tangential intensity modulated radiotherapy (t-IMRT), multibeam intensity modulated radiotherapy (m-IMRT), and volumetric modulated arc therapy (VMAT) were planned to deliver a total dose of 50 Gy in 2 Gy fractions. Differential dose volume histograms (dDVHs) were created and the OEDs calculated. Second cancer risks of ipsilateral, contralateral lung and contralateral breast cancer were estimated using linear, linear-exponential and plateau models for second cancer risk. Compared to 3D-CRT, cumulative excess absolute risks (EAR) for t-IMRT, m-IMRT and VMAT were increased by 2 ± 15%, 131 ± 85%, 123 ± 66% for the linear-exponential risk model, 9 ± 22%, 82 ± 96%, 71 ± 82% for the linear and 3 ± 14%, 123 ± 78%, 113 ± 61% for the plateau model, respectively. Second cancer risk after 3D-CRT or t-IMRT is lower than for m-IMRT or VMAT by about 34% for the linear model and 50% for the linear-exponential and plateau models, respectively. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
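    The three risk models act on the differential DVH through the organ equivalent dose (OED). In Schneider's formulation (a sketch; the DVH bins and parameter values below are made up for illustration), the linear model reduces OED to the mean dose, the linear-exponential model weights each dose bin by d·e^{−αd}, and the plateau model by (1 − e^{−δd})/δ:

```python
import numpy as np

# Differential DVH: fraction of organ volume v_i receiving dose d_i (Gy); made-up bins.
d = np.array([1.0, 5.0, 10.0, 20.0, 30.0])
v = np.array([0.4, 0.3, 0.15, 0.1, 0.05])    # fractions sum to 1

alpha, delta = 0.085, 0.139                   # organ-specific parameters (illustrative)

oed_linear = np.sum(v * d)                            # linear model: mean organ dose
oed_linexp = np.sum(v * d * np.exp(-alpha * d))       # linear-exponential model
oed_plateau = np.sum(v * (1 - np.exp(-delta * d)) / delta)  # plateau model
print(oed_linear, oed_linexp, oed_plateau)
```

    Because cell sterilization suppresses carcinogenesis at high dose, the linear-exponential and plateau OEDs fall below the mean dose, which is why the three models rank the delivery techniques differently.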

  1. Theory and procedures for finding a correct kinetic model for the bacteriorhodopsin photocycle.

    PubMed

    Hendler, R W; Shrager, R; Bose, S

    2001-04-26

    In this paper, we present the implementation and results of new methodology based on linear algebra. The theory behind these methods is covered in detail in the Supporting Information, available electronically (Shrager and Hendler). In brief, the methods presented search through all possible forward sequential submodels in order to find candidates that can be used to construct a complete model for the BR photocycle. The methodology is limited to forward sequential models; if no such models are compatible with the experimental data, none will be found. The procedures apply objective tests and filters to eliminate possibilities that cannot be correct, thus cutting the total number of candidate sequences to be considered. In the current application, which uses six exponentials, the total sequences were cut from 1950 to 49. The remaining sequences were further screened using known experimental criteria. The approach led to a solution consisting of a pair of sequences, one with five exponentials, BR* → L(f) → M(f) → N → O → BR, and the other with three exponentials, BR* → L(s) → M(s) → BR. The deduced complete kinetic model for the BR photocycle is thus either a single photocycle branched at the L intermediate or a pair of two parallel photocycles. Reasons for preferring the parallel photocycles are presented. Synthetic data constructed on the basis of the parallel photocycles were indistinguishable from the experimental data in a number of analytical tests that were applied.

  2. Preferential attachment and growth dynamics in complex systems

    NASA Astrophysics Data System (ADS)

    Yamasaki, Kazuko; Matia, Kaushik; Buldyrev, Sergey V.; Fu, Dongfeng; Pammolli, Fabio; Riccaboni, Massimo; Stanley, H. Eugene

    2006-09-01

    Complex systems can be characterized by classes of equivalency of their elements defined according to system specific rules. We propose a generalized preferential attachment model to describe the class size distribution. The model postulates preferential growth of the existing classes and the steady influx of new classes. According to the model, the distribution changes from a pure exponential form for zero influx of new classes to a power law with an exponential cut-off form when the influx of new classes is substantial. Predictions of the model are tested through the analysis of a unique industrial database, which covers both elementary units (products) and classes (markets, firms) in a given industry (pharmaceuticals), covering the entire size distribution. The model’s predictions are in good agreement with the data. The paper sheds light on the emergence of the exponent τ≈2 observed as a universal feature of many biological, social and economic problems.

  3. Nonparametric Bayesian Segmentation of a Multivariate Inhomogeneous Space-Time Poisson Process.

    PubMed

    Ding, Mingtao; He, Lihan; Dunson, David; Carin, Lawrence

    2012-12-01

    A nonparametric Bayesian model is proposed for segmenting time-evolving multivariate spatial point process data. An inhomogeneous Poisson process is assumed, with a logistic stick-breaking process (LSBP) used to encourage piecewise-constant spatial Poisson intensities. The LSBP explicitly favors spatially contiguous segments, and infers the number of segments based on the observed data. The temporal dynamics of the segmentation and of the Poisson intensities are modeled with exponential correlation in time, implemented in the form of a first-order autoregressive model for uniformly sampled discrete data, and via a Gaussian process with an exponential kernel for general temporal sampling. We consider and compare two different inference techniques: a Markov chain Monte Carlo sampler, which has relatively high computational complexity; and an approximate and efficient variational Bayesian analysis. The model is demonstrated with a simulated example and a real example of space-time crime events in Cincinnati, Ohio, USA.
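    For uniformly sampled times, the exponential-kernel Gaussian process and the first-order autoregressive model mentioned above coincide: the GP correlation e^{−|t_i−t_j|/ℓ} equals φ^{|i−j|} with φ = e^{−Δt/ℓ}. A quick numerical check of this equivalence:

```python
import numpy as np

ell, dt, n = 2.0, 0.5, 6
t = np.arange(n) * dt

# Exponential-kernel correlation matrix of the Gaussian process.
K = np.exp(-np.abs(t[:, None] - t[None, :]) / ell)

# Equivalent AR(1): correlation phi^|i-j| with phi = exp(-dt/ell).
phi = np.exp(-dt / ell)
i, j = np.indices((n, n))
K_ar1 = phi ** np.abs(i - j)

print(np.allclose(K, K_ar1))   # True: the two parameterizations agree
```

    The exponential kernel is the only stationary kernel with this Markov (screening) property, K[0,2] = K[0,1]·K[1,2], which is what allows the efficient AR(1) implementation for uniform sampling.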

  4. Improving deep convolutional neural networks with mixed maxout units.

    PubMed

    Zhao, Hui-Zhen; Liu, Fu-Xian; Li, Long-Yue

    2017-01-01

    Motivated by insights from the maxout-units-based deep Convolutional Neural Network (CNN) that "non-maximal features are unable to deliver" and "feature mapping subspace pooling is insufficient," we present a novel mixed variant of the recently introduced maxout unit called a mixout unit. Specifically, we do so by calculating the exponential probabilities of feature mappings gained by applying different convolutional transformations over the same input and then calculating the expected values according to their exponential probabilities. Moreover, we introduce the Bernoulli distribution to balance the maximum values with the expected values of the feature mappings subspace. Finally, we design a simple model to verify the pooling ability of mixout units and a Mixout-units-based Network-in-Network (NiN) model to analyze the feature learning ability of the mixout models. We argue that our proposed units improve the pooling ability and that mixout models can achieve better feature learning and classification performance.
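    The described unit can be sketched in a few lines: softmax ("exponential") probabilities over the k feature mappings give an expected value, and a Bernoulli draw balances it against the plain maxout output. This is a hedged reconstruction from the abstract, not the authors' code; `p_max` is an assumed name for the Bernoulli parameter.

```python
import numpy as np

def mixout(features, p_max=0.5, rng=None):
    """features: (k, n) array of k feature mappings for n units."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Exponential probabilities (softmax) over the k mappings, per unit.
    w = np.exp(features - features.max(axis=0, keepdims=True))
    w /= w.sum(axis=0, keepdims=True)
    expected = (w * features).sum(axis=0)        # expected value of the subspace
    maxed = features.max(axis=0)                 # ordinary maxout
    pick_max = rng.random(features.shape[1]) < p_max   # Bernoulli balance
    return np.where(pick_max, maxed, expected)

feats = np.array([[1.0, -2.0, 0.5],
                  [0.2,  3.0, 0.5]])
print(mixout(feats))
```

    Unlike pure maxout, the expected-value branch lets non-maximal feature mappings contribute to the output, which is the stated motivation for the unit.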

  5. Identical superdeformed bands in yrast 152Dy: a systematic description

    NASA Astrophysics Data System (ADS)

    Dadwal, Anshul; Mittal, H. M.

    2018-06-01

    The nuclear softness (NS) formula, semiclassical particle rotor model (PRM) and modified exponential model with pairing attenuation are used for the systematic study of the identical superdeformed bands in the A ∼ 150 mass region. These formulae/models are employed to study the identical superdeformed bands relative to the yrast SD band 152Dy(1): {152Dy(1), 151Tb(2)}, {152Dy(1), 151Dy(4)} (midpoint), {152Dy(1), 153Dy(2)} (quarter point), {152Dy(1), 153Dy(3)} (three-quarter point). The parameters baseline moment of inertia (I0), alignment (i) and effective pairing parameter (Δ0) are calculated using least-squares fitting of the γ-ray transition energies in the NS formula, semiclassical PRM and modified exponential model with pairing attenuation, respectively. The calculated parameters are found to depend sensitively on the proposed baseline spin (I0).

  6. On the modeling of breath-by-breath oxygen uptake kinetics at the onset of high-intensity exercises: simulated annealing vs. GRG2 method.

    PubMed

    Bernard, Olivier; Alata, Olivier; Francaux, Marc

    2006-03-01

    Modeling in the time domain, the non-steady-state O2 uptake on-kinetics of high-intensity exercises with empirical models is commonly performed with gradient-descent-based methods. However, these procedures may impair the confidence of the parameter estimation when the modeling functions are not continuously differentiable and when the estimation corresponds to an ill-posed problem. To cope with these problems, an implementation of simulated annealing (SA) methods was compared with the GRG2 algorithm (a gradient-descent method known for its robustness). Forty simulated Vo2 on-responses were generated to mimic the real time course for transitions from light- to high-intensity exercises, with a signal-to-noise ratio equal to 20 dB. They were modeled twice with a discontinuous double-exponential function using both estimation methods. GRG2 significantly biased two estimated kinetic parameters of the first exponential (the time delay td1 and the time constant tau1) and impaired the precision (i.e., standard deviation) of the baseline A0, td1, and tau1 compared with SA. SA significantly improved the precision of the three parameters of the second exponential (the asymptotic increment A2, the time delay td2, and the time constant tau2). Nevertheless, td2 was significantly biased by both procedures, and the large confidence intervals of the whole second component parameters limit their interpretation. To compare both algorithms on experimental data, 26 subjects each performed two transitions from 80 W to 80% maximal O2 uptake on a cycle ergometer and O2 uptake was measured breath by breath. More than 88% of the kinetic parameter estimations done with the SA algorithm produced the lowest residual sum of squares between the experimental data points and the model. Repeatability coefficients were better with GRG2 for A1 although better with SA for A2 and tau2. Our results demonstrate that the implementation of SA improves significantly the estimation of most of these kinetic parameters, but a large inaccuracy remains in estimating the parameter values of the second exponential.
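    The approach can be reproduced in miniature with SciPy's `dual_annealing` (a simulated-annealing variant) fitting a delayed mono-exponential onset, where the time delay makes the objective non-smooth. This is a sketch of the idea, not the authors' SA implementation or GRG2, and the parameter values are illustrative:

```python
import numpy as np
from scipy.optimize import dual_annealing

rng = np.random.default_rng(4)
t = np.linspace(0, 60, 121)
A0, A1, td, tau = 1.0, 2.0, 10.0, 12.0    # baseline, amplitude, time delay, time constant

def model(p, t):
    a0, a1, d, tc = p
    y = np.full_like(t, a0)
    on = t > d                             # discontinuous switch at the delay
    y[on] += a1 * (1 - np.exp(-(t[on] - d) / tc))
    return y

data = model([A0, A1, td, tau], t) + rng.normal(0, 0.05, t.size)

def sse(p):
    # Residual sum of squares: non-differentiable in the delay parameter d.
    return np.sum((model(p, t) - data) ** 2)

bounds = [(0, 3), (0, 5), (0, 30), (1, 40)]
res = dual_annealing(sse, bounds, seed=7, maxiter=300)
print(res.x, res.fun)
```

    Because annealing only needs function values, the discontinuity at t = td that can trap gradient-based solvers poses no particular difficulty here.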

  7. QMRA for Drinking Water: 1. Revisiting the Mathematical Structure of Single-Hit Dose-Response Models.

    PubMed

    Nilsen, Vegard; Wyller, John

    2016-01-01

    Dose-response models are essential to quantitative microbial risk assessment (QMRA), providing a link between levels of human exposure to pathogens and the probability of negative health outcomes. In drinking water studies, the class of semi-mechanistic models known as single-hit models, such as the exponential and the exact beta-Poisson, has seen widespread use. In this work, an attempt is made to carefully develop the general mathematical single-hit framework while explicitly accounting for variation in (1) host susceptibility and (2) pathogen infectivity. This allows a precise interpretation of the so-called single-hit probability and precise identification of a set of statistical independence assumptions that are sufficient to arrive at single-hit models. Further analysis of the model framework is facilitated by formulating the single-hit models compactly using probability generating and moment generating functions. Among the more practically relevant conclusions drawn are: (1) for any dose distribution, variation in host susceptibility always reduces the single-hit risk compared to a constant host susceptibility (assuming equal mean susceptibilities), (2) the model-consistent representation of complete host immunity is formally demonstrated to be a simple scaling of the response, (3) the model-consistent expression for the total risk from repeated exposures deviates (gives lower risk) from the conventional expression used in applications, and (4) a model-consistent expression for the mean per-exposure dose that produces the correct total risk from repeated exposures is developed. © 2016 Society for Risk Analysis.
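    The two workhorse single-hit models have closed forms: the exponential model P(d) = 1 − e^{−rd} and the widely used approximate beta-Poisson P(d) = 1 − (1 + d/β)^{−α}. A sketch with illustrative parameter values (not organism-specific fits):

```python
import numpy as np

def p_exponential(dose, r):
    """Single-hit exponential dose-response: constant per-organism hit probability r."""
    return 1.0 - np.exp(-r * dose)

def p_beta_poisson(dose, alpha, beta):
    """Common approximation to the exact (hypergeometric) beta-Poisson model."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

r, alpha, beta = 0.01, 0.25, 40.0       # illustrative parameters
d50_exp = np.log(2) / r                 # median infectious dose, exponential model
d50_bp = beta * (2 ** (1 / alpha) - 1)  # median infectious dose, approx. beta-Poisson
print(d50_exp, d50_bp)
```

    The beta-Poisson arises from the exponential model by letting r vary across host-pathogen pairs according to a beta distribution, which is exactly the kind of susceptibility/infectivity variation the framework above makes explicit.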

  8. On recontamination and directional-bias problems in Monte Carlo simulation of PDF turbulence models

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.

    1991-01-01

    Turbulent combustion cannot be simulated adequately by conventional moment-closure turbulence models. The difficulty lies in the fact that the reaction rate is in general an exponential function of the temperature, and the higher-order correlations in the conventional moment closure of the chemical source term cannot be neglected, making the application of such models impractical. The probability density function (pdf) method offers an attractive alternative: in a pdf model, the chemical source terms are closed and do not require additional models. A grid-dependent Monte Carlo scheme was studied, since it is a logical alternative wherein the number of computer operations increases only linearly with the number of independent variables, as compared to the exponential increase in a conventional finite difference scheme. A new algorithm was devised that satisfies a conservation restriction in the case of pure diffusion or uniform flow problems. Although for nonuniform flows absolute conservation seems impossible, the present scheme has reduced the error considerably.

  9. Base stock system for patient vs impatient customers with varying demand distribution

    NASA Astrophysics Data System (ADS)

    Fathima, Dowlath; Uduman, P. Sheik

    2013-09-01

    An optimal base-stock inventory policy for patient and impatient customers using finite-horizon models is examined. The base-stock system for patient and impatient customers is a distinct type of inventory policy. In model I, the base stock for the patient-customer case is evaluated using the truncated exponential distribution. Model II studies base-stock inventory policies for impatient customers. A study of these systems reveals that customers either wait until the arrival of the next order or leave the system, which leads to lost sales. In both models, demand during the period [0, t] is taken to be a random variable. In this paper, the truncated exponential distribution satisfies the base-stock policy for the patient customer as a continuous model. Until now, the base stock for impatient customers has led to a discrete case; here we model this condition as a continuous case. We justify this approach both mathematically and numerically.

  10. Simulation of the modulation transfer function dependent on the partial Fourier fraction in dynamic contrast enhancement magnetic resonance imaging.

    PubMed

    Takatsu, Yasuo; Ueyama, Tsuyoshi; Miyati, Tosiaki; Yamamura, Kenichirou

    2016-12-01

    The image characteristics in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) depend on the partial Fourier fraction and contrast medium concentration. These characteristics were assessed and the modulation transfer function (MTF) was calculated by computer simulation. A digital phantom was created from signal intensity data acquired at different contrast medium concentrations on a breast model. The frequency images [created by fast Fourier transform (FFT)] were divided into 512 parts and rearranged to form a new image. The inverse FFT of this image yielded the MTF. From the reference data, three linear models (low, medium, and high) and three exponential models (slow, medium, and rapid) of the signal intensity were created. Smaller partial Fourier fractions, and higher gradients in the linear models, corresponded to faster MTF decline. The MTF more gradually decreased in the exponential models than in the linear models. The MTF, which reflects the image characteristics in DCE-MRI, was more degraded as the partial Fourier fraction decreased.

  11. Gender Differences in the Effects of the Frequency of Physical Activity on the Incidence of Metabolic Syndrome: Results from a Middle-Aged Community Cohort in Taiwan.

    PubMed

    Chen, Sheng-Pyng; Chang, Huan-Cheng; Hsiao, Tien-Mu; Yeh, Chih-Jung; Yang, Hao-Jan

    2018-06-01

    Little is known about how the frequency of physical activity in adults influences the occurrence of metabolic syndrome (MetS), and whether there are gender differences within these effects. In this study, 3368 residents from the established "Landseed Cohort" underwent three waves of health examinations, and those who did not have MetS at baseline were selected and analyzed using a multiple Poisson regression model. By calculating the adjusted relative risk (ARR), the linear and nonlinear relationships between the frequency of physical activity and risk of developing MetS were examined for male and female participants. The prevalence of MetS was fairly stable across the three waves (ranging from 16.24% to 16.82%), but the incidence dropped from 7.11% to 4.52%. The risk of MetS in women was 10 times higher than that in men (ARR = 10.06; 95% CI = 6.60-15.33), and frequent exercise was shown to help prevent it. The frequency of exercise had a linear dose-response effect in females and an exponential protective effect in males on the occurrence of MetS. Exercising more than four times a week for females and twice or more a week for males effectively reduced the risk of developing MetS. The frequency of physical activity in adults was negatively related to the risk of developing MetS, and this relationship differed based on gender. The protective effect of physical activity on MetS was linear in females and exponential in males.

  12. Probability distribution functions for intermittent scrape-off layer plasma fluctuations

    NASA Astrophysics Data System (ADS)

    Theodorsen, A.; Garcia, O. E.

    2018-03-01

    A stochastic model for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas has been constructed based on a super-position of uncorrelated pulses arriving according to a Poisson process. In the most common applications of the model, the pulse amplitudes are assumed exponentially distributed, supported by conditional averaging of large-amplitude fluctuations in experimental measurement data. This basic assumption has two potential limitations. First, statistical analysis of measurement data using conditional averaging only reveals the tail of the amplitude distribution to be exponentially distributed. Second, exponentially distributed amplitudes lead to a positive definite signal, which cannot capture fluctuations in, for example, electric potential and radial velocity. Assuming pulse amplitudes that are not positive definite often makes finding a closed form for the probability density function (PDF) difficult, even if the characteristic function remains relatively simple. Thus, estimating model parameters requires an approach based on the characteristic function rather than the PDF. In this contribution, the effect of changing the amplitude distribution on the moments, PDF and characteristic function of the process is investigated, and a parameter estimation method using the empirical characteristic function is presented and tested on synthetically generated data. This proves valuable for describing intermittent fluctuations of all plasma parameters in the boundary region of magnetized plasmas.
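
    The base model can be sketched by direct synthesis: one-sided exponential pulses arriving as a Poisson process with exponentially distributed amplitudes. The parameters and discretization below are deliberately crude and illustrative; the sample mean should approach the theoretical value rate × τ × ⟨A⟩.

```python
import math
import random

random.seed(7)

def shot_noise(T, dt, rate, tau, amp_mean):
    """Superposition of one-sided exponential pulses arriving as a Poisson
    process, with exponentially distributed amplitudes."""
    n = int(T / dt)
    signal = [0.0] * n
    t = random.expovariate(rate)  # first Poisson arrival
    while t < T:
        a = random.expovariate(1.0 / amp_mean)  # exponential pulse amplitude
        for i in range(int(t / dt) + 1, n):
            decay = math.exp(-(i * dt - t) / tau)
            if decay < 1e-4:
                break
            signal[i] += a * decay
        t += random.expovariate(rate)  # next arrival
    return signal

sig = shot_noise(T=5000.0, dt=0.5, rate=1.0, tau=1.0, amp_mean=1.0)
mean = sum(sig) / len(sig)
# Theoretical mean is rate * tau * amp_mean = 1.0 for these parameters.
assert 0.8 < mean < 1.2
```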

  13. Estimation of the light field inside photosynthetic microorganism cultures through Mittag-Leffler functions at depleted light conditions

    NASA Astrophysics Data System (ADS)

    Fuente, David; Lizama, Carlos; Urchueguía, Javier F.; Conejero, J. Alberto

    2018-01-01

    Light attenuation within suspensions of photosynthetic microorganisms has been widely described by the Lambert-Beer equation. However, at depths where most of the light has been absorbed by the cells, light decay deviates from exponential behaviour and shows lower attenuation than a purely exponential fall would predict. This discrepancy can be modelled through the Mittag-Leffler function, extending the Lambert-Beer law via a tuning parameter α that takes into account the attenuation process. In this work, we describe a fractional Lambert-Beer law to estimate light attenuation within cultures of the model organism Synechocystis sp. PCC 6803. We benchmark the measured light field inside cultures of two different Synechocystis strains, namely the wild type and the antenna mutant strain called Olive, at five different cell densities against our in silico results. The Mittag-Leffler hyper-parameter α that best fits the data is 0.995, close to the exponential case. One of the most striking results to emerge from this work is that, unlike prior literature on the subject, it provides experimental evidence of the validity of fractional calculus for determining the light field. We show that by applying the fractional Lambert-Beer law to describe light attenuation, we are able to properly model light decay in suspensions of photosynthetic microorganisms.
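
    The one-parameter Mittag-Leffler function used here reduces to the ordinary exponential at α = 1, which makes the extension of the Lambert-Beer law easy to sanity-check. A minimal series implementation (the attenuation coefficient and intensity below are illustrative, not the paper's fitted values):

```python
import math

def mittag_leffler(alpha, z, terms=120):
    """One-parameter Mittag-Leffler E_alpha(z) via its power series
    (adequate for the moderate |z| used here)."""
    return sum(z ** k / math.gamma(alpha * k + 1.0) for k in range(terms))

# alpha = 1 recovers the ordinary exponential (the Lambert-Beer limit):
assert abs(mittag_leffler(1.0, -0.5) - math.exp(-0.5)) < 1e-9

# Fractional attenuation profile I(z) = I0 * E_alpha(-k*z), with the
# paper's fitted alpha = 0.995 (I0 and k are illustrative):
I0, k_att, alpha = 100.0, 0.3, 0.995
profile = [I0 * mittag_leffler(alpha, -k_att * depth) for depth in range(11)]

# Intensity decreases monotonically with depth:
assert all(a > b for a, b in zip(profile, profile[1:]))
```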

  14. Intermittent fluctuations in the Alcator C-Mod scrape-off layer for ohmic and high confinement mode plasmas

    NASA Astrophysics Data System (ADS)

    Garcia, O. E.; Kube, R.; Theodorsen, A.; LaBombard, B.; Terry, J. L.

    2018-05-01

    Plasma fluctuations in the scrape-off layer of the Alcator C-Mod tokamak in ohmic and high confinement modes have been analyzed using gas puff imaging data. In all cases investigated, the time series of emission from a single spatially resolved view into the gas puff are dominated by large-amplitude bursts, attributed to blob-like filament structures moving radially outwards and poloidally. There is a remarkable similarity of the fluctuation statistics in ohmic plasmas and in edge localized mode-free and enhanced D-alpha high confinement mode plasmas. Conditionally averaged waveforms have a two-sided exponential shape with comparable temporal scales and asymmetry, while the burst amplitudes and the waiting times between them are exponentially distributed. The probability density functions and the frequency power spectral densities are similar for all these confinement modes. These results provide strong evidence in support of a stochastic model describing the plasma fluctuations in the scrape-off layer as a super-position of uncorrelated exponential pulses. Predictions of this model are in excellent agreement with experimental measurements in both ohmic and high confinement mode plasmas. The stochastic model thus provides a valuable tool for predicting fluctuation-induced plasma-wall interactions in magnetically confined fusion plasmas.

  15. From Experiment to Theory: What Can We Learn from Growth Curves?

    PubMed

    Kareva, Irina; Karev, Georgy

    2018-01-01

    Finding an appropriate functional form to describe population growth based on key properties of a described system allows making justified predictions about future population development. This information can be of vital importance in all areas of research, ranging from cell growth to global demography. Here, we use this connection between theory and observation to pose the following question: what can we infer about the intrinsic properties of a population (i.e., its degree of heterogeneity, or its dependence on external resources) from which growth function best fits its growth dynamics? We investigate several nonstandard classes of multi-phase growth curves that capture different stages of population growth; these models include hyperbolic-exponential, exponential-linear, and exponential-linear-saturation growth patterns. The constructed models account explicitly for the process of natural selection within inhomogeneous populations. Based on the underlying hypotheses for each of the models, we identify whether a population best fit by a particular curve is more likely to be homogeneous or heterogeneous, to grow in a density-dependent or frequency-dependent manner, and whether it depends on external resources during any or all stages of its development. We apply these predictions to cancer cell growth and demographic data obtained from the literature. Our theory, if confirmed, can provide an additional biomarker and a predictive tool to complement experimental research.

  16. Power function decay of hydraulic conductivity for a TOPMODEL-based infiltration routine

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Endreny, Theodore A.; Hassett, James M.

    2006-11-01

    TOPMODEL rainfall-runoff hydrologic concepts are based on soil saturation processes, where soil controls on hydrograph recession have been represented by linear, exponential, and power function decay with soil depth. Although these decay formulations have been incorporated into baseflow decay and topographic index computations, only the linear and exponential forms have been incorporated into infiltration subroutines. This study develops a power function formulation of the Green and Ampt infiltration equation for the cases where the power n = 1 and 2. This new function was created to represent field measurements in the New York City, USA, Ward Pound Ridge drinking water supply area, and to provide support for similar sites reported by other researchers. Derivation of the power-function-based Green and Ampt model begins with the Green and Ampt formulation used by Beven in deriving an exponential decay model. Differences between the linear, exponential, and power function infiltration scenarios are sensitive to the relative difference between rainfall rates and hydraulic conductivity. Using a low-frequency 30 min design storm with 4.8 cm h-1 rain, the n = 2 power function formulation allows for a faster decay of infiltration and more rapid generation of runoff. Infiltration excess runoff is rare in most forested watersheds, and advantages of the power function infiltration routine may primarily include replication of field-observed processes in urbanized areas and numerical consistency with power function decay of baseflow and topographic index distributions. Equation development is presented within a TOPMODEL-based Ward Pound Ridge rainfall-runoff simulation.
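
    The three decay assumptions can be written side by side as conductivity profiles K(z); the parameter values below are illustrative, not the study's calibrated values. Within the profile, the n = 2 power form sits below the linear form, consistent with the faster infiltration decay reported above.

```python
import math

K0, m, f = 10.0, 2.0, 1.5  # surface conductivity, max depth, decay rate

def K_linear(z):
    return K0 * (1.0 - z / m)

def K_exponential(z):
    return K0 * math.exp(-f * z)

def K_power(z, n=2):
    return K0 * (1.0 - z / m) ** n

# For 0 < z < m the n = 2 power profile lies below the linear one, so
# conductivity (and with it infiltration capacity) decays faster:
z = 1.0
assert K_power(z, 2) < K_linear(z)
```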

  17. Choice of time-scale in Cox's model analysis of epidemiologic cohort data: a simulation study.

    PubMed

    Thiébaut, Anne C M; Bénichou, Jacques

    2004-12-30

    Cox's regression model is widely used for assessing associations between potential risk factors and disease occurrence in epidemiologic cohort studies. Although age is often a strong determinant of disease risk, authors have frequently used time-on-study instead of age as the time-scale, as for clinical trials. Unless the baseline hazard is an exponential function of age, this approach can yield different estimates of relative hazards than using age as the time-scale, even when age is adjusted for. We performed a simulation study in order to investigate the existence and magnitude of bias for different degrees of association between age and the covariate of interest. Age to disease onset was generated from exponential, Weibull or piecewise Weibull distributions, and both fixed and time-dependent dichotomous covariates were considered. We observed no bias upon using age as the time-scale. Upon using time-on-study, we verified the absence of bias for exponentially distributed age to disease onset. For non-exponential distributions, we found that bias could occur even when the covariate of interest was independent from age. It could be severe in case of substantial association with age, especially with time-dependent covariates. These findings were illustrated on data from a cohort of 84,329 French women followed prospectively for breast cancer occurrence. In view of our results, we strongly recommend not using time-on-study as the time-scale for analysing epidemiologic cohort data. 2004 John Wiley & Sons, Ltd.

  18. Pressure resistance of cold-shocked Escherichia coli O157:H7 in ground beef, beef gravy and peptone water.

    PubMed

    Baccus-Taylor, G S H; Falloon, O C; Henry, N

    2015-06-01

    (i) To study the effects of cold shock on Escherichia coli O157:H7 cells. (ii) To determine whether cold-shocked E. coli O157:H7 cells at stationary and exponential phases are more pressure-resistant than their non-cold-shocked counterparts. (iii) To investigate the baro-protective role of growth media (0·1% peptone water, beef gravy and ground beef). Quantitative estimates of lethality and sublethal injury were made using the differential plating method. There were no significant differences (P > 0·05) in the number of cells killed, whether cold-shocked or non-cold-shocked. Cells grown in ground beef (stationary and exponential phases) experienced the least death compared with peptone water and beef gravy. Cold-shock treatment increased the sublethal injury to cells cultured in peptone water (stationary and exponential phases) and ground beef (exponential phase), but decreased the sublethal injury to cells in beef gravy (stationary phase). Cold shock did not confer greater resistance to stationary- or exponential-phase cells pressurized in peptone water, beef gravy or ground beef. Ground beef had the greatest baro-protective effect. Real food systems should be used in establishing food safety parameters for high-pressure treatments; micro-organisms are less resistant in model food systems, the use of which may underestimate the organisms' resistance. © 2015 The Society for Applied Microbiology.

  19. Self similar flow behind an exponential shock wave in a self-gravitating, rotating, axisymmetric dusty gas with heat conduction and radiation heat flux

    NASA Astrophysics Data System (ADS)

    Bajargaan, Ruchi; Patel, Arvind

    2018-04-01

    One-dimensional unsteady adiabatic flow behind an exponential shock wave propagating in a self-gravitating, rotating, axisymmetric dusty gas with heat conduction and radiation heat flux, which has exponentially varying azimuthal and axial fluid velocities, is investigated. The shock wave is driven out by a piston moving with time according to an exponential law. The dusty gas is taken to be a mixture of a non-ideal gas and small solid particles. The density of the ambient medium is assumed to be constant. The equilibrium flow conditions are maintained, and the energy supplied continuously by the piston varies exponentially. The heat conduction is expressed in terms of Fourier's law, and the radiation is assumed to be of the diffusion type for an optically thick grey gas model. The thermal conductivity and the absorption coefficient are assumed to vary with temperature and density according to a power law. The effects of the variation of the heat transfer parameters, the gravitation parameter and the dusty gas parameters on the shock strength, the distance between the piston and the shock front, and the flow variables are studied in detail. It is interesting to note that the similarity solution exists under constant initial angular velocity, and that the shock strength is independent of self-gravitation, heat conduction and radiation heat flux.

  20. Global exponential stability of inertial memristor-based neural networks with time-varying delays and impulses.

    PubMed

    Zhang, Wei; Huang, Tingwen; He, Xing; Li, Chuandong

    2017-11-01

    In this study, we investigate the global exponential stability of inertial memristor-based neural networks with impulses and time-varying delays. We construct inertial memristor-based neural networks based on the characteristics of the inertial neural networks and memristor. Impulses with and without delays are considered when modeling the inertial neural networks simultaneously, which are of great practical significance in the current study. Some sufficient conditions are derived under the framework of the Lyapunov stability method, as well as an extended Halanay differential inequality and a new delay impulsive differential inequality, which depend on impulses with and without delays, in order to guarantee the global exponential stability of the inertial memristor-based neural networks. Finally, two numerical examples are provided to illustrate the efficiency of the proposed methods. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. How bootstrap can help in forecasting time series with more than one seasonal pattern

    NASA Astrophysics Data System (ADS)

    Cordeiro, Clara; Neves, M. Manuela

    2012-09-01

    The search for the future is an appealing challenge in time series analysis. The diversity of forecasting methodologies is inevitable and still expanding. Exponential smoothing methods are the launch platform for modelling and forecasting in time series analysis. Recently this methodology has been combined with bootstrapping, revealing good performance. The algorithm (Boot.EXPOS), combining exponential smoothing and bootstrap methodologies, has shown promising results for forecasting time series with one seasonal pattern. For time series with more than one seasonal pattern, the double seasonal Holt-Winters methods and the corresponding exponential smoothing methods were developed. A new challenge was to combine these seasonal methods with the bootstrap and to carry over a resampling scheme similar to that used in the Boot.EXPOS procedure. The performance of such a partnership is illustrated for some well-known data sets available in the software.
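
    The smoothing core that Boot.EXPOS builds on can be sketched in a few lines. This is simple exponential smoothing only; the actual procedure also handles trend, seasonality and the bootstrap resampling step, and the series and α values below are illustrative.

```python
def simple_exp_smoothing(series, alpha):
    """Simple exponential smoothing; the final level is the
    one-step-ahead forecast."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1.0 - alpha) * level
    return level

# A constant series is forecast exactly, for any alpha:
assert simple_exp_smoothing([5.0, 5.0, 5.0, 5.0], 0.3) == 5.0

# A larger alpha tracks recent observations more closely:
rising = [1.0, 2.0, 3.0, 4.0]
assert simple_exp_smoothing(rising, 0.9) > simple_exp_smoothing(rising, 0.1)
```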

  2. Exponential Boundary Observers for Pressurized Water Pipe

    NASA Astrophysics Data System (ADS)

    Hermine Som, Idellette Judith; Cocquempot, Vincent; Aitouche, Abdel

    2015-11-01

    This paper deals with state estimation on a pressurized water pipe modeled by nonlinear coupled distributed hyperbolic equations for non-conservative laws, with three known boundary measures. Our objective is to estimate the fourth boundary variable, which is useful for leakage detection. Two approaches are studied. Firstly, the distributed hyperbolic equations are discretized through a finite-difference scheme. By using the Lipschitz property of the nonlinear term and a Lyapunov function, the exponential stability of the estimation error is proven by solving Linear Matrix Inequalities (LMIs). Secondly, the distributed hyperbolic system is preserved for state estimation. After state transformations, a Luenberger-like PDE boundary observer based on backstepping mathematical tools is proposed. An exponential Lyapunov function is used to prove the stability of the resulting estimation error. The performance of the two observers is illustrated on a simulated water-pipe prototype example.

  3. A fuzzy adaptive network approach to parameter estimation in cases where independent variables come from an exponential distribution

    NASA Astrophysics Data System (ADS)

    Dalkilic, Turkan Erbay; Apaydin, Aysen

    2009-11-01

    In a regression analysis, it is assumed that the observations come from a single class in a data cluster and that the simple functional relationship between the dependent and independent variables can be expressed using the general model Y = f(X) + ε. However, a data cluster may consist of a combination of observations that have different distributions derived from different clusters. When the aim is to estimate a regression model for fuzzy inputs that have been derived from different distributions, the model is termed a 'switching regression model'. Here l_i indicates the class number of each independent variable and p is the number of independent variables [J.R. Jang, ANFIS: Adaptive-network-based fuzzy inference system, IEEE Transactions on Systems, Man and Cybernetics 23 (3) (1993) 665-685; M. Michel, Fuzzy clustering and switching regression models using ambiguity and distance rejects, Fuzzy Sets and Systems 122 (2001) 363-399; E.Q. Richard, A new approach to estimating switching regressions, Journal of the American Statistical Association 67 (338) (1972) 306-310]. In this study, adaptive networks have been used to construct a model formed by gathering the obtained models. There are methods that suggest the class numbers of independent variables heuristically. Alternatively, in defining the optimal class number of independent variables, we aim to use a suggested validity criterion for fuzzy clustering. For the case in which the independent variables have an exponential distribution, an algorithm is suggested for defining the unknown parameters of the switching regression model and for obtaining the estimated values after obtaining an optimal membership function suitable for the exponential distribution.

  4. Species area relationships in mediterranean-climate plant communities

    USGS Publications Warehouse

    Keeley, Jon E.; Fotheringham, C.J.

    2003-01-01

    Aim To determine the best-fit model of species–area relationships for Mediterranean-type plant communities and evaluate how community structure affects these species–area models. Location Data were collected from California shrublands and woodlands and compared with literature reports for other Mediterranean-climate regions. Methods The number of species was recorded from 1, 100 and 1000 m2 nested plots. Best fit to the power model or exponential model was determined by comparing adjusted r2 values from the least squares regression, pattern of residuals, homoscedasticity across scales, and semi-log slopes at 1–100 m2 and 100–1000 m2. Dominance–diversity curves were tested for fit to the lognormal model, MacArthur's broken stick model, and the geometric and harmonic series. Results Early successional Western Australia and California shrublands represented the extremes and provide an interesting contrast, as the exponential model was the best fit for the former, and the power model for the latter, despite similar total species richness. We hypothesize that structural differences in these communities account for the different species–area curves and are tied to patterns of dominance, equitability and life form distribution. Dominance–diversity relationships for Western Australian heathlands exhibited a close fit to MacArthur's broken stick model, indicating more equitable distribution of species. In contrast, Californian shrublands, both postfire and mature stands, were best fit by the geometric model, indicating strong dominance and many minor subordinate species. These regions differ in life form distribution, with annuals being a major component of diversity in early successional Californian shrublands although they are largely lacking in mature stands. Both young and old Australian heathlands are dominated by perennials, and annuals are largely absent. Inherent in all of these ecosystems is cyclical disequilibrium caused by periodic fires.
The potential for community reassembly is greater in Californian shrublands, where only a quarter of the flora resprout, whereas three quarters resprout in Australian heathlands. Other Californian vegetation types sampled include coniferous forests, oak savannas and desert scrub, and demonstrate that different community structures may lead to a similar species–area relationship. Dominance–diversity relationships for coniferous forests closely follow a geometric model whereas associated oak savannas show a close fit to the lognormal model. However, for both communities, species–area curves fit a power model. The primary driver appears to be the presence of annuals. Desert scrub communities illustrate dramatic changes in both species diversity and dominance–diversity relationships in high and low rainfall years, because of the disappearance of annuals in drought years. Main conclusions Species–area curves for immature shrublands in California and the majority of Mediterranean plant communities fit a power function model. Exceptions that fit the exponential model are not because of sampling error or scaling effects; rather, structural differences in these communities provide plausible explanations. The exponential species–area model may arise in more than one way. In the highly diverse Australian heathlands it results from a rapid increase in species richness at small scales. In mature California shrublands it results from very depauperate richness at the community scale. In both instances the exponential model is tied to a preponderance of perennials and paucity of annuals. For communities fit by a power model, coefficients z and log c exhibit a number of significant correlations with other diversity parameters, suggesting that they have some predictive value in ecological communities.
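
    With nested plots at 1, 100 and 1000 m2, the power and exponential models reduce to log-log and semi-log regressions respectively, so comparing them is a pair of least-squares fits. A minimal sketch on synthetic richness counts drawn from an exact power law (all numbers illustrative, not field data):

```python
import math

# Nested-plot richness at 1, 100 and 1000 m2, generated here from an
# exact power law S = 3 * A**0.25:
areas = [1.0, 100.0, 1000.0]
S = [3.0 * a ** 0.25 for a in areas]

def ols(xs, ys):
    """Ordinary least squares y = a + b*x; returns (a, b, sse)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return a, b, sse

log_areas = [math.log(a) for a in areas]
# Power model:        log S = log c + z * log A  (log-log regression)
_, z_power, sse_power = ols(log_areas, [math.log(s) for s in S])
# Exponential model:  S = c + z * log A          (semi-log regression)
_, _, sse_expo = ols(log_areas, S)

assert abs(z_power - 0.25) < 1e-9   # the power fit recovers z exactly
assert sse_power < sse_expo         # and fits these data better
```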

  5. Evaluation of different mathematical models and different b-value ranges of diffusion-weighted imaging in peripheral zone prostate cancer detection using b-value up to 4500 s/mm2

    PubMed Central

    Feng, Zhaoyan; Min, Xiangde; Margolis, Daniel J. A.; Duan, Caohui; Chen, Yuping; Sah, Vivek Kumar; Chaudhary, Nabin; Li, Basen; Ke, Zan; Zhang, Peipei; Wang, Liang

    2017-01-01

    Objectives To evaluate the diagnostic performance of different mathematical models and different b-value ranges of diffusion-weighted imaging (DWI) in peripheral zone prostate cancer (PZ PCa) detection. Methods Fifty-six patients with histologically proven PZ PCa who underwent DWI-magnetic resonance imaging (MRI) using 21 b-values (0–4500 s/mm2) were included. The mean signal intensities of the regions of interest (ROIs) placed in benign PZs and cancerous tissues on DWI images were fitted using mono-exponential, bi-exponential, stretched-exponential, and kurtosis models. The b-values were divided into four ranges: 0–1000, 0–2000, 0–3200, and 0–4500 s/mm2, grouped as A, B, C, and D, respectively. ADC, D, D*, f, DDC, α, Dapp, and Kapp were estimated for each group. The adjusted coefficient of determination (R2) was calculated to measure goodness-of-fit. Receiver operating characteristic curve analysis was performed to evaluate the diagnostic performance of the parameters. Results All parameters except D* showed significant differences between cancerous tissues and benign PZs in each group. The area under the curve (AUC) values of ADC were comparable in groups C and D (p = 0.980) and were significantly higher than those in groups A and B (p < 0.05 for all). The AUCs of ADC and Kapp in groups B and C were similar (p = 0.07 and p = 0.954) and were significantly higher than those of the other parameters (p < 0.001 for all). The AUC of ADC in group D was slightly higher than that of Kapp (p = 0.002), and both were significantly higher than those of the other parameters (p < 0.001 for all). Conclusions ADC derived from a conventional mono-exponential model with high b-values (up to 3200 s/mm2) is an optimal parameter for PZ PCa detection. PMID:28199367
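
    The signal models compared here can be written down directly; for the mono-exponential case, ADC is recoverable as the negative slope of ln(S) against b. The sketch below uses illustrative parameter values, not fitted values from the study:

```python
import math

# Forward signal models (b in s/mm2); parameter values are illustrative.
def mono_exp(b, S0, ADC):
    return S0 * math.exp(-b * ADC)

def stretched_exp(b, S0, DDC, alpha):
    return S0 * math.exp(-((b * DDC) ** alpha))

def kurtosis_model(b, S0, Dapp, Kapp):
    return S0 * math.exp(-b * Dapp + (b ** 2) * (Dapp ** 2) * Kapp / 6.0)

# For the mono-exponential model, ADC is the negative slope of ln(S)
# versus b, recoverable by simple linear regression:
bvals = [0.0, 500.0, 1000.0, 2000.0]
ADC_true = 0.8e-3
logs = [math.log(mono_exp(b, 100.0, ADC_true)) for b in bvals]
n = len(bvals)
mb, ml = sum(bvals) / n, sum(logs) / n
slope = sum((b - mb) * (l - ml) for b, l in zip(bvals, logs)) / \
        sum((b - mb) ** 2 for b in bvals)
assert abs(-slope - ADC_true) < 1e-9
```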

  6. Are infant mortality rate declines exponential? The general pattern of 20th century infant mortality rate decline

    PubMed Central

    Bishai, David; Opuni, Marjorie

    2009-01-01

    Background Time trends in infant mortality for the 20th century show a curvilinear pattern that most demographers have assumed to be approximately exponential. Virtually all cross-country comparisons and time series analyses of infant mortality have studied the logarithm of infant mortality to account for the curvilinear time trend. However, there is no evidence that the log transform is the best fit for infant mortality time trends. Methods We use maximum likelihood methods to determine the best transformation to fit time trends in infant mortality reduction in the 20th century and to assess the importance of the proper transformation in identifying the relationship between infant mortality and gross domestic product (GDP) per capita. We apply the Box-Cox transform to infant mortality rate (IMR) time series from 18 countries to identify the best fitting value of λ for each country and for the pooled sample. For each country, we test the value of λ against the null that λ = 0 (logarithmic model) and against the null that λ = 1 (linear model). We then demonstrate the importance of selecting the proper transformation by comparing regressions of ln(IMR) on same-year GDP per capita against Box-Cox transformed models. Results Based on chi-squared test statistics, infant mortality decline is best described as an exponential decline only for the United States. For the remaining 17 countries we study, IMR decline is best modelled neither as a logarithmic nor as a linear process. Imposing a logarithmic transform on IMR can lead to bias in fitting the relationship between IMR and GDP per capita. Conclusion The assumption that IMR declines are exponential is enshrined in the Preston curve and in nearly all cross-country as well as time series analyses of IMR data since Preston's 1975 paper, but this assumption is seldom correct. Statistical analyses of IMR trends should assess the robustness of findings to transformations other than the log transform. PMID:19698144
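
    The Box-Cox procedure can be sketched as a grid search over λ maximizing the normal profile log-likelihood of the transformed series; the linear detrend in time used here is a simplification of the paper's setup, and the synthetic series is illustrative. For an exponentially declining series, the log transform (λ = 0) should win:

```python
import math

def boxcox(y, lam):
    return [math.log(v) if lam == 0.0 else (v ** lam - 1.0) / lam for v in y]

def profile_loglik(y, lam):
    """Normal profile log-likelihood of the Box-Cox transformed series
    after removing a linear time trend (a simplification of the paper)."""
    z = boxcox(y, lam)
    n = len(z)
    t = list(range(n))
    mt, mz = sum(t) / n, sum(z) / n
    b = sum((ti - mt) * (zi - mz) for ti, zi in zip(t, z)) / \
        sum((ti - mt) ** 2 for ti in t)
    a = mz - b * mt
    sigma2 = sum((zi - (a + b * ti)) ** 2 for ti, zi in zip(t, z)) / n
    jacobian = (lam - 1.0) * sum(math.log(v) for v in y)
    return -0.5 * n * math.log(sigma2) + jacobian

# Synthetic IMR-like series: exponential decline with mild noise.
y = [math.exp(5.0 - 0.1 * t) * (1.01 if t % 2 == 0 else 0.99)
     for t in range(50)]
grid = [-1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]
best = max(grid, key=lambda lam: profile_loglik(y, lam))
assert best == 0.0  # the log transform wins for exponential decline
```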

  7. Mathematical modelling of the growth of human fetus anatomical structures.

    PubMed

    Dudek, Krzysztof; Kędzia, Wojciech; Kędzia, Emilia; Kędzia, Alicja; Derkowski, Wojciech

    2017-09-01

    The goal of this study was to present a procedure enabling mathematical analysis of the increase in linear sizes of human anatomical structures, to estimate mathematical model parameters and to evaluate their adequacy. Section material consisted of 67 foetuses (rectus abdominis muscle) and 75 foetuses (biceps femoris muscle). The following methods were incorporated into the study: preparation and anthropologic methods, digital image acquisition, Image J computer system measurements, and statistical analysis. We used an anthropologic method based on age determination with the use of crown-rump length (CRL, V-TUB) by Scammon and Calkins. The choice of mathematical function should be based on the actual course of the curve describing growth of the anatomical structure's linear size y over subsequent weeks t of pregnancy. Size changes can be described with a segmental-linear model or a one-function model with accuracy adequate for clinical purposes. The interdependence of size and age is described by many functions. However, the following functions are most often considered: linear, polynomial, spline, logarithmic, power, exponential, power-exponential, log-logistic I and II, Gompertz's I and II and von Bertalanffy's function. With the use of the procedures described above, mathematical model parameters were assessed for V-PL (total body length) and CRL body length increases, for rectus abdominis total length h and its segments hI, hII, hIII, hIV, as well as for biceps femoris length and width of the long head (LHL and LHW) and of the short head (SHL and SHW). The best adjustment to the measurement results was observed for the exponential and Gompertz models.
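
    Of the candidate functions listed, the Gompertz curve is worth writing out, since it (with the exponential model) gave the best fit. A minimal sketch with illustrative parameter values, not the study's fitted values:

```python
import math

def gompertz(t, A, b, c):
    """Gompertz growth curve y(t) = A * exp(-b * exp(-c * t))."""
    return A * math.exp(-b * math.exp(-c * t))

A, b, c = 50.0, 4.0, 0.2  # asymptote, displacement, growth rate (illustrative)
weeks = range(8, 41)      # gestational weeks, as in CRL-based age staging
sizes = [gompertz(t, A, b, c) for t in weeks]

# The curve rises monotonically toward, but never reaches, the asymptote:
assert all(s1 < s2 for s1, s2 in zip(sizes, sizes[1:]))
assert sizes[-1] < A
```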

  8. The need for data science in epidemic modelling. Comment on: "Mathematical models to characterize early epidemic growth: A review" by Gerardo Chowell et al.

    NASA Astrophysics Data System (ADS)

    Danon, Leon; Brooks-Pollock, Ellen

    2016-09-01

    In their review, Chowell et al. consider the ability of mathematical models to predict early epidemic growth [1]. In particular, they question the central prediction of classical differential equation models that the number of cases grows exponentially during the early stages of an epidemic. Using examples including HIV and Ebola, they argue that classical models fail to capture key qualitative features of early growth and describe a selection of models that do capture non-exponential epidemic growth. An implication of this failure is that predictions may be inaccurate and unusable, highlighting the need for care when embarking upon modelling using classical methodology. There remains a lack of understanding of the mechanisms driving many observed epidemic patterns; we argue that data science should form a fundamental component of epidemic modelling, providing a rigorous methodology for data-driven approaches, rather than trying to enforce established frameworks. The need for refinement of classical models provides a strong argument for the use of data science, to identify qualitative characteristics and pinpoint the mechanisms responsible for the observed epidemic patterns.
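    One class of models of the kind referred to here is the generalized-growth model, C'(t) = r·C(t)^p, in which p = 1 recovers classical exponential growth while 0 < p < 1 yields sub-exponential (polynomial) early growth. The sketch below uses its closed-form solution; the parameter values are illustrative, and this is an interpretation of the comment, not code from it.

```python
import numpy as np

def generalized_growth(c0, r, p, t):
    """Cumulative case count solving C'(t) = r * C(t)**p.
    p = 1 is exponential growth; p < 1 gives sub-exponential
    (polynomial) early growth."""
    t = np.asarray(t, dtype=float)
    if np.isclose(p, 1.0):
        return c0 * np.exp(r * t)
    # Closed-form solution of the ODE for p != 1.
    return (r * (1.0 - p) * t + c0 ** (1.0 - p)) ** (1.0 / (1.0 - p))

t = np.linspace(0.0, 30.0, 31)
exp_growth = generalized_growth(1.0, 0.3, 1.0, t)   # classical exponential
sub_growth = generalized_growth(1.0, 0.3, 0.6, t)   # sub-exponential
print(f"day 30: exponential = {exp_growth[-1]:.0f}, "
      f"sub-exponential = {sub_growth[-1]:.0f}")
```

    With p = 0.6 the cumulative count grows like a polynomial of degree 1/(1 − p) = 2.5, which stays far below the exponential trajectory — the qualitative discrepancy the review emphasises.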

  9. Exponential Sum-Fitting of Dwell-Time Distributions without Specifying Starting Parameters

    PubMed Central

    Landowne, David; Yuan, Bin; Magleby, Karl L.

    2013-01-01

    Fitting dwell-time distributions with sums of exponentials is widely used to characterize histograms of open- and closed-interval durations recorded from single ion channels, as well as for other physical phenomena. However, it can be difficult to identify the contributing exponential components. Here we extend previous methods of exponential sum-fitting to present a maximum-likelihood approach that consistently detects all significant exponentials without the need for user-specified starting parameters. Instead of searching for exponentials, the fitting starts with a very large number of initial exponentials with logarithmically spaced time constants, so that none are missed. Maximum-likelihood fitting then determines the areas of all the initial exponentials keeping the time constants fixed. In an iterative manner, with refitting after each step, the analysis then removes exponentials with negligible area and combines closely spaced adjacent exponentials, until only those exponentials that make significant contributions to the dwell-time distribution remain. There is no limit on the number of significant exponentials and no starting parameters need be specified. We demonstrate fully automated detection for both experimental and simulated data, as well as for classical exponential-sum-fitting problems. PMID:23746510
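    The approach can be illustrated with a simplified sketch: draw dwell times from a two-component mixture, seed the fit with many logarithmically spaced fixed time constants, estimate the areas by maximum likelihood (here via EM updates on the mixture weights), and discard components with negligible area. The merging of closely spaced adjacent components described above is omitted, and all numbers are stand-ins, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated dwell times from a 2-component exponential mixture
# (time constants 1 and 10, equal areas).
n = 5000
tau_true = np.where(rng.random(n) < 0.5, 1.0, 10.0)
dwells = rng.exponential(tau_true)

# Start with many logarithmically spaced time constants so none are missed.
taus = np.logspace(-1, 2, 20)
w = np.full(taus.size, 1.0 / taus.size)

# Maximum-likelihood estimation of the areas with time constants fixed:
# each EM iteration provably increases the likelihood.
dens = np.exp(-dwells[:, None] / taus) / taus        # f_k(t_i), fixed
for _ in range(2000):
    resp = w * dens
    resp /= resp.sum(axis=1, keepdims=True)          # responsibilities
    w = resp.mean(axis=0)

# Remove exponentials with negligible area.
keep = w > 0.01
print("surviving time constants:", np.round(taus[keep], 2))
print("areas:", np.round(w[keep], 3))
```

    After convergence, essentially all of the area sits on grid points adjacent to the true time constants, with no starting parameters supplied by the user.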

  10. Classical Mathematical Models for Description and Prediction of Experimental Tumor Growth

    PubMed Central

    Benzekry, Sébastien; Lamont, Clare; Beheshti, Afshin; Tracz, Amanda; Ebos, John M. L.; Hlatky, Lynn; Hahnfeldt, Philip

    2014-01-01

    Despite internal complexity, tumor growth kinetics follow relatively simple laws that can be expressed as mathematical models. To explore this further, quantitative analysis of the most classical of these were performed. The models were assessed against data from two in vivo experimental systems: an ectopic syngeneic tumor (Lewis lung carcinoma) and an orthotopically xenografted human breast carcinoma. The goals were threefold: 1) to determine a statistical model for description of the measurement error, 2) to establish the descriptive power of each model, using several goodness-of-fit metrics and a study of parametric identifiability, and 3) to assess the models' ability to forecast future tumor growth. The models included in the study comprised the exponential, exponential-linear, power law, Gompertz, logistic, generalized logistic, von Bertalanffy and a model with dynamic carrying capacity. For the breast data, the dynamics were best captured by the Gompertz and exponential-linear models. The latter also exhibited the highest predictive power, with excellent prediction scores (≥80%) extending out as far as 12 days in the future. For the lung data, the Gompertz and power law models provided the most parsimonious and parametrically identifiable description. However, not one of the models was able to achieve a substantial prediction rate (≥70%) beyond the next day data point. In this context, adjunction of a priori information on the parameter distribution led to considerable improvement. For instance, forecast success rates went from 14.9% to 62.7% when using the power law model to predict the full future tumor growth curves, using just three data points. These results not only have important implications for biological theories of tumor growth and the use of mathematical modeling in preclinical anti-cancer drug investigations, but also may assist in defining how mathematical models could serve as potential prognostic tools in the clinic. PMID:25167199
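    As an illustration of this kind of model comparison (a sketch, not the authors' pipeline), the code below fits the exponential and Gompertz models to synthetic volume data with SciPy and ranks them by AIC; the Gompertz parameterization, noise model, and all data values are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def exponential(t, v0, a):
    return v0 * np.exp(a * t)

def gompertz(t, v0, a, b):
    # V(t) = V0 * exp((a/b) * (1 - exp(-b*t))): growth rate decays over time.
    return v0 * np.exp((a / b) * (1.0 - np.exp(-b * t)))

# Synthetic tumor volumes generated from a Gompertz curve with
# multiplicative noise -- a stand-in for the experimental measurements.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 20.0, 21)
v = gompertz(t, 1.0, 1.0, 0.15) * np.exp(rng.normal(0.0, 0.05, t.size))

def fit_aic(model, p0):
    """Least-squares fit plus AIC = n*log(RSS/n) + 2*(k+1)."""
    popt, _ = curve_fit(model, t, v, p0=p0, maxfev=20000)
    rss = float(np.sum((v - model(t, *popt)) ** 2))
    return t.size * np.log(rss / t.size) + 2.0 * (len(popt) + 1), popt

aic_exp, _ = fit_aic(exponential, [1.0, 0.2])
aic_gom, p_gom = fit_aic(gompertz, [1.0, 0.5, 0.1])
print(f"AIC exponential: {aic_exp:.1f}  AIC Gompertz: {aic_gom:.1f}")
```

    Because the data saturate while the exponential model cannot, the Gompertz fit wins decisively on AIC, mirroring the descriptive-power comparison in the study.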

  11. Classical mathematical models for description and prediction of experimental tumor growth.

    PubMed

    Benzekry, Sébastien; Lamont, Clare; Beheshti, Afshin; Tracz, Amanda; Ebos, John M L; Hlatky, Lynn; Hahnfeldt, Philip

    2014-08-01

    Despite internal complexity, tumor growth kinetics follow relatively simple laws that can be expressed as mathematical models. To explore this further, quantitative analysis of the most classical of these were performed. The models were assessed against data from two in vivo experimental systems: an ectopic syngeneic tumor (Lewis lung carcinoma) and an orthotopically xenografted human breast carcinoma. The goals were threefold: 1) to determine a statistical model for description of the measurement error, 2) to establish the descriptive power of each model, using several goodness-of-fit metrics and a study of parametric identifiability, and 3) to assess the models' ability to forecast future tumor growth. The models included in the study comprised the exponential, exponential-linear, power law, Gompertz, logistic, generalized logistic, von Bertalanffy and a model with dynamic carrying capacity. For the breast data, the dynamics were best captured by the Gompertz and exponential-linear models. The latter also exhibited the highest predictive power, with excellent prediction scores (≥80%) extending out as far as 12 days in the future. For the lung data, the Gompertz and power law models provided the most parsimonious and parametrically identifiable description. However, not one of the models was able to achieve a substantial prediction rate (≥70%) beyond the next day data point. In this context, adjunction of a priori information on the parameter distribution led to considerable improvement. For instance, forecast success rates went from 14.9% to 62.7% when using the power law model to predict the full future tumor growth curves, using just three data points. These results not only have important implications for biological theories of tumor growth and the use of mathematical modeling in preclinical anti-cancer drug investigations, but also may assist in defining how mathematical models could serve as potential prognostic tools in the clinic.

  12. Numerical Study of Cattaneo-Christov Heat Flux Model for Viscoelastic Flow Due to an Exponentially Stretching Surface.

    PubMed

    Ahmad Khan, Junaid; Mustafa, M; Hayat, T; Alsaedi, A

    2015-01-01

    This work deals with the flow and heat transfer in an upper-convected Maxwell fluid above an exponentially stretching surface. The Cattaneo-Christov heat flux model is employed for the formulation of the energy equation. This model can predict the effects of thermal relaxation time on the boundary layer. A similarity approach is utilized to normalize the governing boundary layer equations. Local similarity solutions are achieved by a shooting approach together with a fourth-fifth-order Runge-Kutta integration technique and Newton's method. Our computations reveal that the fluid temperature has an inverse relationship with the thermal relaxation time. Further, the fluid velocity is a decreasing function of the fluid relaxation time. A comparison of Fourier's law and the Cattaneo-Christov law is also presented. Such results, even in the case of a Newtonian fluid, are not yet available in the literature.

  13. Rainbow net analysis of VAXcluster system availability

    NASA Technical Reports Server (NTRS)

    Johnson, Allen M., Jr.; Schoenfelder, Michael A.

    1991-01-01

    A system modeling technique, Rainbow Nets, is used to evaluate the availability and mean-time-to-interrupt of the VAXcluster. These results are compared to the exact analytic results showing that reasonable accuracy is achieved through simulation. The complexity of the Rainbow Net does not increase as the number of processors increases, but remains constant, unlike a Markov model which expands exponentially. The constancy is achieved by using tokens with identity attributes (items) that can have additional attributes associated with them (features) which can exist in multiple states. The time to perform the simulation increases, but this is a polynomial increase rather than exponential. There is no restriction on distributions used for transition firing times, allowing real situations to be modeled more accurately by choosing the distribution which best fits the system performance and eliminating the need for simplifying assumptions.

  14. The Kepler Light Curve of V344 LYR: Constraining the Thermal-Viscous Limit Cycle Instability

    NASA Technical Reports Server (NTRS)

    Cannizzo, J. K.; Still, M. D.; Howell, S. B.; Wood, M. A.; Smale, A. P.

    2010-01-01

    We present time-dependent modeling based on the accretion disk limit cycle model for a 90 d light curve of the short-period SU UMa-type dwarf nova V344 Lyr taken by Kepler. The unprecedented precision and cadence (1 minute) far surpass those generally available for long-term light curves. The data encompass a super outburst, preceded by three normal (i.e., short) outbursts and followed by two normal outbursts. The main decay of the super outburst is nearly perfectly exponential, decaying at a rate of approx. 12 d/mag, while the much more rapid decays of the normal outbursts exhibit a faster-than-exponential shape. We show that the standard limit cycle model can account for the light curve, without the need for either the thermal-tidal instability or enhanced mass transfer.
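    Since a magnitude is a logarithmic flux measure, a linear decay in magnitudes corresponds to an exponential decay in flux; the quoted 12 d/mag rate therefore translates to an e-folding time of about 13 d. A quick check (the conversion is ours, not taken from the paper):

```python
import math

# A decay of 1 mag per 12 d is exponential in flux, since
# F is proportional to 10**(-dm / 2.5). The e-folding time of the flux
# is then rate / (0.4 * ln 10).
rate_d_per_mag = 12.0
tau_efold = rate_d_per_mag / (0.4 * math.log(10.0))
print(f"e-folding time of the flux: {tau_efold:.1f} d")
```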

  15. Pattern analysis of total item score and item response of the Kessler Screening Scale for Psychological Distress (K6) in a nationally representative sample of US adults

    PubMed Central

    Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Yutaka, Ono; Furukawa, Toshiaki A.

    2017-01-01

    Background Several recent studies have shown that total scores on depressive symptom measures in a general population approximate an exponential pattern except for the lower end of the distribution. Furthermore, we confirmed that the exponential pattern is present for the individual item responses on the Center for Epidemiologic Studies Depression Scale (CES-D). To confirm the reproducibility of such findings, we investigated the total score distribution and item responses of the Kessler Screening Scale for Psychological Distress (K6) in a nationally representative study. Methods Data were drawn from the National Survey of Midlife Development in the United States (MIDUS), which comprises four subsamples: (1) a national random digit dialing (RDD) sample, (2) oversamples from five metropolitan areas, (3) siblings of individuals from the RDD sample, and (4) a national RDD sample of twin pairs. K6 items are scored using a 5-point scale: “none of the time,” “a little of the time,” “some of the time,” “most of the time,” and “all of the time.” The pattern of total score distribution and item responses were analyzed using graphical analysis and exponential regression model. Results The total score distributions of the four subsamples exhibited an exponential pattern with similar rate parameters. The item responses of the K6 approximated a linear pattern from “a little of the time” to “all of the time” on log-normal scales, while “none of the time” response was not related to this exponential pattern. Discussion The total score distribution and item responses of the K6 showed exponential patterns, consistent with other depressive symptom scales. PMID:28289560
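    The graphical test described here — bin counts falling on a straight line on a log scale — can be sketched on simulated scores. The sample size, rate parameter, and random seed below are assumptions, not MIDUS values; the clipped top bin (score 24) is excluded from the regression because clipping accumulates the tail there.

```python
import numpy as np

# Simulated K6-like total scores (0-24) drawn from a discretized
# exponential distribution -- a stand-in for the survey data.
rng = np.random.default_rng(3)
rate = 0.25
scores = np.minimum(np.floor(rng.exponential(1.0 / rate, 20000)).astype(int), 24)

# Exponential regression: on a log scale, bin counts should fall on a line.
ks = np.arange(24)                                   # drop the clipped top bin
counts = np.bincount(scores, minlength=25)[:24].astype(float)
slope, intercept = np.polyfit(ks, np.log(counts), 1)
print(f"estimated rate parameter: {-slope:.3f}")     # should be near 0.25
```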

  16. Isolation and characterization of large spectrum and multiple bacteriocin-producing Enterococcus faecium strain from raw bovine milk.

    PubMed

    Gaaloul, N; ben Braiek, O; Hani, K; Volski, A; Chikindas, M L; Ghrairi, T

    2015-02-01

    To assess the antimicrobial properties of lactic acid bacteria from Tunisian raw bovine milk. A bacteriocin-producing Enterococcus faecium strain was isolated from raw cow milk with activity against Gram-positive and Gram-negative bacteria. Antimicrobial substances produced by this strain were sensitive to proteolytic enzymes and were thermostable and resistant to a broad range of pH (2-10). Mode of action of antimicrobial substances was determined as bactericidal. Maximum activity was reached at the end of the exponential growth phase when checked against Listeria ivanovii BUG 496 (2366.62 AU ml(-1)). However, maximum antimicrobial activity against Pseudomonas aeruginosa 28753 was recorded at the beginning of the exponential growth phase. Enterococcus faecium GGN7 was characterized as free from virulence factors and was susceptible to tested antibiotics. PCR analysis of the micro-organism's genome revealed the presence of genes coding for enterocins A and B. Mass spectrometry analysis of RP-HPLC active fractions showed molecular masses corresponding to enterocins A (4835.77 Da) and B (5471.56 Da), and a peptide with a molecular mass of 3215.5 Da active only against Gram-negative indicator strains. The latter was unique in the databases. Enterococcus faecium GGN7 produces three bacteriocins with different inhibitory spectra. Based on its antimicrobial properties and safety, Ent. faecium GGN7 is potentially useful for food biopreservation. The results suggest the bacteriocins from GGN7 strain could be useful for food biopreservation. © 2014 The Society for Applied Microbiology.

  17. Kaluza-Klein cosmological model in f(R, T) gravity with Λ(T)

    NASA Astrophysics Data System (ADS)

    Sahoo, P. K.; Mishra, B.; Tripathy, S. K.

    2016-04-01

    A class of Kaluza-Klein cosmological models in $f(R,T)$ theory of gravity has been investigated. In this work, we have considered the functional $f(R,T)$ to be of the form $f(R,T)=f(R)+f(T)$ with $f(R)=\lambda R$ and $f(T)=\lambda T$. Such a choice of the functional $f(R,T)$ leads to an evolving effective cosmological constant $\Lambda$ which depends on the stress-energy tensor. The source of the matter field is taken to be a perfect cosmic fluid. The exact solutions of the field equations are obtained by considering a constant deceleration parameter, which leads to two different aspects of the volumetric expansion, namely a power-law and an exponential volumetric expansion. Keeping an eye on the accelerating nature of the universe in the present epoch, the dynamics and physical behaviour of the models have been discussed. From the statefinder diagnostic pair we found that the model with exponential volumetric expansion behaves more like a $\Lambda$CDM model.

  18. Development of a voltage-dependent current noise algorithm for conductance-based stochastic modelling of auditory nerve fibres.

    PubMed

    Badenhorst, Werner; Hanekom, Tania; Hanekom, Johan J

    2016-12-01

    This study presents the development of an alternative noise current term and a novel voltage-dependent current noise algorithm for conductance-based stochastic auditory nerve fibre (ANF) models. ANFs are known to have significant variance in threshold stimulus, which affects temporal characteristics such as latency. This variance is primarily caused by the stochastic behaviour, or microscopic fluctuations, of the voltage-dependent sodium channels at the node of Ranvier, the intensity of which is a function of membrane voltage. Though easy to implement and low in computational cost, existing current noise models have two deficiencies: they are independent of membrane voltage, and they are unable to inherently determine the noise intensity required to produce in vivo measured discharge probability functions. The proposed algorithm overcomes these deficiencies while maintaining the low computational cost and ease of implementation of current noise models compared to other conductance- and Markovian-based stochastic models. The algorithm is applied to a Hodgkin-Huxley-based compartmental cat ANF model and validated via comparison of the threshold probability and latency distributions to measured cat ANF data. Simulation results show the algorithm's adherence to in vivo stochastic fibre characteristics, such as an exponential relationship between the membrane noise and transmembrane voltage, a negative linear relationship between the log of the relative spread of the discharge probability and the log of the fibre diameter, and a decrease in latency with an increase in stimulus intensity.
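    A minimal sketch of the key ingredient — a noise current whose standard deviation grows exponentially with membrane voltage — is given below. The functional form and every parameter value are hypothetical placeholders chosen for illustration, not those of the published algorithm.

```python
import numpy as np

def noise_std(v_m, sigma0=0.05, v_rest=-70.0, v_scale=15.0):
    """Standard deviation of the noise current as a function of membrane
    voltage v_m (mV). All parameter values are hypothetical placeholders."""
    return sigma0 * np.exp((v_m - v_rest) / v_scale)

rng = np.random.default_rng(4)
dt = 0.001  # ms; each Euler step's noise sample scales with sqrt(dt)
for v_m in (-70.0, -55.0, -40.0):
    sample = rng.normal(0.0, noise_std(v_m) * np.sqrt(dt))
    print(f"V_m = {v_m:5.1f} mV  noise std = {noise_std(v_m):.3f}")
```

    Unlike a fixed-intensity current noise term, the sample drawn at each integration step here depends on the instantaneous membrane voltage, which is the qualitative property the study validates against cat ANF data.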

  19. New results for global exponential synchronization in neural networks via functional differential inclusions.

    PubMed

    Wang, Dongshu; Huang, Lihong; Tang, Longkun

    2015-08-01

    This paper is concerned with the synchronization dynamics of a class of delayed neural networks with discontinuous neuron activations. Continuous and discontinuous state feedback controllers are designed such that the neural network model can realize exponential complete synchronization, in view of functional differential inclusion theory, the Lyapunov functional method and inequality techniques. The new results proposed here are very easy to verify and are also applicable to neural networks with continuous activations. Finally, some numerical examples show the applicability and effectiveness of our main results.

  20. Time since death and decay rate constants of Norway spruce and European larch deadwood in subalpine forests determined using dendrochronology and radiocarbon dating

    NASA Astrophysics Data System (ADS)

    Petrillo, M.; Cherubini, P.; Fravolini, G.; Ascher, J.; Schärer, M.; Synal, H.-A.; Bertoldi, D.; Camin, F.; Larcher, R.; Egli, M.

    2015-09-01

    Due to the large size and highly heterogeneous spatial distribution of deadwood, the time scales involved in the coarse woody debris (CWD) decay of Picea abies (L.) Karst. and Larix decidua Mill. in Alpine forests have been poorly investigated and are largely unknown. We investigated the CWD decay dynamics in an Alpine valley in Italy using the five-decay class system commonly employed for forest surveys, based on a macromorphological and visual assessment. For decay classes 1 to 3, most of the dendrochronological samples were cross-dated to assess the time that had elapsed since tree death, but for decay classes 4 and 5 (poorly preserved tree rings) and some samples without enough tree rings, radiocarbon dating was used. In addition, density, cellulose and lignin data were measured for the dated CWD. The decay rate constants for spruce and larch were estimated on the basis of the density loss using a single negative exponential model. In decay classes 1 to 3, the ages of the CWD were similar, varying between 1 and 54 years for spruce and 3 and 40 years for larch, with no significant differences between the classes; classes 1-3 are therefore not indicative of deadwood age. We found, however, distinct tree species-specific differences in decay classes 4 and 5, with larch CWD reaching an average age of 210 years in class 5 and spruce only 77 years. The mean CWD decay rate constants were 0.012 to 0.018 yr-1 for spruce and 0.005 to 0.012 yr-1 for larch. Half-lives of cellulose and lignin (using a multiple-exponential model) could be derived on the basis of the ages of the CWD. The half-lives for cellulose were 21 yr for spruce and 50 yr for larch; the half-life of lignin is considerably higher and may be more than 100 years in larch CWD.
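    The single negative exponential model used for the decay rate constants can be sketched directly: for a dated piece of CWD with relative density loss ρ(t)/ρ0, the rate constant is k = −ln(ρ(t)/ρ0)/t and the half-life is ln 2 / k. The density values below are illustrative, chosen to land in the paper's reported spruce range, not measured data.

```python
import numpy as np

# Single negative exponential model: rho(t) = rho0 * exp(-k * t).
def decay_constant(rho_t, rho0, age_yr):
    """Rate constant k (yr^-1) from a dated piece's density loss."""
    return -np.log(rho_t / rho0) / age_yr

# Illustrative spruce piece: 50% density loss over 50 years of decay.
k_spruce = decay_constant(rho_t=0.21, rho0=0.42, age_yr=50.0)
half_life = np.log(2.0) / k_spruce
print(f"k = {k_spruce:.4f} yr^-1, half-life = {half_life:.0f} yr")
```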
