Black, Dolores Archuleta; Robinson, William H.; Wilcox, Ian Zachary; ...
2015-08-07
Single event effects (SEE) are a reliability concern for modern microelectronics. Bit corruptions can be caused by single event upsets (SEUs) in the storage cells or by sampling single event transients (SETs) from a logic path. Accurate prediction of soft error susceptibility from SETs therefore requires good models to convert collected charge into compact descriptions of the current injection process. This paper describes a simple, yet effective, method to model the current waveform resulting from a charge collection event for SET circuit simulations. The model uses two double-exponential current sources in parallel, and the results illustrate why a conventional model based on one double-exponential source can be incomplete. Furthermore, a small set of logic cells with varying input conditions, drive strengths, and output loadings are simulated to extract the parameters for the dual double-exponential current sources. As a result, the parameters are based upon both the node capacitance and the restoring current (i.e., drive strength) of the logic cell.
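As a rough illustration of the waveform family discussed above, the classic double-exponential SET current pulse and the paper's dual (parallel) variant can be sketched as follows; all parameter values here are hypothetical and illustrative, not extracted from the paper:

```python
import math

def double_exp(t, i0, tau_f, tau_r):
    """Classic double-exponential SET current pulse: rises with time
    constant tau_r and falls with tau_f (tau_f > tau_r)."""
    return i0 * (math.exp(-t / tau_f) - math.exp(-t / tau_r))

def dual_double_exp(t, p1, p2):
    """Dual model: two double-exponential current sources in parallel."""
    return double_exp(t, *p1) + double_exp(t, *p2)

# Hypothetical parameters: a fast, high-amplitude prompt component plus
# a slower, lower-amplitude diffusion tail (amplitudes in A, times in s).
prompt = (1.2e-3, 150e-12, 10e-12)
tail = (0.3e-3, 600e-12, 50e-12)
print(dual_double_exp(200e-12, prompt, tail))
```

The point of the dual form is that a single double-exponential cannot capture both the prompt and tail components at once.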
Universality in stochastic exponential growth.
Iyer-Biswas, Srividya; Crooks, Gavin E; Scherer, Norbert F; Dinner, Aaron R
2014-07-11
Recent imaging data for single bacterial cells reveal that their mean sizes grow exponentially in time and that their size distributions collapse to a single curve when rescaled by their means. An analogous result holds for the division-time distributions. A model is needed to delineate the minimal requirements for these scaling behaviors. We formulate a microscopic theory of stochastic exponential growth as a Master Equation that accounts for these observations, in contrast to existing quantitative models of stochastic exponential growth (e.g., the Black-Scholes equation or geometric Brownian motion). Our model, the stochastic Hinshelwood cycle (SHC), is an autocatalytic reaction cycle in which each molecular species catalyzes the production of the next. By finding exact analytical solutions to the SHC and the corresponding first passage time problem, we uncover universal signatures of fluctuations in exponential growth and division. The model makes minimal assumptions, and we describe how more complex reaction networks can reduce to such a cycle. We thus expect similar scalings to be discovered in stochastic processes resulting in exponential growth that appear in diverse contexts such as cosmology, finance, technology, and population growth.
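The autocatalytic cycle at the heart of the SHC can be sketched with a minimal two-species Gillespie simulation, in which each species catalyzes production of the other; the rate constants and initial counts below are illustrative only:

```python
import random

def shc_gillespie(x1, x2, k1, k2, t_end, rng):
    """Minimal stochastic Hinshelwood cycle with two species:
    X1 -> X1 + X2 at rate k1*x1,  X2 -> X2 + X1 at rate k2*x2."""
    t = 0.0
    while True:
        a1, a2 = k1 * x1, k2 * x2
        t += rng.expovariate(a1 + a2)  # time to next reaction
        if t >= t_end:
            break
        if rng.random() * (a1 + a2) < a1:
            x2 += 1  # X1 catalyzed a birth of X2
        else:
            x1 += 1  # X2 catalyzed a birth of X1
    return x1, x2

rng = random.Random(1)
sizes = [sum(shc_gillespie(5, 5, 1.0, 1.0, 3.0, rng)) for _ in range(200)]
mean = sum(sizes) / len(sizes)
# For k1 = k2 = k the long-time growth rate is k, so the mean total
# size should be roughly 10 * exp(3), i.e. about 200.
print(mean)
```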
Single-arm phase II trial design under parametric cure models.
Wu, Jianrong
2015-01-01
The current practice of designing single-arm phase II survival trials is limited under the exponential model. Trial design under the exponential model may not be appropriate when a portion of patients are cured. There is no literature available for designing single-arm phase II trials under the parametric cure model. In this paper, a test statistic is proposed, and a sample size formula is derived for designing single-arm phase II trials under a class of parametric cure models. Extensive simulations showed that the proposed test and sample size formula perform very well under different scenarios. Copyright © 2015 John Wiley & Sons, Ltd.
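A minimal sketch of the cure-model idea the paper builds on: under a mixture cure model, survival plateaus at the cured fraction instead of decaying to zero as the exponential model assumes (parameter values illustrative):

```python
import math

def mixture_cure_survival(t, cure_frac, rate):
    """Mixture cure model: a fraction of patients is cured (never fails);
    the remainder follow an exponential survival curve."""
    return cure_frac + (1.0 - cure_frac) * math.exp(-rate * t)

# With 30% cured, survival plateaus at 0.3 rather than decaying to zero,
# which is why a design based on a plain exponential model can mislead.
for t in (0.0, 1.0, 5.0, 50.0):
    print(t, round(mixture_cure_survival(t, 0.3, 0.8), 4))
```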
1996-09-16
The approaches are: adaptive filtering; single exponential smoothing (Brown, 1963); linear exponential smoothing, i.e., Holt's two-parameter approach (Holt et al., 1960); Winters' three-parameter method (Winters, 1960); and the Box-Jenkins methodology, i.e., ARIMA modeling (Box and Jenkins, 1976). However, there are two crucial disadvantages: the most important point in ARIMA modeling is model identification. As shown in
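Single exponential smoothing, the simplest of the approaches listed above, can be sketched in a few lines; the series and smoothing constant are illustrative:

```python
def single_exponential_smoothing(series, alpha):
    """Brown's single exponential smoothing: each smoothed value is a
    weighted average of the newest observation (weight alpha) and the
    previous smoothed value (weight 1 - alpha)."""
    s = series[0]
    out = [s]
    for x in series[1:]:
        s = alpha * x + (1.0 - alpha) * s
        out.append(s)
    return out

demand = [10, 12, 13, 12, 15, 16, 18]
print(single_exponential_smoothing(demand, 0.3))
```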
Multiserver Queueing Model subject to Single Exponential Vacation
NASA Astrophysics Data System (ADS)
Vijayashree, K. V.; Janani, B.
2018-04-01
A multi-server queueing model subject to a single exponential vacation is considered. Arrivals join the queue according to a Poisson process, and service takes place according to an exponential distribution. Whenever the system becomes empty, all the servers go on vacation and return after a fixed interval of time. The servers then start providing service if there are waiting customers; otherwise, they remain idle until the next busy period begins. The vacation times are also assumed to be exponentially distributed. In this paper, the stationary and transient probabilities for the number of customers during the idle and functional states of the servers are obtained explicitly. Numerical illustrations are also added to visualize the effect of various parameters.
A mechanical model of bacteriophage DNA ejection
NASA Astrophysics Data System (ADS)
Arun, Rahul; Ghosal, Sandip
2017-08-01
Single molecule experiments on bacteriophages show an exponential scaling for the dependence of mobility on the length of DNA within the capsid. It has been suggested that this could be due to the "capstan mechanism": the exponential amplification of friction forces that results when a rope is wound around a cylinder, as in a ship's capstan. Here we describe a desktop experiment that illustrates the effect. Though our model phage is a million times larger, it exhibits the same scaling observed in single molecule experiments.
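The capstan effect invoked above follows directly from the capstan equation, in which the holding force decays exponentially with wrap angle; the friction coefficient and load below are illustrative:

```python
import math

def capstan_holding_force(load, mu, theta):
    """Capstan equation: the force needed to hold a load decays
    exponentially with wrap angle theta (radians) and friction
    coefficient mu: F_hold = F_load * exp(-mu * theta)."""
    return load * math.exp(-mu * theta)

# Illustrative numbers: 100 N load, mu = 0.25, three full turns.
print(capstan_holding_force(100.0, 0.25, 3 * 2 * math.pi))
```

Three wraps reduce the required holding force by roughly two orders of magnitude, which is the exponential amplification referred to in the abstract.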
Linear prediction and single-channel recording.
Carter, A A; Oswald, R E
1995-08-01
The measurement of individual single-channel events arising from the gating of ion channels provides a detailed data set from which the kinetic mechanism of a channel can be deduced. In many cases, the pattern of dwells in the open and closed states is very complex, and the kinetic mechanism and parameters are not easily determined. Assuming a Markov model for channel kinetics, the probability density function for open and closed time dwells should consist of a sum of decaying exponentials. One method of approaching the kinetic analysis of such a system is to determine the number of exponentials and the corresponding parameters which comprise the open and closed dwell time distributions. These can then be compared to the relaxations predicted from the kinetic model to determine, where possible, the kinetic constants. We report here the use of a linear technique, linear prediction/singular value decomposition, to determine the number of exponentials and the exponential parameters. Using simulated distributions and comparing with standard maximum-likelihood analysis, the singular value decomposition techniques provide advantages in some situations and are a useful adjunct to other single-channel analysis techniques.
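The linear prediction idea can be sketched for noise-free data: a least-squares linear prediction (solved by SVD under the hood) recovers the decay rates of a sum of exponentials from the roots of the prediction polynomial. This is a toy version under idealized assumptions, not the authors' full LPSVD pipeline:

```python
import numpy as np

def lp_decay_rates(y, order):
    """Linear-prediction estimate of exponential decay rates.
    Fits y[n] = a1*y[n-1] + ... + ap*y[n-p] by least squares, then takes
    the roots of the prediction polynomial; returns decay rate per sample."""
    n = len(y)
    A = np.column_stack([y[order - j - 1 : n - j - 1] for j in range(order)])
    b = y[order:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    roots = np.roots(np.r_[1.0, -a])
    return -np.log(np.abs(roots))

# Synthetic two-exponential dwell-time density, noise-free.
t = np.arange(200, dtype=float)
y = 1.0 * np.exp(-0.05 * t) + 0.5 * np.exp(-0.01 * t)
print(sorted(lp_decay_rates(y, 2)))
```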
A new approach to the extraction of single exponential diode model parameters
NASA Astrophysics Data System (ADS)
Ortiz-Conde, Adelmo; García-Sánchez, Francisco J.
2018-06-01
A new integration method is presented for the extraction of the parameters of a single exponential diode model with series resistance from the measured forward I-V characteristics. The extraction is performed using auxiliary functions based on the integration of the data, which allow the effects of each of the model parameters to be isolated. A differentiation method is also presented for data with a low level of experimental noise. Measured and simulated data are used to verify the applicability of both proposed methods. Physical insight into the validity of the model is also obtained by using the proposed graphical determinations of the parameters.
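For contrast with the integration method described above, here is a much cruder, purely illustrative slope-based extraction of the single-exponential diode parameters in the low-current region where the series-resistance drop is negligible; it is not the authors' method, and the diode values are synthetic:

```python
import math

def extract_diode_params(v, i, vt=0.02585):
    """Two-point extraction of ideality factor n and saturation current
    Is from the low-current part of a forward I-V curve, where ln(I) is
    linear in V: I ~ Is * exp(V / (n * vt))."""
    slope = (math.log(i[1]) - math.log(i[0])) / (v[1] - v[0])
    n = 1.0 / (slope * vt)
    i_s = i[0] * math.exp(-v[0] / (n * vt))
    return n, i_s

# Synthetic diode: Is = 1e-12 A, n = 1.5, series resistance neglected.
n_true, is_true, vt = 1.5, 1e-12, 0.02585
v = [0.30, 0.35]
i = [is_true * math.exp(x / (n_true * vt)) for x in v]
print(extract_diode_params(v, i))
```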
McNair, James N; Newbold, J Denis
2012-05-07
Most ecological studies of particle transport in streams that focus on fine particulate organic matter or benthic invertebrates use the Exponential Settling Model (ESM) to characterize the longitudinal pattern of particle settling on the bed. The ESM predicts that if particles are released into a stream, the proportion that have not yet settled will decline exponentially with transport time or distance and will be independent of the release elevation above the bed. To date, no credible basis in fluid mechanics has been established for this model, nor has it been rigorously tested against more-mechanistic alternative models. One alternative is the Local Exchange Model (LEM), which is a stochastic advection-diffusion model that includes both longitudinal and vertical spatial dimensions and is based on classical fluid mechanics. The LEM predicts that particle settling will be non-exponential in the near field but will become exponential in the far field, providing a new theoretical justification for far-field exponential settling that is based on plausible fluid mechanics. We review properties of the ESM and LEM and compare these with available empirical evidence. Most evidence supports the prediction of both models that settling will be exponential in the far field but contradicts the ESM's prediction that a single exponential distribution will hold for all transport times and distances. Copyright © 2012 Elsevier Ltd. All rights reserved.
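The ESM's far-field prediction can be written down in a few lines; the mean transport distance Lp = u*h/vs is the standard parameterization, and the stream values below are illustrative:

```python
import math

def esm_in_suspension(x, u, h, vs):
    """Exponential Settling Model: fraction of released particles not yet
    settled after transport distance x, with mean transport distance
    Lp = u*h/vs (velocity u, depth h, settling velocity vs)."""
    lp = u * h / vs
    return math.exp(-x / lp)

# Illustrative values: u = 0.2 m/s, h = 0.3 m, vs = 0.001 m/s -> Lp = 60 m.
print(esm_in_suspension(60.0, 0.2, 0.3, 0.001))
```

The LEM's prediction differs from this near the release point and only converges to the exponential form far downstream.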
Magin, Richard L.; Li, Weiguo; Velasco, M. Pilar; Trujillo, Juan; Reiter, David A.; Morgenstern, Ashley; Spencer, Richard G.
2011-01-01
We present a fractional-order extension of the Bloch equations to describe anomalous NMR relaxation phenomena (T1 and T2). The model has solutions in the form of Mittag-Leffler and stretched exponential functions that generalize conventional exponential relaxation. Such functions have been shown by others to be useful for describing dielectric and viscoelastic relaxation in complex, heterogeneous materials. Here, we apply these fractional-order T1 and T2 relaxation models to experiments performed at 9.4 and 11.7 Tesla on type I collagen gels, chondroitin sulfate mixtures, and bovine nasal cartilage (BNC), a largely isotropic and homogeneous form of cartilage. The results show that the fractional-order analysis captures important features of NMR relaxation that are typically described by multi-exponential decay models. We find that the T2 relaxation of BNC can be described in a unique way by a single fractional-order parameter (α), in contrast to the lack of uniqueness of multi-exponential fits in the realistic setting of a finite signal-to-noise ratio. No anomalous behavior of T1 was observed in BNC. In the single-component gels, for T2 measurements, increasing the concentration of the largest components of cartilage matrix, collagen and chondroitin sulfate, results in a decrease in α, reflecting a more restricted aqueous environment. The quality of the curve fits obtained using Mittag-Leffler and stretched exponential functions is in some cases superior to that obtained using mono- and bi-exponential models. In both gels and BNC, α appears to account for microstructural complexity in the setting of an altered distribution of relaxation times. This work suggests the utility of fractional-order models to describe T2 NMR relaxation processes in biological tissues. PMID:21498095
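The two relaxation-function families named above can be sketched as follows; the Mittag-Leffler evaluator is a naive truncated power series, adequate only for modest arguments and not a production algorithm:

```python
import math

def stretched_exp(t, tau, alpha):
    """Stretched-exponential relaxation, exp(-(t/tau)**alpha)."""
    return math.exp(-((t / tau) ** alpha))

def mittag_leffler(z, alpha, terms=60):
    """Truncated series for the one-parameter Mittag-Leffler function
    E_alpha(z) = sum_k z**k / Gamma(alpha*k + 1)."""
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(terms))

# alpha = 1 recovers ordinary exponential relaxation in both forms.
print(stretched_exp(2.0, 1.0, 1.0), mittag_leffler(-2.0, 1.0))
```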
Modeling the Role of Dislocation Substructure During Class M and Exponential Creep. Revised
NASA Technical Reports Server (NTRS)
Raj, S. V.; Iskovitz, Ilana Seiden; Freed, A. D.
1995-01-01
The different substructures that form in the power-law and exponential creep regimes for single phase crystalline materials under various conditions of stress, temperature and strain are reviewed. The microstructure is correlated both qualitatively and quantitatively with power-law and exponential creep as well as with steady state and non-steady state deformation behavior. These observations suggest that creep is influenced by a complex interaction between several elements of the microstructure, such as dislocations, cells and subgrains. The stability of the creep substructure is examined in both of these creep regimes during stress and temperature change experiments. These observations are rationalized on the basis of a phenomenological model, where normal primary creep is interpreted as a series of constant structure exponential creep rate-stress relationships. The implications of this viewpoint on the magnitude of the stress exponent and steady state behavior are discussed. A theory is developed to predict the macroscopic creep behavior of a single phase material using quantitative microstructural data. In this technique the thermally activated deformation mechanisms proposed by dislocation physics are interlinked with a previously developed multiphase, three-dimensional dislocation substructure creep model. This procedure leads to several coupled differential equations interrelating macroscopic creep plasticity with microstructural evolution.
2010-06-01
GMKPF represents a better and more flexible alternative to the Gaussian Maximum Likelihood (GML) and Exponential Maximum Likelihood (EML) estimators for clock offset estimation in non-Gaussian or non-exponential settings, yielding more accurate results relative to GML and EML when the network delays are modeled in terms of a single non-Gaussian/non-exponential distribution or as a...
Wang, Lianwen; Li, Jiangong; Fecht, Hans-Jörg
2010-11-17
The reported relaxation time for several typical glass-forming liquids was analyzed by using a kinetic model for liquids which invoked a new kind of atomic cooperativity--thermodynamic cooperativity. The broadly studied 'cooperative length' was recognized as the kinetic cooperativity. Both cooperativities were conveniently quantified from the measured relaxation data. A single-exponential activation behavior was uncovered behind the super-Arrhenius relaxations for the liquids investigated. Hence the mesostructure of these liquids and the atomic mechanism of the glass transition became clearer.
Parameter estimation and order selection for an empirical model of VO2 on-kinetics.
Alata, O; Bernard, O
2007-04-27
In humans, VO2 on-kinetics are noisy numerical signals that reflect the pulmonary oxygen exchange kinetics at the onset of exercise. They are empirically modelled as a sum of an offset and delayed exponentials. The number of delayed exponentials, i.e. the order of the model, is commonly supposed to be 1 for low-intensity exercises and 2 for high-intensity exercises. As no ground truth has ever been provided to validate these postulates, physiologists still need statistical methods to verify their hypotheses about the number of exponentials of the VO2 on-kinetics, especially in the case of high-intensity exercises. Our objectives are first to develop accurate methods for estimating the parameters of the model at a fixed order, and then to propose statistical tests for selecting the appropriate order. In this paper, we provide, on simulated data, performances of simulated annealing for estimating model parameters and performances of information criteria for selecting the order. These simulated data are generated with both single-exponential and double-exponential models and corrupted by additive white Gaussian noise. The performances are given at various signal-to-noise ratios (SNRs). Considering parameter estimation, results show that the confidence of the estimated parameters improves as the SNR of the response to be fitted increases. Considering model selection, results show that information criteria are appropriate statistical criteria for selecting the number of exponentials.
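Information-criterion order selection of the kind described above can be sketched with AIC; the residual sums of squares below are hypothetical stand-ins for actual one- and two-exponential fits:

```python
import math

def aic(rss, n, k):
    """Akaike Information Criterion for a least-squares fit with n data
    points, residual sum of squares rss, and k free parameters."""
    return n * math.log(rss / n) + 2 * k

# Hypothetical residuals from fitting a VO2 response with one vs. two
# delayed exponentials (n = 240 samples); the extra parameters of the
# second exponential must buy enough fit improvement to lower the AIC.
n = 240
aic1 = aic(12.0, n, 3)   # offset + amplitude + time constant
aic2 = aic(9.5, n, 6)    # adds a second delayed exponential
print(aic1, aic2)
```

The candidate with the lower AIC is selected, which is how the criterion trades goodness of fit against model order.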
Ouyang, Wenjun; Subotnik, Joseph E
2017-05-07
Using the Anderson-Holstein model, we investigate charge transfer dynamics between a molecule and a metal surface for two extreme cases. (i) With a large barrier, we show that the dynamics follow a single exponential decay as expected; (ii) without any barrier, we show that the dynamics are more complicated. On the one hand, if the metal-molecule coupling is small, single exponential dynamics persist. On the other hand, when the coupling between the metal and the molecule is large, the dynamics follow a biexponential decay. We analyze the dynamics using the Smoluchowski equation, develop a simple model, and explore the consequences of biexponential dynamics for a hypothetical cyclic voltammetry experiment.
1994-01-01
Limulus ventral photoreceptors generate highly variable responses to the absorption of single photons. We have obtained data on the size distribution of these responses, derived the distribution predicted from simple transduction cascade models and compared the theory and data. In the simplest of models, the active state of the visual pigment (defined by its ability to activate G protein) is turned off in a single reaction. The output of such a cascade is predicted to be highly variable, largely because of stochastic variation in the number of G proteins activated. The exact distribution predicted is exponential, but we find that an exponential does not adequately account for the data. The data agree much better with the predictions of a cascade model in which the active state of the visual pigment is turned off by a multi-step process. PMID:8057085
Holder, J P; Benedetti, L R; Bradley, D K
2016-11-01
Single hit pulse height analysis is applied to National Ignition Facility x-ray framing cameras to quantify gain and gain variation in a single micro-channel plate-based instrument. This method allows the separation of gain from detectability in these photon-detecting devices. While pulse heights measured by standard-DC calibration methods follow the expected exponential distribution at the limit of a compound-Poisson process, gain-gated pulse heights follow a more complex distribution that may be approximated as a weighted sum of a few exponentials. We can reproduce this behavior with a simple statistical-sampling model.
Exact simulation of integrate-and-fire models with exponential currents.
Brette, Romain
2007-10-01
Neural networks can be simulated exactly using event-driven strategies, in which the algorithm advances directly from one spike to the next spike. It applies to neuron models for which we have (1) an explicit expression for the evolution of the state variables between spikes and (2) an explicit test on the state variables that predicts whether and when a spike will be emitted. In a previous work, we proposed a method that allows exact simulation of an integrate-and-fire model with exponential conductances, with the constraint of a single synaptic time constant. In this note, we propose a method, based on polynomial root finding, that applies to integrate-and-fire models with exponential currents, with possibly many different synaptic time constants. Models can include biexponential synaptic currents and spike-triggered adaptation currents.
Firing patterns in the adaptive exponential integrate-and-fire model.
Naud, Richard; Marcille, Nicolas; Clopath, Claudia; Gerstner, Wulfram
2008-11-01
For simulations of large spiking neuron networks, an accurate, simple and versatile single-neuron modeling framework is required. Here we explore the versatility of a simple two-equation model: the adaptive exponential integrate-and-fire neuron. We show that this model generates multiple firing patterns depending on the choice of parameter values, and present a phase diagram describing the transition from one firing type to another. We give an analytical criterion to distinguish between continuous adaptation, initial bursting, regular bursting and two types of tonic spiking. Also, we report that the deterministic model is capable of producing irregular spiking when stimulated with constant current, indicating low-dimensional chaos. Lastly, the simple model is fitted to real experiments of cortical neurons under step current stimulation. The results provide support for the suitability of simple models such as the adaptive exponential integrate-and-fire neuron for large network simulations.
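A minimal forward-Euler sketch of the two-equation AdEx model; the parameter set roughly follows commonly published tonic/adapting values, and the injected current is illustrative:

```python
import math

def adex_spike_times(i_inj, t_end=0.5, dt=1e-5):
    """Forward-Euler simulation of the adaptive exponential
    integrate-and-fire (AdEx) neuron; returns spike times (s).
    Parameters (SI units) roughly follow commonly cited adapting values."""
    C, gL, EL, VT, dT = 281e-12, 30e-9, -70.6e-3, -50.4e-3, 2e-3
    a, tau_w, b, Vr, Vpeak = 4e-9, 144e-3, 80.5e-12, -70.6e-3, 20e-3
    v, w, t, spikes = EL, 0.0, 0.0, []
    while t < t_end:
        dv = (-gL * (v - EL) + gL * dT * math.exp((v - VT) / dT) - w + i_inj) / C
        dw = (a * (v - EL) - w) / tau_w
        v += dt * dv
        w += dt * dw
        if v >= Vpeak:          # spike: reset v, increment adaptation w
            spikes.append(t)
            v, w = Vr, w + b
        t += dt
    return spikes

# A constant step current above rheobase produces adapting tonic spiking.
print(len(adex_spike_times(1e-9)))
```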
Verification of the exponential model of body temperature decrease after death in pigs.
Kaliszan, Michal; Hauser, Roman; Kaliszan, Roman; Wiczling, Paweł; Buczyński, Janusz; Penkowski, Michal
2005-09-01
The authors have conducted a systematic study in pigs to verify the models of post-mortem body temperature decrease currently employed in forensic medicine. Twenty-four-hour automatic temperature recordings were performed in four body sites, starting 1.25 h after pig killing in an industrial slaughterhouse under typical environmental conditions (19.5-22.5 degrees C). The animals had been randomly selected under a regular manufacturing process. The temperature decrease time plots, drawn starting 75 min after death for the eyeball, the orbit soft tissues, the rectum and muscle tissue, were found to fit the single-exponential thermodynamic model originally proposed by H. Rainy in 1868. In view of the actual intersubject variability, the addition of a second exponential term to the model was demonstrated to be statistically insignificant. Therefore, the two-exponential model for death time estimation frequently recommended in the forensic medicine literature, even if theoretically substantiated for individual test cases, provides no advantage as regards the reliability of estimation in an actual case. The claimed improvement in the precision of time of death estimation from reconstructing an individual cooling curve, based on two dead body temperature measurements taken 1 h apart or on continuous measurement over a longer time (about 4 h), has also been proved incorrect. It was demonstrated that the reported increase in precision of time of death estimation due to the use of a multiexponential model, with individual exponential terms to account for the cooling rates of the specific body sites separately, is artifactual. The results of this study support the use of the eyeball and/or the orbit soft tissues as temperature measuring sites at times shortly after death. A single-exponential model applied to the eyeball cooling has been shown to provide a very precise estimation of the time of death up to approximately 13 h after death. For the period thereafter, a better estimation of the time of death is obtained from temperature data collected from the muscles or the rectum.
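Inverting the single-exponential (Rainy/Newtonian) cooling model for the time since death takes one line; the rate constant and temperatures below are illustrative, not the study's fitted values:

```python
import math

def newtonian_cooling_time(t_measured, t0, t_ambient, k):
    """Invert the single-exponential cooling model
    T(t) = T_amb + (T0 - T_amb) * exp(-k*t) for the time since death."""
    return -math.log((t_measured - t_ambient) / (t0 - t_ambient)) / k

# Illustrative numbers: measured 25 C, initial 37 C, ambient 20 C, and a
# hypothetical rate constant k = 0.2 per hour.
print(newtonian_cooling_time(25.0, 37.0, 20.0, 0.2))
```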
Weighted Scaling in Non-growth Random Networks
NASA Astrophysics Data System (ADS)
Chen, Guang; Yang, Xu-Hua; Xu, Xin-Li
2012-09-01
We propose a weighted model to explain the self-organizing formation of the scale-free phenomenon in non-growth random networks. In this model, we use multiple-edges to represent the connections between vertices and define the weight of a multiple-edge as the total weight of all single-edges within it and the strength of a vertex as the sum of weights for those multiple-edges attached to it. The network evolves according to a vertex strength preferential selection mechanism. During the evolution process, the network always keeps its total number of vertices and its total number of single-edges constant. We show analytically and numerically that a network forms steady scale-free distributions under our model. The results show that a weighted non-growth random network can evolve into a scale-free state. Interestingly, the network also acquires an exponential edge weight distribution; namely, the coexistence of a scale-free distribution and an exponential distribution emerges.
Rowley, Mark I.; Coolen, Anthonius C. C.; Vojnovic, Borivoj; Barber, Paul R.
2016-01-01
We present novel Bayesian methods for the analysis of exponential decay data that exploit the evidence carried by every detected decay event and enable robust extension to advanced processing. Our algorithms are presented in the context of fluorescence lifetime imaging microscopy (FLIM) and particular attention has been paid to model the time-domain system (based on time-correlated single photon counting) with unprecedented accuracy. We present estimates of decay parameters for mono- and bi-exponential systems, offering up to a factor of two improvement in accuracy compared to previous popular techniques. Results of the analysis of synthetic and experimental data are presented, and areas where the superior precision of our techniques can be exploited in Förster Resonance Energy Transfer (FRET) experiments are described. Furthermore, we demonstrate two advanced processing methods: decay model selection to choose between differing models such as mono- and bi-exponential, and the simultaneous estimation of instrument and decay parameters. PMID:27355322
Dao, Hoang Lan; Aljunid, Syed Abdullah; Maslennikov, Gleb; Kurtsiefer, Christian
2012-08-01
We report on a simple method to prepare optical pulses with an exponentially rising envelope on the time scale of a few ns. The scheme is based on the exponential transfer function of a fast transistor, which generates an exponentially rising envelope that is transferred first onto a radio frequency carrier, and then onto a coherent cw laser beam with an electro-optical phase modulator. The temporally shaped sideband is then extracted with an optical resonator and can be used to efficiently excite a single 87Rb atom.
Schroeder, Indra; Hansen, Ulf-Peter
2008-04-01
Patch clamp experiments on single MaxiK channels expressed in HEK293 cells were performed at high temporal resolution (50-kHz filter) in asymmetrical solutions containing 0, 25, 50, or 150 mM Tl+ on the luminal or cytosolic side with [K+] + [Tl+] = 150 mM and 150 mM K+ on the other side. Outward current in the presence of cytosolic Tl+ did not show fast gating behavior that was significantly different from that in the absence of Tl+. With luminal Tl+ and at membrane potentials more negative than -40 mV, the single-channel current showed a negative slope resistance concomitantly with a flickery block, resulting in an artificially reduced apparent single-channel current I_app. The analysis of the amplitude histograms by beta distributions enabled the estimation of the true single-channel current and the determination of the rate constants of a simple two-state O-C Markov model for the gating in the bursts. The voltage dependence of the gating ratio R = I_true/I_app = (k_CO + k_OC)/k_CO could be described by exponential functions with different characteristic voltages above or below 50 mM Tl+. The true single-channel current I_true decreased with Tl+ concentrations up to 50 mM and stayed constant thereafter. Different models were considered. The most likely ones related the exponential increase of the gating ratio to ion depletion at the luminal side of the selectivity filter, whereas the influence of [Tl+] on the characteristic voltage of these exponential functions and on the value of I_true was determined by [Tl+] at the inner side of the selectivity filter or in the cavity.
Kennedy, Kristen M.; Rodrigue, Karen M.; Lindenberger, Ulman; Raz, Naftali
2010-01-01
The effects of advanced age and cognitive resources on the course of skill acquisition are unclear, and discrepancies among studies may reflect limitations of data analytic approaches. We applied a multilevel negative exponential model to skill acquisition data from 80 trials (four 20-trial blocks) of a pursuit rotor task administered to healthy adults (19–80 years old). The analyses conducted at the single-trial level indicated that the negative exponential function described performance well. Learning parameters correlated with measures of task-relevant cognitive resources on all blocks except the last and with age on all blocks after the second. Thus, age differences in motor skill acquisition may evolve in 2 phases: In the first, age differences are collinear with individual differences in task-relevant cognitive resources; in the second, age differences orthogonal to these resources emerge. PMID:20047985
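The multilevel machinery aside, the single-subject negative exponential learning curve used above has a simple closed form; the parameter values here are illustrative:

```python
import math

def neg_exp_learning(trial, asymptote, gain, rate):
    """Negative exponential learning curve: performance approaches its
    asymptote at a speed set by the learning-rate parameter."""
    return asymptote - gain * math.exp(-rate * trial)

# Illustrative pursuit-rotor-style curve: time on target (s) by trial.
curve = [round(neg_exp_learning(n, 20.0, 15.0, 0.1), 2) for n in range(0, 80, 20)]
print(curve)
```

The three parameters (asymptote, gain, rate) are what the multilevel analysis correlates with age and cognitive resources.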
Cosmological models constructed by van der Waals fluid approximation and volumetric expansion
NASA Astrophysics Data System (ADS)
Samanta, G. C.; Myrzakulov, R.
The universe is modeled with the van der Waals fluid approximation, where the van der Waals fluid equation of state contains a single parameter ωv. Analytical solutions to Einstein's field equations are obtained by assuming that the mean scale factor of the metric follows volumetric exponential and power-law expansions. The model describes a rapid expansion in which the acceleration grows exponentially and the van der Waals fluid acts like an inflationary fluid in the initial epoch of the universe. The model also describes how, at later times, the acceleration remains positive but decreases to zero, and the van der Waals fluid approximation behaves like the present accelerated phase of the universe. Finally, it is observed that the model contains a type-III future singularity for volumetric power-law expansion.
Comparing Exponential and Exponentiated Models of Drug Demand in Cocaine Users
Strickland, Justin C.; Lile, Joshua A.; Rush, Craig R.; Stoops, William W.
2016-01-01
Drug purchase tasks provide rapid and efficient measurement of drug demand. Zero values (i.e., prices with zero consumption) present a quantitative challenge when using exponential demand models that exponentiated models may resolve. We aimed to replicate and advance the utility of using an exponentiated model by demonstrating construct validity (i.e., association with real-world drug use) and generalizability across drug commodities. Participants (N = 40 cocaine-using adults) completed Cocaine, Alcohol, and Cigarette Purchase Tasks evaluating hypothetical consumption across changes in price. Exponentiated and exponential models were fit to these data using different treatments of zero consumption values, including retaining zeros or replacing them with 0.1, 0.01, or 0.001. Excellent model fits were observed with the exponentiated model. Means and precision fluctuated with different replacement values when using the exponential model but were consistent for the exponentiated model. The exponentiated model provided the strongest correlation between derived demand intensity (Q0) and self-reported free consumption in all instances (Cocaine r = .88; Alcohol r = .97; Cigarette r = .91). Cocaine demand elasticity was positively correlated with alcohol and cigarette elasticity. Exponentiated parameters were associated with real-world drug use (e.g., weekly cocaine use), whereas these correlations were less consistent for exponential parameters. Our findings show that the selection of zero replacement values affects demand parameters and their association with drug-use outcomes when using the exponential model, but not the exponentiated model. This work supports the adoption of the exponentiated demand model by replicating improved fit and consistency, in addition to demonstrating construct validity and generalizability. PMID:27929347
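One widely used exponentiated form (following Koffarnus and colleagues' variant of the exponential demand equation) predicts consumption in raw units as Q(C) = Q0 · 10^(k·(e^(−α·Q0·C) − 1)), which stays defined at prices where observed consumption is zero because no log transform of the data is needed. A stdlib sketch; all parameter values are hypothetical, not estimates from this study:

```python
import math

def exponentiated_demand(price, q0, alpha, k):
    """Exponentiated demand model: consumption in raw units, never
    log-transformed, so zero-consumption prices need no replacement value."""
    return q0 * 10 ** (k * (math.exp(-alpha * q0 * price) - 1.0))

q0, alpha, k = 10.0, 0.005, 2.0   # hypothetical intensity, elasticity, range
prices = [0.0, 1.0, 5.0, 10.0, 50.0]
demand = [exponentiated_demand(p, q0, alpha, k) for p in prices]
```

At zero price the model returns the demand intensity Q0 directly, which is the quantity correlated with self-reported free consumption in the abstract.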
Use of Continuous Exponential Families to Link Forms via Anchor Tests. Research Report. ETS RR-11-11
ERIC Educational Resources Information Center
Haberman, Shelby J.; Yan, Duanli
2011-01-01
Continuous exponential families are applied to linking test forms via an internal anchor. This application combines work on continuous exponential families for single-group designs and work on continuous exponential families for equivalent-group designs. Results are compared to those for kernel and equipercentile equating in the case of chained…
NASA Astrophysics Data System (ADS)
Wen, Zhang; Zhan, Hongbin; Wang, Quanrong; Liang, Xing; Ma, Teng; Chen, Chen
2017-05-01
Actual field pumping tests often involve variable pumping rates, which cannot be handled by the classical constant-rate or constant-head test models and often require a convolution process to interpret the test data. In this study, we propose a semi-analytical model for a pumping rate that decreases exponentially, starting at a certain (higher) rate and eventually stabilizing at a certain (lower) rate, for cases with or without wellbore storage. A striking new feature of the pumping test with an exponentially decaying rate is that the drawdowns decrease over a certain period during the intermediate pumping stage, which is never seen in constant-rate or constant-head pumping tests. The drawdown-time curve associated with an exponentially decaying pumping rate function is bounded by the two asymptotic curves of constant-rate tests with rates equal to the starting and stabilizing rates, respectively. Wellbore storage must be considered for a pumping test without an observation well (single-well test). Based on these characteristics of the time-drawdown curve, we developed a new method to estimate the aquifer parameters using a genetic algorithm.
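The rate function described above can be written as Q(t) = QL + (Q0 − QL)·e^(−λt), starting at the higher rate Q0 and stabilizing at the lower rate QL. A minimal sketch with hypothetical symbols and values (the paper's own parameterization may differ):

```python
import math

def pumping_rate(t, q_start, q_stab, decay):
    """Exponentially decaying pumping rate: begins at q_start and
    stabilizes at q_stab as t grows large."""
    return q_stab + (q_start - q_stab) * math.exp(-decay * t)

q_start, q_stab, decay = 120.0, 40.0, 0.01   # hypothetical units
times = [0, 10, 100, 1000]
rates = [pumping_rate(t, q_start, q_stab, decay) for t in times]
```

The two constant-rate asymptotes the abstract mentions correspond to evaluating this function at t = 0 (Q0) and t → ∞ (QL).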
Compatible estimators of the components of change for a rotating panel forest inventory design
Francis A. Roesch
2007-01-01
This article presents two approaches for estimating the components of forest change utilizing data from a rotating panel sample design. One approach uses a variant of the exponentially weighted moving average estimator and the other approach uses mixed estimation. Three general transition models were each combined with a single compatibility model for the mixed...
The size distribution of Pacific Seamounts
NASA Astrophysics Data System (ADS)
Smith, Deborah K.; Jordan, Thomas H.
1987-11-01
An analysis of wide-beam, Sea Beam and map-count data in the eastern and southern Pacific confirms the hypothesis that the average number of "ordinary" seamounts with summit heights h ≥ H can be approximated by the exponential frequency-size distribution v(H) = v0 e^(−βH). The exponential model, characterized by the single scale parameter β^−1, is found to be superior to a power-law (self-similar) model. The exponential model provides a good first-order description of the summit-height distribution over a very broad spectrum of seamount sizes, from small cones (h < 300 m) to tall composite volcanoes (h > 3500 m). The distribution parameters obtained from 157,000 km of wide-beam profiles in the eastern and southern Pacific Ocean are v0 = (5.4 ± 0.65) × 10^−9 m^−2 and β = (3.5 ± 0.21) × 10^−3 m^−1, yielding an average of 5400 ± 650 seamounts per million square kilometers, of which 170 ± 17 are greater than one kilometer in height. The exponential distribution provides a reference for investigating the populations of not-so-ordinary seamounts, such as those on hotspot swells and near fracture zones, and seamounts in other ocean basins. If we assume that volcano height is determined by a hydraulic head proportional to the source depth of the magma column, then our observations imply an approximately exponential distribution of source depths. For reasonable values of magma and crustal densities, a volcano with the characteristic height β^−1 = 285 m has an apparent source depth on the order of the crustal thickness.
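The quoted counts follow directly from the exponential model v(H) = v0 e^(−βH) with the central parameter values; a quick stdlib check of the arithmetic:

```python
import math

v0 = 5.4e-9    # seamounts per m^2 (abstract's central value)
beta = 3.5e-3  # per metre of summit height
area = 1.0e12  # one million km^2, expressed in m^2

def count_taller_than(h_metres):
    """Expected number of seamounts with summit height >= h within `area`."""
    return v0 * area * math.exp(-beta * h_metres)

n_all = count_taller_than(0.0)      # all "ordinary" seamounts: ~5400
n_1km = count_taller_than(1000.0)   # taller than 1 km: ~163, within 170 +/- 17
```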
Stochastic Individual-Based Modeling of Bacterial Growth and Division Using Flow Cytometry.
García, Míriam R; Vázquez, José A; Teixeira, Isabel G; Alonso, Antonio A
2017-01-01
A realistic description of the variability in bacterial growth and division is critical to produce reliable predictions of safety risks along the food chain. Individual-based modeling of bacteria provides the theoretical framework to deal with this variability, but it requires information about the individual behavior of bacteria inside populations. In this work, we overcome this problem by estimating the individual behavior of bacteria from population statistics obtained with flow cytometry. For this objective, a stochastic individual-based modeling framework is defined based on standard assumptions about division and exponential growth. The unknown single-cell parameters required for running the individual-based simulations, such as the cell size growth rate, are estimated from the flow cytometry data. Instead of using the individual-based model directly, we make use of a modified Fokker-Planck equation. This single equation simulates the population statistics as a function of the unknown single-cell parameters. We test the validity of the approach by modeling the growth and division of Pediococcus acidilactici within the exponential phase. The estimations reveal the statistics of cell growth and division using only flow cytometry data from a single time point. From the relationship between the mother and daughter volumes, we also predict that P. acidilactici divides along two successive parallel planes.
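Under the standard assumptions named above (exponential single-cell size growth, binary division), a toy individual-based simulation looks like the following; the growth rate, division size, and division noise are hypothetical values for illustration, not the paper's estimates:

```python
import random

def simulate(n_steps, dt=0.01, mu=1.0, div_size=2.0, cv=0.05, seed=1):
    """Toy individual-based model: each cell's size grows exponentially
    (ds/dt = mu * s, Euler-discretized); on reaching div_size it splits
    into two daughters whose size ratio fluctuates around one half."""
    rng = random.Random(seed)
    cells = [1.0]                                     # initial cell sizes
    for _ in range(n_steps):
        grown = [s * (1.0 + mu * dt) for s in cells]  # exponential growth step
        cells = []
        for s in grown:
            if s >= div_size:
                f = min(max(rng.gauss(0.5, cv), 0.3), 0.7)
                cells.extend([s * f, s * (1.0 - f)])  # division conserves mass
            else:
                cells.append(s)
    return cells

population = simulate(n_steps=300)
```

Because division conserves total size, the summed cell size of the population still grows as (1 + μ·dt)^n even while individual sizes fluctuate, which is the population-level exponential growth the Fokker-Planck description targets.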
Self-organized Segregation on the Grid
NASA Astrophysics Data System (ADS)
Omidvar, Hamed; Franceschetti, Massimo
2018-02-01
We consider an agent-based model with exponentially distributed waiting times in which two types of agents interact locally over a graph and, based on this interaction and on the value of a common intolerance threshold τ, decide whether to change their types. This is equivalent to a zero-temperature Ising model with Glauber dynamics, an asynchronous cellular automaton with extended Moore neighborhoods, or a Schelling model of self-organized segregation in an open system, and has applications in the analysis of social and biological networks and spin glass systems. Some rigorous results were recently obtained in the theoretical computer science literature, and this work provides several extensions. We enlarge the intolerance interval leading to the expected formation of large segregated regions of agents of a single type from the known size ε > 0 to size ≈ 0.134. Namely, we show that for 0.433 < τ < 1/2 (and by symmetry 1/2 < τ < 0.567), the expected size of the largest segregated region containing an arbitrary agent is exponential in the size of the neighborhood. We further extend the interval leading to expected large segregated regions to size ≈ 0.312 considering "almost segregated" regions, namely regions where the ratio of the number of agents of one type and the number of agents of the other type vanishes quickly as the size of the neighborhood grows. In this case, we show that for 0.344 < τ ≤ 0.433 (and by symmetry for 0.567 ≤ τ < 0.656) the expected size of the largest almost segregated region containing an arbitrary agent is exponential in the size of the neighborhood. This behavior is reminiscent of supercritical percolation, where small clusters of empty sites can be observed within any sufficiently large region of the occupied percolation cluster.
The exponential bounds that we provide also imply that complete segregation, where agents of a single type cover the whole grid, does not occur with high probability for p=1/2 and the range of intolerance considered.
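A toy analogue of this kind of type-flipping dynamics (not the paper's construction, and using random-order sweeps rather than exponential waiting times): agents on a torus flip type when the fraction of like neighbours in their radius-1 Moore neighbourhood falls below τ. Grid size and τ below are arbitrary:

```python
import random

def like_fraction(grid, i, j, n):
    """Fraction of same-type agents in the radius-1 Moore neighbourhood (torus)."""
    same = total = 0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            total += 1
            same += grid[(i + di) % n][(j + dj) % n] == grid[i][j]
    return same / total

def sweep(grid, tau, rng):
    """Asynchronous update: each unhappy agent flips to the other type."""
    n = len(grid)
    sites = [(i, j) for i in range(n) for j in range(n)]
    rng.shuffle(sites)
    for i, j in sites:
        if like_fraction(grid, i, j, n) < tau:
            grid[i][j] = 1 - grid[i][j]

rng = random.Random(0)
n, tau = 20, 0.45
grid = [[rng.randint(0, 1) for _ in range(n)] for _ in range(n)]
for _ in range(30):
    sweep(grid, tau, rng)
unhappy = sum(like_fraction(grid, i, j, n) < tau
              for i in range(n) for j in range(n))
```

Running this typically grows single-type patches out of the random initial condition, the qualitative behaviour whose region sizes the paper bounds rigorously.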
Manikis, Georgios C.; Marias, Kostas; Lambregts, Doenja M. J.; Nikiforaki, Katerina; van Heeswijk, Miriam M.; Bakers, Frans C. H.; Beets-Tan, Regina G. H.; Papanikolaou, Nikolaos
2017-01-01
Purpose: The purpose of this study was to compare the performance of four diffusion models, including mono- and bi-exponential, both Gaussian and non-Gaussian, in diffusion-weighted imaging of rectal cancer. Materials and methods: Nineteen patients with rectal adenocarcinoma underwent MRI examination of the rectum before chemoradiation therapy, including a 7 b-value diffusion sequence (0, 25, 50, 100, 500, 1000 and 2000 s/mm2) at a 1.5T scanner. Four different diffusion models, mono- and bi-exponential Gaussian (MG and BG) and non-Gaussian (MNG and BNG), were applied to whole-tumor volumes of interest. Two statistical criteria were used to assess their fitting performance: the adjusted R2 and the root mean square error (RMSE). To decide which model better characterizes rectal cancer, model selection relied on the Akaike Information Criterion (AIC) and the F-ratio. Results: All candidate models achieved a good fitting performance, with the two most complex models, the BG and the BNG, exhibiting the best fits. However, both model-selection criteria indicated that the MG model performed better than any other model. In particular, using AIC weights and the F-ratio, the pixel-based analysis demonstrated that tumor areas were better described by the simplest MG model in an average area of 53% and 33%, respectively. Non-Gaussian behavior was seen in an average area of 37% according to the F-ratio, and 7% using AIC weights. However, the distributions of the pixels best fitted by each of the four models suggest that MG failed to perform better than any other model in all patients and over the overall tumor area. Conclusion: No single diffusion model evaluated herein could accurately describe rectal tumours.
These findings can probably be explained by increased tumour heterogeneity, where areas of high vascularity are fitted better by bi-exponential models and areas of necrosis mostly follow mono-exponential behavior. PMID:28863161
Turcott, R G; Lowen, S B; Li, E; Johnson, D H; Tsuchitani, C; Teich, M C
1994-01-01
The behavior of lateral-superior-olive (LSO) auditory neurons over large time scales was investigated. Of particular interest was whether LSO neurons exhibit the same type of fractal behavior as that observed in primary VIII-nerve auditory neurons. It has been suggested that this fractal behavior, apparent on long time scales, may play a role in optimally coding natural sounds. We found that a nonfractal model, the nonstationary dead-time-modified Poisson point process (DTMP), describes the LSO firing patterns well for time scales greater than a few tens of milliseconds, a region where the specific details of refractoriness are unimportant. The rate is given by the sum of two decaying exponential functions. The process is completely specified by the initial values and time constants of the two exponentials and by the dead-time relation. Specific measures of the firing patterns investigated were the interspike-interval (ISI) histogram, the Fano-factor time curve (FFC), and the serial count correlation coefficient (SCC), with the number of action potentials in successive counting times serving as the random variable. For all the data sets we examined, the latter portion of the recording was well approximated by a single exponential rate function, since the initial exponential portion rapidly decreases to a negligible value. Analytical expressions available for the statistics of a DTMP with a single exponential rate function can therefore be used for this portion of the data. Good agreement was obtained among the analytical results, the computer simulation, and the experimental data on time scales where the details of refractoriness are insignificant.
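A dead-time-modified Poisson process with a single exponential rate can be simulated by thinning a homogeneous process and then deleting events that fall within the dead time of the previous accepted event. The rate constants and dead time below are hypothetical, not fitted values from the recordings:

```python
import math
import random

def simulate_dtmp(a, tau, dead_time, t_max, seed=42):
    """Simulate a dead-time-modified Poisson process whose underlying rate
    is a single decaying exponential, r(t) = a * exp(-t / tau).
    Candidates come from a homogeneous process of rate a (an upper bound on
    r); thinning keeps each with probability r(t)/a; candidates closer than
    dead_time to the previous accepted event are lost."""
    rng = random.Random(seed)
    events, t, last = [], 0.0, -float("inf")
    while True:
        t += rng.expovariate(a)                  # next homogeneous candidate
        if t > t_max:
            break
        if rng.random() < math.exp(-t / tau):    # thinning to rate r(t)
            if t - last >= dead_time:            # dead-time deletion
                events.append(t)
                last = t
    return events

spikes = simulate_dtmp(a=100.0, tau=5.0, dead_time=0.002, t_max=20.0)
```

ISI histograms and Fano-factor curves of the kind analyzed in the abstract can then be computed from the simulated event times.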
Unfolding of Ubiquitin Studied by Picosecond Time-Resolved Fluorescence of the Tyrosine Residue
Noronha, Melinda; Lima, João C.; Bastos, Margarida; Santos, Helena; Maçanita, António L.
2004-01-01
The photophysics of the single tyrosine in bovine ubiquitin (UBQ) was studied by picosecond time-resolved fluorescence spectroscopy, as a function of pH and along thermal and chemical unfolding, with the following results: First, at room temperature (25°C) and below pH 1.5, native UBQ shows single-exponential decays. From pH 2 to 7, triple-exponential decays were observed and the three decay times were attributed to the presence of tyrosine, a tyrosine-carboxylate hydrogen-bonded complex, and excited-state tyrosinate. Second, at pH 1.5, the water-exposed tyrosine of either thermally or chemically unfolded UBQ decays as a sum of two exponentials. The double-exponential decays were interpreted and analyzed in terms of excited-state intramolecular electron transfer from the phenol to the amide moiety, occurring in one of the three rotamers of tyrosine in UBQ. The values of the rate constants indicate the presence of different unfolded states and an increase in the mobility of the tyrosine residue during unfolding. Finally, from the pre-exponential coefficients of the fluorescence decays, the unfolding equilibrium constants (KU) were calculated as a function of temperature or denaturant concentration. Despite the presence of different unfolded states, both thermal and chemical unfolding data of UBQ could be fitted to a two-state model. The thermodynamic parameters Tm = 54.6°C, ΔHTm = 56.5 kcal/mol, and ΔCp = 890 cal/(mol·K) were determined from the unfolding equilibrium constants calculated accordingly, and compared to values obtained by differential scanning calorimetry, also under the assumption of a two-state transition: Tm = 57.0°C, ΔHm = 51.4 kcal/mol, and ΔCp = 730 cal/(mol·K). PMID:15454455
Malachowski, George C; Clegg, Robert M; Redford, Glen I
2007-12-01
A novel approach is introduced for modelling linear dynamic systems composed of exponentials and harmonics. The method improves the speed of current numerical techniques up to 1000-fold for problems that have solutions of multiple exponentials plus harmonics and decaying components. Such signals are common in fluorescence microscopy experiments. Selective constraints of the parameters being fitted are allowed. This method, using discrete Chebyshev transforms, will correctly fit large volumes of data using a noniterative, single-pass routine that is fast enough to analyse images in real time. The method is applied to fluorescence lifetime imaging data in the frequency domain with varying degrees of photobleaching over the time of total data acquisition. The accuracy of the Chebyshev method is compared to a simple rapid discrete Fourier transform (equivalent to least-squares fitting) that does not take the photobleaching into account. The method can be extended to other linear systems composed of different functions. Simulations are performed and applications are described showing the utility of the method, in particular in the area of fluorescence microscopy.
Scaling in the distribution of intertrade durations of Chinese stocks
NASA Astrophysics Data System (ADS)
Jiang, Zhi-Qiang; Chen, Wei; Zhou, Wei-Xing
2008-10-01
The distribution of intertrade durations, defined as the waiting times between two consecutive transactions, is investigated based upon the limit order book data of 23 liquid Chinese stocks listed on the Shenzhen Stock Exchange in the whole year 2003. A scaling pattern is observed in the distributions of intertrade durations, where the empirical density functions of the normalized intertrade durations of all 23 stocks collapse onto a single curve. The scaling pattern is also observed in the intertrade duration distributions for filled and partially filled trades and in the conditional distributions. The ensemble distributions for all stocks are modeled by the Weibull and the Tsallis q-exponential distributions. Maximum likelihood estimation shows that the Weibull distribution outperforms the q-exponential for not-too-large intertrade durations which account for more than 98.5% of the data. Alternatively, nonlinear least-squares estimation selects the q-exponential as a better model, in which the optimization is conducted on the distance between empirical and theoretical values of the logarithmic probability densities. The distribution of intertrade durations is Weibull followed by a power-law tail with an asymptotic tail exponent close to 3.
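The Tsallis q-exponential invoked above generalizes the ordinary exponential, exp_q(x) = [1 + (1 − q)x]^(1/(1−q)), and recovers exp(x) in the q → 1 limit; for q > 1 its survival function decays as a power law, which is why it can capture heavy tails in duration data. A quick stdlib illustration (parameter values arbitrary):

```python
import math

def q_exponential(x, q):
    """Tsallis q-exponential; reduces to math.exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

def survival(t, t0, q):
    """Survival function of a q-exponential duration model: P(T > t)."""
    return q_exponential(-t / t0, q)
```

For q = 1.5 the tail P(T > t) falls off as a power law rather than exponentially, illustrating the heavier tail that competes with the Weibull in the fits described above.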
Milne, R K; Yeo, G F; Edeson, R O; Madsen, B W
1988-04-22
Stochastic models of ion channels have been based largely on Markov theory where individual states and transition rates must be specified, and sojourn-time densities for each state are constrained to be exponential. This study presents an approach based on random-sum methods and alternating-renewal theory, allowing individual states to be grouped into classes provided the successive sojourn times in a given class are independent and identically distributed. Under these conditions Markov models form a special case. The utility of the approach is illustrated by considering the effects of limited time resolution (modelled by using a discrete detection limit, xi) on the properties of observable events, with emphasis on the observed open-time (xi-open-time). The cumulants and Laplace transform for a xi-open-time are derived for a range of Markov and non-Markov models; several useful approximations to the xi-open-time density function are presented. Numerical studies show that the effects of limited time resolution can be extreme, and also highlight the relative importance of the various model parameters. The theory could form a basis for future inferential studies in which parameter estimation takes account of limited time resolution in single channel records. Appendixes include relevant results concerning random sums and a discussion of the role of exponential distributions in Markov models.
Observations on the methane oxidation capacity of landfill soils
USDA-ARS?s Scientific Manuscript database
Field data and two independent models indicate that landfill cover methane (CH4) oxidation should not be considered as a constant 10% or any other single value. Percent oxidation is a decreasing exponential function of the total methane flux rate into the cover and is also dependent on climate and c...
NASA Astrophysics Data System (ADS)
Féry, C.; Racine, B.; Vaufrey, D.; Doyeux, H.; Cinà, S.
2005-11-01
The main process responsible for luminance degradation in organic light-emitting diodes (OLEDs) driven under constant current has not yet been identified. In this paper, we propose an approach to describe the intrinsic mechanisms involved in OLED aging. We first show that a stretched exponential decay can be used to fit almost all of the luminance-versus-time curves obtained under different driving conditions. In this way, we are able to show that they can all be described by a model with a single free parameter. Using an approach based on local relaxation events, we demonstrate that a single mechanism is responsible for the dominant aging process, and that the main relaxation event is the annihilation of one emissive center. We then use our model to fit all the experimental data measured under different driving conditions and show that, by carefully fitting the accelerated luminance lifetime curves, we can extrapolate the low-luminance lifetime needed for real display applications with a high degree of accuracy.
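The stretched-exponential decay has the form L(t) = L0·exp(−(t/τ)^β), and the time to reach any luminance fraction inverts in closed form, which is what makes lifetime extrapolation from accelerated tests possible. The values of τ and β below are hypothetical, not fitted device parameters:

```python
import math

def luminance(t, l0, tau, beta):
    """Stretched-exponential luminance decay: L(t) = L0 * exp(-(t/tau)**beta)."""
    return l0 * math.exp(-((t / tau) ** beta))

def time_to_fraction(frac, tau, beta):
    """Invert the decay: time at which L(t) / L0 == frac (0 < frac < 1)."""
    return tau * (-math.log(frac)) ** (1.0 / beta)

l0, tau, beta = 1000.0, 2000.0, 0.7      # hypothetical cd/m^2, hours, exponent
t50 = time_to_fraction(0.5, tau, beta)   # extrapolated LT50 from the fit
```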
Difference in Dwarf Galaxy Surface Brightness Profiles as a Function of Environment
NASA Astrophysics Data System (ADS)
Lee, Youngdae; Park, Hong Soo; Kim, Sang Chul; Moon, Dae-Sik; Lee, Jae-Joon; Kim, Dong-Jin; Cha, Sang-Mok
2018-05-01
We investigate surface brightness profiles (SBPs) of dwarf galaxies in field, group, and cluster environments. With deep BV I images from the Korea Microlensing Telescope Network Supernova Program, SBPs of 38 dwarfs in the NGC 2784 group are fitted by a single-exponential or double-exponential model. We find that 53% of the dwarfs are fitted with single-exponential profiles ("Type I"), while 47% of the dwarfs show double-exponential profiles; 37% of all dwarfs have smaller sizes for the outer part than the inner part ("Type II"), while 10% have a larger outer than inner part ("Type III"). We compare these results with those in the field and in the Virgo cluster, where the SBP types of 102 field dwarfs are compiled from a previous study and the SBP types of 375 cluster dwarfs are measured using SDSS r-band images. As a result, the distributions of SBP types differ in the three environments. The most common SBP types for the field, the NGC 2784 group, and the Virgo cluster are Type II, Types I and II, and Types I and III profiles, respectively. After comparing the sizes of dwarfs in different environments, we suggest that, since the sizes of some dwarfs are changed by environmental effects, SBP types can be transformed, which is why the distributions of SBP types differ in the three environments. We discuss possible environmental mechanisms for the transformation of SBP types. Based on data collected at KMTNet Telescopes and SDSS.
NASA Technical Reports Server (NTRS)
Koontz, Steve; Atwell, William; Reddell, Brandon; Rojdev, Kristina
2010-01-01
Analysis of both satellite and surface neutron monitor data demonstrates that the widely utilized Exponential model of solar particle event (SPE) proton kinetic energy spectra can seriously underestimate SPE proton flux, especially at the highest kinetic energies. The more recently developed Band model produces better agreement with neutron monitor data for ground level events (GLEs) and is believed to be considerably more accurate at high kinetic energies. Here, we report the results of modeling and simulation studies in which the radiation transport code FLUKA (FLUktuierende KAskade) is used to determine the changes in total ionizing dose (TID) and single-event environments (SEE) behind aluminum, polyethylene, carbon, and titanium shielding masses when the assumed form (i.e., Band or Exponential) of the solar particle event (SPE) kinetic energy spectra is changed. The FLUKA simulations are fully three-dimensional, with an isotropic particle flux incident on a concentric spherical-shell shielding mass and detector structure. The effects are reported for both energetic primary protons penetrating the shield mass and secondary particle showers caused by energetic primary protons colliding with shielding-mass nuclei. Our results, in agreement with previous studies, show that use of the Exponential form of the event
NASA Astrophysics Data System (ADS)
Ismail, A.; Hassan, Noor I.
2013-09-01
Cancer is one of the principal causes of death in Malaysia. This study was performed to determine the pattern of the rate of cancer deaths at a public hospital in Malaysia over an 11-year period from 2001 to 2011, to determine the best-fitted model for forecasting the rate of cancer deaths using univariate modeling, and to forecast the rates for the next two years (2012 to 2013). The medical records of patients with cancer who died at this hospital over the 11-year period were reviewed, with a total of 663 cases. The cancers were classified according to the 10th Revision of the International Classification of Diseases (ICD-10). Data collected included the socio-demographic background of patients, such as registration number, age, gender, ethnicity, ward and diagnosis. Data entry and analysis were accomplished using SPSS 19.0 and Minitab 16.0. The five univariate models used were the Naïve with Trend Model, the Average Percent Change Model (APCM), Single Exponential Smoothing, Double Exponential Smoothing and Holt's Method. Over the 11 years, Malay patients had the highest percentage of cancer deaths at this hospital (88.10%) compared to other ethnic groups, with males (51.30%) higher than females. Lung and breast cancer accounted for the most cancer deaths by gender. About 29.60% of the patients who died of cancer were aged 61 years old and above. The best univariate model for forecasting the rate of cancer deaths is the Single Exponential Smoothing technique with an alpha of 0.10. The forecast of the rate of cancer deaths is horizontal, or flat: the forecasted mortality trend remains at 6.84% from January 2012 to December 2013. Government and private sectors and non-governmental organizations need to highlight issues on cancer, especially lung and breast cancers, to the public through campaigns using mass media, electronic media, posters and pamphlets in the attempt to decrease the rate of cancer deaths in Malaysia.
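Single exponential smoothing, the technique selected above, keeps a level estimate l_t = α·y_t + (1 − α)·l_{t−1} and issues a flat forecast equal to the last level, which matches the flat forecast reported. A stdlib sketch; the series is made up, and initializing the level with the first observation is one common convention (an assumption here):

```python
def single_exponential_smoothing(series, alpha):
    """Return the smoothed levels; the h-step-ahead forecast for any h
    is simply the final level (a flat forecast line)."""
    level = series[0]                  # common convention: start at first value
    levels = [level]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
        levels.append(level)
    return levels

rates = [7.1, 6.5, 7.0, 6.8, 6.9, 6.6, 7.2, 6.7]   # hypothetical monthly rates
levels = single_exponential_smoothing(rates, alpha=0.10)
forecast = levels[-1]    # the same value is issued for every future month
```

A small α like 0.10 weights the history heavily, so the forecast changes slowly, consistent with the stable 6.84% projection in the abstract.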
Wang, Lianwen; Li, Jiangong; Fecht, Hans-Jörg
2011-04-20
Following the report of a single-exponential activation behavior behind the super-Arrhenius structural relaxation of glass-forming liquids in our preceding paper, we find that the non-exponentiality in the structural relaxation of glass-forming liquids is straightforwardly determined by the relaxation time and can be calculated from the measured relaxation data. Comparisons between the calculated and measured non-exponentialities for typical glass-forming liquids, from fragile to intermediate, convincingly support the present analysis. Hence the origin of the non-exponentiality and its correlation with liquid fragility become clearer.
NASA Astrophysics Data System (ADS)
Baidillah, Marlin R.; Takei, Masahiro
2017-06-01
A nonlinear normalization model, called the exponential model, has been developed for electrical capacitance tomography (ECT) with external electrodes under gap-permittivity conditions. The exponential normalization model is proposed based on the inherently nonlinear relationship between the mixture permittivity and the measured capacitance due to the gap permittivity of the inner wall. The parameters of the exponential equation are derived by fitting an exponential curve to simulation results, and a scaling function is added to adjust for the experimental system conditions. The exponential normalization model was applied to two-dimensional low- and high-contrast dielectric distribution phantoms in both simulation and experimental studies. The proposed normalization model has been compared with other normalization models, i.e. the Parallel, Series, Maxwell and Böttcher models. Based on the comparison of image reconstruction results, the exponential model reliably predicts the nonlinear normalization of measured capacitance for both low- and high-contrast dielectric distributions.
NASA Astrophysics Data System (ADS)
Behroozmand, Ahmad A.; Auken, Esben; Fiandaca, Gianluca; Christiansen, Anders Vest; Christensen, Niels B.
2012-08-01
We present a new, efficient and accurate forward modelling and inversion scheme for magnetic resonance sounding (MRS) data. MRS, also called surface-nuclear magnetic resonance (surface-NMR), is the only non-invasive geophysical technique that directly detects free water in the subsurface. Based on the physical principle of NMR, protons of the water molecules in the subsurface are excited at a specific frequency, and the superposition of signals from all protons within the excited earth volume is measured to estimate the subsurface water content and other hydrological parameters. In this paper, a new inversion scheme is presented in which the entire data set is used, and multi-exponential behaviour of the NMR signal is approximated by the simple stretched-exponential approach. Compared to the mono-exponential interpretation of the decaying NMR signal, we introduce a single extra parameter, the stretching exponent, which helps describe the porosity in terms of a single relaxation time parameter, and helps to determine correct initial amplitude and relaxation time of the signal. Moreover, compared to a multi-exponential interpretation of the MRS data, the decay behaviour is approximated with considerably fewer parameters. The forward response is calculated in an efficient numerical manner in terms of magnetic field calculation, discretization and integration schemes, which allows fast computation while maintaining accuracy. A piecewise linear transmitter loop is considered for electromagnetic modelling of conductivities in the layered half-space providing electromagnetic modelling of arbitrary loop shapes. The decaying signal is integrated over time windows, called gates, which increases the signal-to-noise ratio, particularly at late times, and the data vector is described with a minimum number of samples, that is, gates. 
The accuracy of the forward response is investigated by comparing an MRS forward response with responses from three other approaches, outlining significant differences between the approaches. Altogether, a full MRS forward response is calculated in about 20 s and scales such that on 10 processors the calculation time is reduced to about 3-4 s. The proposed approach is examined on synthetic data and on a field example, which demonstrate the capability of the scheme. The results of the field example agree well with information from an on-site borehole.
Exponential Modelling for Mutual-Cohering of Subband Radar Data
NASA Astrophysics Data System (ADS)
Siart, U.; Tejero, S.; Detlefsen, J.
2005-05-01
Increasing resolution and accuracy is an important issue in almost any type of radar sensor application. However, both resolution and accuracy are strongly related to the available signal bandwidth and energy. Nowadays, several sensors operating in different frequency bands are often available on a sensor platform. It is an attractive goal to exploit advanced signal modelling and optimization procedures by making proper use of information from different frequency bands at the RF signal level. An important prerequisite for optimal use of signal energy is coherence between all contributing sensors, but coherent multi-sensor platforms are very expensive and thus not generally available. This paper presents an approach for accurately estimating object radar responses using subband measurements at different RF frequencies. An exponential model approach compensates for the lack of mutual coherence between independently operating sensors. Mutual coherence is recovered from the a priori information that both sensors have common scattering centers in view. Minimizing the total squared deviation between the measured data and a full-range exponential signal model leads to more accurate pole angles and pole magnitudes than single-band optimization. The model parameters (range and magnitude of point scatterers) after this full-range optimization are also more accurate than those obtained from a commonly used super-resolution procedure (root-MUSIC) applied to the non-coherent subband data.
Exponential Sum-Fitting of Dwell-Time Distributions without Specifying Starting Parameters
Landowne, David; Yuan, Bin; Magleby, Karl L.
2013-01-01
Fitting dwell-time distributions with sums of exponentials is widely used to characterize histograms of open- and closed-interval durations recorded from single ion channels, as well as for other physical phenomena. However, it can be difficult to identify the contributing exponential components. Here we extend previous methods of exponential sum-fitting to present a maximum-likelihood approach that consistently detects all significant exponentials without the need for user-specified starting parameters. Instead of searching for exponentials, the fitting starts with a very large number of initial exponentials with logarithmically spaced time constants, so that none are missed. Maximum-likelihood fitting then determines the areas of all the initial exponentials keeping the time constants fixed. In an iterative manner, with refitting after each step, the analysis then removes exponentials with negligible area and combines closely spaced adjacent exponentials, until only those exponentials that make significant contributions to the dwell-time distribution remain. There is no limit on the number of significant exponentials and no starting parameters need be specified. We demonstrate fully automated detection for both experimental and simulated data, as well as for classical exponential-sum-fitting problems. PMID:23746510
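The core of the approach (maximum-likelihood estimation of areas over a fixed, log-spaced grid of time constants, followed by removal of negligible components) can be sketched with EM updates for the mixture weights. This is a simplified illustration on synthetic dwell times, not the authors' implementation; the grid spacing, sample size and pruning threshold are arbitrary choices:

```python
import math
import random

def fit_areas_em(dwells, taus, n_iter=80):
    """Maximum-likelihood areas of an exponential mixture whose time constants
    are held fixed on a log-spaced grid (EM updates for the mixture weights)."""
    k = len(taus)
    w = [1.0 / k] * k
    for _ in range(n_iter):
        totals = [0.0] * k
        for t in dwells:
            dens = [w[j] * math.exp(-t / taus[j]) / taus[j] for j in range(k)]
            s = sum(dens)
            for j in range(k):
                totals[j] += dens[j] / s
        w = [tot / len(dwells) for tot in totals]
    return w

random.seed(1)
# synthetic dwell times: 70% from tau = 1, 30% from tau = 10 (arbitrary units)
dwells = [random.expovariate(1.0) if random.random() < 0.7 else random.expovariate(0.1)
          for _ in range(1500)]
taus = [10 ** (e / 4) for e in range(-4, 9)]   # 13 log-spaced time constants, 0.1 to 100
w = fit_areas_em(dwells, taus)
# prune components whose fitted area is negligible
significant = [(tau, area) for tau, area in zip(taus, w) if area > 0.05]
mean_fit = sum(area * tau for tau, area in zip(taus, w))
```

Because the grid is dense enough that no component can be missed, no starting parameters need be guessed; the paper's method additionally merges closely spaced surviving components and refits, which is omitted here.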
Hydrodynamic Aspects of Particle Clogging in Porous Media
MAYS, DAVID C.; HUNT, JAMES R.
2010-01-01
Data from 6 filtration studies, representing 43 experiments, are analyzed with a simplified version of the single-parameter O’Melia and Ali clogging model. The model parameter displays a systematic dependence on fluid velocity, which was an independent variable in each study. A cake filtration model also explains the data from one filtration study by varying a single, velocity-dependent parameter, highlighting that clogging models, because they are empirical, are not unique. Limited experimental data indicate exponential depth dependence of particle accumulation, whose impact on clogging is quantified with an extended O’Melia and Ali model. The resulting two-parameter model successfully describes the increased clogging that is always observed in the top segment of a filter. However, even after accounting for particle penetration, the two-parameter model suggests that a velocity-dependent parameter representing deposit morphology must also be included to explain the data. Most of the experimental data are described by the single-parameter O’Melia and Ali model, and the model parameter is correlated to the collector Peclet number. PMID:15707058
Quantum Walk Schemes for Universal Quantum Computation
NASA Astrophysics Data System (ADS)
Underwood, Michael S.
Random walks are a powerful tool for the efficient implementation of algorithms in classical computation. Their quantum-mechanical analogues, called quantum walks, hold similar promise. Quantum walks provide a model of quantum computation that has recently been shown to be equivalent in power to the standard circuit model. As in the classical case, quantum walks take place on graphs and can undergo discrete or continuous evolution, though quantum evolution is unitary and therefore deterministic until a measurement is made. This thesis considers the usefulness of continuous-time quantum walks to quantum computation from the perspectives of both their fundamental power under various formulations, and their applicability in practical experiments. In one extant scheme, logical gates are effected by scattering processes. The results of an exhaustive search for single-qubit operations in this model are presented. It is shown that the number of distinct operations increases exponentially with the number of vertices in the scattering graph. A catalogue of all graphs on up to nine vertices that implement single-qubit unitaries at a specific set of momenta is included in an appendix. I develop a novel scheme for universal quantum computation called the discontinuous quantum walk, in which a continuous-time quantum walker takes discrete steps of evolution via perfect quantum state transfer through small 'widget' graphs. The discontinuous quantum-walk scheme requires an exponentially sized graph, as do prior discrete and continuous schemes. To eliminate the inefficient vertex resource requirement, a computation scheme based on multiple discontinuous walkers is presented. In this model, n interacting walkers inhabiting a graph with 2n vertices can implement an arbitrary quantum computation on an input of length n, an exponential savings over previous universal quantum walk schemes. 
This is the first quantum walk scheme that allows for the application of quantum error correction. The many-particle quantum walk can be viewed as a single quantum walk undergoing perfect state transfer on a larger weighted graph, obtained via equitable partitioning. I extend this formalism to non-simple graphs. Examples of the application of equitable partitioning to the analysis of quantum walks and many-particle quantum systems are discussed.
NASA Astrophysics Data System (ADS)
Garcia, O. E.; Kube, R.; Theodorsen, A.; LaBombard, B.; Terry, J. L.
2018-05-01
Plasma fluctuations in the scrape-off layer of the Alcator C-Mod tokamak in ohmic and high confinement modes have been analyzed using gas puff imaging data. In all cases investigated, the time series of emission from a single spatially resolved view into the gas puff are dominated by large-amplitude bursts, attributed to blob-like filament structures moving radially outwards and poloidally. There is a remarkable similarity of the fluctuation statistics in ohmic plasmas and in edge localized mode-free and enhanced D-alpha high confinement mode plasmas. Conditionally averaged waveforms have a two-sided exponential shape with comparable temporal scales and asymmetry, while the burst amplitudes and the waiting times between them are exponentially distributed. The probability density functions and the frequency power spectral densities are similar for all these confinement modes. These results provide strong evidence in support of a stochastic model describing the plasma fluctuations in the scrape-off layer as a super-position of uncorrelated exponential pulses. Predictions of this model are in excellent agreement with experimental measurements in both ohmic and high confinement mode plasmas. The stochastic model thus provides a valuable tool for predicting fluctuation-induced plasma-wall interactions in magnetically confined fusion plasmas.
Characteristics of single Ca(2+) channel kinetics in feline hypertrophied ventricular myocytes.
Yang, Xiangjun; Hui, Jie; Jiang, Tingbo; Song, Jianping; Liu, Zhihua; Jiang, Wenping
2002-04-01
To explore the mechanism underlying the prolongation of the action potential and delayed inactivation of the L-type Ca(2+) current (I(Ca, L)) in a feline model of left ventricular systolic hypertension and concomitant hypertrophy, single Ca(2+) channel properties in myocytes isolated from normal and pressure-overloaded cat left ventricles were studied using patch-clamp techniques. Left ventricular pressure overload was induced by partial ligation of the ascending aorta for 4 - 6 weeks. The amplitude of the single Ca(2+) channel current evoked by depolarizing pulses from -40 mV to 0 mV was 1.02 +/- 0.03 pA in normal cells and 1.05 +/- 0.03 pA in hypertrophied cells, and there was no difference in single-channel current-voltage relationships between the groups, since the slope conductance was 26.2 +/- 1.0 pS in both normal and hypertrophied cells. Peak amplitudes of the ensemble-averaged single Ca(2+) channel currents were not different between the two groups of cells. However, the amplitude of this averaged current at the end of the clamp pulse was significantly larger in hypertrophied cells than in normal cells. Open-time histograms revealed that the open-time distribution was fitted by a single exponential function in channels of normal cells and by a double exponential function in channels of hypertrophied cells. The number of long-lasting openings was increased in channels of hypertrophied cells, and therefore the calculated mean open time of the channel was significantly longer compared to normal controls. Kinetic changes in the Ca(2+) channel may underlie both the hypertrophy-associated delayed inactivation of the Ca(2+) current and, in part, the pressure overload-induced action potential lengthening in this cat model of left ventricular systolic hypertension and hypertrophy.
Seo, Nieun; Chung, Yong Eun; Park, Yung Nyun; Kim, Eunju; Hwang, Jinwoo; Kim, Myeong-Jin
2018-07-01
To compare the ability of diffusion-weighted imaging (DWI) parameters acquired from three different models for the diagnosis of hepatic fibrosis (HF). Ninety-five patients underwent DWI using nine b values at 3 T magnetic resonance. The hepatic apparent diffusion coefficient (ADC) from a mono-exponential model, the true diffusion coefficient (Dt), pseudo-diffusion coefficient (Dp) and perfusion fraction (f) from a biexponential model, and the distributed diffusion coefficient (DDC) and intravoxel heterogeneity index (α) from a stretched exponential model were compared with the pathological HF stage. For the stretched exponential model, parameters were also obtained using a dataset of six b values (DDC#, α#). The diagnostic performances of the parameters for HF staging were evaluated with Obuchowski measures and receiver operating characteristics (ROC) analysis. The measurement variability of DWI parameters was evaluated using the coefficient of variation (CoV). Diagnostic accuracy for HF staging was highest for DDC# (Obuchowski measures, 0.770 ± 0.03), and it was significantly higher than that of ADC (0.597 ± 0.05, p < 0.001), Dt (0.575 ± 0.05, p < 0.001) and f (0.669 ± 0.04, p = 0.035). The parameters from stretched exponential DWI and Dp showed higher areas under the ROC curve (AUCs) for determining significant fibrosis (≥F2) and cirrhosis (F = 4) than other parameters. However, Dp showed significantly higher measurement variability (CoV, 74.6%) than DDC# (16.1%, p < 0.001) and α# (15.1%, p < 0.001). Stretched exponential DWI is a promising method for HF staging with good diagnostic performance and fewer b-value acquisitions, allowing shorter acquisition time. • Stretched exponential DWI provides a precise and accurate model for HF staging. • Stretched exponential DWI parameters are more reliable than Dp from the bi-exponential DWI model. • Acquisition of six b values is sufficient to obtain accurate DDC and α.
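The three signal models compared in this study have simple closed forms. A minimal sketch, with illustrative (not patient-derived) parameter values:

```python
import math

def mono_exp(b, s0, adc):
    """Mono-exponential (Gaussian) model: S(b) = S0 * exp(-b * ADC)."""
    return s0 * math.exp(-b * adc)

def bi_exp(b, s0, f, d_t, d_p):
    """Bi-exponential IVIM model with perfusion fraction f, true diffusion
    coefficient Dt and pseudo-diffusion coefficient Dp."""
    return s0 * (f * math.exp(-b * d_p) + (1.0 - f) * math.exp(-b * d_t))

def stretched_exp(b, s0, ddc, alpha):
    """Stretched exponential model: S(b) = S0 * exp(-(b * DDC)**alpha);
    alpha = 1 recovers the mono-exponential model."""
    return s0 * math.exp(-((b * ddc) ** alpha))

b_values = [0, 50, 100, 200, 400, 800]   # s/mm^2; a six-b-value scheme as for DDC#/alpha#
signals = [stretched_exp(b, 1.0, 1.2e-3, 0.8) for b in b_values]  # illustrative values
```

Note that the stretched model needs only two shape parameters (DDC, α) versus three for IVIM, which is one reason its fits tend to be less variable at clinical signal-to-noise ratios.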
A computer program for thermal radiation from gaseous rocket exhaust plumes (GASRAD)
NASA Technical Reports Server (NTRS)
Reardon, J. E.; Lee, Y. C.
1979-01-01
A computer code is presented for predicting incident thermal radiation from defined plume gas properties in either axisymmetric or cylindrical coordinate systems. The radiation model is a statistical band model for an exponential line strength distribution with Lorentz/Doppler line shapes for 5 gaseous species (H2O, CO2, CO, HCl and HF) and an approximate (non-scattering) treatment of carbon particles. The Curtis-Godson approximation is used for inhomogeneous gases, but a subroutine is available for using Young's intuitive derivative method for H2O with a Lorentz line shape and an exponentially-tailed-inverse line strength distribution. The geometry model provides integration over a hemisphere with up to 6 individually oriented identical axisymmetric plumes or a single 3-D plume. Shading surfaces may be used in any of 7 shapes, and a conical limit may be defined for the plume to set individual line-of-sight limits. Intermediate coordinate systems may be specified to simplify input of plumes and shading surfaces.
Beyond the usual mapping functions in GPS, VLBI and Deep Space tracking.
NASA Astrophysics Data System (ADS)
Barriot, Jean-Pierre; Serafini, Jonathan; Sichoix, Lydie
2014-05-01
We describe here a new algorithm to model the water content of the atmosphere (including the ZWD) from GPS slant wet delays relative to a single receiver. We first assume that the water vapor content is mainly governed by a scale height (exponential law), and second that the departures from this decaying exponential can be mapped as a set of low-degree 3D Zernike functions (w.r.t. space) and Tchebyshev polynomials (w.r.t. time). We compare this new algorithm with previous algorithms known as mapping functions in GPS, VLBI and Deep Space tracking, and give an example with data acquired over a one-day time span at the Geodesy Observatory of Tahiti.
Theory and procedures for finding a correct kinetic model for the bacteriorhodopsin photocycle.
Hendler, R W; Shrager, R; Bose, S
2001-04-26
In this paper, we present the implementation and results of new methodology based on linear algebra. The theory behind these methods is covered in detail in the Supporting Information, available electronically (Shrager and Hendler). In brief, the methods presented search through all possible forward sequential submodels in order to find candidates that can be used to construct a complete model for the BR photocycle. The methodology is limited to forward sequential models; if no such models are compatible with the experimental data, none will be found. The procedures apply objective tests and filters to eliminate possibilities that cannot be correct, thus cutting the total number of candidate sequences to be considered. In the current application, which uses six exponentials, the total number of sequences was cut from 1950 to 49. The remaining sequences were further screened using known experimental criteria. The approach led to a solution consisting of a pair of sequences, one with five exponentials, BR*(f) → L(f) → M(f) → N → O → BR, and the other with three exponentials, BR*(s) → L(s) → M(s) → BR. The deduced complete kinetic model for the BR photocycle is thus either a single photocycle branched at the L intermediate or a pair of two parallel photocycles. Reasons for preferring the parallel photocycles are presented. Synthetic data constructed on the basis of the parallel photocycles were indistinguishable from the experimental data in a number of analytical tests that were applied.
Fluorescence quenching by TEMPO: a sub-30 Å single-molecule ruler.
Zhu, Peizhi; Clamme, Jean-Pierre; Deniz, Ashok A
2005-11-01
A series of DNA molecules labeled with 5-carboxytetramethylrhodamine (5-TAMRA) and the small nitroxide radical TEMPO were synthesized and tested to investigate whether the intramolecular quenching efficiency can be used to measure short intramolecular distances in small ensemble and single-molecule experiments. In combination with distance calculations using molecular mechanics modeling, the experimental results from steady-state ensemble fluorescence and fluorescence correlation spectroscopy measurements both show an exponential decrease in the quenching rate constant with the dye-quencher distance in the 10-30 Å range. The results demonstrate that TEMPO-5-TAMRA fluorescence quenching is a promising method to measure short distance changes within single biomolecules.
Biological adaptive control model: a mechanical analogue of multi-factorial bone density adaptation.
Davidson, Peter L; Milburn, Peter D; Wilson, Barry D
2004-03-21
The mechanism of how bone adapts to everyday demands needs to be better understood to gain insight into situations in which the musculoskeletal system is perturbed. This paper offers a novel multi-factorial mathematical model of bone density adaptation which combines previous single-factor models in a single adaptation system as a means of gaining this insight. Unique aspects of the model include provision for interaction between factors and an estimation of the relative contribution of each factor. This interacting system is considered analogous to a Newtonian mechanical system and the governing response equation is derived as a linear version of the adaptation process. The transient solution to sudden environmental change is found to be exponential or oscillatory depending on the balance between cellular activation and deactivation frequencies.
NASA Technical Reports Server (NTRS)
Raj, S. V.; Pharr, G. M.
1989-01-01
Creep tests conducted on NaCl single crystals in the temperature range from 373 to 1023 K show that true steady state creep is obtained only above 873 K when the ratio of the applied stress to the shear modulus is less than or equal to 0.0001. Under other stress and temperature conditions, corresponding to both power law and exponential creep, the creep rate decreases monotonically with increasing strain. The transition from power law to exponential creep is shown to be associated with increases in the dislocation density, the cell boundary width, and the aspect ratio of the subgrains along the primary slip planes. The relation between dislocation structure and creep behavior is also assessed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garcia, O. E., E-mail: odd.erik.garcia@uit.no; Kube, R.; Theodorsen, A.
A stochastic model is presented for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas. The fluctuations in the plasma density are modeled by a super-position of uncorrelated pulses with fixed shape and duration, describing radial motion of blob-like structures. In the case of an exponential pulse shape and exponentially distributed pulse amplitudes, predictions are given for the lowest order moments, probability density function, auto-correlation function, level crossings, and average times for periods spent above and below a given threshold level. Also, the mean squared errors on estimators of sample mean and variance for realizations of the process by finite time series are obtained. These results are discussed in the context of single-point measurements of fluctuations in the scrape-off layer, broad density profiles, and implications for plasma–wall interactions due to the transient transport events in fusion grade plasmas. The results may also have wide applications for modelling fluctuations in other magnetized plasmas such as basic laboratory experiments and ionospheric irregularities.
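The model's basic construction, a superposition of uncorrelated one-sided exponential pulses with exponentially distributed amplitudes arriving as a Poisson process, is easy to simulate. A minimal sketch (all parameter values are arbitrary choices for illustration); for this process the stationary mean is the intermittency parameter tau_d/tau_w times the mean amplitude:

```python
import math
import random

random.seed(42)
tau_d = 1.0       # pulse duration (decay) time
tau_w = 5.0       # mean waiting time between pulses; intermittency gamma = tau_d / tau_w
mean_amp = 2.0    # mean of the exponentially distributed pulse amplitudes
T, dt = 20000.0, 0.1
n = int(T / dt)
signal = [0.0] * n

t = random.expovariate(1.0 / tau_w)   # Poisson arrivals: exponential waiting times
while t < T:
    amp = random.expovariate(1.0 / mean_amp)
    for i in range(int(t / dt), n):    # one-sided exponential pulse shape
        val = amp * math.exp(-(i * dt - t) / tau_d)
        if val < 1e-12:
            break
        signal[i] += val
    t += random.expovariate(1.0 / tau_w)

gamma = tau_d / tau_w
sample_mean = sum(signal) / n          # theory: gamma * mean_amp = 0.4 here
```

Statistics such as the probability density function and level-crossing rates can then be estimated from `signal` and compared with the model's analytical predictions.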
On the Prony series representation of stretched exponential relaxation
NASA Astrophysics Data System (ADS)
Mauro, John C.; Mauro, Yihong Z.
2018-09-01
Stretched exponential relaxation is a ubiquitous feature of homogeneous glasses. The stretched exponential decay function can be derived from the diffusion-trap model, which predicts certain critical values of the fractional stretching exponent, β. In practical implementations of glass relaxation models, it is computationally convenient to represent the stretched exponential function as a Prony series of simple exponentials. Here, we perform a comprehensive mathematical analysis of the Prony series approximation of the stretched exponential relaxation, including optimized coefficients for certain critical values of β. The fitting quality of the Prony series is analyzed as a function of the number of terms in the series. With a sufficient number of terms, the Prony series can accurately capture the time evolution of the stretched exponential function, including its "fat tail" at long times. However, it is unable to capture the divergence of the first-derivative of the stretched exponential function in the limit of zero time. We also present a frequency-domain analysis of the Prony series representation of the stretched exponential function and discuss its physical implications for the modeling of glass relaxation behavior.
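Once the relaxation times are fixed on a logarithmic grid, the Prony-series approximation described here reduces to a linear least-squares problem for the weights. A minimal sketch for the critical exponent β = 3/5, using an 8-term series and a small hand-rolled linear solver; the grids and fit window are arbitrary illustrative choices, not the paper's optimized coefficients:

```python
import math

def kww(t, beta):
    """Stretched exponential (Kohlrausch) function exp(-t**beta), with tau = 1."""
    return math.exp(-(t ** beta))

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            fac = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= fac * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

beta = 3.0 / 5.0                                    # a critical exponent of the diffusion-trap model
taus = [10 ** (k / 2) for k in range(-4, 4)]        # 8 log-spaced Prony relaxation times
ts = [10 ** (-2 + 4 * i / 59) for i in range(60)]   # fit window t in [1e-2, 1e2]

# normal equations for the least-squares Prony weights (unconstrained here)
A = [[sum(math.exp(-t / taus[i]) * math.exp(-t / taus[j]) for t in ts) for j in range(8)]
     for i in range(8)]
rhs = [sum(math.exp(-t / taus[i]) * kww(t, beta) for t in ts) for i in range(8)]
w = solve(A, rhs)

prony = lambda t: sum(wi * math.exp(-t / ti) for wi, ti in zip(w, taus))
max_err = max(abs(prony(t) - kww(t, beta)) for t in ts)
```

As the paper notes, no finite sum of simple exponentials can reproduce the divergent first derivative of the stretched exponential at t → 0, so the approximation is only controlled inside the chosen fit window.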
Rapid Global Fitting of Large Fluorescence Lifetime Imaging Microscopy Datasets
Warren, Sean C.; Margineanu, Anca; Alibhai, Dominic; Kelly, Douglas J.; Talbot, Clifford; Alexandrov, Yuriy; Munro, Ian; Katan, Matilda
2013-01-01
Fluorescence lifetime imaging (FLIM) is widely applied to obtain quantitative information from fluorescence signals, particularly using Förster Resonant Energy Transfer (FRET) measurements to map, for example, protein-protein interactions. Extracting FRET efficiencies or population fractions typically entails fitting data to complex fluorescence decay models but such experiments are frequently photon constrained, particularly for live cell or in vivo imaging, and this leads to unacceptable errors when analysing data on a pixel-wise basis. Lifetimes and population fractions may, however, be more robustly extracted using global analysis to simultaneously fit the fluorescence decay data of all pixels in an image or dataset to a multi-exponential model under the assumption that the lifetime components are invariant across the image (dataset). This approach is often considered to be prohibitively slow and/or computationally expensive but we present here a computationally efficient global analysis algorithm for the analysis of time-correlated single photon counting (TCSPC) or time-gated FLIM data based on variable projection. It makes efficient use of both computer processor and memory resources, requiring less than a minute to analyse time series and multiwell plate datasets with hundreds of FLIM images on standard personal computers. This lifetime analysis takes account of repetitive excitation, including fluorescence photons excited by earlier pulses contributing to the fit, and is able to accommodate time-varying backgrounds and instrument response functions. 
We demonstrate that this global approach allows us to readily fit time-resolved fluorescence data to complex models including a four-exponential model of a FRET system, for which the FRET efficiencies of the two species of a bi-exponential donor are linked, and polarisation-resolved lifetime data, where a fluorescence intensity and bi-exponential anisotropy decay model is applied to the analysis of live cell homo-FRET data. A software package implementing this algorithm, FLIMfit, is available under an open source licence through the Open Microscopy Environment. PMID:23940626
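The variable-projection idea, eliminating the linear amplitude parameters analytically so that the nonlinear search runs only over the shared lifetimes, can be sketched for a bi-exponential decay. This is a toy illustration with a brute-force grid for the outer search (the FLIMfit algorithm is far more sophisticated); all numbers are made up:

```python
import math
import random

def amplitudes_and_rss(ts, ys, tau1, tau2):
    """Inner (linear) step of variable projection: for fixed lifetimes the
    best amplitudes follow from a 2x2 linear least-squares problem."""
    c1 = [math.exp(-t / tau1) for t in ts]
    c2 = [math.exp(-t / tau2) for t in ts]
    s11 = sum(x * x for x in c1)
    s12 = sum(x * y for x, y in zip(c1, c2))
    s22 = sum(y * y for y in c2)
    b1 = sum(x * y for x, y in zip(c1, ys))
    b2 = sum(x * y for x, y in zip(c2, ys))
    det = s11 * s22 - s12 * s12
    a1 = (s22 * b1 - s12 * b2) / det
    a2 = (s11 * b2 - s12 * b1) / det
    rss = sum((y - a1 * x - a2 * z) ** 2 for y, x, z in zip(ys, c1, c2))
    return a1, a2, rss

random.seed(0)
ts = [i * 0.05 for i in range(200)]        # time axis, arbitrary units
ys = [0.7 * math.exp(-t / 0.5) + 0.3 * math.exp(-t / 3.0) + random.gauss(0.0, 0.005)
      for t in ts]

# outer (nonlinear) search over the two shared lifetimes only
grid = [0.2 + 0.1 * i for i in range(40)]
best = min(((t1, t2) for t1 in grid for t2 in grid if t1 < t2),
           key=lambda p: amplitudes_and_rss(ts, ys, p[0], p[1])[2])
```

In global analysis the same two lifetimes are shared across all pixels while each pixel keeps its own amplitudes, so the nonlinear problem stays tiny no matter how large the image is; that separation is what makes the approach fast.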
Exponential order statistic models of software reliability growth
NASA Technical Reports Server (NTRS)
Miller, D. R.
1985-01-01
Failure times of a software reliability growth process are modeled as order statistics of independent, nonidentically distributed exponential random variables. The Jelinski-Moranda, Goel-Okumoto, Littlewood, Musa-Okumoto Logarithmic, and Power Law models are all special cases of Exponential Order Statistic Models, but there are many additional examples as well. Various characterizations, properties and examples of this class of models are developed and presented.
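The defining construction is easy to state in code: draw an independent exponential lifetime for each fault and observe the failures in sorted order. A minimal sketch; the rates are arbitrary illustrative values:

```python
import random

def eos_failure_times(rates, rng):
    """One realization of an exponential order statistic (EOS) model: each fault
    has an independent exponential lifetime; failures are observed in sorted order."""
    return sorted(rng.expovariate(r) for r in rates)

rng = random.Random(7)
# Jelinski-Moranda special case: every fault shares one detection rate
jm = eos_failure_times([0.1] * 20, rng)
# nonidentical rates: ten easy faults and ten that are much harder to expose
mixed = eos_failure_times([0.5] * 10 + [0.01] * 10, rng)
```

Choosing different (possibly random) rate sets recovers the other named models as special cases; for example, rates drawn from a Gamma distribution lead to Littlewood-type behavior.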
NASA Astrophysics Data System (ADS)
Dalkilic, Turkan Erbay; Apaydin, Aysen
2009-11-01
In a regression analysis, it is assumed that the observations come from a single class in a data cluster and that the simple functional relationship between the dependent and independent variables can be expressed using the general model Y = f(X) + ε. However, a data cluster may consist of a combination of observations that have different distributions, derived from different clusters. The problem of estimating a regression model for fuzzy inputs derived from different distributions is known as the 'switching regression model', in which li indicates the class number of each independent variable and p the number of independent variables [J.R. Jang, ANFIS: Adaptive-network-based fuzzy inference system, IEEE Transactions on Systems, Man and Cybernetics 23 (3) (1993) 665-685; M. Michel, Fuzzy clustering and switching regression models using ambiguity and distance rejects, Fuzzy Sets and Systems 122 (2001) 363-399; E.Q. Richard, A new approach to estimating switching regressions, Journal of the American Statistical Association 67 (338) (1972) 306-310]. In this study, adaptive networks have been used to construct a model formed by combining the models obtained. Some methods suggest the class numbers of the independent variables heuristically; here, instead, we aim to define the optimal class number of the independent variables using a suggested validity criterion for fuzzy clustering. For the case in which the independent variables have an exponential distribution, an algorithm is suggested for determining the unknown parameters of the switching regression model and for obtaining the estimated values after an optimal membership function suitable for the exponential distribution has been obtained.
Wong, Oi Lei; Lo, Gladys G.; Chan, Helen H. L.; Wong, Ting Ting; Cheung, Polly S. Y.
2016-01-01
Background The purpose of this study is to statistically assess whether the bi-exponential intravoxel incoherent motion (IVIM) model better characterizes the diffusion-weighted imaging (DWI) signal of malignant breast tumors than the mono-exponential Gaussian diffusion model. Methods 3 T DWI data of 29 malignant breast tumors were retrospectively included. Linear least-square mono-exponential fitting and segmented least-square bi-exponential fitting were used for apparent diffusion coefficient (ADC) and IVIM parameter quantification, respectively. An F-test and the Akaike Information Criterion (AIC) were used to statistically assess the preference for the mono-exponential or bi-exponential model using region-of-interest (ROI)-averaged and voxel-wise analyses. Results For ROI-averaged analysis, 15 tumors were significantly better fitted by a bi-exponential function and 14 tumors exhibited mono-exponential behavior. The calculated ADC, D (true diffusion coefficient) and f (pseudo-diffusion fraction) showed no significant differences between mono-exponential and bi-exponential preferable tumors. Voxel-wise analysis revealed that 27 tumors contained more voxels exhibiting mono-exponential DWI decay while only 2 tumors presented more bi-exponential decay voxels. ADC was consistently and significantly larger than D for both ROI-averaged and voxel-wise analysis. Conclusions Although the presence of an IVIM effect in malignant breast tumors could be suggested, statistical assessment shows that bi-exponential fitting does not necessarily better represent the DWI signal decay in breast cancer under a clinically typical acquisition protocol and signal-to-noise ratio (SNR). Our study indicates the importance of statistically examining the breast cancer DWI signal characteristics in practice. PMID:27709078
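The model-preference test can be illustrated on a synthetic signal: fit a mono-exponential ADC and a segmented bi-exponential IVIM model, then compare AIC values. A minimal sketch with illustrative parameters (not patient data); the segmented procedure and the b-value threshold of 200 s/mm^2 are common conventions assumed here, not taken from the paper:

```python
import math

def lin_fit(xs, ys):
    """Ordinary least-squares line fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def aic(rss, n, k):
    """Akaike Information Criterion for least-squares fits."""
    return n * math.log(rss / n) + 2 * k

b_vals = [0, 50, 100, 200, 400, 600, 800]
f, d_true, d_star = 0.10, 1.0e-3, 10.0e-3      # illustrative IVIM parameters
sig = [f * math.exp(-b * d_star) + (1.0 - f) * math.exp(-b * d_true) for b in b_vals]

# mono-exponential ADC from a log-linear fit over all b values (2 parameters)
s_adc, i_adc = lin_fit(b_vals, [math.log(s) for s in sig])
mono_rss = sum((s - math.exp(i_adc + s_adc * b)) ** 2 for b, s in zip(b_vals, sig))

# segmented bi-exponential fit (4 parameters): D from b >= 200 s/mm^2 where the
# perfusion term is assumed negligible, f from the intercept, then D* by grid search
hi_b = [(b, s) for b, s in zip(b_vals, sig) if b >= 200]
s_hi, i_hi = lin_fit([b for b, _ in hi_b], [math.log(s) for _, s in hi_b])
d_fit, f_fit = -s_hi, 1.0 - math.exp(i_hi)

def bi_rss(ds):
    return sum((s - (f_fit * math.exp(-b * ds) + (1.0 - f_fit) * math.exp(-b * d_fit))) ** 2
               for b, s in zip(b_vals, sig))

d_star_fit = min((k * 1e-3 for k in range(2, 51)), key=bi_rss)
aic_mono = aic(mono_rss, len(b_vals), 2)
aic_bi = aic(bi_rss(d_star_fit), len(b_vals), 4)   # bi-exponential preferred if lower
```

On noisy clinical data the extra parameters are penalized by the AIC, which is how a voxel can come out "mono-exponential preferred" even when some IVIM effect is present.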
Spectrum analysis of radar life signal in the three kinds of theoretical models
NASA Astrophysics Data System (ADS)
Yang, X. F.; Ma, J. F.; Wang, D.
2017-02-01
In a single-frequency continuous-wave radar life-detection system based on the Doppler effect, the theoretical model of the radar life signal is usually expressed by a real function, and this leads to a phenomenon that cannot be confirmed by experiment: when the phase generated by the distance between the measured object and the radar measuring head is an integer multiple of π, the main frequency spectrum of the life signal (respiration and heartbeat) is absent from the radar life signal, whereas if this phase is an odd multiple of π/2, the main spectral components of the breathing and heartbeat frequencies are strongest. In this paper, we take the Doppler effect as the basic theory and use three different mathematical expressions (a real function, a complex exponential function, and a Bessel-function expansion) to establish theoretical models of the radar life signal. Simulation analysis reveals that the Bessel-expansion model solves the problem of the real-function form. Compared with the complex-exponential model, the derived spectral lines are greatly reduced in the Bessel-expansion model, which is more consistent with the actual situation.
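The phase-null phenomenon follows from the Jacobi-Anger (Bessel) expansion: for a real-function model cos(phi0 + x·sin(wt)), the amplitude of the fundamental spectral line is J1(x)·|sin(phi0)|, which vanishes when phi0 is an integer multiple of π and peaks at odd multiples of π/2. A minimal numerical check (the modulation index x is an arbitrary illustrative value):

```python
import cmath
import math

def fundamental_amp(phi0, x, n=1024):
    """|c1| (fundamental Fourier coefficient) of cos(phi0 + x*sin(theta)),
    computed by a direct DFT over one modulation period."""
    c1 = sum(math.cos(phi0 + x * math.sin(2 * math.pi * k / n))
             * cmath.exp(-2j * math.pi * k / n)
             for k in range(n)) / n
    return abs(c1)

x = 0.8                                  # modulation index, illustrative value
null = fundamental_amp(0.0, x)           # phase offset an integer multiple of pi
peak = fundamental_amp(math.pi / 2, x)   # odd multiple of pi/2: |c1| = J1(x)
```

The complex-exponential model exp(j(phi0 + x·sin(wt))) has fundamental amplitude J1(x) independent of phi0, which is why it misses the null; the Bessel expansion makes the phi0 dependence explicit.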
Two Mechanisms Determine Quantum Dot Blinking.
Yuan, Gangcheng; Gómez, Daniel E; Kirkwood, Nicholas; Boldt, Klaus; Mulvaney, Paul
2018-04-24
Many potential applications of quantum dots (QDs) can only be realized once the luminescence from single nanocrystals (NCs) is understood. These applications include the development of quantum logic devices, single-photon sources, long-life LEDs, and single-molecule biolabels. At the single-nanocrystal level, random fluctuations in the QD photoluminescence occur, a phenomenon termed blinking. There are two competing models to explain this blinking: Auger recombination and surface trap induced recombination. Here we use lifetime scaling on core-shell chalcogenide NCs to demonstrate that both types of blinking occur in the same QDs. We prove that Auger-blinking can yield single-exponential on/off times in contrast to earlier work. The surface passivation strategy determines which blinking mechanism dominates. This study summarizes earlier studies on blinking mechanisms and provides some clues that stable single QDs can be engineered for optoelectronic applications.
Nilsen, Vegard; Wyller, John
2016-01-01
Dose-response models are essential to quantitative microbial risk assessment (QMRA), providing a link between levels of human exposure to pathogens and the probability of negative health outcomes. In drinking water studies, the class of semi-mechanistic models known as single-hit models, such as the exponential and the exact beta-Poisson, has seen widespread use. In this work, an attempt is made to carefully develop the general mathematical single-hit framework while explicitly accounting for variation in (1) host susceptibility and (2) pathogen infectivity. This allows a precise interpretation of the so-called single-hit probability and precise identification of a set of statistical independence assumptions that are sufficient to arrive at single-hit models. Further analysis of the model framework is facilitated by formulating the single-hit models compactly using probability generating and moment generating functions. Among the more practically relevant conclusions drawn are: (1) for any dose distribution, variation in host susceptibility always reduces the single-hit risk compared to a constant host susceptibility (assuming equal mean susceptibilities), (2) the model-consistent representation of complete host immunity is formally demonstrated to be a simple scaling of the response, (3) the model-consistent expression for the total risk from repeated exposures deviates (gives lower risk) from the conventional expression used in applications, and (4) a model-consistent expression for the mean per-exposure dose that produces the correct total risk from repeated exposures is developed. © 2016 Society for Risk Analysis.
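Conclusion (1), that variation in host susceptibility lowers the single-hit risk relative to a constant mean susceptibility, follows from the concavity of the exponential dose-response in the susceptibility parameter (Jensen's inequality), and can be checked with a toy two-point susceptibility distribution (hypothetical dose and parameter values):

```python
import math

def single_hit_risk(dose, r):
    """Exponential single-hit dose-response: P(infection) = 1 - exp(-r * dose)."""
    return 1.0 - math.exp(-r * dose)

dose = 100.0     # hypothetical mean dose (organisms)
r_mean = 0.01    # hypothetical mean per-organism infection probability

# Constant susceptibility across hosts:
p_const = single_hit_risk(dose, r_mean)

# Two-point variation in host susceptibility with the same mean r:
p_var = (0.5 * single_hit_risk(dose, 0.5 * r_mean)
         + 0.5 * single_hit_risk(dose, 1.5 * r_mean))

print(p_const, p_var)  # variation lowers the risk, as in conclusion (1)
```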
Psychophysics of time perception and intertemporal choice models
NASA Astrophysics Data System (ADS)
Takahashi, Taiki; Oono, Hidemi; Radford, Mark H. B.
2008-03-01
Intertemporal choice and the psychophysics of time perception have been attracting attention in econophysics and neuroeconomics. Several models have been proposed for intertemporal choice: exponential discounting; general hyperbolic discounting (exponential discounting with logarithmic time perception following the Weber-Fechner law, a q-exponential discount model based on Tsallis's statistics); simple hyperbolic discounting; and Stevens' power-law exponential discounting (exponential discounting with Stevens' power-law time perception). In order to examine the fit of the models to behavioral data, we estimated the parameters and AICc (Akaike Information Criterion with small-sample correction) of the intertemporal choice models by assessing the points of subjective equality (indifference points) at seven delays. Our results show that the order of goodness-of-fit for both group and individual data was [Weber-Fechner discounting (general hyperbola) > Stevens' power-law discounting > simple hyperbolic discounting > exponential discounting], indicating that human time perception in intertemporal choice may follow the Weber-Fechner law. Implications of the results for neuropsychopharmacological treatments of addiction and for the biophysical processing underlying temporal discounting and time perception are discussed.
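The fitting procedure can be sketched as follows. The indifference points below are synthetic (generated hyperbolically and rounded), not the behavioral data, and serve only to show how AICc ranks an exponential against a hyperbolic discount function at seven delays:

```python
import math

# Hypothetical indifference points: subjective value of a 100-unit reward
# at seven delays (days), generated from a hyperbola purely for illustration.
delays = [1, 7, 30, 90, 180, 365, 730]
values = [round(100.0 / (1 + 0.02 * d), 1) for d in delays]

def rss_best(model):
    """Grid-search the discount rate minimising the residual sum of squares."""
    best = None
    for i in range(1, 2001):
        k = i * 1e-4
        rss = sum((model(d, k) - v) ** 2 for d, v in zip(delays, values))
        if best is None or rss < best:
            best = rss
    return best

exponential = lambda d, k: 100.0 * math.exp(-k * d)
hyperbolic = lambda d, k: 100.0 / (1 + k * d)
rss_e = rss_best(exponential)
rss_h = rss_best(hyperbolic)

def aicc(rss, n, p):
    """AIC with small-sample correction, p free parameters, n data points."""
    return n * math.log(rss / n) + 2 * p + 2 * p * (p + 1) / (n - p - 1)

n = len(delays)
aicc_e, aicc_h = aicc(rss_e, n, 1), aicc(rss_h, n, 1)
print(aicc_e, aicc_h)  # lower AICc -> preferred discount model
```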
Fluorescence Quenching by TEMPO: A Sub-30 Å Single-Molecule Ruler
Zhu, Peizhi; Clamme, Jean-Pierre; Deniz, Ashok A.
2005-01-01
A series of DNA molecules labeled with 5-carboxytetramethylrhodamine (5-TAMRA) and the small nitroxide radical TEMPO were synthesized and tested to investigate whether the intramolecular quenching efficiency can be used to measure short intramolecular distances in small ensemble and single-molecule experiments. In combination with distance calculations using molecular mechanics modeling, the experimental results from steady-state ensemble fluorescence and fluorescence correlation spectroscopy measurements both show an exponential decrease in the quenching rate constant with the dye-quencher distance in the 10–30 Å range. The results demonstrate that TEMPO-5-TAMRA fluorescence quenching is a promising method to measure short distance changes within single biomolecules. PMID:16199509
Inouye, David I.; Ravikumar, Pradeep; Dhillon, Inderjit S.
2016-01-01
We develop Square Root Graphical Models (SQR), a novel class of parametric graphical models that provides multivariate generalizations of univariate exponential family distributions. Previous multivariate graphical models (Yang et al., 2015) did not allow positive dependencies for the exponential and Poisson generalizations. However, in many real-world datasets, variables clearly have positive dependencies. For example, the airport delay time in New York—modeled as an exponential distribution—is positively related to the delay time in Boston. With this motivation, we give an example of our model class derived from the univariate exponential distribution that allows for almost arbitrary positive and negative dependencies with only a mild condition on the parameter matrix—a condition akin to the positive definiteness of the Gaussian covariance matrix. Our Poisson generalization allows for both positive and negative dependencies without any constraints on the parameter values. We also develop parameter estimation methods using node-wise regressions with ℓ1 regularization and likelihood approximation methods using sampling. Finally, we demonstrate our exponential generalization on a synthetic dataset and a real-world dataset of airport delay times. PMID:27563373
Coherent energy scale revealed by ultrafast dynamics of UX3 (X = Al, Sn, Ga) single crystals
NASA Astrophysics Data System (ADS)
Nair, Saritha K.; Zhu, J.-X.; Sarrao, J. L.; Taylor, A. J.; Chia, Elbert E. M.
2012-09-01
The temperature dependence of relaxation dynamics of UX3 (X = Al, Ga, Sn) compounds is studied using the time-resolved pump-probe technique in reflectance geometry. For UGa3, our data are consistent with the formation of a spin density wave gap as evidenced from the quasidivergence of the relaxation time τ near the Néel temperature TN. For UAl3 and USn3, the relaxation dynamics shows a change from single-exponential to two-exponential behavior below a particular temperature, suggestive of coherence formation of the 5f electrons with the conduction band electrons. This particular temperature can be attributed to the spin fluctuation temperature Tsf, a measure of the strength of Kondo coherence. Our Tsf is consistent with other data such as resistivity and susceptibility measurements. The temperature dependence of the relaxation amplitude and time of UAl3 and USn3 were also fitted by the Rothwarf-Taylor model. Our results show that ultrafast optical spectroscopy is sensitive to c-f Kondo hybridization in the f-electron systems.
Numerical renormalization group method for entanglement negativity at finite temperature
NASA Astrophysics Data System (ADS)
Shim, Jeongmin; Sim, H.-S.; Lee, Seung-Sup B.
2018-04-01
We develop a numerical method to compute the negativity, an entanglement measure for mixed states, between the impurity and the bath in quantum impurity systems at finite temperature. We construct a thermal density matrix by using the numerical renormalization group (NRG), and evaluate the negativity by implementing the NRG approximation that reduces computational cost exponentially. We apply the method to the single-impurity Kondo model and the single-impurity Anderson model. In the Kondo model, the negativity exhibits a power-law scaling at temperature much lower than the Kondo temperature and a sudden death at high temperature. In the Anderson model, the charge fluctuation of the impurity contributes to the negativity even at zero temperature when the on-site Coulomb repulsion of the impurity is finite, while at low temperature the negativity between the impurity spin and the bath exhibits the same power-law scaling behavior as in the Kondo model.
Zeng, Qiang; Shi, Feina; Zhang, Jianmin; Ling, Chenhan; Dong, Fei; Jiang, Biao
2018-01-01
Purpose: To present a new modified tri-exponential model for diffusion-weighted imaging (DWI) to detect the strictly diffusion-limited compartment, and to compare it with the conventional bi- and tri-exponential models. Methods: Multi-b-value DWI with 17 b-values up to 8,000 s/mm2 was performed on six volunteers. The corrected Akaike information criterion (AICc) and squared prediction errors (SPE) were calculated to compare the three models. Results: The mean f0 values ranged from 11.9% to 18.7% in white matter ROIs and from 1.2% to 2.7% in gray matter ROIs. In all white matter ROIs: the AICcs of the modified tri-exponential model were the lowest (p < 0.05 for five ROIs), indicating that the new model has the best fit among these models; the SPEs of the bi-exponential model were the highest (p < 0.05), suggesting that the bi-exponential model is unable to predict the signal intensity at ultra-high b-values. The mean ADCvery-slow values were extremely low in white matter (1–7 × 10−6 mm2/s) but not in gray matter (251–445 × 10−6 mm2/s), indicating that the conventional tri-exponential model fails to represent a distinct compartment. Conclusions: The strictly diffusion-limited compartment may be an important component of white matter. The new model fits better than the other two models and may provide additional information. PMID:29535599
Power law versus exponential state transition dynamics: application to sleep-wake architecture.
Chu-Shore, Jesse; Westover, M Brandon; Bianchi, Matt T
2010-12-02
Despite the common experience that interrupted sleep has a negative impact on waking function, the features of human sleep-wake architecture that best distinguish sleep continuity from fragmentation remain elusive. In this regard, there is growing interest in characterizing sleep architecture using models of the temporal dynamics of sleep-wake stage transitions. In humans and other mammals, the state transitions defining sleep and wake bout durations have been described with exponential and power law models, respectively. However, sleep-wake stage distributions are often complex, and distinguishing between exponential and power law processes is not always straightforward. Although mono-exponential distributions are distinct from power law distributions, multi-exponential distributions may in fact resemble power laws by appearing linear on a log-log plot. To characterize the parameters that may allow these distributions to mimic one another, we systematically fitted multi-exponential-generated distributions with a power law model, and power law-generated distributions with multi-exponential models. We used the Kolmogorov-Smirnov method to investigate the goodness of fit of the "incorrect" model over a range of parameters. The "zone of mimicry" of parameters that increased the risk of mistakenly accepting power law fitting resembled the empirical time constants obtained from human sleep and wake bout distributions. Recognizing this uncertainty in model distinction affects the interpretation of transition dynamics (self-organizing versus probabilistic) and the generation of predictive models for the clinical classification of normal and pathological sleep architecture.
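The power-law maximum-likelihood fit and Kolmogorov-Smirnov distance used in this kind of analysis can be sketched on a synthetic two-component exponential mixture (illustrative time constants, not the empirical sleep data):

```python
import math
import random

random.seed(7)

# Bout durations from a two-component exponential mixture (illustrative
# time constants of 1 and 20); such mixtures can look power-law-like.
n = 5000
sample = [random.expovariate(1.0) if random.random() < 0.5
          else random.expovariate(1.0 / 20.0) for _ in range(n)]

# Maximum-likelihood power-law exponent above a cutoff xmin:
#   alpha_hat = 1 + n' / sum(ln(x / xmin))
xmin = 1.0
tail = [x for x in sample if x >= xmin]
alpha_hat = 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Kolmogorov-Smirnov distance between the empirical tail and the fitted
# power law, whose CDF is F(x) = 1 - (x/xmin)**(1 - alpha_hat).
tail.sort()
ks = max(abs((i + 1) / len(tail) - (1.0 - (x / xmin) ** (1.0 - alpha_hat)))
         for i, x in enumerate(tail))
print(alpha_hat, ks)  # a small KS distance would mean the "wrong" model mimics well
```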
Hosseinzadeh, M; Ghoreishi, M; Narooei, K
2016-06-01
In this study, the hyperelastic behavior of demineralized and deproteinized bovine cortical femur bone was investigated and appropriate models were developed. Using uniaxial compression test data, the strain energy versus stretch was calculated and appropriate hyperelastic strain energy functions were fitted to the data in order to calculate the material parameters. To obtain the mechanical behavior under other loading conditions, the hyperelastic strain energy equations were investigated for pure shear and equi-biaxial tension loadings. The results showed that the Mooney-Rivlin and Ogden models cannot predict the mechanical response of demineralized and deproteinized bovine cortical femur bone accurately, while the general exponential-exponential and general exponential-power law models show good agreement with the experimental results. To investigate the sensitivity of the hyperelastic models, a variation of 10% in the material parameters was applied, and the results indicated acceptable stability for the general exponential-exponential and general exponential-power law models. Finally, the uniaxial tension and compression of cortical femur bone were studied using the finite element method in a VUMAT user subroutine of the ABAQUS software, and the computed stress-stretch curves showed good agreement with the experimental data. Copyright © 2016 Elsevier Ltd. All rights reserved.
Tachyon inflation in the large-N formalism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barbosa-Cendejas, Nandinii; De-Santiago, Josue; German, Gabriel
2015-11-01
We study tachyon inflation within the large-N formalism, which takes a prescription for the small Hubble flow slow-roll parameter ε1 as a function of the large number of e-folds N. This leads to a classification of models through their behaviour at large N. In addition to the perturbative N class, we introduce the polynomial and exponential classes for the ε1 parameter. With this formalism we reconstruct a large number of potentials used previously in the literature for tachyon inflation. We also obtain new families of potentials from the polynomial class. We characterize the realizations of tachyon inflation by computing the usual cosmological observables up to second order in the Hubble flow slow-roll parameters. This allows us to look at observable differences between tachyon and canonical single-field inflation. The analysis of observables in light of the Planck 2015 data shows the viability of some of these models, mostly for certain realizations of the polynomial and exponential classes.
Theory of Thermal Relaxation of Electrons in Semiconductors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sadasivam, Sridhar; Chan, Maria K. Y.; Darancet, Pierre
2017-09-01
We compute the transient dynamics of phonons in contact with high-energy "hot" charge carriers in 12 polar and non-polar semiconductors, using a first-principles Boltzmann transport framework. For most materials, we find that the decay in electronic temperature departs significantly from a single-exponential model at times ranging from 1 ps to 15 ps after electronic excitation, a phenomenon concomitant with the appearance of non-thermal vibrational modes. We demonstrate that these effects result from the slow thermalization within the phonon subsystem, caused by the large heterogeneity in the timescales of electron-phonon and phonon-phonon interactions in these materials. We propose a generalized two-temperature model accounting for phonon thermalization as a limiting step of electron-phonon thermalization, which captures the full thermal relaxation of hot electrons and holes in semiconductors. A direct consequence of our findings is that, for semiconductors, information about the spectral distribution of electron-phonon and phonon-phonon coupling can be extracted from the multi-exponential behavior of the electronic temperature.
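The standard two-temperature model that this work generalizes can be sketched with an explicit Euler integration (illustrative, material-independent constants, including a constant electronic heat capacity). In this basic form the electron-phonon temperature difference decays single-exponentially, which is exactly the behavior the abstract finds insufficient:

```python
# Minimal explicit-Euler integration of the standard two-temperature model,
#   Ce dTe/dt = -G (Te - Tp),   Cp dTp/dt = +G (Te - Tp),
# with illustrative (not material-specific) constants and Ce held constant.
Ce, Cp, G = 1.0, 10.0, 0.5       # electron/phonon heat capacities, coupling
Te, Tp = 1000.0, 300.0           # hot carriers, cold lattice (K)
dt, steps = 0.001, 20000
for _ in range(steps):
    q = G * (Te - Tp) * dt       # heat flowing from electrons to phonons
    Te -= q / Ce
    Tp += q / Cp
print(Te, Tp)  # both approach the heat-capacity-weighted equilibrium temperature
```

The temperature difference here relaxes with a single rate G·(1/Ce + 1/Cp); capturing the multi-exponential electronic-temperature decay reported above requires resolving the phonon subsystem, as the paper proposes.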
Smoothing Forecasting Methods for Academic Library Circulations: An Evaluation and Recommendation.
ERIC Educational Resources Information Center
Brooks, Terrence A.; Forys, John W., Jr.
1986-01-01
Circulation time-series data from 50 midwest academic libraries were used to test 110 variants of 8 smoothing forecasting methods. Data and methodologies and illustrations of two recommended methods--the single exponential smoothing method and Brown's one-parameter linear exponential smoothing method--are given. Eight references are cited. (EJS)
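The two recommended methods are easy to state in code. A minimal sketch (illustrative series and smoothing constant, not the circulation data from the study):

```python
def ses(series, alpha):
    """Single exponential smoothing; returns the final smoothed level,
    which serves as the one-step-ahead forecast."""
    s = series[0]
    for x in series[1:]:
        s = alpha * x + (1 - alpha) * s
    return s

def brown(series, alpha):
    """Brown's one-parameter linear (double) exponential smoothing;
    returns the one-step-ahead forecast a + b."""
    s1 = s2 = series[0]
    for x in series[1:]:
        s1 = alpha * x + (1 - alpha) * s1
        s2 = alpha * s1 + (1 - alpha) * s2
    a = 2 * s1 - s2
    b = alpha / (1 - alpha) * (s1 - s2)
    return a + b

# A steadily trending series: Brown's method tracks the trend, SES lags it.
series = [100 + 5 * t for t in range(60)]
nxt = 100 + 5 * 60
print(ses(series, 0.3), brown(series, 0.3), nxt)
```

On trend-free data the two methods behave similarly; the trend term is what justifies preferring Brown's method for growing circulation counts.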
NASA Astrophysics Data System (ADS)
Laws, William R.; Ross, J. B. Alexander
1992-04-01
The time-resolved fluorescence properties of a tryptophan residue should be useful for probing protein structure, function, and dynamics. To date, however, the non-single-exponential fluorescence intensity decay kinetics of numerous peptides and proteins having a single tryptophan residue have not been adequately explained. Many possibilities have been considered, including: (1) contributions from the 1La and 1Lb states of indole; (2) excited-state hydrogen exchange; and (3) environmental heterogeneity from χ1 and χ2 rotamers. In addition, it has been suggested that many factors generally contribute to the decay and that a distribution of probabilities may be more appropriate. Two recent results support multiple species due to conformational heterogeneity as the major contributor to the complex kinetics. First, a rotationally constrained tryptophan analogue has fluorescence intensity decay kinetics that can be described by the sum of two exponentials with amplitudes comparable to the relative populations of the two rotational isomers. Second, the multiple exponentials observed for tyrosine-containing model compounds and peptides correlate with the χ1 rotamer populations independently determined by 1H NMR. We now report similar correlations between rotamer populations and fluorescence intensity decay kinetics for a tryptophan analogue of oxytocin. It appears for this compound that either χ2 rotations do not appreciably alter the indole environment, χ2 rotations are rapid enough to average out the observed dependence, or only one of two possible χ2 populations is associated with each χ1 rotamer.
A Stochastic Super-Exponential Growth Model for Population Dynamics
NASA Astrophysics Data System (ADS)
Avila, P.; Rekker, A.
2010-11-01
A super-exponential growth model with environmental noise has been studied analytically. A super-exponential growth rate is a property of dynamical systems exhibiting endogenous nonlinear positive feedback, i.e., of self-reinforcing systems. Environmental noise acts on the growth rate multiplicatively and is assumed to be Gaussian white noise in the Stratonovich interpretation. An analysis of the stochastic super-exponential growth model, with derivations of exact analytical formulae for the conditional probability density and the mean value of the population abundance, is presented. Interpretations and various applications of the results are discussed.
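The deterministic skeleton of super-exponential growth, dN/dt = r·N^p with p > 1, already shows the hallmark of self-reinforcement: the per-capita growth rate itself grows with N, and the solution diverges in finite time. A minimal sketch (illustrative parameters; for p = 2 the closed form is N(t) = N0/(1 − r·N0·t), singular at t* = 1/(r·N0)):

```python
import math

# Deterministic super-exponential growth dN/dt = r * N**p, p > 1.
r, p, N0 = 0.1, 2.0, 1.0
dt = 1e-4
N, t = N0, 0.0
while t < 5.0:                      # integrate up to t = 5 < t* = 10
    N += r * N ** p * dt            # explicit Euler step
    t += dt
exact = N0 / (1 - r * N0 * 5.0)     # closed-form solution for p = 2
print(N, exact)                     # N already exceeds exponential growth e**(r*t)
```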
NASA Astrophysics Data System (ADS)
Stepanov, F. I.
2018-04-01
The mechanical properties of a material which is modeled by an exponential creep kernel characterized by a spectrum of relaxation and retardation times are studied. The research is carried out considering a contact problem for a solid indenter sliding over a viscoelastic half-space. The contact pressure, indentation depth of the indenter, and the deformation component of the friction coefficient are analyzed with respect to the case of half-space material modeled by single relaxation and retardation times.
Xu, Junzhong; Li, Ke; Smith, R. Adam; Waterton, John C.; Zhao, Ping; Ding, Zhaohua; Does, Mark D.; Manning, H. Charles; Gore, John C.
2016-01-01
Background Diffusion-weighted MRI (DWI) signal attenuation is often not mono-exponential (i.e., non-Gaussian diffusion) at stronger diffusion weighting. Several non-Gaussian diffusion models have been developed and may provide new information or higher sensitivity compared with the conventional apparent diffusion coefficient (ADC) method. However, the relative merits of these models for detecting tumor therapeutic response are not fully clear. Methods Conventional ADC and three widely used non-Gaussian models (bi-exponential, stretched exponential, and statistical) were implemented and compared for assessing SW620 human colon cancer xenografts responding to barasertib, an agent known to induce apoptosis via polyploidy. The Bayesian Information Criterion (BIC) was used for model selection among the three non-Gaussian models. Results Tumor volume, histology, conventional ADC, and all three non-Gaussian DWI models showed significant differences between control and treatment groups after four days of treatment. However, only the non-Gaussian models detected significant changes after two days of treatment. For every treatment or control group, over 65.7% of tumor voxels indicated that the bi-exponential model was strongly or very strongly preferred. Conclusion Non-Gaussian DWI model-derived biomarkers are capable of detecting the chemotherapeutic response of tumors earlier than conventional ADC and tumor volume. The bi-exponential model provides better fitting than the statistical and stretched exponential models for the tumor and treatment models used in the current work. PMID:27919785
Exponential model for option prices: Application to the Brazilian market
NASA Astrophysics Data System (ADS)
Ramos, Antônio M. T.; Carvalho, J. A.; Vasconcelos, G. L.
2016-03-01
In this paper we report an empirical analysis of the Ibovespa index of the São Paulo Stock Exchange and its respective option contracts. We compare the empirical data on the Ibovespa options with two option pricing models, namely the standard Black-Scholes model and an empirical model that assumes that the returns are exponentially distributed. It is found that at times near the option expiration date the exponential model performs better than the Black-Scholes model, in the sense that it fits the empirical data better than does the latter model.
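The Black-Scholes benchmark used in the comparison is standard and can be sketched directly; the contract parameters below are illustrative, not Ibovespa data:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(S, K, r, sigma, T):
    """Standard Black-Scholes price of a European call option."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Illustrative contract: spot 100, strike 100, 10% annual rate,
# 20% volatility, 3 months to expiry.
c = black_scholes_call(100.0, 100.0, 0.10, 0.20, 0.25)
print(round(c, 2))
```

The exponential-returns model studied in the paper replaces the Gaussian return distribution underlying this formula; near expiry the fat-tailed exponential assumption is what the authors report fitting the data better.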
Possible stretched exponential parametrization for humidity absorption in polymers.
Hacinliyan, A; Skarlatos, Y; Sahin, G; Atak, K; Aybar, O O
2009-04-01
Polymer thin films have irregular transient current characteristics under constant voltage. In hydrophilic and hydrophobic polymers, the irregularity is also known to depend on the humidity absorbed by the polymer sample. Different stretched exponential models are studied and it is shown that the absorption of humidity as a function of time can be adequately modelled by a class of these stretched exponential absorption models.
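A stretched exponential is convenient to fit because a double logarithm linearizes it. A minimal sketch with synthetic, noiseless data (illustrative τ and β, not the measured transients): from I(t) = I0·exp(−(t/τ)^β) one gets ln(−ln(I/I0)) = β·ln t − β·ln τ, so β and τ follow from an ordinary line fit:

```python
import math

# Synthetic transient decaying as a stretched exponential (illustrative values).
I0, tau, beta = 1.0, 50.0, 0.6
ts = [1, 2, 5, 10, 20, 50, 100, 200]
Is = [I0 * math.exp(-(t / tau) ** beta) for t in ts]

# Double-log linearisation: y = beta*x - beta*ln(tau), with x = ln(t).
xs = [math.log(t) for t in ts]
ys = [math.log(-math.log(i / I0)) for i in Is]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
beta_fit = slope
tau_fit = math.exp(-(my - slope * mx) / slope)  # intercept = -beta*ln(tau)
print(beta_fit, tau_fit)
```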
VO2 Off Transient Kinetics in Extreme Intensity Swimming.
Sousa, Ana; Figueiredo, Pedro; Keskinen, Kari L; Rodríguez, Ferran A; Machado, Leandro; Vilas-Boas, João P; Fernandes, Ricardo J
2011-01-01
Inconsistencies about the dynamic asymmetry between the on- and off-transient responses in oxygen uptake are found in the literature. Therefore, the purpose of this study was to characterize the oxygen uptake off-transient kinetics during a maximal 200-m front crawl effort, as well as to examine the degree to which the on/off symmetry of the oxygen uptake kinetics response was preserved. Eight high-level male swimmers performed a 200-m front crawl at maximal speed, during which oxygen uptake was directly measured through breath-by-breath oximetry (averaged every 5 s). This apparatus was connected to the swimmer by a low-hydrodynamic-resistance respiratory snorkel and valve system. The on- and off-transient phases were symmetrical in shape (mirror image) once they were adequately fitted by single-exponential regression models, and no slow component of the oxygen uptake response developed. Mean (± SD) peak oxygen uptake was 69.0 (± 6.3) mL·kg(-1)·min(-1), significantly correlated with the time constant of the off-transient period (r = 0.76, p < 0.05) but not with any of the other oxygen off-transient kinetic parameters studied. Direct relationships between the time constant of the off-transient period and the mean swimming speed of the 200-m (r = 0.77, p < 0.05), and with the amplitude of the fast component of the effort period (r = 0.72, p < 0.05), were observed. The mean amplitude and time constant of the off-transient period were significantly greater than the respective on-transient values. In conclusion, although an asymmetry between the on- and off-transient kinetic parameters was verified, both the 200-m effort and the respective recovery period were better characterized by a single-exponential regression model.
Key points: (1) The VO2 slow component was not observed in the recovery period of extreme-intensity swimming efforts. (2) The on- and off-transient periods were better fitted by a single exponential function, so the effort and recovery periods of extreme-intensity swimming are symmetrical. (3) The rate of VO2 decline during the recovery period may be due not only to the magnitude of the oxygen debt but also to the VO2peak attained during the effort period.
Human mammary epithelial cells exhibit a bimodal correlated random walk pattern.
Potdar, Alka A; Jeon, Junhwan; Weaver, Alissa M; Quaranta, Vito; Cummings, Peter T
2010-03-10
Organisms, at scales ranging from unicellular to mammals, have been known to exhibit foraging behavior described by random walks whose segments conform to Lévy or exponential distributions. For the first time, we present evidence that single cells (mammary epithelial cells) from multi-cellular organisms (humans) follow a bimodal correlated random walk (BCRW). Cellular tracks of MCF-10A pBabe, neuN, and neuT cells randomly migrating on 2-D plastic substrates, analyzed using bimodal analysis, were found to reveal the BCRW pattern. We find two types of exponentially distributed correlated flights (corresponding to what we refer to as the directional and re-orientation phases), each having its own correlation between move step-lengths within flights. The exponential distribution of flight lengths was confirmed using different analysis methods (logarithmic binning with normalization, survival frequency plots, and maximum likelihood estimation). Because of the presence of a non-uniform turn-angle distribution of move step-lengths within a flight and two different types of flights, we propose that the epithelial random walk is a BCRW comprising two alternating modes with varying degrees of correlation, rather than a simple persistent random walk. A BCRW model, rather than a simple persistent random walk, correctly matches the super-diffusivity in the cell migration paths, as indicated by simulations based on the BCRW model.
Vibronic relaxation dynamics of o-dichlorobenzene in its lowest excited singlet state
NASA Astrophysics Data System (ADS)
Liu, Benkang; Zhao, Haiyan; Lin, Xiang; Li, Xinxin; Gao, Mengmeng; Wang, Li; Wang, Wei
2018-01-01
Vibronic dynamics of o-dichlorobenzene in its lowest excited singlet state, S1, is investigated in real time using the femtosecond pump-probe method combined with time-of-flight mass spectrometry and the photoelectron velocity-mapping technique. Relaxation processes for excitation in the range of 276-252 nm can be fitted by a single-exponential decay model, while for wavelengths shorter than 252 nm a two-exponential decay model must be adopted to simulate the transient profiles. Lifetime constants of the vibrationally excited S1 states change from 651 ± 10 ps for 276 nm excitation to 61 ± 1 ps for 242 nm excitation. Both internal conversion from the S1 to the highly vibrationally excited ground state S0 and intersystem crossing from the S1 to the triplet state are thought to play important roles in the de-excitation processes. Exponential fitting of the de-excitation rates versus excitation energy implies that this de-excitation process starts from the highly vibrationally excited S0 state, which is validated by probing the relaxation following photoexcitation at 281 nm, below the S1 origin. Time-dependent photoelectron kinetic energy distributions have been obtained experimentally. As the excitation wavelength changes from 276 nm to 242 nm, different cationic vibronic levels can be populated, determined by the Franck-Condon factors between the geometrically distorted excited singlet states and the final cationic states.
A new OLED SPICE model for pixel circuit simulation in OLED-on-silicon microdisplay design
NASA Astrophysics Data System (ADS)
Bohua, Zhao; Ran, Huang; Jianhui, Bu; Yinxue, Lü; Yiqi, Wang; Fei, Ma; Guohua, Xie; Zhensong, Zhang; Huan, Du; Jiajun, Luo; Zhengsheng, Han; Yi, Zhao
2012-07-01
A new equivalent circuit model of the organic light-emitting diode (OLED) is proposed. As a single-diode model is able to approximate OLED behavior as well as a multiple-diode model, the new model is built on the former. To ensure good agreement between the experimental and simulated data, the constant resistor is replaced by an exponential resistor in the new model. Compared with the measured data and the results of the other two OLED SPICE models, the simulated I-V characteristics of the new model match the measured data much better. This new model can be directly incorporated into a SPICE circuit simulator and shows good accuracy over the whole operating voltage range.
NASA Astrophysics Data System (ADS)
Khadzhai, G. Ya.; Vovk, R. V.; Vovk, N. R.; Kamchatnaya, S. N.; Dobrovolskiy, O. V.
2018-02-01
We reveal that the temperature dependence of the basal-plane normal-state electrical resistance of optimally doped YBa2Cu3O7-δ single crystals can be approximated with great accuracy within the framework of the model of s-d electron-phonon scattering. This requires taking into account the fluctuation conductivity, whose contribution increases exponentially with decreasing temperature and decreases with increasing oxygen deficiency. Room-temperature annealing improves the sample quality and thus increases the superconducting transition temperature. The temperature of the 2D-3D crossover decreases during annealing.
Gutting, Bradford W; Rukhin, Andrey; Mackie, Ryan S; Marchette, David; Thran, Brandolyn
2015-05-01
The application of the exponential model is extended by the inclusion of new nonhuman primate (NHP), rabbit, and guinea pig dose-lethality data for inhalation anthrax. Because deposition is a critical step in the initiation of inhalation anthrax, inhaled doses may not provide the most accurate cross-species comparison. For this reason, species-specific deposition factors were derived to translate inhaled dose to deposited dose. Four NHP, three rabbit, and two guinea pig data sets were utilized. Results from species-specific pooling analysis suggested all four NHP data sets could be pooled into a single NHP data set; the same was true for the rabbit and guinea pig data sets. The three species-specific pooled data sets could not be combined into a single generic mammalian data set. For inhaled dose, NHPs were the most sensitive species (lowest relative LD50) and rabbits the least. Improved inhaled LD50s proposed for use in risk assessment are 50,600, 102,600, and 70,800 inhaled spores for NHP, rabbit, and guinea pig, respectively. Lung deposition factors were estimated for each species using published deposition data from Bacillus spore exposures, particle deposition studies, and computer modeling. Deposition was estimated at 22%, 9%, and 30% of the inhaled dose for NHP, rabbit, and guinea pig, respectively. When the inhaled dose was adjusted to reflect deposited dose, the rabbit animal model appeared to be the most sensitive, with the guinea pig the least sensitive species. © 2014 Society for Risk Analysis.
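The dose translation can be reproduced from the numbers quoted in the abstract. The exponential dose-response below, with k = ln 2 / LD50, is the standard form of the model class named; the deposited-dose LD50s show the reversal of the sensitivity ranking:

```python
import math

# LD50s and lung-deposition fractions quoted in the abstract.
inhaled_ld50 = {"NHP": 50600, "rabbit": 102600, "guinea pig": 70800}
deposition = {"NHP": 0.22, "rabbit": 0.09, "guinea pig": 0.30}

# Translate inhaled LD50 to deposited LD50.
deposited_ld50 = {sp: inhaled_ld50[sp] * deposition[sp] for sp in inhaled_ld50}

def risk(dose, ld50):
    """Exponential dose-response: P(death) = 1 - exp(-k*dose), k = ln2/LD50."""
    return 1.0 - math.exp(-math.log(2.0) / ld50 * dose)

print(deposited_ld50)
# On an inhaled-dose basis NHPs have the lowest LD50 (most sensitive);
# on a deposited-dose basis the ranking shifts and the rabbit is lowest.
```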
Rasmussen, Erin B; Reilly, William; Buckley, Jessica; Boomhower, Steven R
2012-02-01
Research on free-food intake suggests that cannabinoids are implicated in the regulation of feeding. Few studies, however, have characterized how environmental factors that affect food procurement interact with cannabinoid drugs that reduce food intake. Demand analysis provides a framework to understand how cannabinoid blockers, such as rimonabant, interact with effort in reducing demand for food. The present study examined the effects of rimonabant on demand for sucrose in obese Zucker rats when the effort to obtain food varied, and characterized the data using the exponential ("essential value") model of demand. Twenty-nine male (15 lean, 14 obese) Zucker rats lever-pressed under eight fixed ratio (FR) schedules of sucrose reinforcement, in which the number of lever-presses to gain access to a single sucrose pellet varied between 1 and 300. After behavior stabilized under each FR schedule, acute doses of rimonabant (1-10 mg/kg) were administered prior to some sessions. The number of food reinforcers and responses in each condition was averaged, and the exponential and linear demand equations were fit to the data. These demand equations quantify the value of a reinforcer by its sensitivity to price (FR) increases. Under vehicle conditions, obese Zucker rats consumed more sucrose pellets than lean rats at smaller fixed ratios; however, they were equally sensitive to price increases under both models of demand. Rimonabant dose-dependently reduced reinforcers and responses for lean and obese rats across all FR schedules. Data from the exponential analysis suggest that rimonabant dose-dependently increased elasticity, i.e., reduced the essential value of sucrose, a finding that is consistent with graphical depictions of normalized demand curves. Copyright © 2011 Elsevier Inc. All rights reserved.
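The exponential ("essential value") demand equation referenced above is commonly attributed to Hursh and Silberberg: log10(Q) = log10(Q0) + k(e^(-α·Q0·C) - 1), where Q0 is consumption at zero price, α measures elasticity, and C is the price. A minimal sketch with hypothetical parameter values (not the study's fitted estimates):

```python
import math

# Sketch of the exponential ("essential value") demand equation, commonly
# attributed to Hursh and Silberberg:
#   log10(Q) = log10(Q0) + k * (exp(-alpha * Q0 * price) - 1)
# Q0, alpha, and k below are hypothetical values, not the study's estimates.
def exponential_demand(price, q0, alpha, k=3.0):
    log_q = math.log10(q0) + k * (math.exp(-alpha * q0 * price) - 1.0)
    return 10.0 ** log_q

# A larger alpha (greater elasticity, as reported under rimonabant) produces a
# steeper decline in consumption as the fixed-ratio price rises.
baseline = [exponential_demand(fr, q0=100, alpha=1e-4) for fr in (1, 30, 300)]
drugged = [exponential_demand(fr, q0=100, alpha=5e-4) for fr in (1, 30, 300)]
assert drugged[-1] < baseline[-1]
```

Fitting α per dose condition is what lets the analysis summarize a whole demand curve in a single elasticity parameter.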
Marcotte, Christopher D; Grigoriev, Roman O
2016-09-01
This paper introduces a numerical method for computing the spectrum of adjoint (left) eigenfunctions of spiral wave solutions to reaction-diffusion systems in arbitrary geometries. The method is illustrated by computing over a hundred eigenfunctions associated with an unstable time-periodic single-spiral solution of the Karma model on a square domain. We show that all leading adjoint eigenfunctions are exponentially localized in the vicinity of the spiral tip, although the marginal modes (response functions) demonstrate the strongest localization. We also discuss the implications of the localization for the dynamics and control of unstable spiral waves. In particular, the interaction with no-flux boundaries leads to a drift of spiral waves which can be understood with the help of the response functions.
Sozanski, Krzysztof; Wisniewska, Agnieszka; Kalwarczyk, Tomasz; Sznajder, Anna; Holyst, Robert
2016-01-01
We investigate transport properties of model polyelectrolyte systems at physiological ionic strength (0.154 M). Covering a broad range of flow length scales—from diffusion of molecular probes to macroscopic viscous flow—we establish a single, continuous function describing the scale dependent viscosity of high-salt polyelectrolyte solutions. The data are consistent with the model developed previously for electrically neutral polymers in a good solvent. The presented approach merges the power-law scaling concepts of de Gennes with the idea of exponential length scale dependence of effective viscosity in complex liquids. The result is a simple and applicable description of transport properties of high-salt polyelectrolyte solutions at all length scales, valid for motion of single molecules as well as macroscopic flow of the complex liquid. PMID:27536866
Numerical and Experimental Validation of a New Damage Initiation Criterion
NASA Astrophysics Data System (ADS)
Sadhinoch, M.; Atzema, E. H.; Perdahcioglu, E. S.; van den Boogaard, A. H.
2017-09-01
Most commercial finite element software packages, like Abaqus, have a built-in coupled damage model where a damage evolution needs to be defined in terms of a single fracture energy value for all stress states. The Johnson-Cook criterion has been modified to be Lode parameter dependent, and this Modified Johnson-Cook (MJC) criterion is used as a Damage Initiation Surface (DIS) in combination with the built-in Abaqus ductile damage model. An exponential damage evolution law has been used with a single fracture energy value. Ultimately, the simulated force-displacement curves are compared with experiments to validate the MJC criterion. 7 out of 9 fracture experiments were predicted accurately. The limitations and accuracy of the failure predictions of the newly developed damage initiation criterion are discussed briefly.
Droplet localization in the random XXZ model and its manifestations
NASA Astrophysics Data System (ADS)
Elgart, A.; Klein, A.; Stolz, G.
2018-01-01
We examine many-body localization properties for the eigenstates that lie in the droplet sector of the random-field spin-1/2 XXZ chain. These states satisfy a basic single cluster localization property (SCLP), derived in Elgart et al (2018 J. Funct. Anal. (in press)). This leads to many consequences, including dynamical exponential clustering, non-spreading of information under the time evolution, and a zero velocity Lieb-Robinson bound. Since SCLP is only applicable to the droplet sector, our definitions and proofs do not rely on knowledge of the spectral and dynamical characteristics of the model outside this regime. Rather, to allow for a possible mobility transition, we adapt the notion of restricting the Hamiltonian to an energy window from the single particle setting to the many body context.
NASA Technical Reports Server (NTRS)
Curreri, Peter A.
2010-01-01
Two contemporary issues foretell a shift from our historical Earth-based industrial economy and habitation to a solar system based society. The first is the limits to Earth's carrying capacity, that is, the maximum number of people that the Earth can support before a catastrophic impact to the health of the planet and human species occurs. The simple example of carrying capacity is that of a bacterial colony in a Petri dish with a limited amount of nutrient. The colony experiences exponential population growth until the carrying capacity is reached, after which catastrophic depopulation often results. Estimates of the Earth's carrying capacity vary between 14 and 40 billion people. Although at current population growth rates we may have over a century before we reach Earth's carrying limit, our influence on climate and resources on the planetary scale is becoming scientifically established. The second issue is the exponential growth of knowledge and technological power. The exponential growth of technology interacts with the exponential growth of population in a manner that is unique to a highly intelligent species. Thus, the predicted consequences (world famines, etc.) of the limits to growth have been largely avoided due to technological advances. However, at the mid twentieth century a critical coincidence occurred in these two trends: humanity obtained the technological ability to extinguish life on the planetary scale (by nuclear, chemical, or biological means) and attained the ability to expand human life beyond Earth. This paper examines an optimized O'Neill/Glaser model (O'Neill 1975; Curreri 2007; Detweiler and Curreri 2008) for the economic human population of space. Critical to this model is the utilization of extraterrestrial resources, solar power, and space-based labor. A simple statistical analysis is then performed which predicts the robustness of a single-planet technological society versus that of a multiple-world (independent habitats) society.
Acetylcholine-induced current in perfused rat myoballs
1980-01-01
Spherical "myoballs" were grown under tissue culture conditions from striated muscle of neonatal rat thighs. The myoballs were examined electrophysiologically with a suction pipette which was used to pass current and perfuse internally. A microelectrode was used to record membrane potential. Experiments were performed with approximately symmetrical (intracellular and extracellular) sodium aspartate solutions. The resting potential, acetylcholine (ACh) reversal potential, and sodium channel reversal potential were all approximately 0 mV. ACh-induced currents were examined by use of both voltage jumps and voltage ramps in the presence of iontophoretically applied agonist. The voltage-jump relaxations had a single exponential time-course. The time constant, tau, was exponentially related to membrane potential, increasing e-fold for 81 mV hyperpolarization. The equilibrium current-voltage relationship was also approximately exponential, from -120 to +81 mV, increasing e-fold for 104 mV hyperpolarization. The data are consistent with a first-order gating process in which the channel opening rate constant is slightly voltage dependent. The instantaneous current-voltage relationship was sublinear in the hyperpolarizing direction. Several models are discussed which can account for the nonlinearity. Evidence is presented that the "selectivity filter" for the ACh channel is located near the intracellular membrane surface. PMID:7381423
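The reported voltage dependence of the relaxation time constant (an e-fold increase per 81 mV of hyperpolarization) amounts to a single exponential in membrane potential. A sketch, with a hypothetical reference value tau0 that is not taken from the paper:

```python
import math

# Sketch of the reported voltage dependence of the ACh relaxation time
# constant: tau grows e-fold for every 81 mV of hyperpolarization. The
# reference value tau0 at 0 mV is hypothetical, not taken from the paper.
def tau_of_v(v_mv, tau0_ms=10.0, efold_mv=81.0):
    """tau(V) = tau0 * exp(-V / 81); hyperpolarization (V < 0) lengthens tau."""
    return tau0_ms * math.exp(-v_mv / efold_mv)

print(round(tau_of_v(-81.0) / tau_of_v(0.0), 3))  # → 2.718 (one e-fold)
```

The same functional form with a 104 mV e-fold constant describes the equilibrium current-voltage relationship quoted above.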
Phenomenology of stochastic exponential growth
NASA Astrophysics Data System (ADS)
Pirjol, Dan; Jafarpour, Farshid; Iyer-Biswas, Srividya
2017-06-01
Stochastic exponential growth is observed in a variety of contexts, including molecular autocatalysis, nuclear fission, population growth, inflation of the universe, viral social media posts, and financial markets. Yet literature on modeling the phenomenology of these stochastic dynamics has predominantly focused on one model, geometric Brownian motion (GBM), which can be described as the solution of a Langevin equation with linear drift and linear multiplicative noise. Using recent experimental results on stochastic exponential growth of individual bacterial cell sizes, we motivate the need for a more general class of phenomenological models of stochastic exponential growth, which are consistent with the observation that the mean-rescaled distributions are approximately stationary at long times. We show that this behavior is not consistent with GBM, instead it is consistent with power-law multiplicative noise with positive fractional powers. Therefore, we consider this general class of phenomenological models for stochastic exponential growth, provide analytical solutions, and identify the important dimensionless combination of model parameters, which determines the shape of the mean-rescaled distribution. We also provide a prescription for robustly inferring model parameters from experimentally observed stochastic growth trajectories.
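As a baseline for the comparison above, GBM (linear drift, linear multiplicative noise) can be simulated with a simple Euler-Maruyama scheme. All parameter values are illustrative, not fitted to any data:

```python
import math
import random

# Euler-Maruyama sketch of geometric Brownian motion,
#   dX = mu*X dt + sigma*X dW,
# the baseline model of stochastic exponential growth discussed above.
def gbm_path(x0=1.0, mu=0.05, sigma=0.2, dt=1e-3, steps=1000, seed=0):
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        x += mu * x * dt + sigma * x * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return x

# The mean grows exponentially, E[X(t)] = x0 * exp(mu * t), while individual
# paths fluctuate around it with log-normal statistics.
final_values = [gbm_path(seed=s) for s in range(200)]
mean_final = sum(final_values) / len(final_values)
```

Replacing the linear noise term `sigma * x` with a power law `sigma * x**nu` for some fractional `nu` would give a member of the more general class of models the paper analyzes; that substitution is our illustration of the idea, not the paper's code.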
Chowell, Gerardo; Viboud, Cécile
2016-10-01
The increasing use of mathematical models for epidemic forecasting has highlighted the importance of designing models that capture the baseline transmission characteristics in order to generate reliable epidemic forecasts. Improved models for epidemic forecasting could be achieved by identifying signature features of epidemic growth, which could inform the design of models of disease spread and reveal important characteristics of the transmission process. In particular, it is often taken for granted that the early growth phase of different growth processes in nature follow early exponential growth dynamics. In the context of infectious disease spread, this assumption is often convenient to describe a transmission process with mass action kinetics using differential equations and generate analytic expressions and estimates of the reproduction number. In this article, we carry out a simulation study to illustrate the impact of incorrectly assuming an exponential-growth model to characterize the early phase (e.g., 3-5 disease generation intervals) of an infectious disease outbreak that follows near-exponential growth dynamics. Specifically, we assess the impact on: 1) goodness of fit, 2) bias on the growth parameter, and 3) the impact on short-term epidemic forecasts. Designing transmission models and statistical approaches that more flexibly capture the profile of epidemic growth could lead to enhanced model fit, improved estimates of key transmission parameters, and more realistic epidemic forecasts.
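The contrast the simulation study examines, exponential versus near-exponential early growth, is often formalized with a generalized-growth model C'(t) = r·C(t)^p, where p = 1 recovers exponential growth and p < 1 gives sub-exponential growth. A minimal sketch with illustrative parameters (this specific formulation is our assumption, not quoted from the article):

```python
# Sketch of the contrast described above: the generalized-growth model
# C'(t) = r * C(t)**p reduces to exponential growth at p = 1 and gives
# sub-exponential ("near-exponential") growth for p < 1. Parameters are
# illustrative; this formulation is our assumption, not quoted from the paper.
def generalized_growth(c0=1.0, r=0.5, p=0.8, dt=0.01, t_max=15.0):
    """Euler integration of C' = r * C**p; returns the cumulative-incidence curve."""
    c, t, curve = c0, 0.0, []
    while t <= t_max:
        curve.append(c)
        c += r * c ** p * dt
        t += dt
    return curve

sub = generalized_growth(p=0.8)   # sub-exponential
exp_ = generalized_growth(p=1.0)  # exponential
# Fitting an exponential model to the early phase of the sub-exponential curve
# would overestimate later incidence, biasing short-term forecasts upward.
assert sub[-1] < exp_[-1]
```

The widening gap between the two curves is exactly the forecast bias the study quantifies when exponential growth is incorrectly assumed.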
Gabriel, Jan; Petrov, Oleg V; Kim, Youngsik; Martin, Steve W; Vogel, Michael
2015-09-01
We use ⁷Li NMR to study the ionic jump motion in ternary 0.5Li2S+0.5[(1-x)GeS2+xGeO2] glassy lithium ion conductors. Exploring the "mixed glass former effect" in this system led to the assumption of a homogeneous and random variation of diffusion barriers in this system. We exploit the fact that, by combining traditional line-shape analysis with novel field-cycling relaxometry, it is possible to measure the spectral density of the ionic jump motion in broad frequency and temperature ranges and, thus, to determine the distribution of activation energies. Two models are employed to parameterize the ⁷Li NMR data, namely, the multi-exponential autocorrelation function model and the power-law waiting times model. Careful evaluation of both of these models indicates a broadly inhomogeneous energy landscape for both the single (x=0.0) and the mixed (x=0.1) network former glasses. The multi-exponential autocorrelation function model can be well described by a Gaussian distribution of activation barriers. Applicability of the methods used and their sensitivity to microscopic details of ionic motion are discussed. Copyright © 2015 Elsevier Inc. All rights reserved.
Science and Facebook: The same popularity law!
Néda, Zoltán; Varga, Levente; Biró, Tamás S
2017-01-01
The distributions of scientific citations for publications selected by different rules (author, topic, institution, country, journal, etc.) collapse onto a single curve if one plots the citations relative to their mean value. We find that the distribution of "shares" for Facebook posts rescales in the same manner onto the very same curve as scientific citations. This finding suggests that citations are subject to the same growth mechanism as Facebook popularity measures, being influenced by a statistically similar social environment and selection mechanism. In a simple master-equation approach, the exponential growth of the number of publications combined with a preferential selection mechanism leads to a Tsallis-Pareto distribution, offering an excellent description of the observed statistics. Based on our model and on data derived from PubMed, we predict that, following the present trend, the average number of citations per scientific publication relaxes exponentially to about 4.
Exponentially damped Lévy flights, multiscaling, and exchange rates
NASA Astrophysics Data System (ADS)
Matsushita, Raul; Gleria, Iram; Figueiredo, Annibal; Rathie, Pushpa; Da Silva, Sergio
2004-02-01
We employ our previously suggested exponentially damped Lévy flight (Physica A 326 (2003) 544) to study the multiscaling properties of 30 daily exchange rates against the US dollar together with a fictitious euro-dollar rate (Physica A 286 (2000) 353). Though multiscaling is not theoretically expected in either stable Lévy processes or abruptly truncated Lévy flights, it is characteristic of smoothly truncated Lévy flights (Phys. Lett. A 266 (2000) 282; Eur. Phys. J. B 4 (1998) 143). We have already defined a class of "quasi-stable" processes in connection with the finding that single scaling is pervasive among the dollar prices of foreign currencies (Physica A 323 (2003) 601). Here we show that the same goes as far as multiscaling is concerned. Our novel findings incidentally reinforce the case for the real-world relevance of Lévy flights for modeling financial prices.
Quantum ensembles of quantum classifiers.
Schuld, Maria; Petruccione, Francesco
2018-02-09
Quantum machine learning witnesses an increasing amount of quantum algorithms for data-driven decision making, a problem with potential applications ranging from automated image recognition to medical diagnosis. Many of those algorithms are implementations of quantum classifiers, or models for the classification of data inputs with a quantum computer. Following the success of collective decision making with ensembles in classical machine learning, this paper introduces the concept of quantum ensembles of quantum classifiers. Creating the ensemble corresponds to a state preparation routine, after which the quantum classifiers are evaluated in parallel and their combined decision is accessed by a single-qubit measurement. This framework naturally allows for exponentially large ensembles in which - similar to Bayesian learning - the individual classifiers do not have to be trained. As an example, we analyse an exponentially large quantum ensemble in which each classifier is weighed according to its performance in classifying the training data, leading to new results for quantum as well as classical machine learning.
Self Organized Criticality as a new paradigm of sleep regulation
NASA Astrophysics Data System (ADS)
Ivanov, Plamen Ch.; Bartsch, Ronny P.
2012-02-01
Humans and animals often exhibit brief awakenings from sleep (arousals), which are traditionally viewed as random disruptions of sleep caused by external stimuli or pathologic perturbations. However, our recent findings show that arousals exhibit complex temporal organization and scale-invariant behavior, characterized by a power-law probability distribution for their durations, while sleep stage durations exhibit exponential behavior. The co-existence of both scale-invariant and exponential processes generated by a single regulatory mechanism has not been observed in physiological systems until now. Such co-existence resembles the dynamical features of non-equilibrium systems exhibiting self-organized criticality (SOC). Our empirical analysis and modeling approaches based on modern concepts from statistical physics indicate that arousals are an integral part of sleep regulation and may be necessary to maintain and regulate healthy sleep by releasing accumulated excitations in the regulatory neuronal networks, following a SOC-type temporal organization.
An Optimization of Inventory Demand Forecasting in University Healthcare Centre
NASA Astrophysics Data System (ADS)
Bon, A. T.; Ng, T. K.
2017-01-01
The healthcare industry has become an important field nowadays, as it concerns people's health. Accordingly, forecasting demand for health services is an important step in managerial decision making for all healthcare organizations. Hence, a case study was conducted at the University Health Centre to collect historical demand data of Panadol 650 mg for 68 months, from January 2009 until August 2014. The aim of the research is to optimize the overall inventory demand through forecasting techniques. A quantitative (time series) forecasting model was used in the case study to forecast future data as a function of past data. The data pattern needs to be identified before applying the forecasting techniques; the data exhibit a trend pattern, so ten forecasting techniques are applied using Risk Simulator software. Lastly, the best forecasting technique is identified as the one with the least forecasting error. The ten forecasting techniques include single moving average, single exponential smoothing, double moving average, double exponential smoothing, regression, Holt-Winters additive, seasonal additive, Holt-Winters multiplicative, seasonal multiplicative, and Autoregressive Integrated Moving Average (ARIMA). According to the forecasting accuracy measurement, the best forecasting technique is regression analysis.
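Of the ten techniques listed, single exponential smoothing is the simplest to show in code. A minimal sketch (the demand series and smoothing constant below are made up for illustration, not the study's data):

```python
# Minimal sketch of single exponential smoothing, one of the ten techniques
# compared in the study; the demand series and alpha below are made up.
def single_exponential_smoothing(series, alpha=0.3):
    """s_t = alpha * y_t + (1 - alpha) * s_{t-1}; returns one-step-ahead forecast."""
    s = series[0]
    for y in series[1:]:
        s = alpha * y + (1 - alpha) * s
    return s

monthly_demand = [120, 135, 128, 140, 150, 145]
print(round(single_exponential_smoothing(monthly_demand), 2))  # → 138.64
```

Double exponential smoothing adds a second recursion for a trend component, which is why it is usually better suited than the single form to trending data like the series studied here.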
Bajzer, Željko; Gibbons, Simon J.; Coleman, Heidi D.; Linden, David R.
2015-01-01
Noninvasive breath tests for gastric emptying are important techniques for understanding the changes in gastric motility that occur in disease or in response to drugs. Mice are often used as an animal model; however, the gamma variate model currently used for data analysis does not always fit the data appropriately. The aim of this study was to determine appropriate mathematical models to better fit mouse gastric emptying data including when two peaks are present in the gastric emptying curve. We fitted 175 gastric emptying data sets with two standard models (gamma variate and power exponential), with a gamma variate model that includes stretched exponential and with a proposed two-component model. The appropriateness of the fit was assessed by the Akaike Information Criterion. We found that extension of the gamma variate model to include a stretched exponential improves the fit, which allows for a better estimation of T1/2 and Tlag. When two distinct peaks in gastric emptying are present, a two-component model is required for the most appropriate fit. We conclude that use of a stretched exponential gamma variate model and when appropriate a two-component model will result in a better estimate of physiologically relevant parameters when analyzing mouse gastric emptying data. PMID:26045615
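The model-comparison step described above (a stretched-exponential extension of the gamma variate, scored by the Akaike Information Criterion) can be sketched as follows. The functional forms are standard gamma-variate expressions; the authors' exact parameterization may differ, and the least-squares fitting itself is omitted:

```python
import math

# Sketch of the model comparison described above. The gamma-variate form and
# its stretched-exponential extension are standard expressions; the authors'
# exact parameterization may differ, and the least-squares fit is omitted.
def gamma_variate(t, m, b, tau):
    """Classic gamma-variate curve: m * t**b * exp(-t / tau)."""
    return m * t ** b * math.exp(-t / tau)

def stretched_gamma_variate(t, m, b, tau, beta):
    """Gamma variate with a stretched-exponential tail, exp(-(t/tau)**beta)."""
    return m * t ** b * math.exp(-((t / tau) ** beta))

def aic(residual_ss, n_points, n_params):
    """Akaike Information Criterion for a least-squares fit."""
    return n_points * math.log(residual_ss / n_points) + 2 * n_params
```

Because beta enters as an extra free parameter, the stretched model is preferred only when its reduction in the residual sum of squares outweighs the 2-point AIC penalty per parameter.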
Constraining f(T) teleparallel gravity by big bang nucleosynthesis: f(T) cosmology and BBN.
Capozziello, S; Lambiase, G; Saridakis, E N
2017-01-01
We use Big Bang Nucleosynthesis (BBN) observational data on the primordial abundance of light elements to constrain f(T) gravity. The three most studied viable f(T) models, namely the power law, the exponential, and the square-root exponential, are considered, and the BBN bounds are adopted in order to extract constraints on their free parameters. For the power-law model, we find that the constraints are in agreement with those obtained using late-time cosmological data. For the exponential and the square-root exponential models, we show that for reliable regions of parameter space they always satisfy the BBN bounds. We conclude that viable f(T) models can successfully satisfy the BBN constraints.
Non-equilibrium STLS approach to transport properties of single impurity Anderson model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rezai, Raheleh, E-mail: R_Rezai@sbu.ac.ir; Ebrahimi, Farshad, E-mail: Ebrahimi@sbu.ac.ir
In this work, using the non-equilibrium Keldysh formalism, we study the effects of the electron–electron interaction and the electron-spin correlation on the non-equilibrium Kondo effect and the transport properties of the symmetric single impurity Anderson model (SIAM) at zero temperature by generalizing the self-consistent method of Singwi, Tosi, Land, and Sjolander (STLS) for a single-band tight-binding model with Hubbard-type interaction to out-of-equilibrium steady states. We first determine, in a self-consistent manner, the non-equilibrium spin correlation function, the effective Hubbard interaction, and the double occupancy at the impurity site. Then, using the non-equilibrium STLS spin polarization function in the non-equilibrium formalism of the iterative perturbation theory (IPT) of Yosida and Yamada, and Horvatic and Zlatic, we compute the spectral density, the current–voltage characteristics, and the differential conductance as functions of the applied bias and the strength of the on-site Hubbard interaction. We compare our spectral densities at zero bias with the results of the numerical renormalization group (NRG) and depict the effects of the electron–electron interaction and electron-spin correlation at the impurity site on the aforementioned properties by comparing our numerical results with the order-U² IPT. Finally, we show that the obtained numerical results on the differential conductance have a quadratic universal scaling behavior and that the resulting Kondo temperature shows an exponential behavior. Highlights: • We introduce for the first time the non-equilibrium method of STLS for Hubbard-type models. • We determine the transport properties of SIAM using the non-equilibrium STLS method. • We compare our results with order-U² IPT and NRG. • We show that non-equilibrium STLS, contrary to the GW and self-consistent RPA, produces the two Hubbard peaks in the DOS. • We show that the method keeps the universal scaling behavior and the correct exponential behavior of the Kondo temperature.
Wadehn, Federico; Carnal, David; Loeliger, Hans-Andrea
2015-08-01
Heart rate variability is one of the key parameters for assessing the health status of a subject's cardiovascular system. This paper presents a local model fitting algorithm used for finding single heart beats in photoplethysmogram recordings. The local fit of exponentially decaying cosines of frequencies within the physiological range is used to detect the presence of a heart beat. Using 42 subjects from the CapnoBase database, the average heart rate error was 0.16 BPM and the standard deviation of the absolute estimation error was 0.24 BPM.
The matrix exponential in transient structural analysis
NASA Technical Reports Server (NTRS)
Minnetyan, Levon
1987-01-01
The primary usefulness of the presented theory is the ability to represent the effects of high-frequency linear response accurately without requiring very small time steps in the analysis of dynamic response. The matrix exponential contains a series approximation to the dynamic model. However, unlike the usual analysis procedure, which truncates the high-frequency response, the approximation in the exponential matrix solution is in the time domain. Truncating the series solution for the matrix exponential makes the solution inaccurate after a certain time; up to that time, however, the solution is extremely accurate, including all high-frequency effects. By taking finite time increments, the exponential matrix solution can compute the response very accurately. Use of the exponential matrix in structural dynamics is demonstrated by simulating the free vibration response of multi-degree-of-freedom models of cantilever beams.
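The time-stepping idea can be sketched for a single-degree-of-freedom oscillator: for x' = Ax, the exact propagator over a step h is expm(A·h), applied repeatedly. The truncated-series expm below is only a toy (production codes use scaling-and-squaring or Padé approximants), and the oscillator model is our illustration, not the report's example:

```python
import math

# Toy matrix exponential via truncated Taylor series: expm(M) ~ sum_k M^k / k!.
# Accurate only for modest ||M||; production codes use scaling-and-squaring.
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def expm(M, terms=20):
    n = len(M)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[v / k for v in row] for row in mat_mul(term, M)]  # M^k / k!
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

# Undamped oscillator x'' = -w^2 x, written as a first-order system in [x, v].
w, h = 2.0, 0.1
A = [[0.0, 1.0], [-w * w, 0.0]]
P = expm([[a * h for a in row] for row in A])  # exact propagator over one step h

x, v = 1.0, 0.0
for _ in range(31):  # march 31 steps of size h (t = 3.1)
    x, v = P[0][0] * x + P[0][1] * v, P[1][0] * x + P[1][1] * v
# x now matches the analytic solution cos(w * t) to near machine precision.
```

Each step costs one matrix-vector product regardless of how high-frequency the system is, which is the accuracy-per-step advantage the abstract describes.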
Modeling of magnitude distributions by the generalized truncated exponential distribution
NASA Astrophysics Data System (ADS)
Raschke, Mathias
2015-01-01
The probability distribution of the magnitude can be modeled by an exponential distribution according to the Gutenberg-Richter relation. Two alternatives are the truncated exponential distribution (TED) and the cutoff exponential distribution (CED). The TED is frequently used in seismic hazard analysis although it has a weak point: when two TEDs with equal parameters except the upper bound magnitude are mixed, then the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters except the upper bound magnitude. This weakness is a principal problem as seismic regions are constructed scientific objects and not natural units. We overcome it by the generalization of the abovementioned exponential distributions: the generalized truncated exponential distribution (GTED). Therein, identical exponential distributions are mixed by the probability distribution of the correct cutoff points. This distribution model is flexible in the vicinity of the upper bound magnitude and is equal to the exponential distribution for smaller magnitudes. Additionally, the exponential distributions TED and CED are special cases of the GTED. We discuss the possible ways of estimating its parameters and introduce the normalized spacing for this purpose. Furthermore, we present methods for geographic aggregation and differentiation of the GTED and demonstrate the potential and universality of our simple approach by applying it to empirical data. The considerable improvement by the GTED in contrast to the TED is indicated by a large difference between the corresponding values of the Akaike information criterion.
Prony series spectra of structural relaxation in N-BK7 for finite element modeling.
Koontz, Erick; Blouin, Vincent; Wachtel, Peter; Musgraves, J David; Richardson, Kathleen
2012-12-20
Structural relaxation behavior of N-BK7 glass was characterized at temperatures 20 °C above and below T12 for this glass, using a thermomechanical analyzer (TMA). T12 is a characteristic temperature corresponding to a viscosity of 10^12 Pa·s. The glass was subjected to quick temperature down-jumps preceded and followed by long isothermal holds. The exponential-like decay of the sample height was recorded and fitted using a unique Prony series method. The result of this method was a plot of the fit parameters revealing the presence of four distinct peaks, or distributions of relaxation times. The number of relaxation times decreased as the final test temperature was increased. The relaxation times did not shift significantly with changing temperature; however, the Prony weight terms varied essentially linearly with temperature. It was also found that the structural relaxation behavior of the glass trended toward single exponential behavior at temperatures above the testing range. The result of the analysis was a temperature-dependent Prony series model that can be used in finite element modeling of glass behavior in processes such as precision glass molding (PGM).
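A Prony series is linear in its weight terms once the relaxation times are fixed, so the weights can be recovered by ordinary least squares. A minimal sketch (the times and weights below are illustrative, not N-BK7 values):

```python
import numpy as np

# Prony series y(t) = sum_i w_i * exp(-t / tau_i). With the tau_i fixed,
# fitting the weights w_i to a recorded decay is a linear least-squares problem.
t = np.linspace(0.0, 10.0, 400)
taus = np.array([0.3, 1.0, 3.0])
w_true = np.array([0.5, 0.3, 0.2])
A = np.exp(-t[:, None] / taus)       # design matrix: one column per relaxation time
y = A @ w_true                       # synthetic "sample height" decay
w_fit, *_ = np.linalg.lstsq(A, y, rcond=None)
```

Repeating such a fit at each test temperature and plotting the weights against temperature is the kind of temperature-dependent Prony model the abstract describes.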
Charge relaxation and dynamics in organic semiconductors
NASA Astrophysics Data System (ADS)
Kwok, H. L.
2006-08-01
Charge relaxation in dispersive materials is often described in terms of the stretched exponential function (Kohlrausch law). The process can be explained using a "hopping" model which, in principle, also applies to charge transport such as current conduction. This work analyzed reported transient photoconductivity data on functionalized pentacene single crystals using a geometric hopping model developed by B. Sturman et al., and extracted values (or ranges of values) of the materials parameters relevant to charge relaxation as well as charge transport. Using the correlated disorder model (CDM), we estimated values of the carrier mobility for the pentacene samples. From these results, we observed the following: i) the transport site density appeared to be of the same order of magnitude as the carrier density; ii) it was possible to extract lower-bound values on the materials parameters linked to the transport process; and iii) by matching the simulated charge decay to the transient photoconductivity data, we were able to refine estimates of the materials parameters. The data also allowed us to simulate the stretched exponential decay. Our observations suggested that the stretching index and the carrier mobility were related. Physically, such interdependence would allow one to demarcate between localized molecular interactions and distant Coulomb interactions.
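The Kohlrausch stretched exponential and its mean relaxation time both have simple closed forms; a small sketch (not tied to the pentacene data):

```python
import math

# Kohlrausch (stretched exponential) relaxation: phi(t) = exp(-(t/tau)**beta),
# with stretching index 0 < beta <= 1. Integrating phi over t gives the mean
# relaxation time <tau> = (tau/beta) * Gamma(1/beta).
def kww(t, tau, beta):
    return math.exp(-((t / tau) ** beta))

def mean_relaxation_time(tau, beta):
    return (tau / beta) * math.gamma(1.0 / beta)
```

For beta = 1 this reduces to a simple exponential with mean relaxation time tau; smaller beta stretches the decay and inflates the mean.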
Schomer, Paul; Mestre, Vincent; Fidell, Sanford; Berry, Bernard; Gjestland, Truls; Vallet, Michel; Reid, Timothy
2012-04-01
Fidell et al. [(2011), J. Acoust. Soc. Am. 130(2), 791-806] have shown (1) that the rate of growth of annoyance with noise exposure reported in attitudinal surveys of the annoyance of aircraft noise closely resembles the exponential rate of change of loudness with sound level, and (2) that the proportion of a community highly annoyed and the variability in annoyance prevalence rates in communities are well accounted for by a simple model with a single free parameter: a community tolerance level (abbreviated CTL, and represented symbolically in mathematical expressions as L(ct)), expressed in units of DNL. The current study applies the same modeling approach to predicting the prevalence of annoyance of road traffic and rail noise. The prevalence of noise-induced annoyance of all forms of transportation noise is well accounted for by a simple, loudness-like exponential function with community-specific offsets. The model fits all of the road traffic findings well, but the prevalence of annoyance due to rail noise is more accurately predicted separately for interviewing sites with and without high levels of vibration and/or rattle.
Ghosh, Anup; Chatterjee, Tanmay; Mandal, Prasun K
2012-06-25
Excitation- and emission-wavelength-dependent non-exponential fluorescence decay behaviour of room temperature ionic liquids (RTILs) has been noted. Average fluorescence lifetimes have been found to vary by a factor of three or more. Red-emitting dyes dissolved in RTILs are found to follow hitherto unobserved single exponential fluorescence decay behaviour.
NASA Astrophysics Data System (ADS)
Pang, G. M.; Nie, Z. Y.; Wang, A.; Singh, D.; Xie, W.; Jiang, W. B.; Chen, Y.; Singh, R. P.; Smidman, M.; Yuan, H. Q.
2018-06-01
The noncentrosymmetric superconductor Re6Zr has attracted much interest due to the observation of broken time-reversal symmetry in the superconducting state. Here we report an investigation of the superconducting gap structure of Re6Zr single crystals by measuring the magnetic penetration depth shift Δλ(T) and the electronic specific heat Ce(T). Δλ(T) exhibits an exponential temperature dependence for T ≪ Tc, which indicates a fully open superconducting gap. Our analysis shows that a single-gap s-wave model is sufficient to describe both the superfluid density ρs(T) and the Ce(T) results, with a fitted gap magnitude larger than the weak-coupling BCS value, providing evidence for fully gapped superconductivity in Re6Zr with moderate coupling.
NASA Astrophysics Data System (ADS)
Alinea, Allan L.; Kubota, Takahiro
2018-03-01
We perform adiabatic regularization of the power spectrum in nonminimally coupled general single-field inflation with varying speed of sound. The subtraction is performed within the framework of the earlier study by Urakawa and Starobinsky dealing with canonical inflation. Inspired by Fakir and Unruh's model of nonminimally coupled chaotic inflation, we find, upon imposing the near scale-invariance condition, that the subtraction term decays exponentially with the number of e-folds. As in the result for canonical inflation, the regularized power spectrum tends to the "bare" power spectrum as the Universe expands during (and even after) inflation. This work justifies the use of the "bare" power spectrum in standard calculations in the most general context of slow-roll single-field inflation involving nonminimal coupling and varying speed of sound.
The social architecture of capitalism
NASA Astrophysics Data System (ADS)
Wright, Ian
2005-02-01
A dynamic model of the social relations between workers and capitalists is introduced. The model self-organises into a dynamic equilibrium with statistical properties that are in close qualitative and in many cases quantitative agreement with a broad range of known empirical distributions of developed capitalism, including the power-law firm size distribution, the Laplace firm and GDP growth distribution, the lognormal firm demises distribution, the exponential recession duration distribution, the lognormal-Pareto income distribution, and the gamma-like firm rate-of-profit distribution. Normally these distributions are studied in isolation, but this model unifies and connects them within a single causal framework. The model also generates business cycle phenomena, including fluctuating wage and profit shares in national income about values consistent with empirical studies. The generation of an approximately lognormal-Pareto income distribution and an exponential-Pareto wealth distribution demonstrates that the power-law regime of the income distribution can be explained by an additive process on a power-law network that models the social relation between employers and employees organised in firms, rather than a multiplicative process that models returns to investment in financial markets. A testable consequence of the model is the conjecture that the rate-of-profit distribution is consistent with a parameter-mix of a ratio of normal variates with means and variances that depend on a firm size parameter that is distributed according to a power-law.
Ellington, Benjamin M; Schmit, Brian D; Gourab, Krishnaj; Sieber-Blum, Maya; Hu, Yao F; Schmainda, Kathleen M
2009-01-01
Diffusion weighted magnetic resonance imaging (DWI) is a powerful tool for the evaluation of microstructural anomalies in numerous central nervous system pathologies. Diffusion tensor imaging (DTI) allows the magnitude and direction of water self-diffusion to be estimated by sampling the apparent diffusion coefficient (ADC) in various directions. Clinical DWI and DTI performed at a single level of diffusion weighting, however, do not allow multiple diffusion compartments to be elicited. Furthermore, assumptions made regarding the precise number of diffusion compartments intrinsic to the tissue of interest have resulted in a lack of consensus between investigations. To overcome these challenges, a stretched-exponential model of diffusion was applied to examine the diffusion coefficient and "heterogeneity index" within highly compartmentalized brain tumors. The purpose of the current study is to expand the stretched-exponential model of diffusion to include directionality of both the diffusion heterogeneity and the apparent diffusion coefficient. This study develops the mathematics of this new technique along with an initial application in quantifying spinal cord regeneration following acute injection of epidermal neural crest stem cell (EPI-NCSC) grafts.
Shiau, LieJune; Schwalger, Tilo; Lindner, Benjamin
2015-06-01
We study the spike statistics of an adaptive exponential integrate-and-fire neuron stimulated by white Gaussian current noise. We derive analytical approximations for the coefficient of variation and the serial correlation coefficient of the interspike interval assuming that the neuron operates in the mean-driven tonic firing regime and that the stochastic input is weak. Our result for the serial correlation coefficient has the form of a geometric sequence and is confirmed by the comparison to numerical simulations. The theory predicts various patterns of interval correlations (positive or negative at lag one, monotonically decreasing or oscillating) depending on the strength of the spike-triggered and subthreshold components of the adaptation current. In particular, for pure subthreshold adaptation we find strong positive ISI correlations that are usually ascribed to positive correlations in the input current. Our results i) provide an alternative explanation for interspike-interval correlations observed in vivo, ii) may be useful in fitting point neuron models to experimental data, and iii) may be instrumental in exploring the role of adaptation currents for signal detection and signal transmission in single neurons.
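The model class studied here can be simulated directly. Below is a minimal Euler-Maruyama sketch of an adaptive exponential integrate-and-fire neuron with purely subthreshold adaptation; all parameter values are illustrative choices for the mean-driven tonic regime, not those of the paper, whose results are analytical:

```python
import numpy as np

# Adaptive exponential integrate-and-fire (AdEx) neuron driven by a constant
# current plus weak white noise, integrated with Euler-Maruyama.
rng = np.random.default_rng(0)
C, gL, EL, VT, DT = 200.0, 10.0, -70.0, -50.0, 2.0  # pF, nS, mV, mV, mV
a, tau_w, b = 2.0, 30.0, 0.0                        # nS, ms, pA (b=0: pure subthreshold adaptation)
I, sigma = 500.0, 20.0                              # pA mean drive, weak noise amplitude
Vr, Vpeak, dt, T = -58.0, 0.0, 0.05, 2000.0         # mV, mV, ms, ms

V, w, t, spikes = EL, 0.0, 0.0, []
while t < T:
    expo = gL * DT * np.exp(min((V - VT) / DT, 20.0))  # exponential term, capped to avoid overflow
    dV = (-gL * (V - EL) + expo + I - w) / C
    V += dt * dV + (sigma / C) * np.sqrt(dt) * rng.standard_normal()
    if V >= Vpeak:                 # spike: reset voltage, spike-triggered jump in w
        spikes.append(t)
        V = Vr
        w += b
    w += dt * (a * (V - EL) - w) / tau_w
    t += dt

isis = np.diff(spikes)
cv = isis.std() / isis.mean()      # coefficient of variation of the interspike interval
```

Serial correlation coefficients of `isis` at various lags could then be compared against the geometric-sequence prediction described in the abstract.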
Verifiable fault tolerance in measurement-based quantum computation
NASA Astrophysics Data System (ADS)
Fujii, Keisuke; Hayashi, Masahito
2017-09-01
Quantum systems, in general, cannot be simulated efficiently by a classical computer, and hence are useful for solving certain mathematical problems and simulating quantum many-body systems. This also implies, unfortunately, that verification of the output of the quantum systems is not so trivial, since predicting the output is exponentially hard. As another problem, the quantum system is very delicate for noise and thus needs an error correction. Here, we propose a framework for verification of the output of fault-tolerant quantum computation in a measurement-based model. In contrast to existing analyses on fault tolerance, we do not assume any noise model on the resource state, but an arbitrary resource state is tested by using only single-qubit measurements to verify whether or not the output of measurement-based quantum computation on it is correct. Verifiability is equipped by a constant time repetition of the original measurement-based quantum computation in appropriate measurement bases. Since full characterization of quantum noise is exponentially hard for large-scale quantum computing systems, our framework provides an efficient way to practically verify the experimental quantum error correction.
Analytical model of coincidence resolving time in TOF-PET
NASA Astrophysics Data System (ADS)
Wieczorek, H.; Thon, A.; Dey, T.; Khanin, V.; Rodnyi, P.
2016-06-01
The coincidence resolving time (CRT) of scintillation detectors is the parameter determining noise reduction in time-of-flight PET. We derive an analytical CRT model based on the statistical distribution of photons for two different prototype scintillators. For the first one, characterized by single exponential decay, CRT is proportional to the decay time and inversely proportional to the number of photons, with a square root dependence on the trigger level. For the second scintillator prototype, characterized by exponential rise and decay, CRT is proportional to the square root of the product of rise time and decay time divided by twice the number of photons, and it is nearly independent of the trigger level. This theory is verified by measurements of scintillation time constants, light yield and CRT on scintillator sticks. Trapping effects are taken into account by defining an effective decay time. We show that in terms of signal-to-noise ratio, CRT is as important as patient dose, imaging time or PET system sensitivity. The noise reduction effect of better timing resolution is verified and visualized by Monte Carlo simulation of a NEMA image quality phantom.
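The first result — CRT proportional to decay time and inversely proportional to photon number for single-exponential decay — follows from the first-order statistic of N exponential arrival times. A Monte Carlo check under idealized assumptions (first-photon trigger, no rise time, no transit-time spread):

```python
import numpy as np

# For a single-exponential scintillator (time constant tau) emitting n_photons
# photons, the earliest photon time is exponential with mean tau/n_photons, so
# the coincidence FWHM of two independent detectors is 2.355*sqrt(2)*tau/n_photons.
rng = np.random.default_rng(1)
tau, n_photons, n_events = 40.0, 500, 5000       # ns, photons per event, events
first = rng.exponential(tau, size=(n_events, n_photons)).min(axis=1)
crt_fwhm = 2.355 * np.sqrt(2.0) * first.std()    # Monte Carlo estimate
expected = 2.355 * np.sqrt(2.0) * tau / n_photons
```

Raising the trigger level to the k-th photon instead of the first changes the order statistic and introduces the trigger-level dependence the abstract mentions.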
Disordered Kitaev chains with long-range pairing.
Cai, Xiaoming
2017-03-22
We study the competition of disorder and superconductivity for a generalized Kitaev model in incommensurate potentials. The generalized Kitaev model describes one-dimensional spinless fermions with long-range p-wave superconducting pairing, which decays with distance l as a power law ∼l^(-α). We focus on the transition from the topological superconducting phase to the topologically trivial Anderson localized phase, and the effects of the exponent α on this phase transition. In the topological superconducting phase, for a system under open boundary conditions the amplitude of the zero-mode Majorana fermion has a hybrid exponential-algebraic decay as the distance increases from the edge. In the Anderson localized phase, some single-particle states remain critical for very strong disorder, and the number of critical states increases as α decreases. In addition, except at critical disorder, the correlation function always has an exponential decay at short range and an algebraic decay at long range. Phase transition points are also determined numerically, and the topological phase transition happens earlier, at a smaller disorder strength, for a system with smaller α.
Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard
2016-10-01
In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including quasi-likelihood, robust standard error estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed the presence of significant inherent overdispersion (p-value < 0.001). However, the flexible piecewise exponential model showed the smallest overdispersion parameter (3.2, versus 21.3 for the non-flexible piecewise exponential models). We showed that there were no major differences between methods. However, using flexible piecewise regression modelling, with either quasi-likelihood or robust standard errors, was the best approach as it deals with both overdispersion due to model misspecification and true or inherent overdispersion.
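Overdispersion itself is easy to illustrate numerically (this toy diagnostic is not the paper's regression-based score test): negative-binomial counts have variance mu + mu²/k > mu, so a Pearson dispersion statistic computed under a constant-mean, Poisson-like model clearly exceeds 1.

```python
import numpy as np

# Simulate overdispersed counts and compute the Pearson dispersion statistic.
# With mu = 10 and k = 2, Var/mean = 1 + mu/k = 6, well above the Poisson value 1.
rng = np.random.default_rng(7)
mu, k, n = 10.0, 2.0, 5000
y = rng.negative_binomial(k, k / (k + mu), size=n)   # mean mu, variance mu + mu^2/k
mu_hat = y.mean()
dispersion = np.sum((y - mu_hat) ** 2 / mu_hat) / (n - 1)   # Pearson X^2 / df
```

A dispersion estimate far above 1 is the signal that quasi-likelihood, robust standard errors, or a negative binomial model should replace the plain Poisson fit.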
Investigation of non-Gaussian effects in the Brazilian option market
NASA Astrophysics Data System (ADS)
Sosa-Correa, William O.; Ramos, Antônio M. T.; Vasconcelos, Giovani L.
2018-04-01
An empirical study of the Brazilian option market is presented in light of three option pricing models, namely the Black-Scholes model, the exponential model, and a model based on a power law distribution, the so-called q-Gaussian distribution or Tsallis distribution. It is found that the q-Gaussian model performs better than the Black-Scholes model in about one third of the option chains analyzed. But among these cases, the exponential model performs better than the q-Gaussian model in 75% of the time. The superiority of the exponential model over the q-Gaussian model is particularly impressive for options close to the expiration date, where its success rate rises above ninety percent.
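For reference, the Black-Scholes benchmark against which the exponential and q-Gaussian models are compared is the standard European call formula (no dividends):

```python
import math

# Black-Scholes price of a European call:
#   C = S*N(d1) - K*exp(-r*T)*N(d2),
# with d1 = (ln(S/K) + (r + sigma^2/2)T) / (sigma*sqrt(T)) and d2 = d1 - sigma*sqrt(T).
def bs_call(S, K, T, r, sigma):
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))   # standard normal CDF
    return S * N(d1) - K * math.exp(-r * T) * N(d2)
```

At the money with r = 0 this reduces to S·(2Φ(σ√T/2) − 1) ≈ 0.4·S·σ√T, a handy sanity check.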
Design and Analysis of Scheduling Policies for Real-Time Computer Systems
1992-01-01
C. M. Krishna, "The Impact of Workload on the Reliability of Real-Time Processor Triads," to appear in Micro. Rel. [17] J. F. Kurose, "Performance Analysis of Minimum Laxity Scheduling in Discrete Time Queueing Systems" ... exponentially distributed service times and deadlines. A similar model was developed for the ED policy for a single processor system under identical
Coarse graining Escherichia coli chemotaxis: from multi-flagella propulsion to logarithmic sensing.
Curk, Tine; Matthäus, Franziska; Brill-Karniely, Yifat; Dobnikar, Jure
2012-01-01
Various sensing mechanisms in nature can be described by the Weber-Fechner law, which states that the response to varying stimuli is proportional to their relative rather than absolute changes. The chemotaxis of the bacterium Escherichia coli is an example where such logarithmic sensing enables sensitivity over a large range of concentrations. It has recently been demonstrated experimentally that under certain conditions E. coli indeed respond to relative gradients of ligands. We use numerical simulations of bacteria in food gradients to investigate the limits of validity of the logarithmic behavior. We model the chemotactic signaling pathway reactions, couple them to a multi-flagella model for propulsion and take the effects of rotational diffusion into account to accurately reproduce the experimental observations of single cell swimming. Using this simulation scheme we analyze the type of response of bacteria subject to exponential ligand profiles and identify the regimes of absolute gradient sensing, relative gradient sensing, and a rotational-diffusion-dominated regime. We explore the dependence of the swimming speed, average run time and the clockwise (CW) bias on ligand variation and derive a small set of relations that define a coarse-grained model for bacterial chemotaxis. Simulations based on this coarse-grained model compare well with microfluidic experiments on E. coli diffusion in linear and exponential gradients of aspartate.
Garrigues, Alvar R.; Yuan, Li; Wang, Lejia; Mucciolo, Eduardo R.; Thompon, Damien; del Barco, Enrique; Nijhuis, Christian A.
2016-01-01
We present a theoretical analysis aimed at understanding electrical conduction in molecular tunnel junctions. We focus on discussing the validity of coherent versus incoherent theoretical formulations for single-level tunneling to explain experimental results obtained under a wide range of experimental conditions, including measurements in individual molecules connecting the leads of electromigrated single-electron transistors and junctions of self-assembled monolayers (SAM) of molecules sandwiched between two macroscopic contacts. We show that the restriction of transport through a single level in solid state junctions (no solvent) makes coherent and incoherent tunneling formalisms indistinguishable when only one level participates in transport. Similar to Marcus relaxation processes in wet electrochemistry, the thermal broadening of the Fermi distribution describing the electronic occupation energies in the electrodes accounts for the exponential dependence of the tunneling current on temperature. We demonstrate that a single-level tunnel model satisfactorily explains experimental results obtained in three different molecular junctions (both single-molecule and SAM-based) formed by ferrocene-based molecules. Among other things, we use the model to map the electrostatic potential profile in EGaIn-based SAM junctions in which the ferrocene unit is placed at different positions within the molecule, and we find that electrical screening gives rise to a strongly non-linear profile across the junction. PMID:27216489
NASA Astrophysics Data System (ADS)
Allen, Linda J. S.
2016-09-01
Dr. Chowell and colleagues emphasize the importance of considering a variety of modeling approaches to characterize the growth of an epidemic during the early stages [1]. A fit of data from the 2009 H1N1 influenza pandemic and the 2014-2015 Ebola outbreak to models indicates sub-exponential growth, in contrast to the classic, homogeneous-mixing SIR model with exponential growth. With incidence rate βSI / N and S approximately equal to the total population size N, the number of new infections in an SIR epidemic model grows exponentially as in the differential equation,
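The early-phase exponential growth of the classic SIR model can be checked numerically: while S stays close to N, new infections in a forward-Euler integration closely track I0·e^((β−γ)t). The rates below are chosen for illustration, not fitted to the H1N1 or Ebola data:

```python
import numpy as np

# Forward-Euler SIR with incidence beta*S*I/N. While S ~ N, the infected
# compartment obeys dI/dt ~ (beta - gamma)*I and grows exponentially.
beta, gamma, N = 0.5, 0.2, 1e6
S, I, R = N - 1.0, 1.0, 0.0
dt, T = 0.01, 10.0
for _ in range(int(T / dt)):
    new_inf = beta * S * I / N * dt
    rec = gamma * I * dt
    S, I, R = S - new_inf, I + new_inf - rec, R + rec
predicted = np.exp((beta - gamma) * T)   # pure exponential prediction with I0 = 1
```

Sub-exponential growth, by contrast, requires breaking this structure, e.g. through spatial heterogeneity or a generalized incidence term.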
Delay time correction of the gas analyzer in the calculation of anatomical dead space of the lung.
Okubo, T; Shibata, H; Takishima, T
1983-07-01
By means of a mathematical model, we have studied a way to correct for the delay time of the gas analyzer in order to calculate the anatomical dead space using Fowler's graphical method. The mathematical model was constructed of ten tubes of equal diameter but unequal length, so that the amount of dead space varied from tube to tube; the tubes were emptied sequentially. The gas analyzer responds with a time lag from the input of the gas signal to the beginning of the response, followed by an exponential response output. The single-breath expired volume-concentration relationship was examined with three types of expired flow patterns: constant, exponential and sinusoidal. The results indicate that time correction by the lag time plus the time constant of the exponential response of the gas analyzer gives an accurate estimation of anatomical dead space. A time correction less inclusive than this, e.g. lag time only or lag time plus 50% response time, gives an overestimation, and a correction larger than this results in underestimation. The magnitude of error is dependent on the flow pattern and flow rate. The time correction in this study is only for the calculation of dead space, as the corrected volume-concentration curves do not coincide with the true curve. Such correction of the output of the gas analyzer is extremely important when one needs to compare the dead spaces of different gas species at rather fast flow rates.
NASA Astrophysics Data System (ADS)
Morishita, Tetsuya
2012-07-01
We report a first-principles molecular-dynamics study of the relaxation dynamics in liquid silicon (l-Si) over a wide temperature range (1000-2200 K). We find that the intermediate scattering function for l-Si exhibits a compressed exponential decay above 1200 K, including the supercooled regime, which is in stark contrast to normal "dense" liquids, which typically show stretched exponential decay in the supercooled regime. The coexistence of particles having ballistic-like motion and those having diffusive-like motion is demonstrated, which accounts for the compressed exponential decay in l-Si. An attempt to elucidate the crossover from the ballistic to the diffusive regime in the "time-dependent" diffusion coefficient is made, and the temperature-independent universal feature of the crossover is revealed.
A Simulation To Model Exponential Growth.
ERIC Educational Resources Information Center
Appelbaum, Elizabeth Berman
2000-01-01
Describes a simulation using dice-tossing students in a population cluster to model the growth of cancer cells. This growth is recorded in a scatterplot and compared to an exponential function graph. (KHR)
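The dice-tossing simulation described can be sketched in code. Here each "cell" divides when it rolls a 1 or 2, so the expected population multiplies by 4/3 per round; the classroom activity's exact rules may differ:

```python
import random

# Stochastic exponential growth via dice: every cell rolls once per round
# and divides on a roll of 1 or 2 (probability 1/3).
random.seed(0)
pop = 100
history = [pop]
for _ in range(10):
    pop += sum(1 for _ in range(pop) if random.randint(1, 6) <= 2)
    history.append(pop)
expected = 100 * (4 / 3) ** 10   # deterministic exponential prediction
```

Plotting `history` on a scatterplot against the curve 100·(4/3)^t reproduces the comparison with an exponential function graph.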
Strong feedback limit of the Goodwin circadian oscillator
NASA Astrophysics Data System (ADS)
Woller, Aurore; Gonze, Didier; Erneux, Thomas
2013-03-01
The three-variable Goodwin model constitutes a prototypical oscillator based on a negative feedback loop. It was used as a minimal model for circadian oscillations. Other core models for circadian clocks are variants of the Goodwin model. The Goodwin oscillator also appears in many studies of coupled oscillator networks because of its relative simplicity compared to other biophysical models involving a large number of variables and parameters. Because the synchronization properties of Goodwin oscillators still remain difficult to explore mathematically, further simplifications of the Goodwin model have been sought. In this paper, we investigate the strong negative feedback limit of the Goodwin equations by using asymptotic techniques. We find that Goodwin oscillations approach a sequence of decaying exponentials that can be described in terms of a single-variable leaky integrate-and-fire model.
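A sketch of the three-variable Goodwin oscillator in a strong-feedback regime (unit degradation rates; the production rate and the steep Hill exponent are illustrative choices, not the paper's parameters):

```python
# Forward-Euler integration of the Goodwin oscillator
#   X' = k1/(1 + Z^n) - X,  Y' = X - Y,  Z' = Y - Z.
# With three equal first-order stages, sustained oscillations require a
# sufficiently steep feedback; n = 20 puts this system well inside the
# oscillatory regime.
k1, n = 5.0, 20
dt, steps = 0.01, 40000
X = Y = Z = 0.1
zs = []
for _ in range(steps):
    dX = k1 / (1.0 + Z**n) - X
    dY = X - Y
    dZ = Y - Z
    X, Y, Z = X + dt * dX, Y + dt * dY, Z + dt * dZ
    zs.append(Z)
```

In the strong-feedback limit the production term switches nearly on/off, and between switchings each variable relaxes as a decaying exponential, which is the structure the asymptotic analysis exploits.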
Multirate parallel distributed compensation of a cluster in wireless sensor and actor networks
NASA Astrophysics Data System (ADS)
Yang, Chun-xi; Huang, Ling-yun; Zhang, Hao; Hua, Wang
2016-01-01
The stabilisation problem for one of the clusters with bounded multiple random time delays and packet dropouts in wireless sensor and actor networks is investigated in this paper. A new multirate switching model is constructed to describe the features of this single-input multiple-output linear system. Owing to the difficulty of controller design under multiple constraints in the multirate switching model, this model can be converted to a Takagi-Sugeno fuzzy model. By designing a multirate parallel distributed compensation, a sufficient condition is established to ensure that this closed-loop fuzzy control system is globally exponentially stable. The multirate parallel distributed compensation gains can be obtained by solving an auxiliary convex optimisation problem. Finally, two numerical examples are given to show that, compared with solving for a switching controller, the multirate parallel distributed compensation can be obtained easily. Furthermore, it has stronger robust stability than an arbitrary switching controller and single-rate parallel distributed compensation under the same conditions.
NASA Astrophysics Data System (ADS)
Jeong, Chan-Yong; Kim, Hee-Joong; Hong, Sae-Young; Song, Sang-Hun; Kwon, Hyuck-In
2017-08-01
In this study, we show that the two-stage unified stretched-exponential model can more exactly describe the time-dependence of threshold voltage shift (ΔV TH) under long-term positive-bias-stresses compared to the traditional stretched-exponential model in amorphous indium-gallium-zinc oxide (a-IGZO) thin-film transistors (TFTs). ΔV TH is mainly dominated by electron trapping at short stress times, and the contribution of trap state generation becomes significant with an increase in the stress time. The two-stage unified stretched-exponential model can provide useful information not only for evaluating the long-term electrical stability and lifetime of the a-IGZO TFT but also for understanding the stress-induced degradation mechanism in a-IGZO TFTs.
Bubble number saturation curve and asymptotics of hypobaric and hyperbaric exposures.
Wienke, B R
1991-12-01
Within bubble number limits of the varying permeability and reduced gradient bubble models, it is shown that a linear form of the saturation curve for hyperbaric exposures and a nearly constant decompression ratio for hypobaric exposures are simultaneously recovered from the phase volume constraint. Both limits are maintained within a single bubble number saturation curve. A bubble term, varying exponentially with inverse pressure, provides closure. Two constants describe the saturation curve, both linked to seed numbers. Limits of other decompression models are also discussed and contrasted for completeness. It is suggested that the bubble number saturation curve thus provides a consistent link between hypobaric and hyperbaric data, a link not established by earlier decompression models.
A quasi-likelihood approach to non-negative matrix factorization
Devarajan, Karthik; Cheung, Vincent C.K.
2017-01-01
A unified approach to non-negative matrix factorization based on the theory of generalized linear models is proposed. This approach embeds a variety of statistical models, including the exponential family, within a single theoretical framework and provides a unified view of such factorizations from the perspective of quasi-likelihood. Using this framework, a family of algorithms for handling signal-dependent noise is developed and its convergence proven using the Expectation-Maximization algorithm. In addition, a measure to evaluate the goodness-of-fit of the resulting factorization is described. The proposed methods allow modeling of non-linear effects via appropriate link functions and are illustrated using an application in biomedical signal processing. PMID:27348511
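As a concrete instance of such a factorization, the classic multiplicative updates minimize the Frobenius objective, which corresponds to the Gaussian member of the exponential family; a minimal numpy sketch (not the paper's signal-dependent-noise algorithms):

```python
import numpy as np

# Lee-Seung multiplicative updates for V ~ W @ H under the Frobenius norm.
# The data are synthetic rank-3 non-negative matrices, so an exact
# factorization exists and the error should shrink substantially.
rng = np.random.default_rng(0)
V = rng.random((30, 3)) @ rng.random((3, 20))
W = rng.random((30, 3)) + 0.1
H = rng.random((3, 20)) + 0.1
err0 = np.linalg.norm(V - W @ H)
for _ in range(300):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # update H with W held fixed
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)   # update W with H held fixed
err1 = np.linalg.norm(V - W @ H)
```

Other members of the family (e.g. Poisson/KL) swap in different multiplicative update rules derived from the corresponding quasi-likelihood.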
NASA Astrophysics Data System (ADS)
Pasari, S.; Kundu, D.; Dikshit, O.
2012-12-01
Earthquake recurrence interval is one of the important ingredients in probabilistic seismic hazard assessment (PSHA) for any location. The exponential, gamma, Weibull and lognormal distributions are well-established probability models for this recurrence interval estimation. However, they have certain shortcomings too. Thus, it is imperative to search for alternative, more sophisticated distributions. In this paper, we introduce a three-parameter (location, scale and shape) exponentiated exponential distribution and investigate the scope of this distribution as an alternative to the aforementioned distributions in earthquake recurrence studies. This distribution is a particular member of the exponentiated Weibull family. Despite its complicated form, it is widely accepted in medical and biological applications. Furthermore, it shares many physical properties with the gamma and Weibull families. Unlike the gamma distribution, the hazard function of the generalized exponential distribution can be easily computed even if the shape parameter is not an integer. To assess the plausibility of this model, a complete and homogeneous earthquake catalogue of 20 events (M ≥ 7.0) spanning the period 1846 to 1995 from the North-East Himalayan region (20-32 deg N and 87-100 deg E) has been used. The model parameters are estimated using the maximum likelihood estimator (MLE) and the method of moments estimator (MOME). No geological or geophysical evidence has been considered in this calculation. The estimated conditional probability becomes quite high after about a decade for an elapsed time of 17 years (i.e. 2012). Moreover, this study shows that the generalized exponential distribution fits the above data more closely than the conventional models, and hence it is tentatively concluded that the generalized exponential distribution can be effectively considered in earthquake recurrence studies.
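The distribution's closed-form CDF makes inverse-transform simulation and hazard evaluation straightforward; a sketch with arbitrary parameters, not the values fitted to the Himalayan catalogue:

```python
import math
import random

# Exponentiated (generalized) exponential distribution with location mu:
#   F(x) = (1 - exp(-lam*(x - mu)))**alpha  for x > mu.
def ge_cdf(x, alpha, lam, mu=0.0):
    if x <= mu:
        return 0.0
    return (1.0 - math.exp(-lam * (x - mu))) ** alpha

def ge_ppf(p, alpha, lam, mu=0.0):
    # Inverse CDF; enables inverse-transform sampling of recurrence intervals.
    return mu - math.log(1.0 - p ** (1.0 / alpha)) / lam

def ge_hazard(x, alpha, lam, mu=0.0):
    # hazard = pdf / (1 - CDF); closed form, with no integer restriction on alpha,
    # unlike the gamma distribution's hazard.
    u = 1.0 - math.exp(-lam * (x - mu))
    pdf = alpha * lam * math.exp(-lam * (x - mu)) * u ** (alpha - 1.0)
    return pdf / (1.0 - u ** alpha)

random.seed(42)
sample = [ge_ppf(random.random(), 2.0, 0.1) for _ in range(20000)]
```

Conditional probabilities of the kind reported in the abstract follow directly from the CDF as [F(t + Δt) − F(t)] / [1 − F(t)].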
Theory for Transitions Between Exponential and Stationary Phases: Universal Laws for Lag Time
NASA Astrophysics Data System (ADS)
Himeoka, Yusuke; Kaneko, Kunihiko
2017-04-01
The quantitative characterization of bacterial growth has attracted substantial attention since Monod's pioneering study. Theoretical and experimental works have uncovered several laws describing the exponential growth phase, in which the number of cells grows exponentially. However, microorganism growth also exhibits lag, stationary, and death phases under starvation conditions, in which cell growth is highly suppressed, and for these phases quantitative laws and theories are markedly underdeveloped. In fact, the models commonly adopted for the exponential phase, which consist of autocatalytic chemical components including ribosomes, can only show exponential growth or decay in a population; thus, phases that halt growth are not realized. Here, we propose a simple, coarse-grained cell model that includes an extra class of macromolecular components in addition to the autocatalytic active components that facilitate cellular growth. These extra components form a complex with the active components to inhibit the catalytic process. Depending on the nutrient condition, the model exhibits the typical transitions among the lag, exponential, stationary, and death phases. Furthermore, the lag time needed for growth recovery after starvation scales with the square root of the starvation time and is inversely related to the maximal growth rate. This agrees with experimental observations, in which the length of cell starvation is memorized in the slow accumulation of molecules. Moreover, the lag-time distribution among cells is skewed, with a long tail. For longer starvation times, an exponential tail appears, which is also consistent with experimental data. Our theory further predicts a strong dependence of lag time on the speed of substrate depletion, which can be tested experimentally. The present model and theoretical analysis provide universal growth laws beyond the exponential phase, offering insight into how cells halt growth without entering the death phase.
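The two lag-time scaling laws quoted above can be combined into a single expression; the sketch below uses a placeholder proportionality constant, not one derived from the model:

```python
import math

# Illustrative form of the predicted scaling: lag time proportional to the
# square root of the starvation time and inversely proportional to the
# maximal growth rate. The constant c is a hypothetical placeholder.
def lag_time(t_starvation, mu_max, c=1.0):
    return c * math.sqrt(t_starvation) / mu_max
```

Quadrupling the starvation time doubles the predicted lag, while doubling the maximal growth rate halves it.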
Evaluation of leaf litter leaching kinetics through commonly-used mathematical models
NASA Astrophysics Data System (ADS)
Montoya, J. V.; Bastianoni, A.; Mendez, C.; Paolini, J.
2012-04-01
Leaching is defined as the abiotic process by which soluble compounds of the litter are released into the water. Most studies dealing with leaf litter breakdown and leaching kinetics apply the single exponential decay model, since it corresponds well with the understanding of the biology of decomposition. However, important mass losses occur during leaching, and mathematical models often fail to describe this process adequately. During the initial hours of leaching, leaf litter experiences high decay rates that are not properly modelled. Adjusting leaching losses to mathematical models has not been investigated thoroughly, and the use of models assuming constant decay rates leads to inappropriate assessments of leaching kinetics. We aim to describe, assess, and compare different leaching kinetics models fitted to leaf litter mass losses from six Neotropical riparian forest species. Leaf litter from each species was collected in the lower reaches of San Miguel stream in Northern Venezuela. Air-dried leaves from each species were incubated in 250 ml of water in the dark at room temperature. At 1 h, 6 h, 1 d, 2 d, 4 d, 8 d and 15 d, three jars were removed from the assay in a no-replacement experimental design. At each time, leaves from each jar were removed and oven-dried. Afterwards, the dried leaves were weighed and the remaining dry mass was determined and expressed as ash-free dry mass. Mass losses of leaf litter showed steep declines over the first two days followed by a steady decrease in mass loss. Data were fitted to three different models: single-exponential, power and rational. Our results showed that the mass loss predicted with the single-exponential model did not reflect the real data at any stage of the leaching process. The power model showed a better fit, but failed to predict the behavior during the early stages of leaching.
To evaluate the performance of our models we used three criteria: adjusted R2 (Adj-R2), Akaike's Information Criterion (AIC), and residual distribution. Higher Adj-R2 values were obtained for the power and rational-type models. However, when AIC and residual distributions were used, the only model that could satisfactorily predict the behavior of our dataset was the rational-type. Even though the Adj-R2 was higher for some species when using the power model compared to the rational-type, our results showed that this criterion alone cannot demonstrate the predictive performance of any model. Usually Adj-R2 is used when assessing the goodness of fit of a mathematical model, disregarding the fact that a good Adj-R2 can be obtained even when the statistical assumptions required for the validity of the model are not satisfied. Our results showed that sampling at the initial stages of leaching is necessary to adequately describe this process. We also provided evidence that traditional mathematical models are not the best option to evaluate leaching kinetics because of their mathematical inability to properly describe the abrupt changes that occur during the early stages of leaching. We also found it useful to apply different criteria to evaluate the goodness-of-fit and performance of any candidate model, taking into account both the statistical and biological meaning of the results.
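The AIC-based model comparison described above can be sketched as follows. The mass-fraction values are invented to mimic the qualitative pattern reported (steep early loss, then a plateau) and are not the study's data, and the rational form shown is one plausible choice rather than the study's exact model:

```python
import math

# Hypothetical remaining-mass fractions at the paper's sampling times (days)
t = [1 / 24, 6 / 24, 1, 2, 4, 8, 15]
m = [0.95, 0.88, 0.78, 0.70, 0.66, 0.63, 0.61]

def rss(pred):
    return sum((p - y) ** 2 for p, y in zip(pred, m))

def aic(r, n, k):
    # Gaussian-error AIC up to an additive constant: n*ln(RSS/n) + 2k
    return n * math.log(r / n) + 2 * k

# Single exponential m(t) = exp(-k t); k from log-linear least squares
k_exp = -sum(x * math.log(y) for x, y in zip(t, m)) / sum(x * x for x in t)
rss_exp = rss([math.exp(-k_exp * x) for x in t])

# Rational model m(t) = (1 + a t) / (1 + b t); crude grid search over (a, b)
best_rss, best_ab = float("inf"), None
for i in range(100):
    for j in range(1, 200):
        a, b = 0.01 * i, 0.05 * j
        r = rss([(1 + a * x) / (1 + b * x) for x in t])
        if r < best_rss:
            best_rss, best_ab = r, (a, b)

n = len(t)
aic_exp = aic(rss_exp, n, 1)   # one fitted parameter
aic_rat = aic(best_rss, n, 2)  # two fitted parameters
```

Even with its extra-parameter penalty, the rational form wins on AIC here because the single exponential cannot reproduce both the abrupt early drop and the plateau.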
NASA Astrophysics Data System (ADS)
Siva Rama Krishna Reddy, K.; Swapna, K.; Mahamuda, Sk.; Venkateswarlu, M.; Srinivas Prasad, M. V. V. K.; Rao, A. S.; Prakash, G. Vijaya
2018-05-01
Sm3+ ions doped Alkaline-Earth Boro Tellurite (AEBT) glasses were prepared using the conventional melt quenching technique and characterized using spectroscopic techniques such as FT-IR, optical absorption, emission and decay spectral measurements to understand their utility in optoelectronic devices. From the absorption spectra, the bonding parameters and nephelauxetic ratios were determined to assess the nature of bonding between the Sm3+ ions and their surrounding ligands. From the measured oscillator strengths, the Judd-Ofelt (J-O) intensity parameters were evaluated and in turn used to estimate various radiative parameters for the fluorescent levels of Sm3+ ions in the AEBT glasses. The PL spectra of Sm3+ ions exhibit three emission bands corresponding to the transitions 4G5/2 → 6H5/2, 6H7/2 and 6H9/2 in the visible region, for which the emission cross-sections and branching ratios were evaluated. The decay profiles measured for the 4G5/2 → 6H7/2 transition are single exponential at lower concentrations and non-exponential at higher concentrations of the doped rare-earth ion in the as-prepared glasses. The conversion of the decay profiles from single to non-exponential has been analyzed using the Inokuti-Hirayama (I-H) model to understand the energy transfer mechanism involved in the decay process. CIE chromaticity coordinates were determined from the emission spectral data to identify the exact region of emission from the as-prepared glasses. From the evaluated radiative parameters, emission cross-sections and quantum efficiencies, it was observed that the AEBT glass with 1 mol% of Sm3+ ions is most suitable for designing optoelectronic devices.
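The Inokuti-Hirayama analysis assumes a decay law of the form I(t) = I0·exp(-t/τ0 - Q·(t/τ0)^(3/s)), with s = 6 for dipole-dipole transfer. A sketch with illustrative parameters, not values fitted to the AEBT glasses:

```python
import math

TAU0 = 1.0e-3   # intrinsic donor lifetime in seconds (illustrative)
S = 6           # dipole-dipole interaction exponent

def ih_intensity(t, q, i0=1.0):
    """Inokuti-Hirayama decay: q = 0 recovers a single exponential;
    q > 0 bends the curve downward at early times (non-exponential)."""
    x = t / TAU0
    return i0 * math.exp(-x - q * x ** (3.0 / S))
```

Fitting q to a measured decay curve quantifies the donor-acceptor energy transfer strength, which grows with dopant concentration.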
Self-charging of identical grains in the absence of an external field.
Yoshimatsu, R; Araújo, N A M; Wurm, G; Herrmann, H J; Shinbrot, T
2017-01-06
We investigate the electrostatic charging of an agitated bed of identical grains using simulations, mathematical modeling, and experiments. We simulate charging with a discrete-element model including electrical multipoles and find that infinitesimally small initial charges can grow exponentially rapidly. We propose a mathematical Turing model that defines conditions for exponential charging to occur and provides insights into the mechanisms involved. Finally, we confirm the predicted exponential growth in experiments using vibrated grains under microgravity, and we describe novel predicted spatiotemporal states that merit further study.
Something from nothing: self-charging of identical grains
NASA Astrophysics Data System (ADS)
Shinbrot, Troy; Yoshimatsu, Ryuta; Araujo, Nuno; Wurm, Gerhard; Herrmann, Hans
We investigate the electrostatic charging of an agitated bed of identical grains using simulations, mathematical modeling, and experiments. We simulate charging with a discrete-element model including electrical multipoles and find that infinitesimally small initial charges can grow exponentially rapidly. We propose a mathematical Turing model that defines conditions for exponential charging to occur and provides insights into the mechanisms involved. Finally, we confirm the predicted exponential growth in experiments using vibrated grains under microgravity, and we describe novel predicted spatiotemporal states that merit further study. I acknowledge support from NSF/DMR, award 1404792.
Bayesian inference based on dual generalized order statistics from the exponentiated Weibull model
NASA Astrophysics Data System (ADS)
Al Sobhi, Mashail M.
2015-02-01
Bayesian estimation for the two parameters and the reliability function of the exponentiated Weibull model are obtained based on dual generalized order statistics (DGOS). Also, Bayesian prediction bounds for future DGOS from exponentiated Weibull model are obtained. The symmetric and asymmetric loss functions are considered for Bayesian computations. The Markov chain Monte Carlo (MCMC) methods are used for computing the Bayes estimates and prediction bounds. The results have been specialized to the lower record values. Comparisons are made between Bayesian and maximum likelihood estimators via Monte Carlo simulation.
State of charge modeling of lithium-ion batteries using dual exponential functions
NASA Astrophysics Data System (ADS)
Kuo, Ting-Jung; Lee, Kung-Yen; Huang, Chien-Kang; Chen, Jau-Horng; Chiu, Wei-Li; Huang, Chih-Fang; Wu, Shuen-De
2016-05-01
A mathematical model is developed by fitting the discharging curve of LiFePO4 batteries and used to investigate the relationship between the state of charge and the closed-circuit voltage. The proposed model consists of dual exponential terms and a constant term, which closely fits the characteristics of the dual equivalent RC circuits representing a LiFePO4 battery. One exponential term represents the stable discharging behavior, the other represents the unstable discharging behavior, and the constant term represents the cut-off voltage.
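A hedged sketch of such a dual-exponential voltage model and its inversion to recover the state of charge from a voltage reading; the coefficients below are illustrative placeholders, not fitted LiFePO4 values:

```python
import math

# V(soc) = V_cut + a1*exp(b1*soc) + a2*exp(b2*soc); placeholder coefficients
# chosen so that V is monotone increasing on [0, 1].
V_CUT, A1, B1, A2, B2 = 2.5, 0.55, 0.25, -0.30, -8.0

def voltage(soc):
    """Closed-circuit voltage as a function of state of charge in [0, 1]."""
    return V_CUT + A1 * math.exp(B1 * soc) + A2 * math.exp(B2 * soc)

def soc_from_voltage(v, tol=1e-6):
    """Invert V(soc) by bisection; valid because V is monotone on [0, 1]."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if voltage(mid) < v:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The steep second exponential dominates near full discharge, mimicking the unstable end-of-discharge behavior, while the slow first term covers the flat plateau.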
Alper, Ofer; Somekh-Baruch, Anelia; Pirvandy, Oz; Schaps, Malka; Yaari, Gur
2017-08-01
Geometric Brownian motion (GBM) is frequently used to model price dynamics of financial assets, and a weighted average of multiple GBMs is commonly used to model a financial portfolio. Diversified portfolios can achieve increased exponential growth compared to a single asset by reducing the effective noise. The sum of GBM processes is no longer a log-normal process and has complex statistical properties. The nonergodicity of the weighted-average process results in constant degradation of the exponential growth from the ensemble average toward the time average. One way to stay closer to the ensemble average is to maintain a balanced portfolio: keep the relative weights of the different assets constant over time. To keep these proportions constant, whenever asset values change, it is necessary to rebalance their relative weights, exposing this strategy to fees (transaction costs). Two strategies that have been suggested for cases that involve fees are to rebalance the portfolio periodically and to rebalance it only partially. In this paper, we study these two strategies in the presence of correlations and fees. We show that using periodic and partial rebalancing strategies, it is possible to maintain a steady exponential growth while minimizing the losses due to fees. We also demonstrate that these redistribution strategies perform remarkably well on real-world market data, despite the fact that not all assumptions of the model hold in these real-world systems. Our results have important implications for stochastic dynamics in general and portfolio management in particular, as we show that there is a superior alternative to the common buy-and-hold strategy, even in the presence of correlations and fees.
NASA Technical Reports Server (NTRS)
Klemin, Alexander
1937-01-01
An airplane in steady rectilinear flight was assumed to experience an initial disturbance in rolling or yawing velocity. The equations of motion were solved to see if it was possible to hasten recovery of a stable airplane or to secure recovery of an unstable airplane by the application of a single lateral control following an exponential law. The sample computations indicate that, for initial disturbances complex in character, it would be difficult to secure correlation with any type of exponential control. The possibility is visualized that the two-control operation may seriously impair the ability to hasten recovery or counteract instability.
2013-01-01
Background: An inverse relationship between experience and risk of injury has been observed in many occupations. Due to statistical challenges, however, it has been difficult to characterize the role of experience on the hazard of injury. In particular, because the time observed up to injury is equivalent to the amount of experience accumulated, the baseline hazard of injury becomes the main parameter of interest, excluding Cox proportional hazards models as applicable methods for consideration. Methods: Using a data set of 81,301 hourly production workers of a global aluminum company at 207 US facilities, we compared competing parametric models for the baseline hazard to assess whether experience affected the hazard of injury at hire and after later job changes. Specific models considered included the exponential, Weibull, and two two-piece exponential models (one hypothesis-driven and one data-driven) to formally test the null hypothesis that experience does not impact the hazard of injury. Results: We highlighted the advantages of our comparative approach and the interpretability of our selected model: a two-piece exponential model that allowed the baseline hazard of injury to change with experience. Our findings suggested a 30% increase in the hazard in the first year after job initiation and/or change. Conclusions: Piecewise exponential models may be particularly useful in modeling risk of injury as a function of experience and have the additional benefit of interpretability over other similarly flexible models.
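The selected two-piece exponential model can be sketched directly: a constant hazard during the first year of experience, then a lower constant hazard afterwards. The 30% figure corresponds to a hazard ratio of 1.3 across the change point; the absolute rates below are invented for illustration:

```python
import math

CUT = 1.0          # change point: one year of experience
H2 = 0.05          # hypothetical baseline injury hazard after the first year
H1 = 1.3 * H2      # 30% elevated hazard during the first year

def hazard(t):
    """Piecewise-constant baseline hazard of injury at experience t (years)."""
    return H1 if t <= CUT else H2

def survival(t):
    """P(no injury by experience t) under the two-piece exponential model."""
    if t <= CUT:
        return math.exp(-H1 * t)
    return math.exp(-H1 * CUT - H2 * (t - CUT))
```

The survival function is continuous at the change point while the hazard jumps, which is exactly the interpretable structure the abstract highlights.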
DOE Office of Scientific and Technical Information (OSTI.GOV)
Šefl, Martin, E-mail: martin.sefl@gmail.com; Kyriakou, Ioanna; Emfietzoglou, Dimitris, E-mail: demfietz@cc.uoi.gr
Purpose: To study theoretically the impact on cell survival of the radionuclide uptake rate inside tumor cells for a single administration of a radiopharmaceutical. Methods: The instantaneous-uptake model of O’Donoghue [“The impact of tumor cell proliferation in radioimmunotherapy,” Cancer 73, 974–980 (1994)] for a proliferating cell population irradiated by an exponentially decreasing dose-rate is here extended to allow for the monoexponential uptake of the radiopharmaceutical by the targeted cells. The time derivative of the survival curve is studied in detail, deducing an expression for the minimum of the surviving fraction and the biologically effective dose (BED). Results: Surviving fractions are calculated over a parameter range that is clinically relevant and broad enough to establish general trends. Specifically, results are presented for the therapy radionuclides Y-90, I-131, and P-32, assuming uptake half-times 1–24 h, extrapolated initial dose-rates 0.5–1 Gy h⁻¹, and a biological clearance half-life of seven days. Representative radiobiological parameters for radiosensitive and rapidly proliferating tumor cells are used, with cell doubling time equal to 2 days and α-coefficient equal to 0.3 and 0.5 Gy⁻¹. It is shown that neglecting the uptake phase of the radiopharmaceutical (i.e., assuming instantaneous uptake) results in a sizeable over-estimation of cell-kill (i.e., under-estimation of cell survival) even for uptake half-times of only a few hours. The differences between the exponential-uptake model and the instantaneous-uptake model become larger for high peak dose-rates, slow uptakes, and (slightly) for long-lived radionuclides. Moreover, the sensitivity of the survival curve on the uptake model was found to be higher for the tumor cells with the larger α-coefficient.
Conclusions: The exponential-uptake rate of the radiopharmaceutical inside targeted cells appears to have a considerable effect on the survival of a proliferating cell population and might need to be considered in radiobiological models of tumor cell-kill in radionuclide therapy.
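The effect of a finite uptake phase can be sketched with the standard rise-and-decay dose-rate expression for monoexponential uptake; the half-times below are illustrative choices from the quoted ranges, and the functional form is a common textbook one, not necessarily the paper's exact parametrization:

```python
import math

LN2 = math.log(2.0)

# Dose-rate with monoexponential uptake: activity rises with uptake constant
# mu and falls with effective decay constant lam, giving
# R(t) = r0 * (exp(-lam*t) - exp(-(lam+mu)*t)). All values are illustrative.
def dose_rate(t, r0=1.0, t_half_eff=64.0, t_half_uptake=6.0):
    lam = LN2 / t_half_eff       # effective decay constant, 1/h
    mu = LN2 / t_half_uptake     # uptake rate constant, 1/h
    return r0 * (math.exp(-lam * t) - math.exp(-(lam + mu) * t))

def total_dose(r0=1.0, t_half_eff=64.0, t_half_uptake=6.0):
    # Integral of dose_rate over [0, inf); instantaneous uptake gives r0/lam
    lam = LN2 / t_half_eff
    mu = LN2 / t_half_uptake
    return r0 * (1.0 / lam - 1.0 / (lam + mu))
```

The total dose is strictly smaller than in the instantaneous-uptake limit, consistent with the abstract's finding that ignoring uptake over-estimates cell-kill.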
A Simulation of the ECSS Help Desk with the Erlang a Model
2011-03-01
a popular distribution is the exponential distribution as shown in Figure 3. Figure 3: Exponential Distribution (Bourke, 2001) Exponential...System Sciences, Vol 8, 235B. Bourke, P. (2001, January). Miscellaneous Functions. Retrieved January 22, 2011, from http://local.wasp.uwa.edu.au
Method for nonlinear exponential regression analysis
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1972-01-01
Two computer programs, developed according to two general types of exponential models, for conducting nonlinear exponential regression analysis are described. A least-squares procedure is used, in which the nonlinear problem is linearized by expansion in a Taylor series. The programs are written in FORTRAN 5 for the Univac 1108 computer.
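The linearization strategy can be sketched as Gauss-Newton iteration on the model y = a·exp(bx): the model is expanded to first order in a Taylor series about the current parameter estimate and the linear least-squares correction is applied. This is an illustrative Python rewrite with a simple damping step, not the original FORTRAN program:

```python
import math

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 * math.exp(0.3 * x) for x in xs]   # noiseless synthetic data

def sse(a, b):
    return sum((y - a * math.exp(b * x)) ** 2 for x, y in zip(xs, ys))

def fit_exponential(a=1.0, b=0.0, iters=100):
    for _ in range(iters):
        # Jacobian columns: dy/da = exp(bx), dy/db = a*x*exp(bx)
        j = [(math.exp(b * x), a * x * math.exp(b * x)) for x in xs]
        r = [y - a * math.exp(b * x) for x, y in zip(xs, ys)]
        s11 = sum(j1 * j1 for j1, _ in j)
        s12 = sum(j1 * j2 for j1, j2 in j)
        s22 = sum(j2 * j2 for _, j2 in j)
        g1 = sum(j1 * ri for (j1, _), ri in zip(j, r))
        g2 = sum(j2 * ri for (_, j2), ri in zip(j, r))
        det = s11 * s22 - s12 * s12
        # Solve the 2x2 normal equations (J^T J) delta = J^T r
        da = (s22 * g1 - s12 * g2) / det
        db = (s11 * g2 - s12 * g1) / det
        step = 1.0   # damped step: halve until the SSE improves
        while step > 1e-12 and sse(a + step * da, b + step * db) >= sse(a, b):
            step *= 0.5
        a, b = a + step * da, b + step * db
    return a, b

a_hat, b_hat = fit_exponential()
```

On the noiseless data the iteration recovers the generating parameters (a, b) = (2, 0.3).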
NASA Astrophysics Data System (ADS)
Iskandar, I.
2018-03-01
The exponential distribution is the most widely used distribution in reliability analysis. It is very suitable for representing the lifetimes in many cases and is available in a simple statistical form. The characteristic of this distribution is a constant hazard rate. The exponential distribution is a special case of the Weibull distribution (shape parameter equal to one). In this paper we introduce the basic notions that constitute an exponential competing risks model in reliability analysis using a Bayesian analysis approach and present the analytic methods. The cases are limited to models with independent causes of failure. A non-informative prior distribution is used in our analysis. We describe the likelihood function, followed by the posterior function and the point, interval, hazard-function, and reliability estimates. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
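With independent exponential risks, the net and crude probabilities listed at the end have simple closed forms. A sketch with hypothetical cause-specific rates:

```python
import math

# Hypothetical constant cause-specific failure rates for three risks
lam = [0.02, 0.05, 0.01]

def survival(t):
    """Overall survival: all risks absent, exp(-sum(lam)*t)."""
    return math.exp(-sum(lam) * t)

def crude_prob(j, t):
    """Probability of failing from cause j by time t with all risks acting."""
    total = sum(lam)
    return lam[j] / total * (1.0 - math.exp(-total * t))

def net_prob(j, t):
    """Probability of failing from cause j if it were the only risk present."""
    return 1.0 - math.exp(-lam[j] * t)
```

The crude probabilities plus the overall survival sum to one, and each crude probability is bounded above by the corresponding net probability, since the other risks can pre-empt cause j.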
Central Limit Theorem for Exponentially Quasi-local Statistics of Spin Models on Cayley Graphs
NASA Astrophysics Data System (ADS)
Reddy, Tulasi Ram; Vadlamani, Sreekar; Yogeshwaran, D.
2018-04-01
Central limit theorems for linear statistics of lattice random fields (including spin models) are usually proven under suitable mixing conditions or quasi-associativity. Many interesting examples of spin models do not satisfy mixing conditions, and on the other hand, it does not seem easy to show a central limit theorem for local statistics via quasi-associativity. In this work, we prove general central limit theorems for local statistics and exponentially quasi-local statistics of spin models on discrete Cayley graphs with polynomial growth. Further, we supplement these results by proving similar central limit theorems for random fields on discrete Cayley graphs taking values in a countable space, but under the stronger assumptions of α-mixing (for local statistics) and exponential α-mixing (for exponentially quasi-local statistics). All our central limit theorems assume a suitable variance lower bound like many others in the literature. We illustrate our general central limit theorem with specific examples of lattice spin models and statistics arising in computational topology, statistical physics and random networks. Examples of clustering spin models include quasi-associated spin models with fast decaying covariances like the off-critical Ising model, level sets of Gaussian random fields with fast decaying covariances like the massive Gaussian free field, and determinantal point processes with fast decaying kernels. Examples of local statistics include intrinsic volumes, face counts, and component counts of random cubical complexes, while exponentially quasi-local statistics include nearest-neighbour distances in spin models and Betti numbers of sub-critical random cubical complexes.
Andreev bound states. Some quasiclassical reflections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Y., E-mail: yiriolin@illinois.edu; Leggett, A. J.
2014-12-15
We discuss a very simple and essentially exactly solvable model problem which illustrates some nice features of Andreev bound states, namely, the trapping of a single Bogoliubov quasiparticle in a neutral s-wave BCS superfluid by a wide and shallow Zeeman trap. In the quasiclassical limit, the ground state is a doublet with a splitting which is proportional to the exponentially small amplitude for “normal” reflection by the edges of the trap. We comment briefly on a prima facie paradox concerning the continuity equation and conjecture a resolution to it.
Ground-state proton decay of 69Br and implications for the rp-process 68Se waiting-point
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rogers, Andrew M; Shapira, Dan; Lynch, William
2011-01-01
The first direct measurement of the proton separation energy, Sp, for the proton-unbound nucleus 69Br is reported. Of interest is the exponential dependence of the 2p-capture rate on Sp, which can bypass the 68Se waiting-point in the astrophysical rp-process. An analysis of the observed proton decay spectrum is given in terms of the 69Se mirror nucleus, and the influence of Sp is explored within the context of a single-zone X-ray burst model.
Optical spectroscopy of BaY2F8:Dy3+
NASA Astrophysics Data System (ADS)
Parisi, Daniela; Toncelli, Alessandra; Tonelli, Mauro; Cavalli, Enrico; Bovero, Enrico; Belletti, Alessandro
2005-05-01
The optical spectra of the BaY2F8:Dy3+ laser crystal have been investigated in the 5000-30 000 cm-1 range. The Judd-Ofelt parametrization scheme has been applied to the analysis of the room-temperature absorption spectra. The calculated radiative lifetime of the 4F9/2 state is 1.48 ms. Decay curves of the visible emission have been measured as a function of temperature for two different Dy3+ concentrations (0.5 and 4.4%). In the case of the diluted crystal the emission profiles are single exponential, with decay times consistent with the radiative lifetime. The decay curves of the concentrated crystal are not exponential, and they obey the Inokuti-Hirayama model for energy transfer by electric dipole-dipole interaction in the absence of diffusion among the donors. The emission cross section at 575 nm has been estimated using the integral β-τ method to assess the potential of this compound as a solid-state laser material in the yellow region.
Scaling analysis and instantons for thermally assisted tunneling and quantum Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Jiang, Zhang; Smelyanskiy, Vadim N.; Isakov, Sergei V.; Boixo, Sergio; Mazzola, Guglielmo; Troyer, Matthias; Neven, Hartmut
2017-01-01
We develop an instantonic calculus to derive an analytical expression for the thermally assisted tunneling decay rate of a metastable state in a fully connected quantum spin model. The tunneling decay problem can be mapped onto the Kramers escape problem of a classical random dynamical field. This dynamical field is simulated efficiently by path-integral quantum Monte Carlo (QMC). We show analytically that the exponential scaling with the number of spins of the thermally assisted quantum tunneling rate and the escape rate of the QMC process are identical. We relate this effect to the existence of a dominant instantonic tunneling path. The instanton trajectory is described by nonlinear dynamical mean-field theory equations for a single-site magnetization vector, which we solve exactly. Finally, we derive scaling relations for the "spiky" barrier shape when the spin tunneling and QMC rates scale polynomially with the number of spins N while a purely classical over-the-barrier activation rate scales exponentially with N .
Liu, Dong-jun; Li, Li
2015-01-01
For the issue of haze-fog, PM2.5 is the main influencing factor of haze-fog pollution in China. The trend of PM2.5 concentration was analyzed from a qualitative point of view based on mathematical models and simulation in this study. The comprehensive forecasting model (CFM) was developed based on combination forecasting ideas. The Autoregressive Integrated Moving Average (ARIMA) model, Artificial Neural Networks (ANNs) and the Exponential Smoothing Method (ESM) were used to predict the time series of PM2.5 concentration. The results of the comprehensive forecasting model were obtained by combining the results of the three methods using weights from the Entropy Weighting Method. The trend of PM2.5 concentration in Guangzhou, China was quantitatively forecasted with the comprehensive forecasting model. The results were compared with those of the three single models, and PM2.5 concentration values for the next ten days were predicted. The comprehensive forecasting model balanced the deviations of the single prediction methods and had better applicability. It provides a new prediction approach for the air quality forecasting field.
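One common variant of entropy-based combination weighting can be sketched as follows. The error series and forecast values are invented, and this particular weighting scheme (weights from one minus the normalized entropy of each model's error sequence) is one standard variant; the paper's exact scheme may differ:

```python
import math

# Hypothetical absolute forecast errors of each single model over past periods
errors = {
    "ARIMA": [0.10, 0.12, 0.08, 0.11],
    "ANN":   [0.06, 0.05, 0.07, 0.06],
    "ESM":   [0.15, 0.20, 0.10, 0.18],
}

def entropy_weights(errs):
    """Weight each model by 1 - (normalized entropy of its error series)."""
    n = len(next(iter(errs.values())))
    div = {}
    for name, es in errs.items():
        total = sum(es)
        p = [x / total for x in es]
        h = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n)
        div[name] = 1.0 - h            # degree of divergence
    z = sum(div.values())
    return {name: d / z for name, d in div.items()}

def combine(forecasts, weights):
    """Weighted combination of the single-model forecasts."""
    return sum(weights[name] * f for name, f in forecasts.items())

w = entropy_weights(errors)
forecast = combine({"ARIMA": 82.0, "ANN": 78.0, "ESM": 85.0}, w)  # ug/m3
```

Because the weights are positive and sum to one, the combined forecast is a convex combination of the single-model forecasts, which is what lets the CFM balance the deviations of the individual methods.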
Mathematical Modeling of Extinction of Inhomogeneous Populations
Karev, G.P.; Kareva, I.
2016-01-01
Mathematical models of population extinction have a variety of applications in such areas as ecology, paleontology and conservation biology. Here we propose and investigate two types of sub-exponential models of population extinction. Unlike the more traditional exponential models, the life duration of sub-exponential models is finite. In the first model, the population is assumed to be composed of clones that are independent of each other. In the second model, we assume that the size of the population as a whole decreases according to the sub-exponential equation. We then investigate the “unobserved heterogeneity”, i.e. the underlying inhomogeneous population model, and calculate the distribution of clone frequencies for both models. We show that the dynamics of frequencies in the first model is governed by the principle of minimum Tsallis information loss. In the second model, the notion of “internal population time” is proposed; with respect to the internal time, the dynamics of frequencies is governed by the principle of minimum Shannon information loss. The results of this analysis show that the principle of minimum information loss is the underlying law for the evolution of a broad class of models of population extinction. Finally, we propose a possible application of this modeling framework to mechanisms underlying time perception.
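The finite life duration of a sub-exponential model is easy to see in closed form: for dN/dt = -k·N^α with 0 < α < 1, the quantity N^(1-α) decreases linearly and hits zero in finite time. A minimal sketch (the specific equation form is the standard sub-exponential decay law; parameter values are illustrative):

```python
# Sub-exponential decline dN/dt = -k * N**alpha, 0 < alpha < 1:
# N(t) = (N0**(1-alpha) - (1-alpha)*k*t)**(1/(1-alpha)) until extinction.
def n_subexp(t, n0, k, alpha):
    base = n0 ** (1 - alpha) - (1 - alpha) * k * t
    return base ** (1 / (1 - alpha)) if base > 0 else 0.0

def extinction_time(n0, k, alpha):
    """Finite life duration, unlike exponential decay which never reaches 0."""
    return n0 ** (1 - alpha) / ((1 - alpha) * k)
```

For example, with N0 = 100, k = 1, and α = 1/2, the population vanishes exactly at t = 20, whereas an exponential model would only approach zero asymptotically.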
Using Exponential Smoothing to Specify Intervention Models for Interrupted Time Series.
ERIC Educational Resources Information Center
Mandell, Marvin B.; Bretschneider, Stuart I.
1984-01-01
The authors demonstrate how exponential smoothing can play a role in the identification of the intervention component of an interrupted time-series design model that is analogous to the role that the sample autocorrelation and partial autocorrelation functions serve in the identification of the noise portion of such a model. (Author/BW)
Lawless, Conor; Jurk, Diana; Gillespie, Colin S; Shanley, Daryl; Saretzki, Gabriele; von Zglinicki, Thomas; Passos, João F
2012-01-01
Increases in cellular Reactive Oxygen Species (ROS) concentration with age have been observed repeatedly in mammalian tissues. Concomitant increases in the proportion of replicatively senescent cells in ageing mammalian tissues have also been observed. Populations of mitotic human fibroblasts cultured in vitro, undergoing transition from proliferation competence to replicative senescence are useful models of ageing human tissues. Similar exponential increases in ROS with age have been observed in this model system. Tracking individual cells in dividing populations is difficult, and so the vast majority of observations have been cross-sectional, at the population level, rather than longitudinal observations of individual cells. One possible explanation for these observations is an exponential increase in ROS in individual fibroblasts with time (e.g. resulting from a vicious cycle between cellular ROS and damage). However, we demonstrate an alternative, simple hypothesis, equally consistent with these observations which does not depend on any gradual increase in ROS concentration: the Stochastic Step Model of Replicative Senescence (SSMRS). We also demonstrate that, consistent with the SSMRS, neither proliferation-competent human fibroblasts of any age, nor populations of hTERT overexpressing human fibroblasts passaged beyond the Hayflick limit, display high ROS concentrations. We conclude that longitudinal studies of single cells and their lineages are now required for testing hypotheses about roles and mechanisms of ROS increase during replicative senescence.
USDA-ARS?s Scientific Manuscript database
A new mechanistic growth model was developed to describe microbial growth under isothermal conditions. The new mathematical model was derived from the basic observation of bacterial growth that may include lag, exponential, and stationary phases. With this model, the lag phase duration and exponen...
Predicting High-Power Performance in Professional Cyclists.
Sanders, Dajo; Heijboer, Mathieu; Akubat, Ibrahim; Meijer, Kenneth; Hesselink, Matthijs K
2017-03-01
To assess if short-duration (5 to ~300 s) high-power performance can accurately be predicted using the anaerobic power reserve (APR) model in professional cyclists. Data from 4 professional cyclists from a World Tour cycling team were used. Using the maximal aerobic power, sprint peak power output, and an exponential constant describing the decrement in power over time, a power-duration relationship was established for each participant. To test the predictive accuracy of the model, several all-out field trials of different durations were performed by each cyclist. The power output achieved during the all-out trials was compared with the predicted power output by the APR model. The power output predicted by the model showed very large to nearly perfect correlations to the actual power output obtained during the all-out trials for each cyclist (r = .88 ± .21, .92 ± .17, .95 ± .13, and .97 ± .09). Power output during the all-out trials remained within an average of 6.6% (53 W) of the predicted power output by the model. This preliminary pilot study presents 4 case studies on the applicability of the APR model in professional cyclists using a field-based approach. The decrement in all-out performance during high-intensity exercise seems to conform to a general relationship with a single exponential-decay model describing the decrement in power vs increasing duration. These results are in line with previous studies using the APR model to predict performance during brief all-out trials. Future research should evaluate the APR model with a larger sample size of elite cyclists.
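The power-duration relationship described above can be sketched as a single exponential decay from sprint peak power toward maximal aerobic power (MAP). The decay constant and rider values below are illustrative assumptions, not data from the study.

```python
import math

def apr_power(t, map_w, sprint_peak_w, k):
    """Anaerobic power reserve (APR) model, sketched: predicted mean power
    (W) for an all-out effort of duration t (s) decays exponentially from
    sprint peak power toward MAP. k is the exponential constant describing
    the decrement in power over time (per second)."""
    apr = sprint_peak_w - map_w          # anaerobic power reserve
    return map_w + apr * math.exp(-k * t)

# hypothetical rider: MAP 420 W, sprint peak 1400 W, k = 0.025 s^-1
p_sprint = apr_power(5, 420, 1400, 0.025)    # short sprint, near peak power
p_5min = apr_power(300, 420, 1400, 0.025)    # ~5 min effort, approaches MAP
```

Predicted power for very short efforts sits near the sprint peak, while predictions for efforts of several minutes converge on MAP, matching the shape of the power-duration relationship the abstract describes.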
Teaching the Verhulst Model: A Teaching Experiment in Covariational Reasoning and Exponential Growth
ERIC Educational Resources Information Center
Castillo-Garsow, Carlos
2010-01-01
Both Thompson and the duo of Confrey and Smith describe how students might be taught to build "ways of thinking" about exponential behavior by coordinating the covariation of two changing quantities, however, these authors build exponential behavior from different meanings of covariation. Confrey and Smith advocate beginning with discrete additive…
Review of "Going Exponential: Growing the Charter School Sector's Best"
ERIC Educational Resources Information Center
Garcia, David
2011-01-01
This Progressive Policy Institute report argues that charter schools should be expanded rapidly and exponentially. Citing exponential growth organizations, such as Starbucks and Apple, as well as the rapid growth of molds, viruses and cancers, the report advocates for similar growth models for charter schools. However, there is no explanation of…
Fractional compartmental models and multi-term Mittag-Leffler response functions.
Verotta, Davide
2010-04-01
Systems of fractional differential equations (SFDE) have been increasingly used to represent physical and control systems, and have recently been proposed for use in pharmacokinetics (PK) by (J Pharmacokinet Pharmacodyn 36:165-178, 2009) and (J Pharmacokinet Pharmacodyn, 2010). We contribute to the development of a theory for the use of SFDE in PK by, first, further clarifying the nature of systems of FDE, in particular pointing out the distinction and properties of commensurate versus non-commensurate ones. The second purpose is to show that for both types of systems, relatively simple response functions can be derived which satisfy the requirements to represent single-input/single-output PK experiments. The response functions are composed of sums of single-parameter (for commensurate) or two-parameter (for non-commensurate) Mittag-Leffler functions, and establish a direct correspondence with the familiar sums of exponentials used in PK.
McKellar, Robin C
2008-01-15
Developing accurate mathematical models to describe the pre-exponential lag phase in food-borne pathogens presents a considerable challenge to food microbiologists. While the growth rate is influenced by current environmental conditions, the lag phase is affected in addition by the history of the inoculum. A deeper understanding of physiological changes taking place during the lag phase would improve accuracy of models, and in earlier studies a strain of Pseudomonas fluorescens containing the Tn7-luxCDABE gene cassette regulated by the rRNA promoter rrnB P2 was used to measure the influence of starvation, growth temperature and sub-lethal heating on promoter expression and subsequent growth. The present study expands the models developed earlier to include a model which describes the change from exponential to linear increase in promoter expression with time when the exponential phase of growth commences. A two-phase linear model with Poisson weighting was used to estimate the lag (LPDLin) and the rate (RLin) for this linear increase in bioluminescence. The Spearman rank correlation coefficient (r=0.830) between the LPDLin and the growth lag phase (LPDOD) was extremely significant (P
The impacts of precipitation amount simulation on hydrological modeling in Nordic watersheds
NASA Astrophysics Data System (ADS)
Li, Zhi; Brissette, Fancois; Chen, Jie
2013-04-01
Stochastic modeling of daily precipitation is very important for hydrological modeling, especially when no observed data are available. Precipitation is usually modeled by a two-component model: occurrence generation and amount simulation. For occurrence simulation, the most common method is the first-order two-state Markov chain, due to its simplicity and good performance. However, various probability distributions have been reported to simulate precipitation amount, and spatiotemporal differences exist in the applicability of different distribution models. Therefore, assessing the applicability of different distribution models is necessary in order to provide more accurate precipitation information. Six precipitation probability distributions (exponential, Gamma, Weibull, skewed normal, mixed exponential, and hybrid exponential/Pareto distributions) are directly and indirectly evaluated on their ability to reproduce the original observed time series of precipitation amount. Data from 24 weather stations and two watersheds (Chute-du-Diable and Yamaska watersheds) in the province of Quebec (Canada) are used for this assessment. Various indices and statistics, such as the mean, variance, frequency distribution and extreme values, are used to quantify the performance in simulating precipitation and discharge. Performance in reproducing key statistics of the precipitation time series is well correlated with the number of parameters of the distribution function, and the three-parameter precipitation models outperform the other models, with the mixed exponential distribution being the best at simulating daily precipitation. The advantage of using more complex precipitation distributions is not as clear-cut when the simulated time series are used to drive a hydrological model. While the advantage of using functions with more parameters is not nearly as obvious, the mixed exponential distribution appears nonetheless as the best candidate for hydrological modeling.
The implications of choosing a distribution function with respect to hydrological modeling and climate change impact studies are also discussed.
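The mixed exponential amount model favored above can be sketched as a two-component mixture: with probability p an amount is drawn from a "small events" exponential, otherwise from a "large events" exponential. The parameter values below are illustrative assumptions; in practice (p, beta1, beta2) are fitted to station data.

```python
import random

def mixed_exponential(p, beta1, beta2, rng):
    """Sample a daily precipitation amount (mm) from a mixed exponential
    distribution: with probability p, exponential with mean beta1
    (small events); otherwise exponential with mean beta2 (large events)."""
    mean = beta1 if rng.random() < p else beta2
    return rng.expovariate(1.0 / mean)

rng = random.Random(42)
sample = [mixed_exponential(0.8, 2.0, 12.0, rng) for _ in range(20000)]
# theoretical mean of this mixture: 0.8 * 2.0 + 0.2 * 12.0 = 4.0 mm
est_mean = sum(sample) / len(sample)
```

The extra third parameter is what lets the mixture reproduce both the many small events and the heavy upper tail that a single exponential distribution tends to miss.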
NASA Astrophysics Data System (ADS)
Lengline, O.; Marsan, D.; Got, J.; Pinel, V.
2007-12-01
The evolution of the seismicity at three basaltic volcanoes (Kilauea, Mauna Loa and Piton de la Fournaise) is analysed during phases of magma accumulation. We show that the VT seismicity during these time periods is characterized by an exponential increase at long time scales (years). Such an exponential acceleration can be explained by a model of seismicity forced by the replenishment of a magmatic reservoir. The increase in stress in the edifice caused by this replenishment is modeled. This stress history leads to a cumulative count of damage events, i.e., VT earthquakes, following the same exponential increase as found for the seismicity. A long-term seismicity precursor is thus detected at basaltic volcanoes. Although this precursory signal is not able to predict the onset times of future eruptions (as no diverging point is present in the model), it may help mitigate volcanic hazards.
Broadband Spectral Modeling of the Extreme Gigahertz-peaked Spectrum Radio Source PKS B0008-421
NASA Astrophysics Data System (ADS)
Callingham, J. R.; Gaensler, B. M.; Ekers, R. D.; Tingay, S. J.; Wayth, R. B.; Morgan, J.; Bernardi, G.; Bell, M. E.; Bhat, R.; Bowman, J. D.; Briggs, F.; Cappallo, R. J.; Deshpande, A. A.; Ewall-Wice, A.; Feng, L.; Greenhill, L. J.; Hazelton, B. J.; Hindson, L.; Hurley-Walker, N.; Jacobs, D. C.; Johnston-Hollitt, M.; Kaplan, D. L.; Kudrayvtseva, N.; Lenc, E.; Lonsdale, C. J.; McKinley, B.; McWhirter, S. R.; Mitchell, D. A.; Morales, M. F.; Morgan, E.; Oberoi, D.; Offringa, A. R.; Ord, S. M.; Pindor, B.; Prabu, T.; Procopio, P.; Riding, J.; Srivani, K. S.; Subrahmanyan, R.; Udaya Shankar, N.; Webster, R. L.; Williams, A.; Williams, C. L.
2015-08-01
We present broadband observations and spectral modeling of PKS B0008-421 and identify it as an extreme gigahertz-peaked spectrum (GPS) source. PKS B0008-421 is characterized by the steepest known spectral slope below the turnover, close to the theoretical limit of synchrotron self-absorption, and the smallest known spectral width of any GPS source. Spectral coverage of the source spans from 0.118 to 22 GHz, which includes data from the Murchison Widefield Array and the wide bandpass receivers on the Australia Telescope Compact Array. We have implemented a Bayesian inference model fitting routine to fit the data with internal free-free absorption (FFA), single- and double-component FFA in an external homogeneous medium, FFA in an external inhomogeneous medium, or single- and double-component synchrotron self-absorption models, all with and without a high-frequency exponential break. We find that without the inclusion of a high-frequency break these models cannot accurately fit the data, with significant deviations above and below the peak in the radio spectrum. The addition of a high-frequency break provides acceptable spectral fits for the inhomogeneous FFA and double-component synchrotron self-absorption models, with the inhomogeneous FFA model statistically favored. The requirement of a high-frequency spectral break implies that the source has ceased injecting fresh particles. Additional support for the inhomogeneous FFA model as being responsible for the turnover in the spectrum is given by the consistency between the physical parameters derived from the model fit and the implications of the exponential spectral break, such as the necessity of the source being surrounded by a dense ambient medium to maintain the peak frequency near the gigahertz region. This implies that PKS B0008-421 should display an internal H I column density greater than 10^20 cm^-2.
The discovery of PKS B0008-421 suggests that the next generation of low radio frequency surveys could reveal a large population of GPS sources that have ceased activity, and that a portion of the ultra-steep-spectrum source population could be composed of these GPS sources in a relic phase.
Vadeby, Anna; Forsman, Åsa
2017-06-01
This study investigated the effect of applying two aggregated models (the Power model and the Exponential model) to individual vehicle speeds instead of mean speeds. This is of particular interest when the measure introduced affects different parts of the speed distribution differently. The aim was to examine how the estimated overall risk was affected when assuming the models are valid on an individual vehicle level. Speed data from two applications of speed measurements were used in the study: an evaluation of movable speed cameras and a national evaluation of new speed limits in Sweden. The results showed that when applied on individual vehicle speed level compared with aggregated level, there was essentially no difference between these for the Power model in the case of injury accidents. However, for fatalities the difference was greater, especially for roads with new cameras where those driving fastest reduced their speed the most. For the case with new speed limits, the individual approach estimated a somewhat smaller effect, reflecting that changes in the 15th percentile (P15) were somewhat larger than changes in P85 in this case. For the Exponential model there was also a clear, although small, difference between applying the model to mean speed changes and individual vehicle speed changes when speed cameras were used. This applied both for injury accidents and fatalities. There were also larger effects for the Exponential model than for the Power model, especially for injury accidents. In conclusion, applying the Power or Exponential model to individual vehicle speeds is an alternative that provides reasonable results in relation to the original Power and Exponential models, but more research is needed to clarify the shape of the individual risk curve. It is not surprising that the impact on severe traffic crashes was larger in situations where those driving fastest reduced their speed the most. 
Further investigations on use of the Power and/or the Exponential model at individual vehicle level would require more data on the individual level from a range of international studies. Copyright © 2017 Elsevier Ltd. All rights reserved.
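The two aggregated models discussed above can be sketched and contrasted at mean-speed versus individual-vehicle level as follows. The exponents are the commonly cited Power model values (about 2 for injury accidents, about 4 for fatal accidents); the beta value and the speed data are illustrative assumptions.

```python
import math

def power_model(v_before, v_after, exponent):
    """Power model: relative change in accident count when speed changes
    from v_before to v_after, as (v_after / v_before) ** exponent."""
    return (v_after / v_before) ** exponent

def exponential_model(v_before, v_after, beta):
    """Exponential model: relative change as exp(beta * dv), with beta
    per km/h (the value used below is illustrative, not calibrated)."""
    return math.exp(beta * (v_after - v_before))

# aggregated use: a 2 km/h drop in mean speed from 90 km/h, fatal accidents
agg_fatal = power_model(90.0, 88.0, 4)

# individual-vehicle use: apply the model per vehicle, then average;
# here the fastest drivers reduce their speed the most, as with cameras
speeds_before = [80.0, 90.0, 100.0, 110.0]
speeds_after = [79.0, 88.0, 96.0, 104.0]
ind_fatal = sum(power_model(b, a, 4)
                for b, a in zip(speeds_before, speeds_after)) / 4
```

In this toy case the individual-level estimate is lower (a larger predicted reduction in fatalities) than the mean-speed estimate, mirroring the abstract's finding that the difference matters most when those driving fastest slow down the most.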
Volkán-Kacsó, Sándor; Marcus, Rudolph A.
2015-01-01
A theoretical model of elastically coupled reactions is proposed for single molecule imaging and rotor manipulation experiments on F1-ATPase. Stalling experiments are considered in which rates of individual ligand binding, ligand release, and chemical reaction steps have an exponential dependence on rotor angle. These data are treated in terms of the effect of thermodynamic driving forces on reaction rates, and lead to equations relating rate constants and free energies to the stalling angle. These relations, in turn, are modeled using a formalism originally developed to treat electron and other transfer reactions. During stalling the free energy profile of the enzymatic steps is altered by a work term due to elastic structural twisting. Using biochemical and single molecule data, the dependence of the rate constant and equilibrium constant on the stall angle, as well as the Brønsted slope, are predicted and compared with experiment. Reasonable agreement is found with stalling experiments for ATP and GTP binding. The model can be applied to other torque-generating steps of reversible ligand binding, such as ADP and Pi release, when sufficient data become available. PMID:26483483
Burton, Kevin; Simmons, Robert M; Sleep, John; Smith, David A
2006-01-01
Redevelopment of isometric force following shortening of skeletal muscle is thought to result from a redistribution of cross-bridge states. We varied the initial force and cross-bridge distribution by applying various length-change protocols to active skinned single fibres from rabbit psoas muscle, and observed the effect on the slowest phase of recovery (‘late recovery’) that follows transient changes. In response to step releases that reduced force to near zero (∼8 nm (half sarcomere)⁻¹) or prolonged shortening at high velocity, late recovery was well described by two exponentials of approximately equal amplitude and rate constants of ∼2 s⁻¹ and ∼9 s⁻¹ at 5°C. When a large restretch was applied at the end of rapid shortening, recovery was accelerated by (1) the introduction of a slow falling component that truncated the rise in force, and (2) a relative increase in the contribution of the fast exponential component. The rate of the slow fall was similar to that observed after a small isometric step stretch, with a rate of 0.4–0.8 s⁻¹, and its effects could be reversed by reducing force to near zero immediately after the stretch. Force at the start of late recovery was varied in a series of shortening steps or ramps in order to probe the effect of cross-bridge strain on force redevelopment. The rate constants of the two components fell by 40–50% as initial force was raised to 75–80% of steady isometric force. As initial force increased, the relative contribution of the fast component decreased, and this was associated with a length constant of about 2 nm. The results are consistent with a two-state strain-dependent cross-bridge model. In the model there is a continuous distribution of recovery rate constants, but two-exponential fits show that the fast component results from cross-bridges initially at moderate positive strain and the slow component from cross-bridges at high positive strain. PMID:16497718
Modeling Rabbit Responses to Single and Multiple Aerosol ...
Journal Article Survival models are developed here to predict response and time-to-response for mortality in rabbits following exposures to single or multiple aerosol doses of Bacillus anthracis spores. Hazard function models were developed for a multiple dose dataset to predict the probability of death through specifying dose-response functions and the time between exposure and the time-to-death (TTD). Among the models developed, the best-fitting survival model (baseline model) has an exponential dose-response model with a Weibull TTD distribution. Alternative models assessed employ different underlying dose-response functions and use the assumption that, in a multiple dose scenario, earlier doses affect the hazard functions of each subsequent dose. In addition, published mechanistic models are analyzed and compared with models developed in this paper. None of the alternative models that were assessed provided a statistically significant improvement in fit over the baseline model. The general approach utilizes simple empirical data analysis to develop parsimonious models with limited reliance on mechanistic assumptions. The baseline model predicts TTDs consistent with reported results from three independent high-dose rabbit datasets. More accurate survival models depend upon future development of dose-response datasets specifically designed to assess potential multiple dose effects on response and time-to-response. The process used in this paper to dev
Statistics of excitations in the electron glass model
NASA Astrophysics Data System (ADS)
Palassini, Matteo
2011-03-01
We study the statistics of elementary excitations in the classical electron glass model of localized electrons interacting via the unscreened Coulomb interaction in the presence of disorder. We reconsider the long-standing puzzle of the exponential suppression of the single-particle density of states near the Fermi level, by measuring accurately the density of states of charged and electron-hole pair excitations via finite temperature Monte Carlo simulation and zero-temperature relaxation. We also investigate the statistics of large charge rearrangements after a perturbation of the system, which may shed some light on the slow relaxation and glassy phenomena recently observed in a variety of Anderson insulators. In collaboration with Martin Goethe.
Coagulation-Fragmentation Model for Animal Group-Size Statistics
NASA Astrophysics Data System (ADS)
Degond, Pierre; Liu, Jian-Guo; Pego, Robert L.
2017-04-01
We study coagulation-fragmentation equations inspired by a simple model proposed in fisheries science to explain data for the size distribution of schools of pelagic fish. Although the equations lack detailed balance and admit no H-theorem, we are able to develop a rather complete description of equilibrium profiles and large-time behavior, based on recent developments in complex function theory for Bernstein and Pick functions. In the large-population continuum limit, a scaling-invariant regime is reached in which all equilibria are determined by a single scaling profile. This universal profile exhibits power-law behavior crossing over from exponent -2/3 for small size to -3/2 for large size, with an exponential cutoff.
A Complex-Valued Firing-Rate Model That Approximates the Dynamics of Spiking Networks
Schaffer, Evan S.; Ostojic, Srdjan; Abbott, L. F.
2013-01-01
Firing-rate models provide an attractive approach for studying large neural networks because they can be simulated rapidly and are amenable to mathematical analysis. Traditional firing-rate models assume a simple form in which the dynamics are governed by a single time constant. These models fail to replicate certain dynamic features of populations of spiking neurons, especially those involving synchronization. We present a complex-valued firing-rate model derived from an eigenfunction expansion of the Fokker-Planck equation and apply it to the linear, quadratic and exponential integrate-and-fire models. Despite being almost as simple as a traditional firing-rate description, this model can reproduce firing-rate dynamics due to partial synchronization of the action potentials in a spiking model, and it successfully predicts the transition to spike synchronization in networks of coupled excitatory and inhibitory neurons. PMID:24204236
Cardiac sodium channel Markov model with temperature dependence and recovery from inactivation.
Irvine, L A; Jafri, M S; Winslow, R L
1999-01-01
A Markov model of the cardiac sodium channel is presented. The model is similar to the CA1 hippocampal neuron sodium channel model developed by Kuo and Bean (1994. Neuron. 12:819-829) with the following modifications: 1) an additional open state is added; 2) open-inactivated transitions are made voltage-dependent; and 3) channel rate constants are exponential functions of enthalpy, entropy, and voltage and have explicit temperature dependence. Model parameters are determined using a simulated annealing algorithm to minimize the error between model responses and various experimental data sets. The model reproduces a wide range of experimental data including ionic currents, gating currents, tail currents, steady-state inactivation, recovery from inactivation, and open time distributions over a temperature range of 10 degrees C to 25 degrees C. The model also predicts measures of single channel activity such as first latency, probability of a null sweep, and probability of reopening. PMID:10096885
Haslinger, Robert; Pipa, Gordon; Brown, Emery
2010-10-01
One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time-rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model's spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies on assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time-rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time-rescaling theorem that analytically corrects for the effects of finite resolution. This allows us to define a rescaled time that is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting generalized linear models to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false-positive rate of the KS test and greatly increasing the reliability of model evaluation based on the time-rescaling theorem.
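The continuous-time time-rescaling goodness-of-fit procedure referenced above can be sketched as follows: integrate the model's conditional intensity between successive spikes to obtain rescaled ISIs, then compare them to the Exponential(1) distribution via a KS statistic. The grid-based integration and the simulation parameters are illustrative choices, not from the paper.

```python
import math
import random

def rescaled_times(spike_times, intensity):
    """Time-rescaling theorem: tau_k is the integral of the conditional
    intensity over the k-th ISI; if the model is correct, the tau_k are
    i.i.d. Exponential(1). The integral is approximated on a fine grid."""
    taus = []
    for t0, t1 in zip(spike_times, spike_times[1:]):
        n = 1000
        dt = (t1 - t0) / n
        taus.append(sum(intensity(t0 + (i + 0.5) * dt) for i in range(n)) * dt)
    return taus

def ks_statistic_exponential(taus):
    """KS distance between transformed rescaled ISIs, z = 1 - exp(-tau),
    and the uniform distribution on [0, 1]."""
    z = sorted(1.0 - math.exp(-t) for t in taus)
    n = len(z)
    return max(max(abs(z[i] - i / n), abs(z[i] - (i + 1) / n))
               for i in range(n))

# simulate a homogeneous Poisson spike train at 5 Hz and test the true model:
# the KS statistic should then be small (the model fits by construction)
rng = random.Random(0)
t, spikes = 0.0, [0.0]
for _ in range(500):
    t += rng.expovariate(5.0)
    spikes.append(t)
ks = ks_statistic_exponential(rescaled_times(spikes, lambda s: 5.0))
```

The abstract's point is that when time is discretized into bins, this continuous-time recipe breaks down, and either simulation-based reference distributions or the analytically corrected discrete-time rescaling must be used instead.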
Automatic selection of arterial input function using tri-exponential models
NASA Astrophysics Data System (ADS)
Yao, Jianhua; Chen, Jeremy; Castro, Marcelo; Thomasson, David
2009-02-01
Dynamic Contrast Enhanced MRI (DCE-MRI) is one method for drug and tumor assessment. Selecting a consistent arterial input function (AIF) is necessary to calculate tissue and tumor pharmacokinetic parameters in DCE-MRI. This paper presents an automatic and robust method to select the AIF. The first stage is artery detection and segmentation, where knowledge about artery structure and dynamic signal intensity temporal properties of DCE-MRI is employed. The second stage is AIF model fitting and selection. A tri-exponential model is fitted for every candidate AIF using the Levenberg-Marquardt method, and the best fitted AIF is selected. Our method has been applied in DCE-MRIs of four different body parts: breast, brain, liver and prostate. The success rates in artery segmentation for 19 cases are 89.6% ± 15.9%. The pharmacokinetic parameters computed from the automatically selected AIFs are highly correlated with those from manually determined AIFs (R2=0.946, P(T<=t)=0.09). Our imaging-based tri-exponential AIF model demonstrated significant improvement over a previously proposed bi-exponential model.
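The model-fitting stage can be sketched with a generic tri-exponential form fitted by Levenberg-Marquardt (here via SciPy's curve_fit, whose default method for unconstrained problems is LM). The parameterization, synthetic curve, and starting values are assumptions for illustration, not the paper's.

```python
import numpy as np
from scipy.optimize import curve_fit

def tri_exp(t, a1, m1, a2, m2, a3, m3):
    """A generic tri-exponential model: sum of three decaying exponentials.
    The paper's exact parameterization of the AIF may differ."""
    return a1 * np.exp(-m1 * t) + a2 * np.exp(-m2 * t) + a3 * np.exp(-m3 * t)

# synthetic candidate AIF curve with a little noise
t = np.linspace(0.05, 5.0, 100)
true_params = (5.0, 4.0, 2.0, 0.8, 1.0, 0.1)
y = tri_exp(t, *true_params) + 0.01 * np.random.default_rng(1).normal(size=t.size)

# Levenberg-Marquardt fit from an initial guess
popt, _ = curve_fit(tri_exp, t, y,
                    p0=(4.0, 3.0, 2.0, 1.0, 1.0, 0.2), maxfev=20000)
rmse = float(np.sqrt(np.mean((tri_exp(t, *popt) - y) ** 2)))
```

Sums of exponentials are notoriously ill-conditioned to fit, which is why a reasonable initial guess (and, in the paper's pipeline, selecting the best-fitted candidate among several arteries) matters.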
Comparison of kinetic model for biogas production from corn cob
NASA Astrophysics Data System (ADS)
Shitophyta, L. M.; Maryudi
2018-04-01
Energy demand increases every day, while energy sources, especially fossil energy, are increasingly depleted. One of the solutions to overcome the energy depletion is to provide renewable energies such as biogas. Biogas can be generated from corn cob and food waste. In this study, biogas production was carried out by solid-state anaerobic digestion. The steps of biogas production were the preparation of feedstock, the solid-state anaerobic digestion, and the measurement of biogas volume. This study was conducted at TS contents of 20%, 22%, and 24%. The aim of this research was to compare kinetic models of biogas production from corn cob with food waste as a co-digestion, using linear, exponential, and first-order kinetic models. The results showed that the exponential equation had a better correlation than the linear equation on the ascending part of the biogas production curve. Conversely, the linear equation had a better correlation than the exponential equation on the descending part. The correlation values for the first-order kinetic model were the smallest compared to the linear and exponential models.
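The first-order kinetic model for cumulative biogas yield is commonly written B(t) = B_max(1 - exp(-kt)), with ultimate yield B_max and rate constant k. A minimal sketch with illustrative parameter values (not values from the study):

```python
import math

def first_order_biogas(t, b_max, k):
    """First-order kinetic model for cumulative biogas yield:
    B(t) = B_max * (1 - exp(-k * t)).
    t in days; b_max is the ultimate yield (e.g. mL per g VS) and k the
    first-order rate constant (per day). Values below are hypothetical."""
    return b_max * (1.0 - math.exp(-k * t))

b20 = first_order_biogas(20, 450.0, 0.12)   # cumulative yield after 20 days
b60 = first_order_biogas(60, 450.0, 0.12)   # near the ultimate yield
```

The model rises fastest at the start and saturates at B_max, which is why it tracks whole-digestion cumulative curves but, as the abstract notes, can correlate worse than simple linear or exponential fits on individual ascending or descending segments of the production curve.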
NASA Astrophysics Data System (ADS)
Ma, Xiao; Zheng, Wei-Fan; Jiang, Bao-Shan; Zhang, Ji-Ye
2016-10-01
With the development of traffic systems, some issues such as traffic jams become more and more serious. Efficient traffic flow theory is needed to guide the overall controlling, organizing and management of traffic systems. On the basis of the cellular automata model and the traffic flow model with look-ahead potential, a new cellular automata traffic flow model with negative exponential weighted look-ahead potential is presented in this paper. By introducing the negative exponential weighting coefficient into the look-ahead potential and endowing the potential of vehicles closer to the driver with a greater coefficient, the modeling process is more suitable for the driver’s random decision-making process which is based on the traffic environment that the driver is facing. The fundamental diagrams for different weighting parameters are obtained by using numerical simulations which show that the negative exponential weighting coefficient has an obvious effect on high density traffic flux. The complex high density non-linear traffic behavior is also reproduced by numerical simulations. Project supported by the National Natural Science Foundation of China (Grant Nos. 11572264, 11172247, 11402214, and 61373009).
[Application of exponential smoothing method in prediction and warning of epidemic mumps].
Shi, Yun-ping; Ma, Jia-qi
2010-06-01
To analyze the daily data on epidemic mumps in a province from 2004 to 2008 and to set up an exponential smoothing model for prediction. The 2008 mumps epidemic was predicted and warned against by calculating a 7-day moving summation of the daily reported mumps cases during 2005-2008, removing the effect of weekends, and applying exponential smoothing to the data from 2005 to 2007. The performance of Holt-Winters exponential smoothing was good: the warning sensitivity was 76.92%, the specificity was 83.33%, and the timely rate was 80%. It is practicable to use the exponential smoothing method to warn against epidemic mumps.
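As an illustrative sketch (the study used Holt-Winters smoothing, which adds trend and seasonal components; only the simple form is shown here), exponential smoothing recursively down-weights older observations:

```python
def simple_exponential_smoothing(series, alpha):
    """Simple exponential smoothing: s_t = alpha * x_t + (1 - alpha) * s_{t-1}.

    alpha in (0, 1]; larger alpha weights recent observations more heavily.
    """
    smoothed = [float(series[0])]
    for x in series[1:]:
        smoothed.append(alpha * x + (1.0 - alpha) * smoothed[-1])
    return smoothed
```

A flat series stays flat under smoothing, while a jump is only partially absorbed at each step, which is what makes sustained departures from the forecast useful as an epidemic warning signal.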
Luxton, Gary; Keall, Paul J; King, Christopher R
2008-01-07
To facilitate the use of biological outcome modeling for treatment planning, an exponential function is introduced as a simpler equivalent to the Lyman formula for calculating normal tissue complication probability (NTCP). The single parameter of the exponential function is chosen to reproduce the Lyman calculation to within approximately 0.3%, and thus enable easy conversion of data contained in empirical fits of Lyman parameters for organs at risk (OARs). Organ parameters for the new formula are given in terms of Lyman model m and TD(50), and conversely m and TD(50) are expressed in terms of the parameters of the new equation. The role of the Lyman volume-effect parameter n is unchanged from its role in the Lyman model. For a non-homogeneously irradiated OAR, an equation relates d(ref), n, v(eff) and the Niemierko equivalent uniform dose (EUD), where d(ref) and v(eff) are the reference dose and effective fractional volume of the Kutcher-Burman reduction algorithm (i.e. the LKB model). It follows in the LKB model that uniform EUD irradiation of an OAR results in the same NTCP as the original non-homogeneous distribution. The NTCP equation is therefore represented as a function of EUD. The inverse equation expresses EUD as a function of NTCP and is used to generate a table of EUD versus normal tissue complication probability for the Emami-Burman parameter fits as well as for OAR parameter sets from more recent data.
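For orientation, the Lyman NTCP that the single-parameter exponential function approximates is a cumulative normal in the quantity t = (D - TD50) / (m * TD50); a minimal sketch of that reference formula (the paper's exponential replacement itself is not reproduced here):

```python
import math

def lyman_ntcp(dose, td50, m):
    """Lyman NTCP: standard normal CDF evaluated at t = (D - TD50) / (m * TD50)."""
    t = (dose - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
```

By construction the complication probability is 50% at D = TD50 and rises monotonically with dose, with m controlling the steepness of the curve.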
Policy Effects in Hyperbolic vs. Exponential Models of Consumption and Retirement
Gustman, Alan L.; Steinmeier, Thomas L.
2012-01-01
This paper constructs a structural retirement model with hyperbolic preferences and uses it to estimate the effect of several potential Social Security policy changes. Estimated effects of policies are compared using two models, one with hyperbolic preferences and one with standard exponential preferences. Sophisticated hyperbolic discounters may accumulate substantial amounts of wealth for retirement. We find it is frequently difficult to distinguish empirically between models with the two types of preferences on the basis of asset accumulation paths or consumption paths around the period of retirement. Simulations suggest that, despite the much higher initial time preference rate, individuals with hyperbolic preferences may actually value a real annuity more than individuals with exponential preferences who have accumulated roughly equal amounts of assets. This appears to be especially true for individuals with relatively high time preference rates or who have low assets for whatever reason. This affects the tradeoff between current benefits and future benefits on which many of the retirement incentives of the Social Security system rest. Simulations involving increasing the early entitlement age and increasing the delayed retirement credit do not show a great deal of difference whether exponential or hyperbolic preferences are used, but simulations for eliminating the earnings test show a non-trivially greater effect when exponential preferences are used. PMID:22711946
NASA Astrophysics Data System (ADS)
Ernazarov, K. K.
2017-12-01
We consider a (m + 2)-dimensional Einstein-Gauss-Bonnet (EGB) model with a cosmological Λ-term. We restrict the metrics to diagonal ones and find, for a certain Λ = Λ(m), a class of cosmological solutions with non-exponential time dependence of two scale factors of dimensions m > 2 and 1. Any solution from this class describes an accelerated expansion of the m-dimensional subspace and tends asymptotically to an isotropic solution with exponential dependence of the scale factors.
A method for nonlinear exponential regression analysis
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1971-01-01
A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least-squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve-fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix is derived and applied to the nominal estimates to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
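The described procedure, Taylor-series linearization followed by an iteratively applied correction, is essentially Gauss-Newton iteration; a minimal sketch for a single-term decay model y = a * exp(b * t) (the model form, starting values, and convergence count are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def fit_exponential(t, y, a0, b0, iters=25):
    """Gauss-Newton fit of y = a * exp(b * t): linearize about the current
    estimate, solve a linear least-squares problem for the correction
    vector, apply it, and repeat."""
    a, b = float(a0), float(b0)
    for _ in range(iters):
        e = np.exp(b * t)
        residual = y - a * e
        # Jacobian columns: partial derivatives of the model w.r.t. a and b
        jacobian = np.column_stack([e, a * t * e])
        delta, *_ = np.linalg.lstsq(jacobian, residual, rcond=None)
        a, b = a + delta[0], b + delta[1]
    return a, b
```

On noise-free decay data the iteration recovers the generating parameters; in practice the linear pre-fit mentioned in the abstract supplies the starting values.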
Abusam, A; Keesman, K J
2009-01-01
The double exponential settling model is the widely accepted model for wastewater secondary settling tanks. However, this model does not accurately estimate solids concentrations in the settler underflow stream, mainly because sludge compression and consolidation processes are not considered. In activated sludge systems, accurate estimation of the solids in the underflow stream will facilitate the calibration process and can lead to correct estimates of, particularly, kinetic parameters related to biomass growth. Using principles of compaction and consolidation, as in soil mechanics, a dynamic model of the sludge consolidation processes taking place in secondary settling tanks is developed and incorporated into the commonly used double exponential settling model. The modified double exponential model is calibrated and validated using data obtained from a full-scale wastewater treatment plant. Good agreement between predicted and measured data confirmed the validity of the modified model.
Haslinger, Robert; Pipa, Gordon; Brown, Emery
2010-01-01
One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model's spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies upon assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that the finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time rescaling theorem to discrete time models. In the first, we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time rescaling theorem which analytically corrects for the effects of finite resolution. This allows us to define a rescaled time which is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting Generalized Linear Models (GLMs) to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false positive rate of the KS test and greatly increasing the reliability of model evaluation based upon the time rescaling theorem. PMID:20608868
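In the continuous-time setting, rescaling under a constant-rate model reduces to multiplying each ISI by the rate; a minimal sketch of that special case (general models instead integrate the conditional intensity over each ISI):

```python
import numpy as np

def rescale_isis_constant_rate(spike_times, rate):
    """Time rescaling for a constant-rate model: tau_k = rate * ISI_k.

    If the model is correct, tau_k ~ Exponential(1), so
    u_k = 1 - exp(-tau_k) should be uniform on [0, 1]; the KS test
    is then applied to the u_k against the uniform distribution.
    """
    tau = rate * np.diff(np.asarray(spike_times, dtype=float))
    return 1.0 - np.exp(-tau)
```

The discrete-time corrections discussed in the abstract modify how tau_k is computed from binned spike probabilities, not this final uniformity check.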
Exponential quantum spreading in a class of kicked rotor systems near high-order resonances
NASA Astrophysics Data System (ADS)
Wang, Hailong; Wang, Jiao; Guarneri, Italo; Casati, Giulio; Gong, Jiangbin
2013-11-01
Long-lasting exponential quantum spreading was recently found in a simple but very rich dynamical model, namely, an on-resonance double-kicked rotor model [J. Wang, I. Guarneri, G. Casati, and J. B. Gong, Phys. Rev. Lett. 107, 234104 (2011)]. The underlying mechanism, unrelated to the chaotic motion in the classical limit but resting on quasi-integrable motion in a pseudoclassical limit, is identified for one special case. By presenting a detailed study of the same model, this work offers a framework to explain long-lasting exponential quantum spreading under much more general conditions. In particular, we adopt the so-called “spinor” representation to treat the kicked-rotor dynamics under high-order resonance conditions and then exploit the Born-Oppenheimer approximation to understand the dynamical evolution. It is found that the existence of a flat band (or an effectively flat band) is one important feature behind why and how the exponential dynamics emerges. It is also found that a quantitative prediction of the exponential spreading rate based on an interesting and simple pseudoclassical map may be inaccurate. In addition to general interests regarding the question of how exponential behavior in quantum systems may persist for a long time scale, our results should motivate further studies toward a better understanding of high-order resonance behavior in δ-kicked quantum systems.
A decades-long fast-rise-exponential-decay flare in low-luminosity AGN NGC 7213
NASA Astrophysics Data System (ADS)
Yan, Zhen; Xie, Fu-Guo
2018-03-01
We analysed the four-decades-long X-ray light curve of the low-luminosity active galactic nucleus (LLAGN) NGC 7213 and discovered a fast-rise-exponential-decay (FRED) pattern, i.e. the X-ray luminosity increased by a factor of ≈4 within 200 d, and then decreased exponentially with an e-folding time ≈8116 d (≈22.2 yr). For the theoretical understanding of the observations, we examined three variability models proposed in the literature: the thermal-viscous disc instability model, the radiation pressure instability model, and the tidal disruption event (TDE) model. We find that a delayed tidal disruption of a main-sequence star is most favourable; both the thermal-viscous disc instability model and the radiation pressure instability model fail to explain some key properties observed, thus we argue that they are unlikely.
Bennett, Kevin M; Schmainda, Kathleen M; Bennett, Raoqiong Tong; Rowe, Daniel B; Lu, Hanbing; Hyde, James S
2003-10-01
Experience with diffusion-weighted imaging (DWI) shows that signal attenuation is consistent with a multicompartmental theory of water diffusion in the brain. The source of this so-called nonexponential behavior is a topic of debate, because the cerebral cortex contains considerable microscopic heterogeneity and is therefore difficult to model. To account for this heterogeneity and understand its implications for current models of diffusion, a stretched-exponential function was developed to describe diffusion-related signal decay as a continuous distribution of sources decaying at different rates, with no assumptions made about the number of participating sources. DWI experiments were performed using a spin-echo diffusion-weighted pulse sequence with b-values of 500-6500 s/mm(2) in six rats. Signal attenuation curves were fit to a stretched-exponential function, and 20% of the voxels were better fit to the stretched-exponential model than to a biexponential model, even though the latter model had one more adjustable parameter. Based on the calculated intravoxel heterogeneity measure, the cerebral cortex contains considerable heterogeneity in diffusion. The use of a distributed diffusion coefficient (DDC) is suggested to measure mean intravoxel diffusion rates in the presence of such heterogeneity. Copyright 2003 Wiley-Liss, Inc.
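The stretched-exponential signal model described above can be written down directly; a minimal sketch (parameter names follow the abstract: b-value in s/mm^2, distributed diffusion coefficient DDC in mm^2/s, and a heterogeneity exponent alpha in (0, 1]):

```python
import math

def stretched_exp_signal(b, s0, ddc, alpha):
    """Stretched-exponential DWI model: S(b) = S0 * exp(-(b * DDC)**alpha).

    alpha = 1 recovers the mono-exponential (single-compartment) decay;
    alpha < 1 indicates a distribution of decay rates within the voxel.
    """
    return s0 * math.exp(-((b * ddc) ** alpha))
```

Fitting s0, ddc, and alpha per voxel over the acquired b-value range yields the intravoxel heterogeneity measure discussed in the abstract.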
The generalized truncated exponential distribution as a model for earthquake magnitudes
NASA Astrophysics Data System (ADS)
Raschke, Mathias
2015-04-01
The random distribution of small, medium and large earthquake magnitudes follows an exponential distribution (ED) according to the Gutenberg-Richter relation. But a magnitude distribution is truncated in the range of very large magnitudes because the earthquake energy is finite, and the upper tail of the exponential distribution does not fit observations well. Hence the truncated exponential distribution (TED) is frequently applied for the modelling of magnitude distributions in seismic hazard and risk analysis. The TED has a weak point: when two TEDs with equal parameters, except the upper bound magnitude, are mixed, the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters, except the upper bound magnitude. This weakness is a principal problem, as seismic regions are constructed scientific objects and not natural units. It also applies to alternative distribution models. The presented generalized truncated exponential distribution (GTED) overcomes this weakness. The ED and the TED are special cases of the GTED. Different issues of statistical inference are also discussed, and an example with empirical data is presented in the current contribution.
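A minimal sketch of the (non-generalized) truncated exponential CDF for magnitudes between a lower bound m0 and an upper bound m_max (beta is the Gutenberg-Richter exponential parameter; the GTED of the paper generalizes this form):

```python
import math

def ted_cdf(m, m0, m_max, beta):
    """CDF of the truncated exponential magnitude distribution (TED):
    an Exponential(beta) law shifted to m0 and renormalized on [m0, m_max]."""
    if m <= m0:
        return 0.0
    if m >= m_max:
        return 1.0
    return (1.0 - math.exp(-beta * (m - m0))) / (1.0 - math.exp(-beta * (m_max - m0)))
```

The mixing problem noted in the abstract is visible here: averaging two such CDFs with different m_max does not produce a function of the same form.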
Vautin, R G; Berkley, M A
1977-09-01
1. The activity of single cortical cells in area 17 of anesthetized and unanesthetized cats was recorded in response to prolonged stimulation with moving stimuli. 2. Under the appropriate conditions, all cells observed showed a progressive response decrement during the stimulation period, regardless of cell classification, i.e., simple, complex, or hypercomplex. 3. The observed response decrement was shown to be largely cortical in origin and could be adequately described with an exponential function of the form R = Rf + (R1 - Rf)e^(-t/T). Time constants derived from such calculations yielded values ranging from 1.92 to 12.45 s under conditions of optimal stimulation. 4. Most cells showed poststimulation effects, usually a brief period of reduced responsiveness that recovered exponentially. Recovery was essentially complete in about 5-35 s. 5. The degree to which stimuli were effective at inducing a response was shown to have significant effects on the magnitude of the response decrement. 6. Several cells showed neural patterns of response and recovery that suggested the operation of intracortical inhibitory mechanisms. 7. A simple two-process model that adequately describes the behavior of all the studied cells is presented. 8. Because the properties of the cells studied correlate well with human psychophysical measures of contour and movement adaptation and recovery, a causal relationship to similar neural mechanisms in humans is suggested.
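The fitted decrement function can be evaluated directly; a minimal sketch (R1 is the initial response, Rf the final plateau, and T the time constant, as in the abstract):

```python
import math

def response_decrement(t, r1, rf, tau):
    """Exponential response decrement: R(t) = Rf + (R1 - Rf) * exp(-t / tau).

    Starts at R1 at t = 0 and relaxes toward the plateau Rf with
    time constant tau.
    """
    return rf + (r1 - rf) * math.exp(-t / tau)
```

The same functional form, with different constants, describes the exponential recovery after stimulation noted in point 4.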
CMB constraints on β-exponential inflationary models
NASA Astrophysics Data System (ADS)
Santos, M. A.; Benetti, M.; Alcaniz, J. S.; Brito, F. A.; Silva, R.
2018-03-01
We analyze a class of generalized inflationary models proposed in ref. [1], known as β-exponential inflation. We show that this kind of potential can arise in the context of brane cosmology, where the field describing the size of the extra dimension is interpreted as the inflaton. We discuss the observational viability of this class of models in light of the latest Cosmic Microwave Background (CMB) data from the Planck Collaboration through a Bayesian analysis, and impose tight constraints on the model parameters. We find that the CMB data alone weakly prefer the minimal standard model (ΛCDM) over β-exponential inflation. However, when current local measurements of the Hubble parameter, H0, are considered, the β-inflation model is moderately preferred over the ΛCDM cosmology, making the study of this class of inflationary models interesting in the context of the current H0 tension.
Fattorini, Simone
2006-08-01
Any method of identifying hotspots should take into account the effect of area on species richness. I examined the importance of the species-area relationship in determining tenebrionid (Coleoptera: Tenebrionidae) hotspots on the Aegean Islands (Greece). Thirty-two islands and 170 taxa (species and subspecies) were included in this study. I tested several species-area relationship models with linear and nonlinear regressions, including the power, exponential, negative exponential, logistic, Gompertz, Weibull, Lomolino, and He-Legendre functions. Islands with positive residuals were identified as hotspots. I also analyzed the values of the C parameter of the power function and the simple species-area ratios. Species richness was significantly correlated with island area for all models. The power function model was the most convenient one. Most functions, however, identified certain islands as hotspots. The importance of endemics in insular biotas should be evaluated carefully because they are of high conservation concern. The simple use of the species-area relationship can be problematic when areas with no endemics are included. Therefore the importance of endemics should be evaluated according to different methods, such as percentages, to take into account different levels of endemism and different kinds of "endemics" (e.g., endemic to single islands vs. endemic to the archipelago). Because the species-area relationship is a key pattern in ecology, my findings can be applied at broader scales.
Optical and luminescence properties of Dy3+ ions in phosphate based glasses
NASA Astrophysics Data System (ADS)
Rasool, Sk. Nayab; Rama Moorthy, L.; Jayasankar, C. K.
2013-08-01
Phosphate glasses with compositions of 44P2O5 + 17K2O + 9Al2O3 + (30 - x)CaF2 + xDy2O3 (x = 0.05, 0.1, 0.5, 1.0, 2.0, 3.0 and 4.0 mol %) were prepared and characterized by X-ray diffraction (XRD), differential thermal analysis (DTA), Fourier transform infrared (FTIR), optical absorption, emission and decay measurements. The observed absorption bands were analyzed by using the free-ion Hamiltonian (HFI) model. The Judd-Ofelt (JO) analysis has been performed and the intensity parameters (Ωλ, λ = 2, 4, 6) were evaluated in order to predict the radiative properties of the excited states. From the emission spectra, the effective band widths (Δλeff), stimulated emission cross-sections (σ(λp)), yellow to blue (Y/B) intensity ratios and chromaticity color coordinates (x, y) have been determined. The fluorescence decays from the 4F9/2 level of Dy3+ ions were measured by monitoring the intense 4F9/2 → 6H15/2 transition (486 nm). The experimental lifetimes (τexp) are found to decrease with the increase of Dy3+ ion concentration due to the quenching process. The decay curves are perfectly single exponential at lower concentrations and gradually change to non-exponential at higher concentrations. The non-exponential decay curves are well fitted to the Inokuti-Hirayama (IH) model for S = 6, which indicates that the energy transfer between the donor and acceptor is of dipole-dipole type. The systematic analysis revealed that the energy transfer mechanism strongly depends on the Dy3+ ion concentration and the host glass composition.
Yuan, Jing; Yeung, David Ka Wai; Mok, Greta S P; Bhatia, Kunwar S; Wang, Yi-Xiang J; Ahuja, Anil T; King, Ann D
2014-01-01
To technically investigate the non-Gaussian diffusion of head and neck diffusion weighted imaging (DWI) at 3 Tesla and compare advanced non-Gaussian diffusion models, including diffusion kurtosis imaging (DKI), the stretched-exponential model (SEM), intravoxel incoherent motion (IVIM) and the statistical model, in patients with nasopharyngeal carcinoma (NPC). After ethics approval was granted, 16 patients with NPC were examined using DWI performed at 3T employing an extended b-value range from 0 to 1500 s/mm(2). DWI signals were fitted to the mono-exponential and non-Gaussian diffusion models on primary tumor, metastatic node, spinal cord and muscle. Non-Gaussian parameter maps were generated and compared to apparent diffusion coefficient (ADC) maps in NPC. Diffusion in NPC exhibited non-Gaussian behavior at the extended b-value range. Non-Gaussian models achieved significantly better fitting of the DWI signal than the mono-exponential model. Non-Gaussian diffusion coefficients were substantially different from the mono-exponential ADC both in magnitude and histogram distribution. Non-Gaussian diffusivity in head and neck tissues and NPC lesions could be assessed by using non-Gaussian diffusion models. Non-Gaussian DWI analysis may reveal additional tissue properties beyond ADC and holds potential to be used as a complementary tool for NPC characterization.
Is a matrix exponential specification suitable for the modeling of spatial correlation structures?
Strauß, Magdalena E.; Mezzetti, Maura; Leorato, Samantha
2018-01-01
This paper investigates the adequacy of the matrix exponential spatial specifications (MESS) as an alternative to the widely used spatial autoregressive models (SAR). To provide as complete a picture as possible, we extend the analysis to all the main spatial models governed by matrix exponentials comparing them with their spatial autoregressive counterparts. We propose a new implementation of Bayesian parameter estimation for the MESS model with vague prior distributions, which is shown to be precise and computationally efficient. Our implementations also account for spatially lagged regressors. We further allow for location-specific heterogeneity, which we model by including spatial splines. We conclude by comparing the performances of the different model specifications in applications to a real data set and by running simulations. Both the applications and the simulations suggest that the spatial splines are a flexible and efficient way to account for spatial heterogeneities governed by unknown mechanisms. PMID:29492375
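The matrix exponential at the core of MESS, e^{alpha*W} for a spatial weight matrix W, can be approximated by its Taylor series for small, well-scaled matrices; a minimal illustrative sketch (production code would use a scaling-and-squaring routine such as scipy.linalg.expm):

```python
import numpy as np

def matrix_exponential(a, terms=30):
    """Matrix exponential via truncated Taylor series:
    expm(A) ~ sum_{k=0}^{terms-1} A**k / k!.

    Illustrative only; accuracy degrades for matrices with large norm.
    """
    result = np.eye(a.shape[0])
    term = np.eye(a.shape[0])
    for k in range(1, terms):
        term = term @ a / k  # A**k / k! built incrementally
        result = result + term
    return result
```

In the MESS specification this operator transforms the dependent variable, replacing the matrix inverse of the SAR model and simplifying the likelihood, since det(e^{alpha*W}) = e^{alpha*tr(W)}.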
A model for hematopoietic death in man from irradiation of bone marrow during radioimmunotherapy.
Scott, B R; Dillehay, L E
1990-11-01
There are numerous institutions worldwide performing clinical trials of radioimmunotherapy (RIT) for cancer. For RIT, an exponentially decaying radionuclide is attached by using a chelating agent to a specific monoclonal or polyclonal tumour antibody (e.g. antiferritin IgG). The major limitation to RIT is toxicity to normal tissue in organs other than the one containing the tumour (e.g. bone marrow). The focus of this manuscript is on modelling the risk (or probability) of hematopoietic death in man for exponentially decaying patterns of high-energy beta irradiation (e.g. 90Y) of bone marrow by radioimmunoglobulin injected into the blood. The analytical solutions presented are only applicable to protocols for which significant uptake of radioactivity by the bone marrow does not occur, and only for high energy beta emitters. However, the generic equation used to obtain the analytical solutions is applicable to any continuous pattern of high energy beta irradiation. A model called the "normalized dose model" was used to generate calculated values for the LD50 as a function of the effective half-time for the radioimmunoglobulin in the blood. A less complicated empirical model was used to describe the calculated values. This model is presumed to be valid for effective half-times in blood of up to about 20 days. For longer effective half-times, the LD50 can be estimated using the normalized-dose model presented. In this manuscript, we also provide a modified Weibull model that allows estimation of the risk of hematopoietic death for single or multiple injections (in one cycle) of radioimmunoglobulin, for patients with normal susceptibility to irradiation and for patients with heightened susceptibility. With the modified Weibull model, the risk of hematopoietic death depends on the level of medical treatment provided to mitigate radiation injuries.
Bayesian exponential random graph modelling of interhospital patient referral networks.
Caimo, Alberto; Pallotti, Francesca; Lomi, Alessandro
2017-08-15
Using original data that we have collected on referral relations between 110 hospitals serving a large regional community, we show how recently derived Bayesian exponential random graph models may be adopted to illuminate core empirical issues in research on relational coordination among healthcare organisations. We show how a rigorous Bayesian computation approach supports a fully probabilistic analytical framework that alleviates well-known problems in the estimation of model parameters of exponential random graph models. We also show how the main structural features of interhospital patient referral networks that prior studies have described can be reproduced with accuracy by specifying the system of local dependencies that produce - but at the same time are induced by - decentralised collaborative arrangements between hospitals. Copyright © 2017 John Wiley & Sons, Ltd.
ERIC Educational Resources Information Center
Rodda, S. N.; Lubman, D. I.; Cheetham, A.; Dowling, N. A.; Jackson, A. C.
2015-01-01
Despite the exponential growth of non-appointment-based web counselling, there is limited information on what happens in a single session intervention. This exploratory study, involving a thematic analysis of 85 counselling transcripts of people seeking help for problem gambling, aimed to describe the presentation and content of online…
NASA Astrophysics Data System (ADS)
Feng-Hua, Zhang; Gui-De, Zhou; Kun, Ma; Wen-Juan, Ma; Wen-Yuan, Cui; Bo, Zhang
2016-07-01
Previous studies have shown that, for the three main stages of the development and evolution of asymptotic giant branch (AGB) star s-process models, the neutron exposure distribution (DNE) in the nucleosynthesis region can always be considered as an exponential function, i.e., ρAGB(τ) = C/τ0 exp(-τ/τ0), in an effective range of the neutron exposure values. However, the specific expressions of the proportion factor C and the mean neutron exposure τ0 in the exponential distribution function for different models are not completely determined in the related literature. Through dissecting the basic method used to obtain the exponential DNE, and systematically analyzing the solution procedures for the neutron exposure distribution functions in different stellar models, the general formulae, as well as their auxiliary equations, for calculating C and τ0 are derived. Given the discrete neutron exposure distribution Pk, the relationships of C and τ0 with the model parameters can be determined. The result of this study effectively solves the problem of analytically calculating the DNE in the current low-mass AGB star s-process nucleosynthesis model with 13C-pocket radiative burning.
Evidence for Two Neutrino Bursts from SN1987A
NASA Astrophysics Data System (ADS)
Valentim, Rodolfo; Horvath, Jorge E.; Rangel, Eraldo M.
SN1987A in the Large Magellanic Cloud was an extraordinary event because it was detected in real time by different neutrino (ν) experiments around the world. Approximately 25 events were observed in three different experiments: Kamiokande II (KII) ~12, Irvine-Michigan-Brookhaven (IMB) ~8, and Baksan ~5, plus a contrived burst at Mont Blanc (Liquid Scintillator Detector - LSD) later dismissed because of energetic requirements (Aglietta et al. 1988). Neutrinos play an important role in the newborn neutron star: at the moment the supernova explodes, the compact remnant is cooled by neutrinos (~99% of the energy is lost in the first few seconds of the explosion). This work is motivated by the neutrino arrival times, in which there is a temporal gap (~6 s) between two sets of events. The first set of events came from the ordinary cooling mechanism, while the second set suggests a different mechanism of neutrino production. We tested two models of neutrino cooling for SN1987A: first, an exponential cooling, the ordinary cooling model; and second, a two-step temperature model that considers two bursts separated by a temporal gap. Our analysis was done with Bayesian tools (the Bayesian Information Criterion, BIC). The result showed strong evidence in favor of the two-step model against a single exponential cooling (ln Bij > 5.0), and suggests the existence of two neutrino bursts at the moment the neutron star was born.
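The model comparison rests on the Bayesian Information Criterion; a minimal sketch of its standard form (lower BIC is better; ln Bij quoted in the abstract is the log Bayes factor, which BIC differences approximate):

```python
import math

def bic(log_likelihood, n_params, n_data):
    """Bayesian Information Criterion: BIC = k * ln(n) - 2 * ln(L).

    Penalizes extra parameters, so a more complex model must raise the
    likelihood enough to justify its added flexibility.
    """
    return n_params * math.log(n_data) - 2.0 * log_likelihood
```

With equal likelihoods, the model with fewer parameters wins; a two-step cooling model is favored only when its likelihood gain outweighs the penalty for its extra parameters.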
Application of the Calculating Formula for Mean Neutron Exposure on Barium Stars
NASA Astrophysics Data System (ADS)
Zhang, F. H.; Zhang, L.; Cui, W. Y.; Zhang, B.
2017-11-01
Latest studies have shown that, in the s-process nucleosynthesis model for the low-mass asymptotic giant branch (AGB) star with 13C-pocket radiative burning during the interpulse period, the distribution of neutron exposures in the nucleosynthesis region can be regarded as an exponential function, and the relation between the mean neutron exposure τ0 and the model parameters is τ0 = -Δτ/ln[q/(1 - r + q)], in which Δτ is the exposure value of each neutron irradiation, r is the overlap factor, and q is the mass ratio of the 13C shell to the He intershell. In this paper the formula is applied to 26 samples of barium stars to test its reliability, and furthermore the neutron exposure nature of the AGB companion stars of the 26 barium stars is analyzed. The results show that the formula is reliable; among the AGB companion stars of the 26 barium stars, at least 8 stars definitely have, and 12 stars are highly likely to have, an exponential distribution of neutron exposures, while 4 stars tend to have experienced a single neutron exposure; most of the AGB companion stars may have experienced fewer neutron irradiations before the element abundance distribution of the s-process reaches the asymptotic condition.
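The abstract's relation can be evaluated directly; a minimal sketch (variable names mirror the formula; for r < 1 the argument q/(1 - r + q) is below 1, so the logarithm is negative and τ0 comes out positive):

```python
import math

def mean_neutron_exposure(delta_tau, r, q):
    """tau0 = -delta_tau / ln(q / (1 - r + q)), with delta_tau the exposure
    per neutron irradiation, r the overlap factor, and q the mass ratio of
    the 13C shell to the He intershell."""
    return -delta_tau / math.log(q / (1.0 - r + q))
```

For example, with r = 0.6 and q = 0.2 the argument is 1/3, so τ0 = Δτ / ln 3, i.e. just under Δτ itself.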
Gosvami, N N; Bares, J A; Mangolini, F; Konicek, A R; Yablon, D G; Carpick, R W
2015-04-03
Zinc dialkyldithiophosphates (ZDDPs) form antiwear tribofilms at sliding interfaces and are widely used as additives in automotive lubricants. The mechanisms governing the tribofilm growth are not well understood, which limits the development of replacements that offer better performance and are less likely to degrade automobile catalytic converters over time. Using atomic force microscopy in ZDDP-containing lubricant base stock at elevated temperatures, we monitored the growth and properties of the tribofilms in situ in well-defined single-asperity sliding nanocontacts. Surface-based nucleation, growth, and thickness saturation of patchy tribofilms were observed. The growth rate increased exponentially with either applied compressive stress or temperature, consistent with a thermally activated, stress-assisted reaction rate model. Although some models rely on the presence of iron to catalyze tribofilm growth, the films grew regardless of the presence of iron on either the tip or substrate, highlighting the critical role of stress and thermal activation. Copyright © 2015, American Association for the Advancement of Science.
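A thermally activated, stress-assisted rate of the kind invoked above is commonly written with a stress term lowering the activation barrier; a minimal illustrative sketch (the prefactor, barrier, and activation-volume values used in the test are assumptions for illustration, not values from the paper):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def stress_assisted_rate(prefactor, barrier_j, stress_pa, act_volume_m3, temp_k):
    """Thermally activated, stress-assisted rate:
    k = A * exp(-(dU - sigma * dV) / (kB * T)).

    Compressive stress sigma reduces the effective barrier dU by
    sigma * dV, so the rate grows exponentially with both stress
    and temperature, matching the trend reported in the abstract.
    """
    return prefactor * math.exp(-(barrier_j - stress_pa * act_volume_m3) / (K_B * temp_k))
```

Fitting the measured growth rates versus stress at fixed temperature yields the activation volume dV; fitting versus temperature at fixed stress yields the barrier dU.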
Spectral Modeling of the EGRET 3EG Gamma Ray Sources Near the Galactic Plane
NASA Technical Reports Server (NTRS)
Bertsch, D. L.; Hartman, R. C.; Hunter, S. D.; Thompson, D. J.; Lin, Y. C.; Kniffen, D. A.; Kanbach, G.; Mayer-Hasselwander, H. A.; Reimer, O.; Sreekumar, P.
1999-01-01
The third EGRET catalog lists 84 sources within 10 deg of the Galactic Plane. Five of these are well-known spin-powered pulsars, 2 and possibly 3 others are blazars, and the remaining 74 are classified as unidentified, although 6 of these are likely to be artifacts of nearby strong sources. Several of the remaining 68 unidentified sources have been noted as having positional agreement with supernova remnants and OB associations. Others may be radio-quiet pulsars like Geminga, and still others may belong to a totally new class of sources. The energy spectral distributions of these sources are an important clue to their identification. In this paper, the spectra of the sources within 10 deg of the Galactic Plane are fit with three different functional forms: a single power law, two power laws, and a power law with an exponential cutoff. Where possible, the best fit is selected with statistical tests. Twelve, and possibly an additional 5, sources are found to have spectra that are fit by a breaking power law or by a power law with an exponential cutoff.
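The three candidate spectral forms can be sketched directly; the normalizations, photon indices, and break/cutoff energies below are illustrative, not fitted EGRET values:

```python
import math

def power_law(e, k, gamma):
    """Single power law: F(E) = K * E**(-gamma)."""
    return k * e ** (-gamma)

def broken_power_law(e, k, g1, g2, e_break):
    """Two power laws joined continuously at the break energy."""
    if e < e_break:
        return k * e ** (-g1)
    # continuity at e_break: match normalizations across the break
    return k * e_break ** (g2 - g1) * e ** (-g2)

def power_law_exp_cutoff(e, k, gamma, e_cut):
    """Power law with an exponential cutoff: F(E) = K * E**(-gamma) * exp(-E/E_cut)."""
    return k * e ** (-gamma) * math.exp(-e / e_cut)
```

Model selection between these forms (e.g. via likelihood-ratio or chi-square tests) is what distinguishes pulsar-like cutoff spectra from simple power laws.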
Electron scattering wings on lines in interacting supernovae
NASA Astrophysics Data System (ADS)
Huang, Chenliang; Chevalier, Roger A.
2018-03-01
We consider the effect of electron scattering on lines emitted as a result of supernova interaction with a circumstellar medium, assuming that the scattering occurs in ionized gas in the pre-shock circumstellar medium. The single scattering case gives the broad component in the limit of low optical depth, showing a velocity full width half-maximum that is close to the thermal velocities of electrons. The line shape is approximately exponential at low velocities and steepens at higher velocities. At higher optical depths, the line profile remains exponential at low velocities, but wings strengthen with increasing optical depth. In addition to the line width, the ratio of narrow to broad (scattered) line strength is a possible diagnostic of the gas. The results depend on the density profile of the circumstellar gas, especially if the scattering and photon creation occur in different regions. We apply the scattering model to a number of supernovae, including Type IIn and Type Ia-circumstellar medium (CSM) events. The asymmetry to the red found in some cases can be explained by scattering in a fast wind region that is indicated by observations.
NASA Astrophysics Data System (ADS)
Panigrahi, Suraj Kumar; Mishra, Ashok Kumar
2017-09-01
A combination of broad-band UV radiation (UV A and UV B; 250-400 nm) and a stretched exponential function (StrEF) has been utilised for convenient and sensitive detection of fluorescent dissolved organic matter (FDOM). This approach enables access to the gross fluorescence spectral signature of both protein-like and humic-like components in a single measurement. Commercial FDOM components are excited with the broad-band UV excitation, and the variation of the spectral profile as a function of varying component ratio is analysed. The underlying fluorescence dynamics and non-linear quenching of amino acid moieties are studied with the StrEF, exp(-V[Q]^β). The complex quenching pattern reflects the inner filter effect (IFE) as well as inter-component interactions. The inter-component interactions are essentially captured through the ‘sphere of action’ and ‘dark complex’ models. The broad-band UV excitation provides increased excitation energy, resulting in an increased population density in the excited state and thereby in enhanced sensitivity.
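The stretched exponential quenching law exp(-V[Q]^β) can be sketched as a simple function of quencher concentration; the V and β values below are illustrative, not the paper's fitted parameters:

```python
import math

def quench_ratio(q_conc, v, beta):
    """Stretched-exponential quenching: F/F0 = exp(-V * [Q]**beta).

    q_conc : quencher concentration [Q]
    v      : quenching volume/strength parameter V
    beta   : stretching exponent (beta = 1 recovers a pure exponential)
    """
    return math.exp(-v * q_conc ** beta)
```

With β < 1 the fluorescence ratio decays more slowly at high [Q] than a pure exponential, which is one way such fits capture non-linear quenching behaviour.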
NASA Astrophysics Data System (ADS)
Tamazian, A.; Nguyen, V. D.; Markelov, O. A.; Bogachev, M. I.
2016-07-01
We suggest a universal phenomenological description for the collective access patterns in the Internet traffic dynamics both at local and wide area network levels that takes into account erratic fluctuations imposed by cooperative user behaviour. Our description is based on the superstatistical approach and leads to the q-exponential inter-session time and session size distributions that are also in perfect agreement with empirical observations. The validity of the proposed description is confirmed explicitly by the analysis of complete 10-day traffic traces from the WIDE backbone link and from the local campus area network downlink from the Internet Service Provider. Remarkably, the same functional forms have been observed in the historic access patterns from single WWW servers. The suggested approach effectively accounts for the complex interplay of both “calm” and “bursty” user access patterns within a single-model setting. It also provides average sojourn time estimates with reasonable accuracy, as indicated by the queuing system performance simulation, thereby largely overcoming the failure of Poisson modelling of the Internet traffic dynamics.
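The q-exponential family mentioned above has a standard closed form, p(x) = (2-q)·λ·[1 - (1-q)·λ·x]^(1/(1-q)), which recovers the ordinary exponential λ·exp(-λx) as q → 1 and develops a heavy power-law tail for q > 1. A minimal sketch (parameter values illustrative, not the paper's fits):

```python
import math

def q_exponential(x, q, lam):
    """q-exponential density p(x) = (2-q)*lam*[1 - (1-q)*lam*x]**(1/(1-q)).

    Valid here for 1 < q < 2 (heavy-tailed regime) and x >= 0;
    as q -> 1 it approaches the ordinary exponential lam*exp(-lam*x).
    """
    return (2.0 - q) * lam * (1.0 - (1.0 - q) * lam * x) ** (1.0 / (1.0 - q))
```

The heavy tail (q > 1) is what lets a single functional form cover both “calm” and “bursty” inter-session statistics.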
Channel response to sediment release: insights from a paired analysis of dam removal
Collins, Mathias J.; Snyder, Noah P.; Boardman, Graham; Banks, William S.; Andrews, Mary; Baker, Matthew E.; Conlon, Maricate; Gellis, Allen; McClain, Serena; Miller, Andrew; Wilcock, Peter
2017-01-01
Dam removals with unmanaged sediment releases are good opportunities to learn about channel response to abruptly increased bed material supply. Understanding these events is important because they affect aquatic habitats and human uses of floodplains. A longstanding paradigm in geomorphology holds that response rates to landscape disturbance decay exponentially through time. However, a previous study of the Merrimack Village Dam (MVD) removal on the Souhegan River in New Hampshire, USA, showed that an exponential function poorly described the early geomorphic response. Erosion of impounded sediments there was two-phased. We had an opportunity to quantitatively test the two-phase response model proposed for MVD by extending the record there and comparing it with data from the Simkins Dam removal on the Patapsco River in Maryland, USA. The watershed sizes are the same order of magnitude (10^2 km^2), and at both sites low-head dams were removed (~3–4 m) and ~65 000 m^3 of sand-sized sediments were discharged to low-gradient reaches. Analyzing four years of repeat morphometry and sediment surveys at the Simkins site, as well as continuous discharge and turbidity data, we observed the two-phase erosion response described for MVD. In the early phase, approximately 50% of the impounded sediment at Simkins was eroded rapidly during modest flows. After incision to base level and widening, a second phase began in which further erosion depended on floods large enough to go over bank and access impounded sediments more distant from the newly formed channel. Fitting functional forms to the data for both sites, we found that two-phase exponential models with changing decay constants fit the erosion data better than single-phase models. Valley width influences the two-phase erosion responses upstream, but downstream responses appear more closely related to local gradient, sediment re-supply from the upstream impoundments, and base flows.
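A two-phase exponential with a changing decay constant, of the kind fitted above, can be sketched as a piecewise model that is continuous at the phase transition; the volumes, rates, and switch time below are illustrative, not the Simkins or MVD fits:

```python
import math

def two_phase_exponential(t, v0, k1, k2, t_switch):
    """Remaining impounded sediment volume under a two-phase exponential decay:
    a rapid decay constant k1 before t_switch (incision/widening phase),
    a slower k2 afterwards (flood-dependent phase). Continuous at t_switch.
    """
    if t < t_switch:
        return v0 * math.exp(-k1 * t)
    v_switch = v0 * math.exp(-k1 * t_switch)
    return v_switch * math.exp(-k2 * (t - t_switch))
```

A single-phase model is the special case k1 = k2, which is why nested fits can test whether the second decay constant is statistically justified.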
MO-F-CAMPUS-I-05: Quantitative ADC Measurement of Esophageal Cancer Before and After Chemoradiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, L; UT MD Anderson Cancer Center, Houston, TX; Son, JB
2015-06-15
Purpose: We investigated whether quantitative diffusion imaging can be used as an imaging biomarker for early prediction of treatment response of esophageal cancer. Methods: Eight patients with esophageal cancer underwent baseline and interim MRI studies during chemoradiation on a 3T whole body MRI scanner with an 8-channel torso phased array coil. Each MRI study contained two axial diffusion-weighted imaging (DWI) series with a conventional DWI sequence and a reduced field-of-view DWI sequence (FOCUS) of varying b-values. ADC maps with two b-values were computed from conventional DWI images using a mono-exponential model. For each of the DWI sequences, a separate ADCall was computed by fitting the signal intensity of images with all the b-values to a single exponential model. For the FOCUS sequence, a bi-exponential model was used to extract perfusion and diffusion coefficients (ADCperf and ADCdiff) and their contributions to the signal decay. A board-certified radiologist contoured the tumor region, and mean ADC values and standard deviations of tumor and muscle ROIs were recorded from different ADC maps. Results: Our results showed that (1) the magnitude of ADCs from the same ROIs obtained by the different analysis methods can be substantially different. (2) For a given method, the change between the baseline and interim muscle ADCs was relatively small (≤10%). In contrast, the change between the baseline and interim tumor ADCs was substantially larger, with the change in ADCdiff by FOCUS DWI showing the largest percentage change of 73.2%. (3) The range of the relative change of a specific parameter for different patients was also different. Conclusion: Presently, we do not have the final pathological confirmation of the treatment response for all the patients. However, for the few patients whose surgical specimens are available, the quantitative ADC changes have been found to be useful as a potential predictor of treatment response.
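The mono- and bi-exponential DWI signal models used above can be sketched as follows; the b-values and coefficients are illustrative, not the study's fitted values:

```python
import math

def mono_exponential(b, s0, adc):
    """Mono-exponential DWI signal: S(b) = S0 * exp(-b * ADC)."""
    return s0 * math.exp(-b * adc)

def bi_exponential(b, s0, f_perf, adc_perf, adc_diff):
    """Bi-exponential (IVIM-style) DWI signal:
    S(b) = S0 * (f * exp(-b * ADCperf) + (1 - f) * exp(-b * ADCdiff)),
    where f is the perfusion fraction and ADCperf >> ADCdiff.
    """
    return s0 * (f_perf * math.exp(-b * adc_perf)
                 + (1.0 - f_perf) * math.exp(-b * adc_diff))
```

With f = 0 the bi-exponential model reduces to the mono-exponential one, which is why the two can yield substantially different ADC magnitudes whenever a perfusion component is actually present.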
NASA Astrophysics Data System (ADS)
Sanford, W. E.
2015-12-01
Age distributions of base flow to streams are important to estimate for predicting the timing of water-quality responses to changes in distributed inputs of nutrients or pollutants at the land surface. Simple models of shallow aquifers will predict exponential age distributions, but more realistic 3-D stream-aquifer geometries will cause deviations from an exponential curve. In addition, in fractured rock terrains the dual nature of the effective and total porosity of the system complicates the age distribution further. In this study shallow groundwater flow and advective transport were simulated in two regions in the Eastern United States—the Delmarva Peninsula and the upper Potomac River basin. The former is underlain by layers of unconsolidated sediment, while the latter consists of folded and fractured sedimentary rocks. Transport of groundwater to streams was simulated using the USGS code MODPATH within 175 and 275 watersheds, respectively. For the fractured rock terrain, calculations were also performed along flow pathlines to account for exchange between mobile and immobile flow zones. Porosities at both sites were calibrated using environmental tracer data (3H, 3He, CFCs and SF6) in wells and springs, and with a 30-year tritium record from the Potomac River. Carbonate and siliciclastic rocks were calibrated to have mobile porosity values of one and six percent, and immobile porosity values of 18 and 12 percent, respectively. The age distributions were fitted to Weibull functions. Whereas an exponential function has one parameter that controls the median age of the distribution, a Weibull function has an extra parameter that controls the slope of the curve. A weighted Weibull function was also developed that potentially allows for four parameters, two that control the median age and two that control the slope, one of each weighted toward early or late arrival times. 
For both systems the two-parameter Weibull function nearly always produced a substantially better fit to the data than the one-parameter exponential function. For the single porosity system it was found that the use of three parameters was often optimal for accurately describing the base-flow age distribution, whereas for the dual porosity system the fourth parameter was often required to fit the more complicated response curves.
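The two-parameter Weibull fit described above can be sketched as a density plus its median; shape = 1 recovers the one-parameter exponential case, and the extra shape parameter is what controls the slope of the age distribution. (The three- and four-parameter weighted variants discussed in the abstract are not reproduced here; parameter values are illustrative.)

```python
import math

def weibull_pdf(t, shape, scale):
    """Weibull age-distribution density:
    f(t) = (k/s) * (t/s)**(k-1) * exp(-(t/s)**k); shape k = 1 is exponential."""
    return (shape / scale) * (t / scale) ** (shape - 1) * math.exp(-(t / scale) ** shape)

def weibull_median(shape, scale):
    """Median age of the Weibull distribution: scale * ln(2)**(1/shape)."""
    return scale * math.log(2.0) ** (1.0 / shape)
```

Fitting shape and scale to simulated base-flow age distributions (e.g. by least squares on the cumulative curve) is the kind of two-parameter fit the study compares against the exponential model.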
NASA Astrophysics Data System (ADS)
Wilde, M. V.; Sergeeva, N. V.
2018-05-01
An explicit asymptotic model extracting the contribution of a surface wave to the dynamic response of a viscoelastic half-space is derived. Fractional exponential Rabotnov integral operators are used for describing the material properties. The model is derived by extracting the principal part of the poles corresponding to the surface waves after applying Laplace and Fourier transforms. The simplified equations for the originals are written using power series expansions. A Padé approximation is constructed to unite the short-time and long-time models. The form of this approximation allows the explicit model to be formulated using a fractional exponential Rabotnov integral operator with parameters depending on the properties of the surface wave. The applicability of the derived models is studied by comparison with the exact solutions of a model problem. It is revealed that the model based on the Padé approximation is highly effective for all the possible time domains.
Shift-Invariant Image Reconstruction of Speckle-Degraded Images Using Bispectrum Estimation
1990-05-01
process with the requisite negative exponential pdf. I call this model the Negative Exponential Model (NENI). The NENI flowchart is seen in Figure 6... [figure-caption residue: statistical histograms and phase; truth object speckled via the NENI; histogram of speckle]
Hu, Jin; Wang, Jun
2015-06-01
In recent years, complex-valued recurrent neural networks have been developed and analysed in depth, given their good modelling performance for applications involving complex-valued elements. In implementing continuous-time dynamical systems for simulation or computational purposes, it is often necessary to utilize a discrete-time model that is an analogue of the continuous-time system. In this paper, we analyse a discrete-time complex-valued recurrent neural network model and obtain sufficient conditions for its global exponential periodicity and exponential stability. Simulation results of several numerical examples are presented to illustrate the theoretical results, and an application to associative memory is also given. Copyright © 2015 Elsevier Ltd. All rights reserved.
Cao, Boqiang; Zhang, Qimin; Ye, Ming
2016-11-29
We present a mean-square exponential stability analysis for impulsive stochastic genetic regulatory networks (GRNs) with time-varying delays and reaction-diffusion driven by fractional Brownian motion (fBm). By constructing a Lyapunov functional and using linear matrix inequality for stochastic analysis we derive sufficient conditions to guarantee the exponential stability of the stochastic model of impulsive GRNs in the mean-square sense. Meanwhile, the corresponding results are obtained for the GRNs with constant time delays and standard Brownian motion. Finally, an example is presented to illustrate our results of the mean-square exponential stability analysis.
A Decreasing Failure Rate, Mixed Exponential Model Applied to Reliability.
1981-06-01
Trident missile systems have been observed. The mixed exponential distribution has been shown to fit the life data for the electronic equipment on...these systems. This paper discusses some of the estimation problems which occur with the decreasing failure rate mixed exponential distribution when...assumption of constant or increasing failure rate seemed to be incorrect. 2. However, the design of this electronic equipment indicated that
Confronting quasi-exponential inflation with WMAP seven
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pal, Barun Kumar; Pal, Supratik; Basu, B., E-mail: barunp1985@rediffmail.com, E-mail: pal@th.physik.uni-bonn.de, E-mail: banasri@isical.ac.in
2012-04-01
We confront quasi-exponential models of inflation with the WMAP seven-year dataset using the Hamilton-Jacobi formalism. With a phenomenological Hubble parameter representing quasi-exponential inflation, we develop the formalism and subject the analysis to confrontation with WMAP seven using the publicly available code CAMB. The observable parameters are found to fare extremely well with WMAP seven. We also obtain a tensor-to-scalar amplitude ratio that may be detectable by PLANCK.
NASA Astrophysics Data System (ADS)
Hayat, Tanzila; Nadeem, S.
2018-03-01
This paper examines three-dimensional Eyring-Powell fluid flow over an exponentially stretching surface with heterogeneous-homogeneous chemical reactions. A new model of heat flux suggested by Cattaneo and Christov is employed to study the properties of the relaxation time. From the present analysis we observe that there is an inverse relationship between temperature and thermal relaxation time. The temperature in the Cattaneo-Christov heat flux model is lower than in the classical Fourier model. In this paper the three-dimensional Cattaneo-Christov heat flux model over an exponentially stretching surface is calculated for the first time in the literature. For negative values of the temperature exponent, the temperature profile first intensifies to its maximum value and then gradually declines to zero, which shows the occurrence of the "Sparrow-Gregg hill" (SGH) phenomenon. Also, for higher values of the strength of the reaction parameters, the concentration profile decreases.
Wang, Bo; Anthony, Stephen M; Bae, Sung Chul; Granick, Steve
2009-09-08
We describe experiments using single-particle tracking in which mean-square displacement is simply proportional to time (Fickian), yet the distribution of displacement probability is not Gaussian as should be expected of a classical random walk but, instead, is decidedly exponential for large displacements, the decay length of the exponential being proportional to the square root of time. The first example is when colloidal beads diffuse along linear phospholipid bilayer tubes whose radius is the same as that of the beads. The second is when beads diffuse through entangled F-actin networks, bead radius being less than one-fifth of the actin network mesh size. We explore the relevance to dynamic heterogeneity in trajectory space, which has been extensively discussed regarding glassy systems. Data for the second system might suggest activated diffusion between pores in the entangled F-actin networks, in the same spirit as activated diffusion and exponential tails observed in glassy systems. But the first system shows exceptionally rapid diffusion, nearly as rapid as for identical colloids in free suspension, yet still displaying an exponential probability distribution as in the second system. Thus, although the exponential tail is reminiscent of glassy systems, in fact, these dynamics are exceptionally rapid. We also compare with particle trajectories that are at first subdiffusive but Fickian at the longest measurement times, finding that displacement probability distributions fall onto the same master curve in both regimes. The need is emphasized for experiments, theory, and computer simulation to allow definitive interpretation of this simple and clean exponential probability distribution.
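The displacement statistics described above, Fickian yet non-Gaussian, can be sketched with a Laplace (two-sided exponential) distribution whose decay length grows as the square root of time; the diffusivity value is illustrative, and this is a simplified caricature of the measured distributions, not the authors' analysis:

```python
import math

def displacement_pdf(x, t, diffusivity):
    """Two-sided exponential displacement distribution with decay length
    lam(t) = sqrt(D * t): p(x) = exp(-|x|/lam) / (2*lam).
    Its variance is 2*lam**2 = 2*D*t, i.e. mean-square displacement
    remains linear in time (Fickian) despite the non-Gaussian shape."""
    lam = math.sqrt(diffusivity * t)
    return math.exp(-abs(x) / lam) / (2.0 * lam)
```

Because the variance grows linearly in t while the tails stay exponential, this shape reproduces the paper's key observation: Fickian scaling of the mean-square displacement with decidedly non-Gaussian large-displacement statistics.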
Evolution and mass extinctions as lognormal stochastic processes
NASA Astrophysics Data System (ADS)
Maccone, Claudio
2014-10-01
In a series of recent papers and in a book, this author put forward a mathematical model capable of embracing the search for extra-terrestrial intelligence (SETI), Darwinian Evolution and Human History into a single, unified statistical picture, concisely called Evo-SETI. The relevant mathematical tools are: (1) Geometric Brownian motion (GBM), the stochastic process representing evolution as the stochastic increase of the number of species living on Earth over the last 3.5 billion years. This GBM is well known in the mathematics of finance (Black-Scholes model). Its main features are that its probability density function (pdf) is a lognormal pdf, and its mean value is either an increasing or, more rarely, a decreasing exponential function of time. (2) The probability distributions known as b-lognormals, i.e. lognormals starting at a certain positive instant b>0 rather than at the origin. These b-lognormals were then forced by us to have their peak value located on the exponential mean-value curve of the GBM (Peak-Locus theorem). In the framework of Darwinian Evolution, the resulting mathematical construction was shown to be what evolutionary biologists call Cladistics. (3) The (Shannon) entropy of such b-lognormals is then seen to represent the `degree of progress' reached by each living organism or by each big set of living organisms, like historic human civilizations. Having understood this fact, human history may then be cast into the language of b-lognormals that are more and more organized in time (i.e. having smaller and smaller entropy, or smaller and smaller `chaos'), and have their peaks on the increasing GBM exponential. This exponential is thus the `trend of progress' in human history. 
(4) All these results also match with SETI in that the statistical Drake equation (generalization of the ordinary Drake equation to encompass statistics) leads just to the lognormal distribution as the probability distribution for the number of extra-terrestrial civilizations existing in the Galaxy (as a consequence of the central limit theorem of statistics). (5) But the most striking new result is that the well-known `Molecular Clock of Evolution', namely the `constant rate of Evolution at the molecular level' as shown by Kimura's Neutral Theory of Molecular Evolution, identifies with growth rate of the entropy of our Evo-SETI model, because they both grew linearly in time since the origin of life. (6) Furthermore, we apply our Evo-SETI model to lognormal stochastic processes other than GBMs. For instance, we provide two models for the mass extinctions that occurred in the past: (a) one based on GBMs and (b) the other based on a parabolic mean value capable of covering both the extinction and the subsequent recovery of life forms. (7) Finally, we show that the Markov & Korotayev (2007, 2008) model for Darwinian Evolution identifies with an Evo-SETI model for which the mean value of the underlying lognormal stochastic process is a cubic function of the time. In conclusion: we have provided a new mathematical model capable of embracing molecular evolution, SETI and entropy into a simple set of statistical equations based upon b-lognormals and lognormal stochastic processes with arbitrary mean, of which the GBMs are the particular case of exponential growth.
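The GBM underlying the Evo-SETI framework is straightforward to simulate with its exact log-Euler update; the drift, volatility, and path counts below are illustrative, not values from the model's applications:

```python
import math
import random

def gbm_path(n_steps, dt, mu, sigma, x0=1.0, seed=0):
    """Simulate geometric Brownian motion via the exact update
    X_{t+dt} = X_t * exp((mu - sigma**2/2)*dt + sigma*sqrt(dt)*Z),
    Z ~ N(0,1). X_t is lognormal and its ensemble mean grows as
    x0 * exp(mu * t), the exponential mean-value curve of the text."""
    rng = random.Random(seed)
    x = x0
    path = [x0]
    for _ in range(n_steps):
        z = rng.gauss(0.0, 1.0)
        x *= math.exp((mu - 0.5 * sigma * sigma) * dt + sigma * math.sqrt(dt) * z)
        path.append(x)
    return path
```

Averaging the endpoint over many independent paths recovers the exponential mean value exp(mu*t), the curve on which the b-lognormal peaks are constrained to lie.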
CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.
Shalizi, Cosma Rohilla; Rinaldo, Alessandro
2013-04-01
The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling, or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGMs' expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.
A stochastic evolutionary model generating a mixture of exponential distributions
NASA Astrophysics Data System (ADS)
Fenner, Trevor; Levene, Mark; Loizou, George
2016-02-01
Recent interest in human dynamics has stimulated the investigation of the stochastic processes that explain human behaviour in various contexts, such as mobile phone networks and social media. In this paper, we extend the stochastic urn-based model proposed in [T. Fenner, M. Levene, G. Loizou, J. Stat. Mech. 2015, P08015 (2015)] so that it can generate mixture models, in particular, a mixture of exponential distributions. The model is designed to capture the dynamics of survival analysis, traditionally employed in clinical trials, reliability analysis in engineering, and more recently in the analysis of large data sets recording human dynamics. The mixture modelling approach, which is relatively simple and well understood, is very effective in capturing heterogeneity in data. We provide empirical evidence for the validity of the model, using a data set of popular search engine queries collected over a period of 114 months. We show that the survival function of these queries is closely matched by the exponential mixture solution for our model.
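A mixture of exponentials, as generated by the urn-based model above, has a survival function that is simply a weighted sum of exponential tails; the weights and rates below are illustrative, not the query-data fits:

```python
import math

def mixture_survival(t, weights, rates):
    """Survival function of a finite mixture of exponentials:
    S(t) = sum_i w_i * exp(-lambda_i * t), with the w_i summing to 1.
    Such mixtures have a decreasing hazard rate, capturing heterogeneity."""
    return sum(w * math.exp(-lam * t) for w, lam in zip(weights, rates))
```

At large t the slowest-decaying component dominates, so a two-component mixture already produces the long, heavy-looking tails typical of survival data on human activity (e.g. the lifetimes of popular search queries).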
NASA Astrophysics Data System (ADS)
Deopa, Nisha; Rao, A. S.; Gupta, Mohini; Vijaya Prakash, G.
2018-01-01
Neodymium-doped lithium lead alumino borate glasses were synthesized with the molar composition 10Li2O-10PbO-(10-x)Al2O3-70B2O3-xNd2O3 (where x = 0.1, 0.5, 1.0, 1.5, 2.0 and 2.5 mol%) via the conventional melt quenching technique to understand their lasing potentialities using absorption, emission and photoluminescence decay spectral measurements. The oscillator strengths measured from the absorption spectra were used to estimate the Judd-Ofelt intensity parameters using a least-squares fitting procedure. The emission spectra recorded for the as-prepared glasses exhibit two emission transitions, 4F3/2 → 4I11/2 (1063 nm) and 4F3/2 → 4I9/2 (1350 nm), for which radiative parameters have been evaluated. The emission intensity increases with increasing Nd3+ ion concentration up to 1 mol%, beyond which concentration quenching took place. The decay profiles show a single exponential nature for lower Nd3+ ion concentrations and a non-exponential nature for higher concentrations. To elucidate the nature of the energy transfer process, the non-exponential decay curves were well fitted to the Inokuti-Hirayama model. The relatively higher values of emission cross-sections, branching ratios and quantum efficiency obtained for 1.0 mol% of Nd3+ ions in the LiPbAlB glass suggest its aptness for generating lasing action at 1063 nm in the NIR region.
Huang, Jian; Walcott, Gregory P; Ruse, Richard B; Bohanan, Scott J; Killingsworth, Cheryl R; Ideker, Raymond E
2012-09-11
We tested the hypothesis that the shape of the shock waveform affects not only the defibrillation threshold but also the amount of cardiac damage. Defibrillation thresholds were determined for 11 waveforms (3 ascending-ramp waveforms, 3 descending-ramp waveforms, 3 rectilinear first-phase biphasic waveforms, a Gurvich waveform, and a truncated exponential biphasic waveform) in 6 pigs with electrodes in the right ventricular apex and superior vena cava. The ascending, descending, and rectilinear waveforms had 4-, 8-, and 16-millisecond first phases and a 3.5-millisecond rectilinear second phase that was half the voltage of the first phase. The exponential biphasic waveform had a 60% first-phase and a 50% second-phase tilt. In a second study, we attempted to defibrillate after 10 seconds of ventricular fibrillation with a single ≈30-J shock (6 pigs successfully defibrillated with 8-millisecond ascending, 8-millisecond rectilinear, and truncated exponential biphasic waveforms). Troponin I blood levels were determined before and 2 to 10 hours after the shock. The lowest-energy defibrillation threshold was for the 8-millisecond ascending ramp (14.6±7.3 J [mean±SD]), which was significantly less than for the truncated exponential (19.6±6.3 J). Six hours after shock, troponin I was significantly less for the ascending-ramp waveform (0.80±0.54 ng/mL) than for the truncated exponential (1.92±0.47 ng/mL) or the rectilinear waveform (1.17±0.45 ng/mL). The ascending ramp has a significantly lower defibrillation threshold and at ≈30 J causes 58% less troponin I release than the truncated exponential biphasic shock. Therefore, the shock waveform affects both the defibrillation threshold and the amount of cardiac damage.
Modulation of lens cell adhesion molecules by particle beams
NASA Technical Reports Server (NTRS)
McNamara, M. P.; Bjornstad, K. A.; Chang, P. Y.; Chou, W.; Lockett, S. J.; Blakely, E. A.
2001-01-01
Cell adhesion molecules (CAMs) are proteins which anchor cells to each other and to the extracellular matrix (ECM), but whose functions also include signal transduction, differentiation, and apoptosis. We are testing a hypothesis that particle radiations modulate CAM expression and this contributes to radiation-induced lens opacification. We observed dose-dependent changes in the expression of beta 1-integrin and ICAM-1 in exponentially-growing and confluent cells of a differentiating human lens epithelial cell model after exposure to particle beams. Human lens epithelial (HLE) cells, less than 10 passages after their initial culture from fetal tissue, were grown on bovine corneal endothelial cell-derived ECM in medium containing 15% fetal bovine serum and supplemented with 5 ng/ml basic fibroblast growth factor (FGF-2). Multiple cell populations at three different stages of differentiation were prepared for experiment: cells in exponential growth, and cells at 5 and 10 days post-confluence. The differentiation status of cells was characterized morphologically by digital image analysis, and biochemically by Western blotting using lens epithelial and fiber cell-specific markers. Cultures were irradiated with single doses (4, 8 or 12 Gy) of 55 MeV protons and, along with unirradiated control samples, were fixed using -20 degrees C methanol at 6 hours after exposure. Replicate experiments and similar experiments with helium ions are in progress. The intracellular localization of beta 1-integrin and ICAM-1 was detected by immunofluorescence using monoclonal antibodies specific for each CAM. Cells known to express each CAM were also processed as positive controls. Both exponentially-growing and confluent, differentiating cells demonstrated a dramatic proton-dose-dependent modulation (upregulation for exponential cells, downregulation for confluent cells) and a change in the intracellular distribution of the beta 1-integrin, compared to unirradiated controls. 
In contrast, there was a dose-dependent increase in ICAM-1 immunofluorescence in confluent, but not exponentially-growing cells. These results suggest that proton irradiation downregulates beta 1-integrin and upregulates ICAM-1, potentially contributing to cell death or to aberrant differentiation via modulation of anchorage and/or signal transduction functions. Quantification of the expression levels of the CAMs by Western analysis is in progress.
NASA Astrophysics Data System (ADS)
Kuai, Zi-Xiang; Liu, Wan-Yu; Zhu, Yue-Min
2017-11-01
The aim of this work was to investigate the effect of multiple perfusion components on the pseudo-diffusion coefficient D* in the bi-exponential intravoxel incoherent motion (IVIM) model. Simulations were first performed to examine how the presence of multiple perfusion components influences D*. Real data from the livers (n = 31), spleens (n = 31) and kidneys (n = 31) of 31 volunteers were then acquired using DWI for the in vivo study, and the number of perfusion components in these tissues was determined, together with their perfusion fraction and D*, using an adaptive multi-exponential IVIM model. Finally, the bi-exponential model was applied to the real data, and the mean, standard deviation and coefficient of variation of D*, as well as the fitting residual, were calculated over the 31 volunteers for each of the three tissues and compared between them. The results of both the simulations and the in vivo study showed that, for the bi-exponential IVIM model, both the variance of D* and the fitting residual tended to increase when the number of perfusion components increased or when the difference between perfusion components became large. In addition, the kidney presented the fewest perfusion components among the three tissues. The present study demonstrated that multi-component perfusion is a main cause of high variance in D*, and that the bi-exponential model should be used only when the tissues under investigation have few perfusion components, as in the kidney.
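As an illustrative sketch (not the authors' code), the bi-exponential IVIM signal S(b) = S0[f·exp(-b·D*) + (1-f)·exp(-b·D)] can be fitted with scipy; all parameter values below are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, s0, f, d_star, d):
    """Bi-exponential IVIM signal: perfusion term (D*) plus tissue diffusion (D)."""
    return s0 * (f * np.exp(-b * d_star) + (1.0 - f) * np.exp(-b * d))

# Synthetic noiseless data; parameter values are invented (units: mm^2/s).
b_values = np.array([0.0, 10, 20, 40, 80, 150, 300, 500, 800])
true = dict(s0=1.0, f=0.25, d_star=0.020, d=0.0015)
signal = ivim(b_values, **true)

# Bounds keep D* > D so the two exponentials cannot swap roles during fitting.
popt, _ = curve_fit(ivim, b_values, signal,
                    p0=[1.0, 0.1, 0.010, 0.001],
                    bounds=([0, 0, 0.003, 0], [2, 1, 0.5, 0.003]))
s0_fit, f_fit, d_star_fit, d_fit = popt
```

With noisy clinical data and more perfusion components, this fit becomes unstable, which is the variance effect the study quantifies.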
Yuan, Jing; Yeung, David Ka Wai; Mok, Greta S. P.; Bhatia, Kunwar S.; Wang, Yi-Xiang J.; Ahuja, Anil T.; King, Ann D.
2014-01-01
Purpose: To technically investigate non-Gaussian diffusion in head and neck diffusion-weighted imaging (DWI) at 3 Tesla and to compare advanced non-Gaussian diffusion models, including diffusion kurtosis imaging (DKI), the stretched-exponential model (SEM), intravoxel incoherent motion (IVIM) and a statistical model, in patients with nasopharyngeal carcinoma (NPC). Materials and Methods: After ethics approval was granted, 16 patients with NPC were examined using DWI performed at 3T employing an extended b-value range from 0 to 1500 s/mm2. DWI signals were fitted to the mono-exponential and non-Gaussian diffusion models in the primary tumor, metastatic node, spinal cord and muscle. Non-Gaussian parameter maps were generated and compared to apparent diffusion coefficient (ADC) maps in NPC. Results: Diffusion in NPC exhibited non-Gaussian behavior over the extended b-value range. Non-Gaussian models achieved significantly better fitting of the DWI signal than the mono-exponential model. Non-Gaussian diffusion coefficients were substantially different from the mono-exponential ADC both in magnitude and in histogram distribution. Conclusion: Non-Gaussian diffusivity in head and neck tissues and NPC lesions can be assessed using non-Gaussian diffusion models. Non-Gaussian DWI analysis may reveal additional tissue properties beyond the ADC and holds potential as a complementary tool for NPC characterization. PMID:24466318
Cocho, Germinal; Miramontes, Pedro; Mansilla, Ricardo; Li, Wentian
2014-12-01
We examine in detail the relationship between exponential correlation functions and Markov models in a bacterial genome. Despite the well-known fact that Markov models generate sequences whose correlation functions decay exponentially, simply constructed Markov models based on nearest-neighbor dimers (first-order), trimers (second-order), up to hexamers (fifth-order), treating the DNA sequence as homogeneous, all fail to predict the value of the exponential decay rate. Even reading-frame-specific Markov models (both first- and fifth-order) could not explain the fact that the exponential decay is very slow. Starting with the in-phase coding-DNA-sequence (CDS), we investigated correlation within a fixed-codon-position subsequence, and in artificially constructed sequences obtained by packing CDSs with out-of-phase spacers, as well as by altering the CDS length distribution through an imposed upper limit. From these targeted analyses, we conclude that the correlation in the bacterial genomic sequence is mainly due to a mixing of heterogeneous statistics at different codon positions, and that the slow decay of correlation is due to the possible out-of-phase arrangement of neighboring CDSs. There are also small contributions to the correlation from bases at the same codon position, as well as from non-coding sequences. These results show that the seemingly simple exponential correlation functions in the bacterial genome hide a complexity in correlation structure that is not suitable for modeling by a Markov chain in a homogeneous sequence. Other results include the use of the second-largest eigenvalue (in absolute value) to represent the 16 correlation functions and the prediction of a 10-11 base periodicity from the hexamer frequencies.
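The link between a Markov model's second-largest eigenvalue and exponential correlation decay can be sketched as follows; the transition matrix is a toy construction (not derived from any genome), deliberately chosen so that the decay is exactly geometric.

```python
import numpy as np

# Toy first-order Markov chain over the four nucleotides (hypothetical numbers).
# P = a*I + (1-a)*1*pi^T has eigenvalues {1, a, a, a}, so deviations from
# stationarity, and hence correlations, decay exactly as a**k.
pi = np.array([0.3, 0.2, 0.3, 0.2])   # assumed stationary base composition
a = 0.5
P = a * np.eye(4) + (1 - a) * np.outer(np.ones(4), pi)

eigvals = np.linalg.eigvals(P)
lam2 = sorted(abs(eigvals))[-2]        # second-largest modulus -> decay rate

# Deviation from the stationary distribution after k steps shrinks like lam2**k.
dev = {k: np.abs(np.linalg.matrix_power(P, k) - pi).max() for k in (5, 10)}
ratio = dev[10] / dev[5]               # equals lam2**5 for this chain
```

A real genome's correlation function mixes heterogeneous codon-position statistics, which is exactly why such a homogeneous chain fails to reproduce the observed slow decay.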
Multivariate-Statistical Assessment of Heavy Metals for Agricultural Soils in Northern China
Yang, Pingguo; Yang, Miao; Mao, Renzhao; Shao, Hongbo
2014-01-01
The study evaluated the content of eight heavy metals and soil pollution in agricultural soils in northern China. Multivariate and geostatistical analysis approaches were used to determine the anthropogenic and natural contributions to soil heavy metal concentrations. The single pollution index and the integrated pollution index could be used to evaluate soil heavy metal risk. The results show that the first factor explains 27.3% of the variance of the eight soil heavy metals, with strong positive loadings on Cu, Zn, and Cd, which indicates that Cu, Zn, and Cd are associated with and controlled by anthropic activities. The average values of the heavy metals are lower than the second-grade standard values of the soil environmental quality standards in China. The single pollution index is lower than 1, and the Nemerow integrated pollution index is 0.305, which means that the study area has not been polluted. The semivariograms of the soil heavy metal single pollution index fitted spherical and exponential models. The variable ratio of the single pollution index showed moderate spatial dependence. Heavy metal contents showed relative safety in the study area. PMID:24892058
Wyllie, David J A; Béhé, Philippe; Colquhoun, David
1998-01-01
We have expressed recombinant NR1a/NR2A and NR1a/NR2D N-methyl-D-aspartate (NMDA) receptor channels in Xenopus oocytes and made recordings of single-channel and macroscopic currents in outside-out membrane patches. For each receptor type we measured (a) the individual single-channel activations evoked by low glutamate concentrations in steady-state recordings, and (b) the macroscopic responses elicited by brief concentration jumps with high agonist concentrations, and we explore the relationship between these two sorts of observation. Low concentration (5–100 nM) steady-state recordings of NR1a/NR2A and NR1a/NR2D single-channel activity generated shut-time distributions that were best fitted with a mixture of five and six exponential components, respectively. Individual activations of either receptor type were resolved as bursts of openings, which we refer to as ‘super-clusters’. During a single activation, NR1a/NR2A receptors were open for 36 % of the time, but NR1a/NR2D receptors were open for only 4 % of the time. For both, distributions of super-cluster durations were best fitted with a mixture of six exponential components. Their overall mean durations were 35.8 and 1602 ms, respectively. Steady-state super-clusters were aligned on their first openings and averaged. The average was well fitted by a sum of exponentials with time constants taken from fits to super-cluster length distributions. It is shown that this is what would be expected for a channel that shows simple Markovian behaviour. The current through NR1a/NR2A channels following a concentration jump from zero to 1 mM glutamate for 1 ms was well fitted by three exponential components with time constants of 13 ms (rising phase), 70 ms and 350 ms (decaying phase). Similar concentration jumps on NR1a/NR2D channels were well fitted by two exponentials with means of 45 ms (rising phase) and 4408 ms (decaying phase) components. 
During prolonged exposure to glutamate, NR1a/NR2A channels desensitized with a time constant of 649 ms, while NR1a/NR2D channels exhibited no apparent desensitization. We show that under certain conditions, the time constants for the macroscopic jump response should be the same as those for the distribution of super-cluster lengths, though the resolution of the latter is so much greater that it cannot be expected that all the components will be resolvable in a macroscopic current. Good agreement was found for jumps on NR1a/NR2D receptors, and for some jump experiments on NR1a/NR2A. However, the latter were rather variable and some were slower than predicted. Slow decays were associated with patches that had large currents. PMID:9625862
Ihlen, Espen A. F.; van Schooten, Kimberley S.; Bruijn, Sjoerd M.; Pijnappels, Mirjam; van Dieën, Jaap H.
2017-01-01
Over the last decades, various measures have been introduced to assess stability during walking. All of these measures assume that gait stability may be equated with exponential stability, where dynamic stability is quantified by a Floquet multiplier or Lyapunov exponent. These specific constructs of dynamic stability assume that the gait dynamics are time independent and without phase transitions. In this case the temporal change in distance, d(t), between neighboring trajectories in state space is assumed to be an exponential function of time. However, results from walking models and empirical studies show that the assumptions of exponential stability break down in the vicinity of the phase transitions that are present in each step cycle. Here we apply a general non-exponential construct of gait stability, called fractional stability, which can define dynamic stability in the presence of phase transitions. Fractional stability employs the fractional indices, α and β, of the differential operator, which allow modeling of singularities in d(t) that cannot be captured by exponential stability. Fractional stability provided an improved fit of d(t) compared to exponential stability when applied to trunk accelerations during daily-life walking in community-dwelling older adults. Moreover, using multivariate empirical mode decomposition surrogates, we found that the singularities in d(t), which were well modeled by fractional stability, are created by phase-dependent modulation of gait. The new construct of fractional stability may represent a physiologically more valid concept of stability in the vicinity of phase transitions and may thus pave the way for a more unified concept of gait stability. PMID:28900400
Stochastic inflation lattice simulations - Ultra-large scale structure of the universe
NASA Technical Reports Server (NTRS)
Salopek, D. S.
1991-01-01
Non-Gaussian fluctuations for structure formation may arise in inflation from the nonlinear interaction of long wavelength gravitational and scalar fields. Long wavelength fields have spatial gradients, of order a^(-1), small compared to the Hubble radius, and they are described in terms of classical random fields that are fed by short wavelength quantum noise. Lattice Langevin calculations are given for a toy model with a scalar field interacting with an exponential potential, where one can obtain exact analytic solutions of the Fokker-Planck equation. For single scalar field models that are consistent with current microwave background fluctuations, the fluctuations are Gaussian. However, for scales much larger than our observable Universe, one expects large metric fluctuations that are non-Gaussian. This example illuminates non-Gaussian models involving multiple scalar fields which are consistent with current microwave background limits.
Stavn, R H
1988-01-15
The role of the Lambert-Beer law in ocean optics is critically examined. The Lambert-Beer law and the three-parameter model of the submarine light field are used to construct an optical energy budget for any hydrosol. It is further applied to the analytical exponential decay coefficient of the light field and used to estimate the optical properties and effects of the dissolved/suspended component in upper ocean layers. The concepts of the empirical exponential decay coefficient (diffuse attenuation coefficient) of the light field and a constant exponential decay coefficient for molecular water are analyzed quantitatively. A constant exponential decay coefficient for water is rejected. The analytical exponential decay coefficient is used to analyze optical gradients in ocean waters.
A review of the matrix-exponential formalism in radiative transfer
NASA Astrophysics Data System (ADS)
Efremenko, Dmitry S.; Molina García, Víctor; Gimeno García, Sebastián; Doicu, Adrian
2017-07-01
This paper outlines the matrix exponential description of radiative transfer. The eigendecomposition method which serves as a basis for computing the matrix exponential and for representing the solution in a discrete ordinate setting is considered. The mathematical equivalence of the discrete ordinate method, the matrix operator method, and the matrix Riccati equations method is proved rigorously by means of the matrix exponential formalism. For optically thin layers, approximate solution methods relying on the Padé and Taylor series approximations to the matrix exponential, as well as on the matrix Riccati equations, are presented. For optically thick layers, the asymptotic theory with higher-order corrections is derived, and parameterizations of the asymptotic functions and constants for a water-cloud model with a Gamma size distribution are obtained.
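A minimal sketch of the Taylor-series approximation for optically thin layers, checked against scipy's expm (itself Padé-based, so it serves as the reference here); the layer matrix values are invented for illustration.

```python
import numpy as np
from scipy.linalg import expm

# Toy 2x2 discrete-ordinate-like layer matrix (illustrative values only).
tau = 0.05                      # optically thin layer
A = tau * np.array([[-1.0, 0.3],
                    [ 0.3, -1.0]])

def expm_taylor(M, order=6):
    """Truncated Taylor series for exp(M), accurate when ||M|| << 1,
    i.e. precisely in the optically thin regime the review discusses."""
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, order + 1):
        term = term @ M / k     # accumulates M**k / k!
        out = out + term
    return out

err = np.abs(expm(A) - expm_taylor(A)).max()
```

For optically thick layers the series approximation degrades, which is why the review switches to asymptotic theory in that regime.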
Pseudomonas aeruginosa dose response and bathing water infection.
Roser, D J; van den Akker, B; Boase, S; Haas, C N; Ashbolt, N J; Rice, S A
2014-03-01
Pseudomonas aeruginosa is the opportunistic pathogen mostly implicated in folliculitis and acute otitis externa in pools and hot tubs. Nevertheless, infection risks remain poorly quantified. This paper reviews disease aetiologies and bacterial skin colonization science to advance dose-response theory development. Three model forms are identified for predicting disease likelihood from pathogen density. Two are based on Furumoto & Mickey's exponential 'single-hit' model and predict infection likelihood and severity (lesions/m2), respectively. 'Third-generation', mechanistic, dose-response algorithm development is additionally scoped. The proposed formulation integrates dispersion, epidermal interaction, and follicle invasion. The review also details uncertainties needing consideration which pertain to water quality, outbreaks, exposure time, infection sites, biofilms, cerumen, environmental factors (e.g. skin saturation, hydrodynamics), and whether P. aeruginosa is endogenous or exogenous. The review's findings are used to propose a conceptual infection model and identify research priorities including pool dose-response modelling, epidermis ecology and infection likelihood-based hygiene management.
General mechanism of two-state protein folding kinetics.
Rollins, Geoffrey C; Dill, Ken A
2014-08-13
We describe here a general model of the kinetic mechanism of protein folding. In the Foldon Funnel Model, proteins fold in units of secondary structures, which form sequentially along the folding pathway, stabilized by tertiary interactions. The model predicts that the free energy landscape has a volcano shape, rather than a simple funnel; that folding is two-state (single-exponential) when secondary structures are intrinsically unstable; and that each structure along the folding path is a transition state for the previous structure. It shows how sequential pathways are consistent with multiple stochastic routes on funnel landscapes, and it gives good agreement with the nine-order-of-magnitude dependence of folding rates on protein size for a set of 93 proteins, while remaining consistent with the near independence of the folding equilibrium constant on size. The model gives estimates of the folding rates of proteomes, leading to a median folding time in Escherichia coli of about 5 s.
Srinivasan, Prakash; Sarmah, Ajit K; Rohan, Maheswaran
2014-08-01
The single first-order (SFO) kinetic model is often used to derive the dissipation endpoints of an organic chemical in soil, both for its simplicity and because regulatory agencies require it. However, using the SFO model for all types of decay pattern can lead to under- or overestimation of dissipation endpoints when the deviation from first-order kinetics is significant. In this study the performance of three biphasic kinetic models - bi-exponential decay (BEXP), first-order double exponential decay (FODED), and first-order two-compartment (FOTC) - was evaluated using dissipation datasets of the sulfamethoxazole (SMO) antibiotic in three different soils under varying concentration, depth, temperature, and sterile conditions. The corresponding 50% (DT50) and 90% (DT90) dissipation times for the antibiotic were obtained numerically and compared against those obtained using the SFO model. The fit of each model to the measured values was evaluated with an array of statistical measures: the adjusted coefficient of determination (R²adj), root mean square error (RMSE), a chi-square (χ²) test at 1% significance, the Bayesian Information Criterion (BIC), and % model error. Box-whisker residual plots were also used to compare the performance of each model on the measured datasets. The antibiotic dissipation was successfully predicted by all four models; however, the nonlinear biphasic models improved the goodness-of-fit statistics for all datasets, and deviations from the datasets were often less evident with the biphasic models. The fits of the FOTC and FODED models to the SMO dissipation datasets were identical in most cases, and both were superior to the BEXP model. Among the biphasic models, the FOTC model was found to be the most suitable for obtaining the endpoints and could provide a mechanistic explanation for SMO dissipation in the soils.
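The BEXP form and the numerical extraction of DT50/DT90 endpoints can be sketched as follows; the rate parameters are assumed for illustration, not taken from the study.

```python
import numpy as np
from scipy.optimize import brentq

def bexp(t, f, k1, k2):
    """Bi-exponential (BEXP) decay: fast fraction f and slow fraction 1-f."""
    return f * np.exp(-k1 * t) + (1 - f) * np.exp(-k2 * t)

f, k1, k2 = 0.6, 0.30, 0.03   # fast and slow phases, per day (assumed values)

def dt_x(x):
    """Time for x% dissipation, found numerically: BEXP has no closed form."""
    return brentq(lambda t: bexp(t, f, k1, k2) - (1 - x / 100), 0, 1000)

dt50, dt90 = dt_x(50), dt_x(90)

# An SFO model forced through the same DT50 would predict
# DT90 = DT50 * ln(10)/ln(2), illustrating how SFO can underestimate
# the true biphasic DT90 when the slow phase dominates late dissipation.
sfo_dt90 = dt50 * np.log(10) / np.log(2)
```

Here the biphasic DT90 is several times larger than the SFO extrapolation, which is the kind of endpoint bias the study quantifies.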
NASA Astrophysics Data System (ADS)
Molina Garcia, Victor; Sasi, Sruthy; Efremenko, Dmitry; Doicu, Adrian; Loyola, Diego
2017-04-01
In this work, the requirements for the retrieval of cloud properties in the back-scattering region are described, and their application to the measurements taken by the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) is shown. Various radiative transfer models and their linearizations are implemented, and their advantages and issues are analyzed. As radiative transfer calculations in the back-scattering region are computationally time-consuming, several acceleration techniques are also studied. The radiative transfer models analyzed include the exact Discrete Ordinate method with Matrix Exponential (DOME), the Matrix Operator method with Matrix Exponential (MOME), and the approximate asymptotic and equivalent Lambertian cloud models. To reduce the computational cost of the line-by-line (LBL) calculations, the k-distribution method, the Principal Component Analysis (PCA) and a combination of the k-distribution method plus PCA are used. The linearized radiative transfer models for retrieval of cloud properties include the Linearized Discrete Ordinate method with Matrix Exponential (LDOME), the Linearized Matrix Operator method with Matrix Exponential (LMOME) and the Forward-Adjoint Discrete Ordinate method with Matrix Exponential (FADOME). These models were applied to the EPIC oxygen-A band absorption channel at 764 nm. It is shown that the approximate asymptotic and equivalent Lambertian cloud models give inaccurate results, so an offline processor for the retrieval of cloud properties in the back-scattering region requires the use of exact models such as DOME and MOME, which behave similarly. The combination of the k-distribution method plus PCA presents similar accuracy to the LBL calculations, but it is up to 360 times faster, and the relative errors for the computed radiances are less than 1.5% compared to the results when the exact phase function is used. 
Finally, the linearized models studied show similar behavior, with relative errors less than 1% for the radiance derivatives, but FADOME is 2 times faster than LDOME and 2.5 times faster than LMOME.
Markov Analysis of Sleep Dynamics
NASA Astrophysics Data System (ADS)
Kim, J. W.; Lee, J.-S.; Robinson, P. A.; Jeong, D.-U.
2009-05-01
A new approach, based on a Markov transition matrix, is proposed to explain frequent sleep and wake transitions during sleep. The matrix is determined by analyzing hypnograms of 113 obstructive sleep apnea patients. Our approach shows that the statistics of sleep can be constructed via a single Markov process and that durations of all states have modified exponential distributions, in contrast to recent reports of a scale-free form for the wake stage and an exponential form for the sleep stage. Hypnograms of the same subjects, but treated with Continuous Positive Airway Pressure, are analyzed and compared quantitatively with the pretreatment ones, suggesting potential clinical applications.
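A Markov transition matrix implies geometrically distributed dwell times, the discrete-time counterpart of the exponential distributions reported for sleep-stage durations. Below is a minimal two-state sketch with assumed staying probabilities, not the 113-patient matrix from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy wake/sleep staying probabilities per 30-s epoch (assumed values).
p_stay_wake, p_stay_sleep = 0.80, 0.95

def dwell_times(p_stay, n=20000):
    """Dwell times in a Markov state are geometric with parameter 1 - p_stay:
    the discrete-time analogue of an exponential duration distribution."""
    return rng.geometric(1 - p_stay, size=n)

wake = dwell_times(p_stay_wake)
sleep = dwell_times(p_stay_sleep)

# Theoretical mean dwell time is 1 / (1 - p_stay) epochs.
mean_wake, mean_sleep = wake.mean(), sleep.mean()
```

Comparing such simulated dwell-time histograms against empirical hypnogram statistics is one way to test whether a single Markov process suffices.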
Edge Extraction by an Exponential Function Considering X-ray Transmission Characteristics
NASA Astrophysics Data System (ADS)
Kim, Jong Hyeong; Youp Synn, Sang; Cho, Sung Man; Jong Joo, Won
2011-04-01
3-D radiographic methodology has come into the spotlight for the quality inspection of mass products and the in-service inspection of aging products. To locate a target object in 3-D space, its characteristic contours, such as edge length, edge angle, and vertices, are very important. Even for a product with simple geometry, it is very difficult to obtain clear shape contours from a single radiographic image: the image contains scattering noise at the edges and ambiguity arising from X-ray absorption within the body. This article suggests a concise method to extract all edges from a single X-ray image. At an edge point of the object, the intensity of the X-ray decays exponentially as the X-ray penetrates the object. Exploiting this decay property, edges are extracted by least-squares fitting, with the coefficient of determination used as a control criterion.
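A hedged sketch of the least-squares step: fit the Beer-Lambert exponential decay on a log scale and use the coefficient of determination to judge how well an exponential describes the local intensity profile. All numbers below are simulated, not from the article.

```python
import numpy as np

# Simulated X-ray intensity profile across an edge: attenuation follows
# Beer-Lambert decay I = I0 * exp(-mu * t); mu and I0 are assumed values.
mu, i0 = 0.8, 1000.0
thickness = np.linspace(0.0, 3.0, 30)
rng = np.random.default_rng(1)
intensity = i0 * np.exp(-mu * thickness) * rng.normal(1.0, 0.01, 30)

# Log-linear least squares; R^2 quantifies how well the exponential fits,
# which serves as the control criterion for accepting an edge point.
slope, intercept = np.polyfit(thickness, np.log(intensity), 1)
log_pred = slope * thickness + intercept
resid = np.log(intensity) - log_pred
r2 = 1 - resid.var() / np.log(intensity).var()
mu_fit = -slope
```

A profile whose R² falls below a chosen threshold would be rejected as noise rather than a true exponential edge transition.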
Bayesian Analysis for Exponential Random Graph Models Using the Adaptive Exchange Sampler.
Jin, Ick Hoon; Yuan, Ying; Liang, Faming
2013-10-01
Exponential random graph models have been widely used in social network analysis. However, these models are extremely difficult to handle from a statistical viewpoint, because of their intractable normalizing constant and model degeneracy. In this paper, we consider a fully Bayesian analysis for exponential random graph models using the adaptive exchange sampler, which solves the intractable normalizing constant and model degeneracy issues encountered in Markov chain Monte Carlo (MCMC) simulations. The adaptive exchange sampler can be viewed as an MCMC extension of the exchange algorithm; it generates auxiliary networks via an importance sampling procedure from an auxiliary Markov chain running in parallel. The convergence of this algorithm is established under mild conditions. The adaptive exchange sampler is illustrated on several social networks, including the Florentine business network, a molecule synthetic network, and the dolphins network. The results indicate that the adaptive exchange algorithm produces more accurate estimates than approximate exchange algorithms, while maintaining the same computational efficiency.
Fracture analysis of a central crack in a long cylindrical superconductor with exponential model
NASA Astrophysics Data System (ADS)
Zhao, Yu Feng; Xu, Chi
2018-05-01
The fracture behavior of a long cylindrical superconductor is investigated by modeling a central crack induced by the electromagnetic force. Based on the exponential model, the stress intensity factors (SIFs) as functions of the dimensionless parameter p and the crack length a/R are numerically simulated for the zero-field-cooling (ZFC) and field-cooling (FC) processes using the finite element method (FEM), assuming a persistent current flow. As the applied field Ba decreases, the dependence of the SIFs on p and a/R in the ZFC process is exactly the opposite of that observed in the FC process. Numerical results indicate that the exponential model exhibits different trends for the SIFs than the Bean and Kim models. This implies that the crack length and the trapped field have significant effects on the fracture behavior of bulk superconductors. The obtained results are useful for understanding the critical-state model of high-temperature superconductors in crack problems.
Bayesian Travel Time Inversion adopting Gaussian Process Regression
NASA Astrophysics Data System (ADS)
Mauerberger, S.; Holschneider, M.
2017-12-01
A major application in seismology is the determination of seismic velocity models. Travel time measurements put an integral constraint on the velocity between source and receiver. We provide insight into travel time inversion from a correlation-based Bayesian point of view: the concept of Gaussian process regression is adopted to estimate a velocity model. The non-linear travel time integral is approximated by a first-order Taylor expansion, and a heuristic covariance describes correlations among the observations and the a priori model. This approach enables us to assess a proxy of the Bayesian posterior distribution at ordinary computational cost; neither multidimensional numerical integration nor excessive sampling is necessary. Instead of stacking the data, we suggest progressively building up the posterior distribution, incorporating only a single evidence at a time to account for the shortcomings of linearization. As a result, the most probable model is given by the posterior mean, whereas uncertainties are described by the posterior covariance. As a proof of concept, a purely 1-D synthetic model is addressed: a single source accompanied by multiple receivers is considered on top of a model comprising a discontinuity. We consider travel times of both phases - direct and reflected wave - corrupted by noise. Left and right of the interface are assumed independent, with the squared exponential kernel serving as covariance.
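A minimal Gaussian process regression sketch with a squared exponential kernel, using the generic GP posterior formulas rather than the paper's linearized travel-time operator; the observation values and hyperparameters are invented.

```python
import numpy as np

def sq_exp(x1, x2, sigma=1.0, ell=0.5):
    """Squared-exponential covariance kernel (hyperparameters assumed)."""
    return sigma**2 * np.exp(-0.5 * (x1[:, None] - x2[None, :])**2 / ell**2)

# Stand-in 1-D observations (e.g. linearized slowness-related residuals).
x_obs = np.array([0.1, 0.4, 0.7])
y_obs = np.sin(2 * np.pi * x_obs)
noise = 1e-4                       # observational noise variance (assumed)

x_new = np.linspace(0, 1, 21)
K = sq_exp(x_obs, x_obs) + noise * np.eye(len(x_obs))
k_star = sq_exp(x_new, x_obs)

# Bayesian update: posterior mean is the most probable model,
# the posterior covariance quantifies its uncertainty.
alpha = np.linalg.solve(K, y_obs)
post_mean = k_star @ alpha
post_cov = sq_exp(x_new, x_new) - k_star @ np.linalg.solve(K, k_star.T)
```

Feeding in one observation at a time and re-linearizing between updates gives the progressive posterior construction the abstract proposes.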
Meulenbroek, Bernard; Ebert, Ute; Schäfer, Lothar
2005-11-04
The dynamics of ionization fronts that generate a conducting body are in the simplest approximation equivalent to viscous fingering without regularization. Going beyond this approximation, we suggest that ionization fronts can be modeled by a mixed Dirichlet-Neumann boundary condition. We derive exact uniformly propagating solutions of this problem in 2D and construct a single partial differential equation governing small perturbations of these solutions. For some parameter value, this equation can be solved analytically, which shows rigorously that the uniformly propagating solution is linearly convectively stable and that the asymptotic relaxation is universal and exponential in time.
NASA Astrophysics Data System (ADS)
Cucchi, Marco; Petitta, Marcello; Calmanti, Sandro
2016-04-01
High temperatures have an impact on the energy balance of any living organism and on the operational capabilities of critical infrastructures. Heat-wave indicators have mainly been developed with the aim of capturing the potential impacts on specific sectors (agriculture, health, wildfires, transport, power generation and distribution). However, the ability to capture the occurrence of extreme temperature events is an essential property of a multi-hazard extreme climate indicator. The aim of this study is to develop a standardized heat-wave indicator that can be combined with other indices to describe multiple hazards in a single indicator. The proposed approach yields a quantified indicator of the strength of a given extreme. Extremes are usually distributed following exponential or double-exponential functions, and it is difficult to assess quickly how strong an extreme event was from its magnitude alone. The proposed approach simplifies the quantitative and qualitative communication of extreme magnitudes.
Analysis and IbM simulation of the stages in bacterial lag phase: basis for an updated definition.
Prats, Clara; Giró, Antoni; Ferrer, Jordi; López, Daniel; Vives-Rego, Josep
2008-05-07
The lag phase is the initial phase of a culture that precedes exponential growth and occurs when the conditions of the culture medium differ from the pre-inoculation conditions. It is usually defined by means of cell density because the number of individuals remains approximately constant or slowly increases, and it is quantified with the lag parameter lambda. The lag phase has been studied through mathematical modelling and by means of specific experiments. In recent years, Individual-based Modelling (IbM) has provided helpful insights into lag phase studies. In this paper, the definition of lag phase is thoroughly examined. Evolution of the total biomass and the total number of bacteria during lag phase is tackled separately. The lag phase lasts until the culture reaches a maximum growth rate both in biomass and cell density. Once in the exponential phase, both rates are constant over time and equal to each other. Both evolutions are split into an initial phase and a transition phase, according to their growth rates. A population-level mathematical model is presented to describe the transitional phase in cell density. INDividual DIScrete SIMulation (INDISIM) is used to check the outcomes of this analysis. Simulations allow the separate study of the evolution of cell density and total biomass in a batch culture, they provide a depiction of different observed cases in lag evolution at the individual-cell level, and are used to test the population-level model. The results show that the geometrical lag parameter lambda is not appropriate as a universal definition for the lag phase. Moreover, the lag phase cannot be characterized by a single parameter. For the studied cases, the lag phases of both the total biomass and the population are required to fully characterize the evolution of bacterial cultures. The results presented prove once more that the lag phase is a complex process that requires a more complete definition. 
This will be possible only after the phenomena governing the population dynamics at an individual level of description, and occurring during the lag and exponential growth phases, are well understood.
Minor, A V; Kaissling, K-E
2003-03-01
Olfactory receptor cells of the silkmoth Bombyx mori respond to single pheromone molecules with "elementary" electrical events that appear as discrete "bumps" a few milliseconds in duration, or bursts of bumps. As revealed by simulation, one bump may result from a series of random openings of one or several ion channels, producing an average inward membrane current of 1.5 pA. The distributions of durations of bumps and of gaps between bumps in a burst can be fitted by single exponentials with time constants of 10.2 ms and 40.5 ms, respectively. The distribution of burst durations is a sum of two exponentials; the number of bumps per burst obeyed a geometric distribution (mean 3.2 bumps per burst). Accordingly the elementary events could reflect transitions among three states of the pheromone receptor molecule: the vacant receptor (state 1), the pheromone-receptor complex (state 2), and the activated complex (state 3). The calculated rate constants of the transitions between states are k(21)=7.7 s(-1), k(23)=16.8 s(-1), and k(32)=98 s(-1).
NASA Astrophysics Data System (ADS)
Krugon, Seelam; Nagaraju, Dega
2017-05-01
This work proposes a two-echelon inventory system in a supply chain in which the manufacturer offers a credit period to the retailer under exponential price-dependent demand, i.e., demand is expressed as an exponential function of the retailer's unit selling price. A mathematical model is formulated to establish the optimal cycle time, retailer replenishment quantity, number of shipments, and total relevant cost of the supply chain. The main objective of the paper is to introduce the trade credit concept, from the manufacturer to the retailer, under exponential price-dependent demand: the retailer would like to delay payments to the manufacturer. In the first stage, the retailer's and manufacturer's cost expressions are written in terms of ordering cost, carrying cost, and transportation cost; in the second stage, the two expressions are combined. A MATLAB program is written to derive the optimal cycle time, retailer replenishment quantity, number of shipments, and total relevant cost of the supply chain, from which managerial insights can be drawn. The findings show that the total cost of the supply chain decreases as the credit period increases under exponential price-dependent demand. To analyse the influence of the model parameters, a parametric analysis is also carried out with the help of a numerical example.
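A sketch of how exponential price-dependent demand enters a retailer cost expression. All parameters are invented, and the cost structure is a simplified EOQ-style stand-in for the paper's full two-echelon model; it serves only to show why a longer credit period lowers the retailer's total cost.

```python
import numpy as np

# Illustrative parameters (all assumed, not from the paper).
a, b = 5000.0, 0.05           # demand scale and price sensitivity
p = 40.0                      # retailer's unit selling price
demand = a * np.exp(-b * p)   # exponential price-dependent annual demand

A, h = 100.0, 2.0             # ordering cost per order, holding cost per unit/yr
ie = 0.06                     # interest earned on revenue during the credit period

def retailer_cost(Q, credit_years):
    """Cycle-stock cost minus interest earned over the credit period."""
    return A * demand / Q + h * Q / 2.0 - ie * p * demand * credit_years

Q_star = np.sqrt(2 * A * demand / h)   # EOQ-style replenishment quantity
cost_no_credit = retailer_cost(Q_star, 0.0)
cost_with_credit = retailer_cost(Q_star, 30 / 365)
```

The interest-earned term grows linearly with the credit period, reproducing in miniature the paper's finding that total cost falls as the credit period increases.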
A hybrid MD-kMC algorithm for folding proteins in explicit solvent.
Peter, Emanuel Karl; Shea, Joan-Emma
2014-04-14
We present a novel hybrid MD-kMC algorithm that is capable of efficiently folding proteins in explicit solvent. We apply this algorithm to the folding of a small protein, Trp-Cage. Different kMC move sets that capture different possible rate-limiting steps are implemented. The first uses secondary structure formation as a relevant rate event (a combination of dihedral rotations and hydrogen-bonding formation and breakage). The second uses tertiary structure formation events through formation of contacts via translational moves. Both methods fold the protein, but via different mechanisms and with different folding kinetics. The first method leads to folding via a structured helical state, with kinetics fit by a single exponential. The second method leads to folding via a collapsed loop, with kinetics poorly fit by single or double exponentials. In both cases, folding times are faster than experimentally reported values. The secondary and tertiary move sets are integrated in a third MD-kMC implementation, which now leads to folding of the protein via both pathways, with single and double-exponential fits to the rates, and to folding rates in good agreement with experimental values. The competition between secondary and tertiary structure leads to a longer search for the helix-rich intermediate in the case of the first pathway, and to the emergence of a kinetically trapped long-lived molten-globule collapsed state in the case of the second pathway. The algorithm presented not only captures experimentally observed folding intermediates and kinetics, but yields insights into the relative roles of local and global interactions in determining folding mechanisms and rates.
Luo, Li; Luo, Le; Zhang, Xinli; He, Xiaoli
2017-07-10
Accurate forecasting of hospital outpatient visits is beneficial for the reasonable planning and allocation of healthcare resources to meet medical demand. Given the multiple attributes of daily outpatient visits, such as randomness, cyclicity and trend, time series methods such as ARIMA can be a good choice for outpatient visits forecasting. On the other hand, hospital outpatient visits are also affected by the doctors' scheduling, and these effects are not purely random. To account for this non-random component, this paper presents a new forecasting model that takes cyclicity and the day-of-the-week effect into consideration. We formulate a seasonal ARIMA (SARIMA) model on a daily time series, then a single exponential smoothing (SES) model on the day-of-the-week time series, and finally establish a combinatorial model by modifying them. The models are applied to 1 year of daily visit data of urban outpatients in two internal medicine departments of a large hospital in Chengdu, to forecast the daily outpatient visits about 1 week ahead. The proposed model is applied to forecast the cross-sectional data for 7 consecutive days of daily outpatient visits over an 8-week period, based on 43 weeks of observation data during 1 year. The results show that the two single traditional models and the combinatorial model are simple to implement and computationally light, whilst being appropriate for short-term forecast horizons. Furthermore, the combinatorial model captures the comprehensive features of the time series data better and achieves better prediction performance than either single model, with lower residual variance and a smaller mean residual error; further optimization of the model is left for the next research step.
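The SES component of the combinatorial model can be sketched in a few lines. This is a generic simple-exponential-smoothing recursion, not the authors' implementation; the smoothing constant alpha and the sample data are hypothetical.

```python
def ses_forecast(series, alpha=0.3):
    """One-step-ahead forecast by simple exponential smoothing.

    The level is an exponentially weighted average of past observations;
    alpha (hypothetical here) controls how fast old values are discounted.
    """
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

# e.g. a day-of-the-week series: visit counts on five successive Mondays
monday_visits = [120, 130, 125, 140, 135]
next_monday = ses_forecast(monday_visits)
```

Because each update is a convex combination, the forecast always lies within the range of the observed values, which makes SES robust for short horizons but unable to extrapolate a trend.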
Investigation of Co-rotation Lag in Saturn's Dayside Magnetosphere and Comparison with the Nightside
NASA Astrophysics Data System (ADS)
Smith, E. J.; Dougherty, M. K.
2016-12-01
Two previous studies of co-rotation lag concentrated on 13 identical high-inclination Cassini orbits. In the first, measurements of the magnetospheric field azimuthal component, Bϕ, were restricted to the southern hemisphere, near midnight, from the equator and perikron to a maximum latitude of 70°. Comparison with the prevailing model of the magnetosphere-ionosphere interaction yielded the conclusions that the ionospheric conductivity, Σp, was independent of ionospheric co-latitude, θi, and that the ratio of magnetospheric to planetary field angular velocities, ω/Ωs, equaled 1 - exp(-Bθi), an unexpected exponential dependence on a single parameter. Both model parameters exhibited significant temporal variations from orbit to orbit, leading to variations in the ionospheric profiles of Pedersen current, Ip. The second 13-orbit study of Bϕ extended to the northern hemisphere, where lagging fields alternated with leading and co-rotating fields. It was concluded that the difference was actually a local-time dependence, with lagging fields only occurring after midnight and the mixed rotations before midnight. Again, Σp was independent of θi and ω/Ωs = 1 - exp(-Bθi). Both studies raised the questions: How general is the exponential dependence of 1 - ω/Ωs? Is it restricted to midnight, or does it hold as well in the dayside magnetosphere? What is the cause of this dependence, which differs from the model? The analysis of Bϕ has been extended to four nearly identical north-south orbits near noon. The results and conclusions of this third study will be reported.
NASA Astrophysics Data System (ADS)
Ivashchuk, V. D.; Ernazarov, K. K.
2017-01-01
A (n + 1)-dimensional gravitational model with cosmological constant and Gauss-Bonnet term is studied. The ansatz with diagonal cosmological metrics is adopted and solutions with exponential dependence of the scale factors, a_i ~ exp(v_i t), i = 1, …, n, are considered. The stability analysis of the solutions with non-static volume factor is presented. We show that the solutions with v_1 = v_2 = v_3 = H > 0 and small enough variation of the effective gravitational constant G are stable if a certain restriction on (v_i) is obeyed. New examples of stable exponential solutions with zero variation of G in dimensions D = 1 + m + 2 with m > 2 are presented.
NASA Astrophysics Data System (ADS)
Elmegreen, Bruce G.
2016-10-01
Exponential radial profiles are ubiquitous in spiral and dwarf irregular galaxies, but the origin of this structural form is not understood. This talk will review the observations of exponential and double-exponential disks, considering both the light and the mass profiles, and the contributions from stars and gas. Several theories for this structure will also be reviewed, including primordial collapse, bar and spiral torques, clump torques, galaxy interactions, disk viscosity and other internal processes of angular momentum exchange, and stellar scattering off of clumpy structure. The only process currently known that can account for this structure in the most theoretically difficult case is stellar scattering off disk clumps. Stellar orbit models suggest that such scattering can produce exponentials even in isolated dwarf irregulars that have no bars or spirals, little shear or viscosity, and profiles that go out too far for the classical Mestel case of primordial collapse with specific angular momentum conservation.
An exactly solvable, spatial model of mutation accumulation in cancer
NASA Astrophysics Data System (ADS)
Paterson, Chay; Nowak, Martin A.; Waclaw, Bartlomiej
2016-12-01
One of the hallmarks of cancer is the accumulation of driver mutations which increase the net reproductive rate of cancer cells and allow them to spread. This process has been studied in mathematical models of well-mixed populations, and in computer simulations of three-dimensional spatial models. But the computational complexity of these more realistic, spatial models makes it difficult to simulate realistically large and clinically detectable solid tumours. Here we describe an exactly solvable mathematical model of a tumour featuring replication, mutation and local migration of cancer cells. The model predicts a quasi-exponential growth of large tumours, even if different fragments of the tumour grow sub-exponentially due to nutrient and space limitations. The model reproduces clinically observed tumour growth times using biologically plausible rates for cell birth, death, and migration. We also show that the expected number of accumulated driver mutations increases exponentially in time if the average fitness gain per driver is constant, and that it reaches a plateau if the gains decrease over time. We discuss the realism of the underlying assumptions and possible extensions of the model.
Sleep, John; Irving, Malcolm; Burton, Kevin
2005-03-15
The time course of isometric force development following photolytic release of ATP in the presence of Ca(2+) was characterized in single skinned fibres from rabbit psoas muscle. Pre-photolysis force was minimized using apyrase to remove contaminating ATP and ADP. After the initial force rise induced by ATP release, a rapid shortening ramp terminated by a step stretch to the original length was imposed, and the time course of the subsequent force redevelopment was again characterized. Force development after ATP release was accurately described by a lag phase followed by one or two exponential components. At 20 degrees C, the lag was 5.6 +/- 0.4 ms (s.e.m., n = 11), and the force rise was well fitted by a single exponential with rate constant 71 +/- 4 s(-1). Force redevelopment after shortening-restretch began from about half the plateau force level, and its single-exponential rate constant was 68 +/- 3 s(-1), very similar to that following ATP release. When fibres were activated by the addition of Ca(2+) in ATP-containing solution, force developed more slowly, and the rate constant for force redevelopment following shortening-restretch reached a maximum value of 38 +/- 4 s(-1) (n = 6) after about 6 s of activation. This lower value may be associated with progressive sarcomere disorder at elevated temperature. Force development following ATP release was much slower at 5 degrees C than at 20 degrees C. The rate constant of a single-exponential fit to the force rise was 4.3 +/- 0.4 s(-1) (n = 22), and this was again similar to that after shortening-restretch in the same activation at this temperature, 3.8 +/- 0.2 s(-1). We conclude that force development after ATP release and shortening-restretch are controlled by the same steps in the actin-myosin ATPase cycle. 
The present results and much previous work on mechanical-chemical coupling in muscle can be explained by a kinetic scheme in which force is generated by a rapid conformational change bracketed by two biochemical steps with similar rate constants -- ATP hydrolysis and the release of inorganic phosphate -- both of which combine to control the rate of force development.
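For noiseless data, a single-exponential rate constant like those reported above can be recovered from a lag-plus-exponential force rise by log-linear regression. The sketch below uses the 20 °C values from the abstract (lag 5.6 ms, k = 71 s^-1); the sampling grid and the fitting procedure are illustrative assumptions, not the authors' analysis.

```python
import math

k_true, lag, f_max = 71.0, 0.0056, 1.0   # s^-1, s; values from the abstract

def force(t):
    """Lag phase followed by a single-exponential rise to f_max."""
    if t < lag:
        return 0.0
    return f_max * (1.0 - math.exp(-k_true * (t - lag)))

# Sample the rise and regress log(1 - F/Fmax) against (t - lag):
# for a single-exponential process the slope is exactly -k.
ts = [lag + 0.001 * i for i in range(1, 30)]
xs = [t - lag for t in ts]
ys = [math.log(1.0 - force(t) / f_max) for t in ts]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
k_est = -slope   # recovers 71 s^-1 on noiseless data
```

On real records with noise, a nonlinear least-squares fit of the full lag-plus-exponential form would be used instead, since the log transform amplifies noise near the plateau.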
Kinetic market models with single commodity having price fluctuations
NASA Astrophysics Data System (ADS)
Chatterjee, A.; Chakrabarti, B. K.
2006-12-01
We study here numerically the behavior of an ideal gas like model of markets having only one non-consumable commodity. We investigate the behavior of the steady-state distributions of money, commodity and total wealth, as the dynamics of trading or exchange of money and commodity proceeds, with local (in time) fluctuations in the price of the commodity. These distributions are studied in markets with agents having uniform and random saving factors. The self-organizing features in money distribution are similar to the cases without any commodity (or with consumable commodities), while the commodity distribution shows an exponential decay. The wealth distribution shows interesting behavior: gamma like distribution for uniform saving propensity and has the same power-law tail, as that of the money distribution, for a market with agents having random saving propensity.
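The exchange rule at the heart of this class of models can be sketched as follows. This is the simpler money-only case with a uniform saving propensity λ (the commodity and price fluctuations of the paper are omitted), and all parameter values are illustrative.

```python
import random

def trade(mi, mj, lam, rng):
    """Kinetic exchange with saving propensity lam: each agent keeps a
    fraction lam of its money; the rest is pooled and randomly split."""
    eps = rng.random()
    pool = (1.0 - lam) * (mi + mj)
    return lam * mi + eps * pool, lam * mj + (1.0 - eps) * pool

rng = random.Random(42)
n_agents, lam = 500, 0.5
money = [1.0] * n_agents
for _ in range(200000):
    i, j = rng.randrange(n_agents), rng.randrange(n_agents)
    if i != j:
        money[i], money[j] = trade(money[i], money[j], lam, rng)
# Every trade conserves money, so total wealth is fixed; the steady-state
# money distribution for uniform lam > 0 is gamma-like rather than
# exponential.
```

The conservation law is what makes these "ideal gas" market models tractable: only the repartition rule, not the total, evolves.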
Exponential series approaches for nonparametric graphical models
NASA Astrophysics Data System (ADS)
Janofsky, Eric
Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. 
We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.
Palombo, Marco; Gabrielli, Andrea; De Santis, Silvia; Capuani, Silvia
2012-03-01
In this paper, we investigate the image contrast that characterizes anomalous and non-gaussian diffusion images obtained using the stretched exponential model. This model is based on the introduction of the γ stretched parameter, which quantifies deviation from the mono-exponential decay of diffusion signal as a function of the b-value. To date, the biophysical substrate underpinning the contrast observed in γ maps, in other words, the biophysical interpretation of the γ parameter (or the fractional order derivative in space, β parameter) is still not fully understood, although it has already been applied to investigate both animal models and human brain. Due to the ability of γ maps to reflect additional microstructural information which cannot be obtained using diffusion procedures based on gaussian diffusion, some authors propose this parameter as a measure of diffusion heterogeneity or water compartmentalization in biological tissues. Based on our recent work we suggest here that the coupling between internal and diffusion gradients provide pseudo-superdiffusion effects which are quantified by the stretching exponential parameter γ. This means that the image contrast of Mγ maps reflects local magnetic susceptibility differences (Δχ(m)), thus highlighting better than T(2)(∗) contrast the interface between compartments characterized by Δχ(m). Thanks to this characteristic, Mγ imaging may represent an interesting tool to develop contrast-enhanced MRI for molecular imaging. The spectroscopic and imaging experiments (performed in controlled micro-beads dispersion) that are reported here, strongly suggest internal gradients, and as a consequence Δχ(m), to be an important factor in fully understanding the source of contrast in anomalous diffusion methods that are based on a stretched exponential model analysis of diffusion data obtained at varying gradient strengths g. Copyright © 2012 Elsevier Inc. All rights reserved.
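The stretched-exponential signal model referred to above is commonly written S(b) = S0 · exp(-(b·D)^γ). On noiseless data the stretching parameter γ can be read off as the slope of a double-log plot; the parameter values below are hypothetical, chosen only to illustrate the functional form.

```python
import math

S0, D, gamma = 1.0, 1.0e-3, 0.7   # hypothetical signal, diffusivity, gamma

def signal(b):
    """Stretched-exponential diffusion decay as a function of b-value."""
    return S0 * math.exp(-((b * D) ** gamma))

# Since -log(S/S0) = (b*D)**gamma, the slope of log(-log(S/S0))
# versus log(b*D) is gamma itself.
b1, b2 = 500.0, 2000.0
y1 = math.log(-math.log(signal(b1) / S0))
y2 = math.log(-math.log(signal(b2) / S0))
gamma_est = (y2 - y1) / (math.log(b2 * D) - math.log(b1 * D))
```

γ = 1 recovers the mono-exponential decay; γ < 1 quantifies the deviation from it that the γ maps in the paper turn into image contrast.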
A note on free and forced Rossby wave solutions: The case of a straight coast and a channel
NASA Astrophysics Data System (ADS)
Graef, Federico
2017-03-01
The free Rossby wave (RW) solutions in an ocean with a straight coast, when the offshore wavenumbers of the incident (l1) and reflected (l2) waves are equal or complex, are discussed. If l1 = l2 the energy streams along the coast and a uniformly valid solution cannot be found; if l1,2 are complex the solution is the sum of an exponentially decaying and an exponentially growing (away from the coast) Rossby wave. The channel does not admit these solutions as free modes. If the wavenumber vectors of the RWs are perpendicular to the coast, the boundary condition of no normal flow is trivially satisfied and the value of the streamfunction does not need to vanish at the coast. A solution that satisfies Kelvin's theorem of time-independent circulation at the coast is proposed. The forced RW solutions when the ocean's forcing is a single Fourier component are studied. If the forcing is resonant, i.e. a free Rossby wave (RW), the linear response will depend critically on whether the wave carries energy perpendicular to the channel or not. In the first case, the amplitude of the response is linear in the direction normal to the channel, y, and in the second it has a parabolic profile in y. Examples of these solutions are shown for channels with parameters resembling the Mozambique Channel, the Tasman Sea, the Denmark Strait and the English Channel. The solutions for the single coast are unbounded, except when the forcing is a RW trapped against the coast. If the forcing is non-resonant, exponentially decaying or trapped RWs could be excited at the coast, and both the exponentially "decaying" and exponentially "growing" RWs could be excited in the channel.
Low temperature time resolved photoluminescence in ordered and disordered Cu2ZnSnS4 single crystals
NASA Astrophysics Data System (ADS)
Raadik, Taavi; Krustok, Jüri; Kauk-Kuusik, M.; Timmo, K.; Grossberg, M.; Ernits, K.; Bleuse, J.
2017-03-01
In this work we performed time-resolved micro-photoluminescence (TRPL) studies of Cu2ZnSnS4 (CZTS) single crystals grown in molten KI salt. The order/disorder degree of CZTS was varied by the thermal post-treatment temperature. Photoluminescence spectra measured at T = 8 K showed an asymmetric band with a peak position of 1.33 eV and 1.27 eV for partially ordered and disordered structures, respectively. Thermal activation energies were found to be ET(PO) = 65±9 meV for the partially ordered and ET(PD) = 27±4 meV for the partially disordered structure. These low activation energies point to the defect cluster recombination model for both partially ordered and disordered structures. TRPL was measured for both crystals, and the decay curves were fitted with a stretched exponential function in order to describe the charge carriers' recombination dynamics at low temperature.
Bias-induced modulation of ultrafast carrier dynamics in metallic single-walled carbon nanotubes
NASA Astrophysics Data System (ADS)
Maekawa, Keisuke; Yanagi, Kazuhiro; Minami, Yasuo; Kitajima, Masahiro; Katayama, Ikufumi; Takeda, Jun
2018-02-01
The gate bias dependence of excited-state relaxation dynamics in metallic single-walled carbon nanotubes (MCNTs) was investigated using pump-probe transient absorption spectroscopy coupled with electrochemical doping through an ionic liquid. The transient transmittance decayed exponentially with the pump-probe delay time, whose value could be tuned via the Fermi-level modulation of Dirac electrons under a bias voltage. The obtained relaxation time was the shortest when the Fermi level was at the Dirac point of the MCNTs, and exhibited a U-shaped dependence on the bias voltage. Because optical dipole transitions between the Dirac bands are forbidden in MCNTs, the observed dynamics were attributed to carrier relaxation from the E11 band to the Dirac band. Using a model that considers the suppression of electron-electron scattering (impact ionization) due to Pauli blocking, we could qualitatively explain the obtained bias dependence of the relaxation time.
Growth and mortality of larval Myctophum affine (Myctophidae, Teleostei).
Namiki, C; Katsuragawa, M; Zani-Teixeira, M L
2015-04-01
The growth and mortality rates of Myctophum affine larvae were analysed based on samples collected during the austral summer and winter of 2002 from south-eastern Brazilian waters. The larvae ranged in size from 2·75 to 14·00 mm standard length (L(S)). Daily increment counts from 82 sagittal otoliths showed that the age of M. affine ranged from 2 to 28 days. Three models were applied to estimate the growth rate: linear regression, exponential model and Laird-Gompertz model. The exponential model best fitted the data, and L(0) values from exponential and Laird-Gompertz models were close to the smallest larva reported in the literature (c. 2·5 mm L(S)). The average growth rate (0·33 mm day(-1)) was intermediate among lanternfishes. The mortality rate (12%) during the larval period was below average compared with other marine fish species but similar to some epipelagic fishes that occur in the area. © 2015 The Fisheries Society of the British Isles.
1/f oscillations in a model of moth populations oriented by diffusive pheromones
NASA Astrophysics Data System (ADS)
Barbosa, L. A.; Martins, M. L.; Lima, E. R.
2005-01-01
An individual-based model for the population dynamics of Spodoptera frugiperda in a homogeneous environment is proposed. The model involves moths feeding on plants, mating through an anemotaxis search (i.e., oriented by odor dispersed in a current of air), and dying due to resource competition or at a maximum age. As observed in the laboratory, the females release pheromones at exponentially distributed time intervals, and it is assumed that the ranges of the male flights follow a power-law distribution. Computer simulations of the model reveal the central role of the anemotaxis search for the persistence of the moth population. Such stationary populations are exponentially distributed in age, exhibit random temporal fluctuations with a 1/f spectrum, and self-organize in disordered spatial patterns with long-range correlations. In addition, the model results demonstrate that pest control through pheromone mass trapping is effective only if the amounts of pheromone released by the traps decay much more slowly than the exponential distribution for calling females.
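The two stochastic ingredients named above, exponentially distributed calling intervals and power-law male flight ranges, can both be sampled by inverse transform. The rate and exponent values below are hypothetical, not taken from the paper.

```python
import math
import random

rng = random.Random(7)

def calling_interval(rate):
    """Exponentially distributed time between pheromone releases."""
    return -math.log(1.0 - rng.random()) / rate

def flight_range(xmin, alpha):
    """Pareto (power-law, pdf ~ x**-alpha for x >= xmin) flight range."""
    return xmin * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0))

intervals = [calling_interval(2.0) for _ in range(200000)]
flights = [flight_range(1.0, 2.5) for _ in range(1000)]
mean_interval = sum(intervals) / len(intervals)   # ~ 1/rate = 0.5
```

The heavy tail of the flight-range distribution is what lets occasional long male flights connect otherwise isolated patches of calling females in the simulations.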
Performance of time-series methods in forecasting the demand for red blood cell transfusion.
Pereira, Arturo
2004-05-01
Planning future blood collection efforts must be based on adequate forecasts of transfusion demand. In this study, univariate time-series methods were investigated for their performance in forecasting the monthly demand for RBCs at one tertiary-care university hospital. Three time-series methods were investigated: autoregressive integrated moving average (ARIMA), the Holt-Winters family of exponential smoothing models, and one neural-network-based method. The time series consisted of the monthly demand for RBCs from January 1988 to December 2002 and was divided into two segments: the older one was used to fit or train the models, and the younger to test for the accuracy of predictions. Performance was compared across forecasting methods by calculating goodness-of-fit statistics, the percentage of months in which forecast-based supply would have met the RBC demand (coverage rate), and the outdate rate. The RBC transfusion series was best fitted by a seasonal ARIMA(0,1,1)(0,1,1)_12 model. Over 1-year time horizons, forecasts generated by ARIMA or exponential smoothing lay within the +/- 10 percent interval of the real RBC demand in 79 percent of months (62% in the case of neural networks). The coverage rate for the three methods was 89, 91, and 86 percent, respectively. Over 2-year time horizons, exponential smoothing largely outperformed the other methods. Predictions by exponential smoothing lay within the +/- 10 percent interval of real values in 75 percent of the 24 forecasted months, and the coverage rate was 87 percent. Over 1-year time horizons, predictions of RBC demand generated by ARIMA or exponential smoothing are accurate enough to be of help in the planning of blood collection efforts. For longer time horizons, exponential smoothing outperforms the other forecasting methods.
The Use of Modeling Approach for Teaching Exponential Functions
NASA Astrophysics Data System (ADS)
Nunes, L. F.; Prates, D. B.; da Silva, J. M.
2017-12-01
This work presents a discussion related to the teaching and learning of mathematical content related to the study of exponential functions in a group of freshman students enrolled in the first semester of the Science and Technology Bachelor's program (STB) of the Federal University of Jequitinhonha and Mucuri Valleys (UFVJM). As a contextualization tool strongly mentioned in the literature, the modelling approach was used as an educational teaching tool to produce contextualization in the teaching-learning process of exponential functions for these students. In this sense, some simple models built with the GeoGebra software were used and, to obtain a qualitative evaluation of the investigation and the results, Didactic Engineering was used as the research methodology. As a consequence of this detailed research, some interesting details about the teaching and learning process were observed, discussed and described.
Klein, F.W.; Wright, Tim
2008-01-01
The remarkable catalog of Hawaiian earthquakes going back to the 1820s is based on missionary diaries, newspaper accounts, and instrumental records and spans the great M7.9 Kau earthquake of April 1868 and its aftershock sequence. The earthquake record since 1868 defines a smooth curve complete to M5.2 of the declining rate into the 21st century, after five short volcanic swarms are removed. A single aftershock curve fits the earthquake record, even with numerous M6 and 7 main shocks and eruptions. The timing of some moderate earthquakes may be controlled by magmatic stresses, but their overall long-term rate reflects one of aftershocks of the Kau earthquake. The 1868 earthquake is, therefore, the largest and most controlling stress event in the 19th and 20th centuries. We fit both the modified Omori (power law) and stretched exponential (SE) functions to the earthquakes. We found that the modified Omori law is a good fit to the M ??? 5.2 earthquake rate for the first 10 years or so and the more rapidly declining SE function fits better thereafter, as supported by three statistical tests. The switch to exponential decay suggests that a possible change in aftershock physics may occur from rate and state fault friction, with no change in the stress rate, to viscoelastic stress relaxation. The 61-year exponential decay constant is at the upper end of the range of geodetic relaxation times seen after other global earthquakes. Modeling deformation in Hawaii is beyond the scope of this paper, but a simple interpretation of the decay suggests an effective viscosity of 1019 to 1020 Pa s pertains in the volcanic spreading of Hawaii's flanks. The rapid decline in earthquake rate poses questions for seismic hazard estimates in an area that is cited as one of the most hazardous in the United States.
2016-01-01
Muscle contractions are generated by cyclical interactions of myosin heads with actin filaments to form the actomyosin complex. To simulate actomyosin complex stable states, mathematical models usually define an energy landscape with a corresponding number of wells. The jumps between these wells are defined through rate constants. Almost all previous models assign these wells an infinite sharpness by imposing a relatively simple expression for the detailed balance, i.e., the ratio of the rate constants depends exponentially on the sole myosin elastic energy. Physically, this assumption corresponds to neglecting thermal fluctuations in the actomyosin complex stable states. By comparing three mathematical models, we examine the extent to which this hypothesis affects muscle model predictions at the single cross-bridge, single fiber, and organ levels in a ceteris paribus analysis. We show that including fluctuations in stable states allows the lever arm of the myosin to easily and dynamically explore all possible minima in the energy landscape, generating several backward and forward jumps between states during the lifetime of the actomyosin complex, whereas the infinitely sharp minima case is characterized by fewer jumps between states. Moreover, the analysis predicts that thermal fluctuations enable a more efficient contraction mechanism, in which a higher force is sustained by fewer attached cross-bridges. PMID:27626630
Quantum matter bounce with a dark energy expanding phase
NASA Astrophysics Data System (ADS)
Colin, Samuel; Pinto-Neto, Nelson
2017-09-01
Analyzing quantum cosmological scenarios containing one scalar field with exponential potential, we have obtained a universe model which realizes a classical dust contraction from very large scales, the initial repeller of the model, and moves to a stiff matter contraction near the singularity, which is avoided due to a quantum bounce. The universe is then launched in a stiff matter expanding phase, which then moves to a dark energy era, finally returning to the dust expanding phase, the final attractor of the model. Hence, one has obtained a nonsingular cosmological model where a single scalar field can describe both the matter contracting phase of a bouncing model, necessary to give an almost scale invariant spectrum of scalar cosmological perturbations, and a transient expanding dark energy phase. As the universe is necessarily dust dominated in the far past, usual adiabatic vacuum initial conditions can be easily imposed in this era, avoiding the usual issues appearing when dark energy is considered in bouncing models.
SU-E-T-259: Particle Swarm Optimization in Radial Dose Function Fitting for a Novel Iodine-125 Seed
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, X; Duan, J; Popple, R
2014-06-01
Purpose: To determine the coefficients of bi- and tri-exponential functions for the best fit of radial dose functions of the new iodine brachytherapy source: Iodine-125 Seed AgX-100. Methods: The particle swarm optimization (PSO) method was used to search for the coefficients of the bi- and tri-exponential functions that yield the best fit to data published for a few selected radial distances from the source. The coefficients were encoded into particles, and these particles move through the search space by following their local and global best-known positions. In each generation, particles were evaluated through their fitness function and their positions were changed through their velocities. This procedure was repeated until the convergence criterion was met or the maximum generation was reached. All best particles were found in less than 1,500 generations. Results: For the I-125 seed AgX-100 considered as a point source, the maximum deviation from the published data is less than 2.9% for the bi-exponential fitting function and 0.2% for the tri-exponential fitting function. For its line source, the maximum deviation is less than 1.1% for the bi-exponential fitting function and 0.08% for the tri-exponential fitting function. Conclusion: PSO is a powerful method for searching coefficients of bi-exponential and tri-exponential fitting functions. The bi- and tri-exponential models of the Iodine-125 seed AgX-100 point and line sources obtained with PSO optimization provide accurate analytical forms of the radial dose function. The tri-exponential fitting function is more accurate than the bi-exponential function.
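A generic PSO fit of a bi-exponential radial dose function, along the lines described, can be sketched as below. The inertia and acceleration coefficients, the search box, the swarm size, and the synthetic data are all hypothetical choices; this is not the authors' implementation.

```python
import math
import random

def biexp(r, p):
    """Bi-exponential radial dose form: a1*exp(-l1*r) + a2*exp(-l2*r)."""
    a1, l1, a2, l2 = p
    return a1 * math.exp(-l1 * r) + a2 * math.exp(-l2 * r)

def pso_fit(rs, gs, n_particles=30, iters=300, rng=None):
    """Fit biexp coefficients to (rs, gs) by particle swarm optimization."""
    rng = rng or random.Random(0)
    dim = 4
    def cost(p):
        return sum((biexp(r, p) - g) ** 2 for r, g in zip(rs, gs))
    pos = [[rng.uniform(-2.0, 2.0) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]            # personal best positions
    pbest_c = [cost(p) for p in pos]
    g_i = min(range(n_particles), key=lambda i: pbest_c[i])
    gbest, gbest_c = pbest[g_i][:], pbest_c[g_i]
    w, c1, c2 = 0.7, 1.5, 1.5              # inertia, acceleration (hypothetical)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                # keep particles inside the search box
                pos[i][d] = min(2.0, max(-2.0, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_c[i]:             # update personal best
                pbest[i], pbest_c[i] = pos[i][:], c
                if c < gbest_c:            # and the global best
                    gbest, gbest_c = pos[i][:], c
    return gbest, gbest_c

# Synthetic radial dose data generated from known coefficients.
p_true = (0.6, 0.5, 0.4, 1.5)
rs = [0.5 * k for k in range(1, 21)]       # hypothetical radial distances
gs = [biexp(r, p_true) for r in rs]
params, resid = pso_fit(rs, gs, rng=random.Random(3))
```

Because the swarm only ever replaces its personal and global bests with lower-cost positions, the returned residual is monotonically non-increasing in the iteration count, which is the property the generation-count convergence criterion in the paper relies on.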
Distribution of fixed beneficial mutations and the rate of adaptation in asexual populations
Good, Benjamin H.; Rouzine, Igor M.; Balick, Daniel J.; Hallatschek, Oskar; Desai, Michael M.
2012-01-01
When large asexual populations adapt, competition between simultaneously segregating mutations slows the rate of adaptation and restricts the set of mutations that eventually fix. This phenomenon of interference arises from competition between mutations of different strengths as well as competition between mutations that arise on different fitness backgrounds. Previous work has explored each of these effects in isolation, but the way they combine to influence the dynamics of adaptation remains largely unknown. Here, we describe a theoretical model to treat both aspects of interference in large populations. We calculate the rate of adaptation and the distribution of fixed mutational effects accumulated by the population. We focus particular attention on the case when the effects of beneficial mutations are exponentially distributed, as well as on a more general class of exponential-like distributions. In both cases, we show that the rate of adaptation and the influence of genetic background on the fixation of new mutants is equivalent to an effective model with a single selection coefficient and rescaled mutation rate, and we explicitly calculate these effective parameters. We find that the effective selection coefficient exactly coincides with the most common fixed mutational effect. This equivalence leads to an intuitive picture of the relative importance of different types of interference effects, which can shift dramatically as a function of the population size, mutation rate, and the underlying distribution of fitness effects. PMID:22371564
Duchêne, Gaëtan; Peeters, Frank; Peeters, André; Duprez, Thierry
2017-08-01
To compare the sensitivity and early temporal changes of diffusion parameters obtained from diffusion tensor imaging (DTI), diffusional kurtosis imaging (DKI), q-space analysis (QSA) and bi-exponential modelling in hyperacute stroke patients. A single investigational acquisition allowing the four diffusion analyses was performed on seven hyperacute stroke patients with a 3T system. The percentage changes between ipsi- and contralateral regions were compared at admission and 24 h later. Two of the seven patients were imaged every 6 h during this period. Kurtoses from both DKI and QSA were the most sensitive of the tested diffusion parameters in the few hours following ischemia. An early increase-maximum-decrease pattern of evolution was highlighted during the 24-h period for all parameters proportional to diffusion coefficients. A similar pattern was observed for both kurtoses in only one of the two patients. Our comparison was performed using identical diffusion encoding timings and on patients in the same stage of their condition. Although preliminary, our findings confirm those of previous studies that showed enhanced sensitivity of kurtosis. A fine time mapping of diffusion metrics in hyperacute stroke patients was presented, which advocates for further investigation in larger animal or human cohorts.
Event-driven simulations of nonlinear integrate-and-fire neurons.
Tonnelier, Arnaud; Belmabrouk, Hana; Martinez, Dominique
2007-12-01
Event-driven strategies have been used to simulate spiking neural networks exactly. Previous work is limited to linear integrate-and-fire neurons. In this note, we extend event-driven schemes to a class of nonlinear integrate-and-fire models. Results are presented for the quadratic integrate-and-fire model with instantaneous or exponential synaptic currents. Extensions to conductance-based currents and exponential integrate-and-fire neurons are discussed.
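The event-driven idea for the quadratic integrate-and-fire model can be made concrete: between events the membrane potential has a closed form, so the next spike time is computed exactly rather than by stepping the ODE. A sketch assuming the normalized form dV/dt = V^2 + I with an instantaneous synapse; parameters and the jump size are illustrative.

```python
import math

# Event-driven update for a quadratic integrate-and-fire neuron,
# dV/dt = V**2 + I with I > 0 and reset to V_RESET after a spike.
# Between events V(t) = sqrt(I)*tan(sqrt(I)*t + atan(V0/sqrt(I))), so the
# next spike time (V -> infinity) is available in closed form.

I = 1.0
V_RESET = -1.0
SQI = math.sqrt(I)

def next_spike_time(v0):
    """Exact time until V diverges (a spike), starting from v0."""
    return (math.pi / 2.0 - math.atan(v0 / SQI)) / SQI

def state_at(v0, t):
    """Exact membrane potential a time t after leaving v0 (t before spiking)."""
    return SQI * math.tan(SQI * t + math.atan(v0 / SQI))

def first_spike_with_input(v0, w, t_in):
    """First spike time given one instantaneous synaptic kick w at t_in."""
    t_free = next_spike_time(v0)
    if t_in >= t_free:           # the neuron fires before the input arrives
        return t_free
    v = state_at(v0, t_in) + w   # V jumps by w at the synaptic event
    return t_in + next_spike_time(v)

t_unperturbed = next_spike_time(V_RESET)
t_kicked = first_spike_with_input(V_RESET, 0.5, 0.5)
```

An excitatory kick advances the spike, and the simulation only ever touches event times, which is the efficiency gain of the event-driven scheme. Exponential synaptic currents require a root-finding step instead of a closed form.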
A non-Gaussian option pricing model based on Kaniadakis exponential deformation
NASA Astrophysics Data System (ADS)
Moretto, Enrico; Pasquali, Sara; Trivellato, Barbara
2017-09-01
A way to make financial models effective is by letting them represent the so-called "fat tails", i.e., extreme changes in stock prices that are regarded as almost impossible under the standard Gaussian distribution. In this article, the Kaniadakis deformation of the usual exponential function is used to define a random noise source in the dynamics of price processes capable of capturing such real market phenomena.
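The deformation in question has a simple closed form. A sketch (the function name is ours): exp_k(x) = (sqrt(1 + k^2 x^2) + k x)^(1/k), which reduces to the ordinary exponential as k -> 0 and decays as a power law for large |x|, producing the fat tails the article exploits.

```python
import math

# The Kaniadakis kappa-deformed exponential,
#   exp_k(x) = (sqrt(1 + k^2 x^2) + k*x)**(1/k),
# reduces to exp(x) as k -> 0 and has power-law (fat) tails for k != 0.

def kaniadakis_exp(x, k):
    if k == 0.0:
        return math.exp(x)
    return (math.sqrt(1.0 + k * k * x * x) + k * x) ** (1.0 / k)
```

For k = 0.5, the deformed exponential at x = -5 is orders of magnitude larger than exp(-5): the left tail is fat, which is exactly the property used to weight extreme price moves more realistically.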
NASA Astrophysics Data System (ADS)
Fox, J. B.; Thayer, D. W.; Phillips, J. G.
The effect of low-dose γ-irradiation on the thiamin content of ground pork was studied in the range of 0-14 kGy at 2°C and at radiation doses from 0.5 to 7 kGy at temperatures of -20, -10, 0, 10 and 20°C. The detailed study at 2°C showed that the loss of thiamin was exponential down to 0 kGy. An exponential expression was derived for the effect of radiation dose and temperature of irradiation on thiamin loss, and compared with a previously derived general linear expression. Both models were accurate depictions of the data, but the exponential expression showed a significant decrease in the rate of loss between 0 and -10°C. This is the range over which water in meat freezes, the decrease being due to the immobilization of reactive radiolytic products of water in ice crystals.
NASA Technical Reports Server (NTRS)
Conkin, J.; Gernhardt, M. L.; Powell, M. R.
2004-01-01
Not enough is known about the increased risk of hypobaric decompression sickness (DCS) and production of venous (VGE) and arterial (AGE) gas emboli following an air break in an otherwise normal 100% resting oxygen (O2) prebreathe (PB), and certainly a break in PB when exercise is used to accelerate nitrogen (N2) elimination from the tissues. Current Aeromedical Flight Rules at the Johnson Space Center about additional PB payback times are untested, possibly too conservative, and therefore not optimized for operational use. A 10 min air break at 90 min into a 120 min PB that includes initial dual-cycle ergometry for 10 min will show a measurable increase in the risk of DCS and VGE after ascent to 4.3 psia compared to a 10 min break at 15 min into the PB, or when there is no break in PB. Data collection with humans begins in 2005, but here we first evaluate the hypothesis using three models of tissue N2 kinetics: Model I is a simple single half-time compartment exponential model, Model II is a three compartment half-time exponential model, and Model III is a variable half-time compartment model where the percentage of maximum O2 consumption for the subject during dual-cycle ergometry exercise defines the half-time compartment. Model I with large rate constants to simulate an exercise effect always showed a late break in PB had the greatest consequence. Model II showed an early break had the greatest consequence. Model III showed there was no difference between early or late break in exercise PB. Only one of these outcomes will be observed when humans are tested. Our results will favor one of these models, and so advance our understanding of tissue N2 kinetics, and of altitude DCS after an air break in PB.
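The compartment models being compared can be illustrated with a small exponential washout sketch, in the spirit of Model II above. This is an assumption-laden sketch: the half-times, inspired pressures, and schedule are illustrative, not the study's operational values.

```python
import math

# Exponential tissue N2 washout during an O2 prebreathe (PB) with an air
# break. Each compartment relaxes toward the inspired N2 partial pressure
# with its own half-time; all numbers here are illustrative.

PN2_AIR = 11.6   # psia, approximate inspired N2 when breathing air
PN2_O2 = 0.0     # breathing 100% O2

def washout(p0, half_times, schedule):
    """schedule: list of (duration_min, inspired_pN2); returns tissue tensions."""
    tensions = [p0] * len(half_times)
    for duration, pa in schedule:
        for i, th in enumerate(half_times):
            k = math.log(2.0) / th
            tensions[i] = pa + (tensions[i] - pa) * math.exp(-k * duration)
    return tensions

HALF_TIMES = [20.0, 60.0, 180.0]  # minutes, illustrative compartments

# 120 min uninterrupted O2 prebreathe:
no_break = washout(PN2_AIR, HALF_TIMES, [(120.0, PN2_O2)])

# 10 min air break at 90 min into the prebreathe, then O2 to 120 min total:
late_break = washout(PN2_AIR, HALF_TIMES,
                     [(90.0, PN2_O2), (10.0, PN2_AIR), (20.0, PN2_O2)])
```

In this simple picture every compartment ends the broken prebreathe with a higher N2 tension than the unbroken one; the competing Models I-III above differ in how exercise and compartment structure modify the size and timing of that penalty.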
Zhuo, Lin; Tao, Hong; Wei, Hong; Chengzhen, Wu
2016-01-01
We tried to establish compatible carbon content models of individual trees for a Chinese fir (Cunninghamia lanceolata (Lamb.) Hook.) plantation in Fujian province in southeast China. In general, compatibility requires that the sum of the components equal the whole tree, meaning that the sum of the percentages calculated from the component equations should equal 100%. Thus, we used multiple approaches to simulate carbon content in boles, branches, foliage leaves, roots and whole individual trees. The approaches included (i) single optimal fitting (SOF), (ii) nonlinear adjustment in proportion (NAP) and (iii) nonlinear seemingly unrelated regression (NSUR). These approaches were used in combination with variables based on diameter at breast height (D) and tree height (H), such as D, D²H, DH and D&H (where D&H denotes two separate variables in a bivariate model). Power, exponential and polynomial functions were tested, and a new general function model was proposed in this study. Weighted least squares regression models were employed to eliminate heteroscedasticity. Model performance was evaluated using mean residuals, residual variance, mean square error and the determination coefficient. The results indicated that models with two-dimensional variables (DH, D²H and D&H) were always superior to those with a single variable (D). The D&H variable combination was found to be the most useful predictor. Of all the approaches, SOF could establish a single optimal model separately, but its estimates deviated because of incompatibility, while NAP and NSUR ensured prediction compatibility. We also found that the new general model had better accuracy than the others. In conclusion, we recommend that the new general model be used to estimate carbon content for Chinese fir, and that it be considered for other vegetation types as well. PMID:26982054
Estimation of renal allograft half-life: fact or fiction?
Azancot, M Antonieta; Cantarell, Carme; Perelló, Manel; Torres, Irina B; Serón, Daniel; Moreso, Francesc; Arias, Manuel; Campistol, Josep M; Curto, Jordi; Hernandez, Domingo; Morales, José M; Sanchez-Fructuoso, Ana; Abraira, Victor
2011-09-01
Renal allograft half-life time (t½) is the most straightforward representation of long-term graft survival. Since some statistical models overestimate this parameter, we compare different approaches to evaluate t½. Patients with a 1-year functioning graft transplanted in Spain during 1990, 1994, 1998 and 2002 were included. Exponential, Weibull, gamma, lognormal and log-logistic models censoring the last year of follow-up were evaluated. The goodness of fit of these models was evaluated according to the Cox-Snell residuals and the Akaike's information criterion (AIC) was employed to compare these models. We included 4842 patients. Real t½ in 1990 was 14.2 years. Median t½ (95% confidence interval) in 1990 and 2002 was 15.8 (14.2-17.5) versus 52.6 (35.6-69.5) according to the exponential model (P < 0.001). No differences between 1990 and 2002 were observed when t½ was estimated with the other models. In 1990 and 2002, t½ was 14.0 (13.1-15.0) versus 18.0 (13.7-22.4) according to Weibull, 15.5 (13.9-17.1) versus 19.1 (15.6-22.6) according to gamma, 14.4 (13.3-15.6) versus 18.3 (14.2-22.3) according to the log-logistic and 15.2 (13.8-16.6) versus 18.8 (15.3-22.3) according to the lognormal models. The AIC confirmed that the exponential model had the lowest goodness of fit, while the other models yielded a similar result. The exponential model overestimates t½, especially in cohorts of patients with a short follow-up, while any of the other studied models allow a better estimation even in cohorts with short follow-up.
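Under the exponential model the half-life estimate has a closed form: t½ = ln(2)/λ, with the maximum-likelihood hazard λ = events / total follow-up time, censored grafts contributing time at risk but no event. A sketch on synthetic exponential data (rates, cohort size, and follow-up are illustrative); when the true hazard rises with time, as for real grafts, this same estimator is what produces the overestimated t½ discussed above.

```python
import math
import random

# Exponential (constant-hazard) half-life estimate from censored follow-up.

def exponential_half_life(times, events):
    """times: follow-up in years; events: 1 = graft loss, 0 = censored."""
    lam = sum(events) / sum(times)
    return math.log(2.0) / lam

rng = random.Random(0)
TRUE_LAM = math.log(2.0) / 14.0        # true t_half of 14 years
FOLLOW_UP = 5.0                        # short follow-up window, years

times, events = [], []
for _ in range(5000):
    t = rng.expovariate(TRUE_LAM)
    if t < FOLLOW_UP:
        times.append(t); events.append(1)
    else:
        times.append(FOLLOW_UP); events.append(0)   # censored at follow-up

t_half = exponential_half_life(times, events)
```

When the data really are exponential, the estimate is accurate even with heavy censoring; the bias the abstract documents arises because real graft loss is not constant-hazard, so Weibull, gamma, lognormal, or log-logistic fits behave better on short follow-up.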
Resistance distribution in the hopping percolation model.
Strelniker, Yakov M; Havlin, Shlomo; Berkovits, Richard; Frydman, Aviad
2005-07-01
We study the distribution function P(ρ) of the effective resistance ρ in two- and three-dimensional random resistor networks of linear size L in the hopping percolation model. In this model each bond has a conductivity σ ∝ exp(-κr), where κ is a measure of disorder and r is a random number, 0 ≤ r ≤ 1. We find that in both the usual strong-disorder regime L/κ^ν > 1 (not sensitive to removal of any single bond) and the extreme-disorder regime L/κ^ν < 1 (very sensitive to such a removal) the distribution depends only on L/κ^ν and can be well approximated by a log-normal function with dispersion b κ^ν / L, where b is a coefficient which depends on the type of lattice and ν is the correlation critical exponent.
Effect of tropospheric aerosols upon atmospheric infrared cooling rates
NASA Technical Reports Server (NTRS)
Harshvardhan, MR.; Cess, R. D.
1978-01-01
The effect of tropospheric aerosols on atmospheric infrared cooling rates is investigated by the use of recent models of infrared gaseous absorption. A radiative model of the atmosphere that incorporates dust as an absorber and scatterer of infrared radiation is constructed by employing the exponential kernel approximation to the radiative transfer equation. Scattering effects are represented in terms of a single-scattering albedo and an asymmetry factor. The model is applied to estimate the effect of an aerosol layer made of spherical quartz particles on the infrared cooling rate. Calculations performed for a reference wavelength of 0.55 microns show an increased greenhouse effect, where the net upward flux at the surface is reduced by 10% owing to the strongly enhanced downward emission. There is a substantial increase in the cooling rate near the surface, but the mean cooling rate change throughout the lower troposphere was only 10%.
Quasiclassical treatment of the Auger effect in slow ion-atom collisions
NASA Astrophysics Data System (ADS)
Frémont, F.
2017-09-01
A quasiclassical model based on the resolution of Hamilton's equations of motion is used to obtain evidence for Auger electron emission following double-electron capture in 150-keV Ne10+ + He collisions. Electron-electron interaction is taken into account during the collision by using a pure Coulombic potential. To ensure that the helium target is stable before the collision, phenomenological potentials for the electron-nucleus interactions that simulate the Heisenberg principle are included in addition to the Coulombic potential. First, single- and double-electron captures are determined and compared with previous experiments and theories. Then, the time evolution is calculated for autoionizing and nonautoionizing double capture. In contrast with single capture, the number of electrons originating from autoionization slowly increases with integration time. A fit of the calculated cross sections by means of an exponential function indicates that the average lifetime is 4.4 × 10^-3 a.u., in very good agreement with the average lifetime deduced from experiments and a classical model introduced to calculate individual angular momentum distributions. The present calculation demonstrates the ability of classical models to treat the Auger effect, which is a purely quantum effect.
Kartalis, Nikolaos; Manikis, Georgios C; Loizou, Louiza; Albiin, Nils; Zöllner, Frank G; Del Chiaro, Marco; Marias, Kostas; Papanikolaou, Nikolaos
2016-01-01
To compare two Gaussian diffusion-weighted MRI (DWI) models, mono-exponential and bi-exponential, with the non-Gaussian kurtosis model in patients with pancreatic ductal adenocarcinoma. After written informed consent, 15 consecutive patients with pancreatic ductal adenocarcinoma underwent free-breathing DWI (1.5T, b-values: 0, 50, 150, 200, 300, 600 and 1000 s/mm²). Mean values of the DWI-derived metrics ADC, D, D*, f, K and D_K were calculated from multiple regions of interest in all tumours and in non-tumorous parenchyma and compared. The area under the curve was determined for all metrics. Mean ADC and D_K showed significant differences between tumours and non-tumorous parenchyma (both P < 0.001). The area under the curve for ADC, D, D*, f, K, and D_K was 0.77, 0.52, 0.53, 0.62, 0.42, and 0.84, respectively. ADC and D_K could differentiate tumours from non-tumorous parenchyma, with the latter showing higher diagnostic accuracy. Correction for kurtosis effects has the potential to increase the diagnostic accuracy of DWI in patients with pancreatic ductal adenocarcinoma.
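The mono-exponential and kurtosis signal models above are both linear in the logarithm: ln S is linear in b for the mono-exponential model and quadratic in b for the kurtosis model, so each can be fitted by ordinary least squares. A sketch of the quadratic (kurtosis) fit on a hypothetical noise-free signal; the D and K values and the solver are ours, not the study's pipeline.

```python
import math

# Kurtosis DWI model: S(b) = S0 * exp(-b*D + (b*D)**2 * K / 6), so
# ln S = c0 + c1*u + c2*u**2 in the scaled variable u = b/scale, with
# c1 = -scale*D and c2 = (scale*D)**2 * K / 6. Fit via 3x3 normal equations.

B_VALUES = [0.0, 50.0, 150.0, 200.0, 300.0, 600.0, 1000.0]  # s/mm^2

def solve3(A, y):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [v] for row, v in zip(A, y)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

def fit_kurtosis(bvals, signals):
    """Quadratic least-squares fit of ln S vs b; returns (S0, D, K)."""
    scale = max(bvals)
    u = [b / scale for b in bvals]
    y = [math.log(s) for s in signals]
    n = float(len(u))
    s1, s2 = sum(u), sum(t * t for t in u)
    s3, s4 = sum(t ** 3 for t in u), sum(t ** 4 for t in u)
    A = [[n, s1, s2], [s1, s2, s3], [s2, s3, s4]]
    rhs = [sum(y),
           sum(t * yi for t, yi in zip(u, y)),
           sum(t * t * yi for t, yi in zip(u, y))]
    c0, c1, c2 = solve3(A, rhs)
    D = -c1 / scale            # c1 = -scale * D
    K = 6.0 * c2 / (c1 * c1)   # c2 = (scale*D)**2 * K / 6
    return math.exp(c0), D, K

# Hypothetical noise-free tumour signal: D = 1.5e-3 mm^2/s, K = 0.8
D_TRUE, K_TRUE = 1.5e-3, 0.8
sig = [math.exp(-b * D_TRUE + (b * D_TRUE) ** 2 * K_TRUE / 6) for b in B_VALUES]
S0_fit, D_fit, K_fit = fit_kurtosis(B_VALUES, sig)
```

Rescaling b to [0, 1] before forming the normal equations keeps the 3x3 system well conditioned; with b in s/mm² the raw moment matrix spans twelve orders of magnitude.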
TREFEX: Trend Estimation and Change Detection in the Response of MOX Gas Sensors
Pashami, Sepideh; Lilienthal, Achim J.; Schaffernicht, Erik; Trincavelli, Marco
2013-01-01
Many applications of metal oxide gas sensors can benefit from reliable algorithms to detect significant changes in the sensor response. Significant changes indicate a change in the emission modality of a distant gas source and occur due to a sudden change of concentration or exposure to a different compound. As a consequence of turbulent gas transport and the relatively slow response and recovery times of metal oxide sensors, their response in open sampling configuration exhibits strong fluctuations that interfere with the changes of interest. In this paper we introduce TREFEX, a novel change point detection algorithm, especially designed for metal oxide gas sensors in an open sampling system. TREFEX models the response of MOX sensors as a piecewise exponential signal and considers the junctions between consecutive exponentials as change points. We formulate non-linear trend filtering and change point detection as a parameter-free convex optimization problem for single sensors and sensor arrays. We evaluate the performance of the TREFEX algorithm experimentally for different metal oxide sensors and several gas emission profiles. A comparison with the previously proposed GLR method shows a clearly superior performance of the TREFEX algorithm both in detection performance and in estimating the change time. PMID:23736853
Formaldehyde sorption and desorption characteristics of gypsum wallboard
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthews, T.G.; Hawthorne, A.R.; Thompson, C.V.
1987-07-01
The sorption and subsequent desorption of formaldehyde (CH2O) vapor from unpainted gypsum wallboard have been investigated in environmental chamber experiments conducted at 23°C, 50% relative humidity, an air exchange to board loading ratio of 0.43 m/h, and CH2O concentrations ranging from 0 to 0.50 mg/m³. Both CH2O sorption and CH2O desorption processes are described by a three-parameter, single-exponential model with an exponential lifetime of 2.9 ± 0.1 days. The storage capacity of gypsum board for CH2O vapor results in a time-dependent buffer to changes in CH2O vapor concentration surrounding the board but appears to cause only a weak, permanent loss mechanism for CH2O vapor. Prior to significant depletion of sorbed CH2O, desorption rates from CH2O-exposed gypsum board exhibit a linear dependence with negative slope on CH2O vapor concentration. Analogous CH2O emissions properties have been observed for pressed-wood products bonded with urea-formaldehyde resins. 17 references, 5 figures.
Formaldehyde sorption and desorption characteristics of gypsum wallboard
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthews, T.G.; Hawthorne, A.R.; Thompson, C.V.
1986-01-01
The sorption and subsequent desorption of formaldehyde (CH2O) vapor from unpainted gypsum wallboard has been investigated in environmental chamber experiments conducted at 23°C, 50% relative humidity, an air exchange to board loading ratio of 0.43 m/h, and CH2O concentrations ranging from 0 to 0.50 mg/m³. Both CH2O sorption and desorption processes are described using a three-parameter, single-exponential model with an exponential lifetime of 2.9 ± 0.1 days. The storage capacity of gypsum board for CH2O vapor results in a time-dependent buffer to changes in CH2O vapor concentration surrounding the board, but appears to cause only a weak, permanent loss mechanism for CH2O vapor. Short-term CH2O desorption rates from CH2O-exposed gypsum board (prior to significant depletion of sorbed CH2O) exhibit a linear dependence with negative slope on CH2O vapor concentration analogous to CH2O emissions from pressed-wood products bonded with urea-formaldehyde resins.
NASA Astrophysics Data System (ADS)
Cao, Jinde; Wang, Yanyan
2010-05-01
In this paper, the bi-periodicity issue is discussed for Cohen-Grossberg-type (CG-type) bidirectional associative memory (BAM) neural networks (NNs) with time-varying delays and standard activation functions. It is shown that the model considered in this paper has two periodic orbits located in saturation regions and they are locally exponentially stable. Meanwhile, some conditions are derived to ensure that, in any designated region, the model has a locally exponentially stable or globally exponentially attractive periodic orbit located in it. As a special case of bi-periodicity, some results are also presented for the system with constant external inputs. Finally, four examples are given to illustrate the effectiveness of the obtained results.
NASA Astrophysics Data System (ADS)
Song, Qiankun; Cao, Jinde
2007-05-01
A bidirectional associative memory neural network model with distributed delays is considered. By constructing a new Lyapunov functional and employing homeomorphism theory, M-matrix theory and an elementary inequality (with a ≥ 0, b_k ≥ 0, q_k > 0 and r > 1), a sufficient condition is obtained to ensure the existence, uniqueness and global exponential stability of the equilibrium point for the model. Moreover, the exponential convergence rate is estimated; it depends on the delay kernel functions and the system parameters. The results generalize and improve earlier publications, and remove the usual assumption that the activation functions are bounded. Two numerical examples are given to show the effectiveness of the obtained results.
The coherent interlayer resistance of a single, rotated interface between two stacks of AB graphite
NASA Astrophysics Data System (ADS)
Habib, K. M. Masum; Sylvia, Somaia S.; Ge, Supeng; Neupane, Mahesh; Lake, Roger K.
2013-12-01
The coherent interlayer resistance of a misoriented, rotated interface between two stacks of AB graphite is determined for a variety of misorientation angles. The quantum resistance of the ideal AB stack is on the order of 1 to 10 mΩ μm². For small rotation angles, the coherent interlayer resistance exponentially approaches the ideal quantum resistance at energies away from the charge neutrality point. Over a range of intermediate angles, the resistance increases exponentially with cell size for minimum-size unit cells. Larger cell sizes, at similar angles, may not follow this trend. The energy dependence of the interlayer transmission is described.
NASA Astrophysics Data System (ADS)
Meunier, Félicien; Couvreur, Valentin; Draye, Xavier; Zarebanadkouki, Mohsen; Vanderborght, Jan; Javaux, Mathieu
2017-12-01
In 1978, Landsberg and Fowkes presented a solution of the water flow equation inside a root with uniform hydraulic properties. These properties are root radial conductivity and axial conductance, which control, respectively, the radial water flow between the root surface and xylem and the axial flow within the xylem. From the solution for the xylem water potential, functions that describe the radial and axial flow along the root axis were derived. These solutions can also be used to derive root macroscopic parameters that are potential input parameters of hydrological and crop models. In this paper, novel analytical solutions of the water flow equation are developed for roots whose hydraulic properties vary along their axis, which is the case for most plants. We derived solutions for single roots with linear or exponential variations of hydraulic properties with distance to root tip. These solutions were subsequently combined to construct single roots with complex hydraulic property profiles. The analytical solutions allow one to verify numerical solutions and to get a generalization of the hydric behaviour with the main influencing parameters of the solutions. The resulting flow distributions in heterogeneous roots differed from those in uniform roots and simulations led to more regular, less abrupt variations of xylem suction or radial flux along root axes. The model could successfully be applied to maize effective root conductance measurements to derive radial and axial hydraulic properties. We also show that very contrasted root water uptake patterns arise when using either uniform or heterogeneous root hydraulic properties in a soil-root model. The optimal root radius that maximizes water uptake under a carbon cost constraint was also studied. The optimal radius was shown to be highly dependent on the root hydraulic properties and close to observed properties in maize roots. 
We finally used the obtained functions to evaluate the impact of root maturation versus root growth on water uptake. Very diverse uptake strategies arise from the analysis. These solutions open new avenues for investigating optimal genotype-environment-management interactions by optimization, for example, of plant-scale macroscopic hydraulic parameters used in ecohydrological models.
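The uniform-root case that anchors the analysis above has a compact closed form. A sketch under stated assumptions: radial conductivity kr is taken per unit root length for simplicity, the tip is sealed, and all numbers are illustrative, not measured maize properties.

```python
import math

# Landsberg-Fowkes-type solution for a single root with uniform hydraulic
# properties. With a sealed tip, the xylem potential obeys
#   d2(psi)/dx2 = (kr/Kx) * (psi - psi_soil),
# giving a cosh profile along the root and an effective root conductance
#   Keff = sqrt(kr*Kx) * tanh(tau*L),  tau = sqrt(kr/Kx).

def xylem_potential(x, L, kr, Kx, psi_soil, psi_collar):
    """Xylem potential at distance x from the collar (root tip at x = L)."""
    tau = math.sqrt(kr / Kx)
    return psi_soil + (psi_collar - psi_soil) * (
        math.cosh(tau * (L - x)) / math.cosh(tau * L))

def root_uptake(L, kr, Kx, psi_soil, psi_collar):
    """Total uptake = axial flow at the collar (positive when soil is wetter)."""
    tau = math.sqrt(kr / Kx)
    return math.sqrt(kr * Kx) * math.tanh(tau * L) * (psi_soil - psi_collar)

# Boundary check and diminishing returns of root length (illustrative values):
psi_at_collar = xylem_potential(0.0, 0.5, 1e-4, 5e-5, -0.1, -0.6)
uptake_short = root_uptake(0.5, 1e-4, 5e-5, -0.1, -0.6)
uptake_long = root_uptake(5.0, 1e-4, 5e-5, -0.1, -0.6)
```

Because tanh saturates, a tenfold longer root yields far less than tenfold more uptake; this "diminishing returns" behavior is what makes the optimal-radius question above nontrivial.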
On Neglecting Chemical Exchange Effects When Correcting in Vivo 31P MRS Data for Partial Saturation
NASA Astrophysics Data System (ADS)
Ouwerkerk, Ronald; Bottomley, Paul A.
2001-02-01
Signal acquisition in most MRS experiments requires a correction for partial saturation that is commonly based on a single exponential model for T1 that ignores effects of chemical exchange. We evaluated the errors in 31P MRS measurements introduced by this approximation in two-, three-, and four-site chemical exchange models under a range of flip-angles and pulse sequence repetition times (TR) that provide near-optimum signal-to-noise ratio (SNR). In two-site exchange, such as the creatine-kinase reaction involving phosphocreatine (PCr) and γ-ATP in human skeletal and cardiac muscle, errors in saturation factors were determined for the progressive saturation method and the dual-angle method of measuring T1. The analysis shows that these errors are negligible for the progressive saturation method if the observed T1 is derived from a three-parameter fit of the data. When T1 is measured with the dual-angle method, errors in saturation factors are less than 5% for all conceivable values of the chemical exchange rate and flip-angles that deliver useful SNR per unit time over the range T1/5 ≤ TR ≤ 2T1. Errors are also less than 5% for three- and four-site exchange when TR ≥ T1*/2, the so-called "intrinsic" T1's of the metabolites. The effect of changing metabolite concentrations and chemical exchange rates on observed T1's and saturation corrections was also examined with a three-site chemical exchange model involving ATP, PCr, and inorganic phosphate in skeletal muscle undergoing up to 95% PCr depletion. Although the observed T1's were dependent on metabolite concentrations, errors in saturation corrections for TR = 2 s could be kept within 5% for all exchanging metabolites using a simple interpolation of two dual-angle T1 measurements performed at the start and end of the experiment. Thus, the single-exponential model appears to be reasonably accurate for correcting 31P MRS data for partial saturation in the presence of chemical exchange. 
Even in systems where metabolite concentrations change, accurate saturation corrections are possible without much loss in SNR.
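The single-exponential saturation correction being validated above has a standard closed form for steady-state excitation: the observed signal is reduced by F = (1 - E)/(1 - E cos θ) with E = exp(-TR/T1), and the fully relaxed amplitude is recovered as S_obs / F. A sketch; the T1, TR, and flip angle below are illustrative, and chemical exchange is ignored, as in the approximation the paper analyzes.

```python
import math

# Single-exponential partial-saturation model: steady-state signal at flip
# angle theta and repetition time TR is scaled by
#   F = (1 - E) / (1 - E*cos(theta)),  E = exp(-TR/T1).

def saturation_factor(tr, t1, theta_deg):
    E = math.exp(-tr / t1)
    return (1.0 - E) / (1.0 - E * math.cos(math.radians(theta_deg)))

def correct_for_saturation(s_obs, tr, t1, theta_deg):
    """Recover the fully relaxed amplitude from a partially saturated one."""
    return s_obs / saturation_factor(tr, t1, theta_deg)

# e.g. a PCr-like T1 of 4 s observed with TR = 1 s at a 60-degree flip angle:
F = saturation_factor(1.0, 4.0, 60.0)
recovered = correct_for_saturation(5.0 * F, 1.0, 4.0, 60.0)
```

In the long-TR limit F tends to 1 (no correction needed); the paper's point is that applying this formula with a single observed T1 stays within about 5% even when exchange makes the true relaxation multi-exponential.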
Yu, Yi-Lin; Yang, Yun-Ju; Lin, Chin; Hsieh, Chih-Chuan; Li, Chiao-Zhu; Feng, Shao-Wei; Tang, Chi-Tun; Chung, Tzu-Tsao; Ma, Hsin-I; Chen, Yuan-Hao; Ju, Da-Tong; Hueng, Dueng-Yuan
2017-01-01
Tumor control rates of pituitary adenomas (PAs) receiving adjuvant CyberKnife stereotactic radiosurgery (CK SRS) are high. However, there is currently no uniform way to estimate the time course of the disease. The aim of this study was to analyze the volumetric responses of PAs after CK SRS and investigate the application of an exponential decay model in calculating an accurate time course and estimation of the eventual outcome. A retrospective review of 34 patients with PAs who received adjuvant CK SRS between 2006 and 2013 was performed. Tumor volume was calculated using the planimetric method. The percent change in tumor volume and the tumor volume rate of change were compared at median 4-, 10-, 20-, and 36-month intervals. Tumor responses were classified as progression for a >15% volume increase, regression for a >15% decrease, and stabilization for a change within ±15% of the baseline volume at the time of last follow-up. For each patient, the volumetric change versus time was fitted with an exponential model. The overall tumor control rate was 94.1% over the 36-month (range 18-87 months) follow-up period (mean volume change of -43.3%). Volume regression (mean decrease of -50.5%) was demonstrated in 27 (79%) patients, tumor stabilization (mean change of -3.7%) in 5 (15%) patients, and tumor progression (mean increase of 28.1%) in 2 (6%) patients (P = 0.001). Tumors that eventually regressed or stabilized had a temporary volume increase of 1.07% and 41.5% at 4 months after CK SRS, respectively (P = 0.017). The tumor volume estimated using the exponential fitting equation showed a high positive correlation with the actual volume calculated from magnetic resonance imaging (MRI), as tested by the Pearson correlation coefficient (0.9). Transient progression of PAs post-CK SRS was seen in 62.5% of patients, and it was not predictive of eventual volume regression or progression.
A three-point exponential model is of potential predictive value, and an exponential decay model can be used to calculate the time course of tumors that are ultimately controlled.
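One concrete reading of a "three-point exponential model" is that volumes from three equally spaced scans determine the decay curve V(t) = V_inf + (V0 - V_inf)exp(-t/τ) in closed form, since successive differences of an exponential form a geometric sequence. A sketch under that assumption; the volumes and scan interval below are invented for illustration, not patient data.

```python
import math

# Three-point fit of V(t) = V_inf + (V0 - V_inf) * exp(-t / tau) from
# volumes at times 0, dt and 2*dt: the difference ratio gives exp(-dt/tau),
# and the plateau follows from the geometric-sequence identity
#   V_inf = (V0*V2 - V1**2) / (V0 + V2 - 2*V1).

def fit_three_point(v0, v1, v2, dt):
    r = (v2 - v1) / (v1 - v0)          # equals exp(-dt/tau)
    if not 0.0 < r < 1.0:
        raise ValueError("volumes not consistent with exponential decay")
    tau = -dt / math.log(r)
    v_inf = (v0 * v2 - v1 * v1) / (v0 + v2 - 2.0 * v1)
    return v_inf, tau

def volume(t, v0, v_inf, tau):
    return v_inf + (v0 - v_inf) * math.exp(-t / tau)

# Regressing tumour: 10 cm3 at baseline decaying toward 4 cm3, tau = 12
# months, scanned at 0, 10 and 20 months:
V0, V_INF, TAU = 10.0, 4.0, 12.0
v1 = volume(10.0, V0, V_INF, TAU)
v2 = volume(20.0, V0, V_INF, TAU)
v_inf_fit, tau_fit = fit_three_point(V0, v1, v2, 10.0)
```

The recovered plateau V_inf is the predicted long-term volume, which is the quantity of clinical interest for tumors that are ultimately controlled.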
A heuristic method for consumable resource allocation in multi-class dynamic PERT networks
NASA Astrophysics Data System (ADS)
Yaghoubi, Saeed; Noori, Siamak; Mazdeh, Mohammad Mahdavi
2013-06-01
This investigation presents a heuristic method for the consumable resource allocation problem in multi-class dynamic Project Evaluation and Review Technique (PERT) networks, where new projects from different classes (types) arrive at the system according to independent Poisson processes with different arrival rates. Each activity of a project is performed at a dedicated service station located at a node of the network, with exponentially distributed service times according to its class. Each project arrives at the first service station and follows the routing defined by the precedence network of its class. Such a system can be represented as a queueing network in which the queue discipline is first come, first served. In the presented method, the multi-class system is decomposed into several single-class dynamic PERT networks, and each class is considered separately as a minisystem. To model a single-class dynamic PERT network, we use a Markov process and a multi-objective model investigated by Azaron and Tavakkoli-Moghaddam in 2007. Then, after obtaining the resources allocated to the service stations in every minisystem, the final resources allocated to activities are calculated by the proposed method.
Solutions for transients in arbitrarily branching cables: III. Voltage clamp problems.
Major, G
1993-07-01
Branched cable voltage recording and voltage clamp analytical solutions derived in two previous papers are used to explore practical issues concerning voltage clamp. Single exponentials can be fitted reasonably well to the decay phase of clamped synaptic currents, although they contain many underlying components. The effective time constant depends on the fit interval. The smoothing effects on synaptic clamp currents of dendritic cables and series resistance are explored with a single cylinder + soma model, for inputs with different time courses. "Soma" and "cable" charging currents cannot be separated easily when the soma is much smaller than the dendrites. Subtractive soma capacitance compensation and series resistance compensation are discussed. In a hippocampal CA1 pyramidal neurone model, voltage control at most dendritic sites is extremely poor. Parameter dependencies are illustrated. The effects of series resistance compound those of dendritic cables and depend on the "effective capacitance" of the cell. Plausible combinations of parameters can cause order-of-magnitude distortions to clamp current waveform measures of simulated Schaeffer collateral inputs. These voltage clamp problems are unlikely to be solved by the use of switch clamp methods.
A Single-Phase Analytic Equation of State for Solid Polyurea and Polyurea Aerogels
NASA Astrophysics Data System (ADS)
Whitworth, Nicholas; Lambourn, Brian
2017-06-01
Commercially available polymers are commonly used as impactors in high explosive gas-gun experiments. This paper presents a relatively simple, single-phase, analytic equation of state (EoS) for solid polyurea and polyurea aerogels suitable for use in hydrocode simulations. An exponential shock velocity-particle velocity relation is initially fit to available Hugoniot data on the solid material, which has a density of 1.13 g/cm³. This relation is then converted to a finite strain relation along the principal isentrope, which is used as the reference curve for a Mie-Grüneisen form of EoS with an assumed form for the variation of Grüneisen Γ with specific volume. Using the solid EoS in conjunction with the Snowplough model for porosity, experimental data on the shock response of solid polyurea and polyurea aerogels with initial densities of 0.20 and 0.35 g/cm³ can be reproduced to a reasonable degree of accuracy. A companion paper at this conference describes the application of this and other EoS in modelling shock-release-reshock gas-gun experiments on the insensitive high explosive PBX 9502.
Sun, Jiashu; Stowers, Chris C.; Boczko, Erik M.
2012-01-01
We report on measurements of the volume growth rate of ten individual budding yeast cells using a recently developed MOSFET-based microfluidic Coulter counter. The MOSFET-based microfluidic Coulter counter is very sensitive, provides signals that are immune to baseline drift, and can work with cell culture media of complex composition. These desirable features allow us to directly measure the volume growth rate of single cells of Saccharomyces cerevisiae LYH3865 strain budding yeast in YNB culture media over a whole cell cycle. Results indicate that all budding yeast follow a sigmoid volume growth profile, with reduced growth rates at the initial stage before the bud emerges and at the final stage after the daughter cell matures. Analysis of the data indicates that even though piecewise linear, Gompertz, and Hill function models all fit the global growth profile equally well, the data strongly support locally exponential growth. Accurate volume growth measurements are important for applications in systems biology where quantitative parameters are required for modeling and simulation. PMID:20717618
Fourier Transforms of Pulses Containing Exponential Leading and Trailing Profiles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warshaw, S I
2001-07-15
In this monograph we discuss a class of pulse shapes that have exponential rise and fall profiles, and evaluate their Fourier transforms. Such pulses can be used as models for time-varying processes that produce an initial exponential rise and end with the exponential decay of a specified physical quantity. Unipolar examples of such processes include the voltage record of an increasingly rapid charge followed by a damped discharge of a capacitor bank, and the amplitude of an electromagnetic pulse produced by a nuclear explosion. Bipolar examples include acoustic N waves propagating for long distances in the atmosphere that have resulted from explosions in the air, and sonic booms generated by supersonic aircraft. These bipolar pulses have leading and trailing edges that appear to be exponential in character. To the author's knowledge the Fourier transforms of such pulses are not generally well-known or tabulated in Fourier transform compendia, and it is the purpose of this monograph to derive and present these transforms. These Fourier transforms are related to a definite integral of a ratio of exponential functions, whose evaluation we carry out in considerable detail. From this result we derive the Fourier transforms of other related functions. In all Figures showing plots of calculated curves, the actual numbers used for the function parameter values and dependent variables are arbitrary and non-dimensional, and are not identified with any particular physical phenomenon or model.
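For the simplest member of this class, a unipolar pulse with exponential rise and fall, the Fourier transform can be written in closed form and checked numerically. A minimal sketch with arbitrary, non-dimensional rates (in the spirit of the monograph's figures): f(t) = e^(−αt) − e^(−βt) for t ≥ 0 has transform F(ω) = 1/(α + iω) − 1/(β + iω), term by term.

```python
import cmath
import math

alpha, beta = 1.0, 5.0   # arbitrary non-dimensional fall/rise rates

def pulse(t):
    # Unipolar pulse: exponential rise to a peak, then exponential fall
    return math.exp(-alpha * t) - math.exp(-beta * t) if t >= 0 else 0.0

def ft_analytic(w):
    # Integral of f(t) e^{-iwt} over t >= 0, evaluated term by term
    return 1.0 / (alpha + 1j * w) - 1.0 / (beta + 1j * w)

def ft_numeric(w, T=30.0, n=20000):
    # Trapezoidal quadrature of the same integral, truncated at t = T
    dt = T / n
    s = sum(pulse(k * dt) * cmath.exp(-1j * w * k * dt) for k in range(1, n))
    ends = 0.5 * (pulse(0.0) + pulse(T) * cmath.exp(-1j * w * T))
    return dt * (s + ends)
```

The quadrature agrees with the closed form to a few parts in 10⁴ for moderate ω, confirming the term-by-term transform.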
Bell, C; Paterson, D H; Kowalchuk, J M; Padilla, J; Cunningham, D A
2001-09-01
We compared estimates for the phase 2 time constant (τ) of oxygen uptake (VO2) during moderate- and heavy-intensity exercise, and the slow component of VO2 during heavy-intensity exercise, using previously published exponential models. Estimates for τ and the slow component differed (P < 0.05) among models. For moderate-intensity exercise, a two-component exponential model, or a mono-exponential model fitted from 20 s to 3 min, was best. For heavy-intensity exercise, a three-component model fitted throughout the entire 6 min bout of exercise, or a two-component model fitted from 20 s, was best. When the time delays for the two- and three-component models were equal, the best statistical fit was obtained; however, this model produced an inappropriately low ΔVO2/ΔWR (WR, work rate) for the projected phase 2 steady state, and the estimate of phase 2 τ was shortened compared with other models. The slow component was quantified as the difference between VO2 at end-exercise (6 min) and at 3 min (ΔVO2(6-3 min); 259 ml·min⁻¹), and also using the phase 3 amplitude terms (truncated to end-exercise) from exponential fits (409-833 ml·min⁻¹). The phase 3 time delay parameter placed the onset of the slow component at approximately 2 min (rather than the arbitrary 3 min); using this delay, ΔVO2(6-2 min) was approximately 400 ml·min⁻¹. Valid and consistent methods to estimate τ and the slow component in exercise are needed to advance physiological understanding.
Deterministic Models of Inhalational Anthrax in New Zealand White Rabbits
2014-01-01
Computational models describing bacterial kinetics were developed for inhalational anthrax in New Zealand white (NZW) rabbits following inhalation of Ames strain B. anthracis. The data used to parameterize the models included bacterial numbers in the airways, lung tissue, draining lymph nodes, and blood. Initial bacterial numbers were set equal to the deposited spore dose. The first model was a single exponential ordinary differential equation (ODE) with 3 rate parameters describing mucociliary (physical) clearance, immune clearance (bacterial killing), and bacterial growth. At 36 hours postexposure, the ODE model predicted 1.7×10⁷ bacteria in the rabbit, which agreed well with data from actual experiments (4.0×10⁷ bacteria at 36 hours). Next, building on the single ODE model, a physiologically based biokinetic (PBBK) compartmental model was developed in which one physiological compartment was the lumen of the airways and the other was the rabbit body (lung tissue, lymph nodes, blood). The 2 compartments were connected with a parameter describing transport of bacteria from the airways into the body. The PBBK model predicted 4.9×10⁷ bacteria in the body at 36 hours, and by 45 hours the model showed all clearance mechanisms were saturated, suggesting the rabbit would quickly succumb to the infection. As with the ODE model, the PBBK model results agreed well with laboratory observations. These data are discussed along with the need for and potential application of the models in risk assessment, drug development, and as a general aid to the experimentalist studying inhalational anthrax. PMID:24527843
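The single-ODE structure described above, net exponential growth from a growth rate minus physical and immune clearance rates, can be sketched as follows. The rate constants and deposited dose here are assumptions chosen only to reproduce the order of magnitude quoted in the abstract, not the paper's fitted parameters.

```python
# Assumed rates (per hour) and deposited dose -- illustrative only
growth, phys_clear, immune_kill = 0.80, 0.33, 0.33
B0 = 1.0e5

def bacteria(hours, dt=0.01):
    """Forward-Euler integration of dB/dt = (growth - phys - immune) * B."""
    B = B0
    for _ in range(int(hours / dt)):
        B += dt * (growth - phys_clear - immune_kill) * B
    return B

B36 = bacteria(36.0)   # roughly 1.5e7 with these assumed rates
```

With a net rate of 0.14 per hour, the 36-hour prediction lands near the 10⁷ scale reported in the abstract; the saturation behavior of the PBBK model would require the two-compartment extension.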
Temporal and spatial binning of TCSPC data to improve signal-to-noise ratio and imaging speed
NASA Astrophysics Data System (ADS)
Walsh, Alex J.; Beier, Hope T.
2016-03-01
Time-correlated single photon counting (TCSPC) is the most robust method for fluorescence lifetime imaging on laser scanning microscopes. However, TCSPC is inherently slow and thus ill-suited to capturing rapid events: at most one photon can be recorded per laser pulse, acquisition times are long, and emission rates must be kept low to avoid biasing measurements towards short lifetimes. Furthermore, thousands of photons per pixel are required for traditional instrument-response deconvolution and estimation of fluorescence lifetime exponential decays. Deconvolution and decay estimation can be performed in several ways, including iterative least-squares minimization and Laguerre deconvolution. This paper compares the limitations and accuracy of these analysis techniques for estimating double-exponential decays across a range of data characteristics, including lifetime values, lifetime component weights, signal-to-noise ratios, and numbers of detected photons. Furthermore, techniques to improve data fitting, including binning data temporally and spatially, are evaluated as methods to improve decay fits and reduce image acquisition time. Simulation results demonstrate that binning temporally to 36 or 42 time bins improves the accuracy of fits for low-photon-count data. Such a technique reduces the number of photons required for accurate component estimation when lifetime values are known, as for commercial fluorescent dyes and FRET experiments, and can improve imaging speed 10-fold.
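Temporal binning of the sort evaluated above, merging a fine TCSPC histogram into roughly 36 coarser time bins, is straightforward to sketch. The decay parameters below are hypothetical; the key property is that rebinning preserves total photon counts while reducing the number of fitted points.

```python
import math

def rebin(counts, n_out):
    """Merge a histogram into n_out coarser bins, preserving total counts."""
    n_in = len(counts)
    edges = [round(i * n_in / n_out) for i in range(n_out + 1)]
    return [sum(counts[edges[i]:edges[i + 1]]) for i in range(n_out)]

# Hypothetical double-exponential decay recorded in 256 TCSPC time bins
n_bins, window_ns = 256, 12.5
counts = [int(1000 * (0.6 * math.exp(-t / 0.5) + 0.4 * math.exp(-t / 3.0)))
          for t in (k * window_ns / n_bins for k in range(n_bins))]

coarse = rebin(counts, 36)
```

Each coarse bin then carries more counts (better per-bin statistics) at the cost of temporal resolution, which is the trade-off the paper quantifies.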
Engineering of an ultra-thin molecular superconductor by charge transfer
Hla, Saw Wai; Hassanien, Abdelrahim; Kendal, Clark
2016-06-07
A method is provided for forming a superconductive device from a single layer of (BETS)₂GaCl₄ molecules on a substrate surface, which displays a superconducting gap that increases exponentially with the length of the molecular chain.
Johnson, Ian R.; Thornley, John H. M.; Frantz, Jonathan M.; Bugbee, Bruce
2010-01-01
Background and Aims The distribution of photosynthetic enzymes, or nitrogen, through the canopy affects canopy photosynthesis, as well as plant quality and nitrogen demand. Most canopy photosynthesis models assume an exponential distribution of nitrogen, or protein, through the canopy, although this is rarely consistent with experimental observation. Previous optimization schemes to derive the nitrogen distribution through the canopy generally focus on the distribution of a fixed amount of total nitrogen, which fails to account for the variation in both the actual quantity of nitrogen in response to environmental conditions and the interaction of photosynthesis and respiration at similar levels of complexity. Model A model of canopy photosynthesis is presented for C3 and C4 canopies that considers a balanced approach between photosynthesis and respiration as well as plant carbon partitioning. Protein distribution is related to irradiance in the canopy by a flexible equation for which the exponential distribution is a special case. The model is designed to be simple to parameterize for crop, pasture and ecosystem studies. The amount and distribution of protein that maximizes canopy net photosynthesis is calculated. Key Results The optimum protein distribution is not exponential, but is quite linear near the top of the canopy, which is consistent with experimental observations. The overall concentration within the canopy is dependent on environmental conditions, including the distribution of direct and diffuse components of irradiance. Conclusions The widely used exponential distribution of nitrogen or protein through the canopy is generally inappropriate. The model derives the optimum distribution with characteristics that are consistent with observation, so overcoming limitations of using the exponential distribution. 
Although canopies may not always operate at an optimum, optimization analysis provides valuable insight into plant acclimation to environmental conditions. Protein distribution has implications for the prediction of carbon assimilation, plant quality and nitrogen demand. PMID:20861273
Seamount statistics in the Pacific Ocean
NASA Astrophysics Data System (ADS)
Smith, Deborah K.; Jordan, Thomas H.
1988-04-01
We apply the wide-beam sampling technique of Jordan et al. (1983) to approximately 157,000 km of wide-beam profiles to obtain seamount population statistics for eight regions in the eastern and southern Pacific Ocean. Population statistics derived from wide-beam echograms are compared with seamount counts from Sea Beam swaths and with counts from bathymetric maps. We find that the average number of seamounts with summit heights h ≥ H is well approximated by the exponential frequency-size distribution ν(H) = ν₀e^(−βH). The exponential model for seamount sizes, characterized by the single scale parameter β⁻¹, is found to be superior to a power-law (self-similar) model, which has no intrinsic scale, in describing the average distribution of Pacific seamounts, and it appears to be valid over a size spectrum spanning 5 orders of magnitude in abundance. Large-scale regional variations in seamount populations are documented. We observe significant differences in seamount densities across the Murray fracture zone in the North Pacific and the Eltanin fracture zone system in the South Pacific. The Eltanin discontinuity is equally evident on both sides of the Pacific-Antarctic ridge. In the South Pacific, regions symmetrically disposed about the ridge axis have very similar seamount densities, despite the large difference between Pacific plate and Antarctic plate absolute velocities; evidently, any differences in the shear flows at the base of the Pacific and Antarctic plates do not affect seamount emplacement. Systematic variations in ν₀ and β are observed as a function of lithospheric age, with the number of large seamounts increasing more rapidly than small seamounts. These observations have been used to develop a simple model for seamount production under the assumptions that (1) an exponential size-frequency distribution is maintained, (2) production is steady state, and (3) most small seamounts are formed on or near the ridge axis.
The limited data available from this study appear to be consistent with the model, but they are insufficient to provide a rigorous test of the assumptions or determine accurately the model parameters. However, the data from the South Pacific indicate that the off-axis production of large seamounts probably accounts for the majority of seamounts with summit heights greater than 1000 m.
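The exponential frequency-size law above, ν(H) = ν₀e^(−βH), is linear in ln ν versus H, so ν₀ and β can be recovered by log-linear least squares. A minimal sketch on noiseless synthetic counts (the ν₀ and β values are illustrative, not values from the survey):

```python
import math

nu0_true, beta_true = 250.0, 3.2           # illustrative (beta in km^-1)
heights = [0.2 * k for k in range(1, 16)]  # summit height thresholds H, km
counts = [nu0_true * math.exp(-beta_true * h) for h in heights]

# Least-squares line through (H, ln nu): slope = -beta, intercept = ln nu0
n = len(heights)
logs = [math.log(c) for c in counts]
mh, ml = sum(heights) / n, sum(logs) / n
slope = (sum((h - mh) * (l - ml) for h, l in zip(heights, logs))
         / sum((h - mh) ** 2 for h in heights))
beta_hat = -slope
nu0_hat = math.exp(ml - slope * mh)
```

On noiseless data the recovery is exact; with real counts, Poisson scatter in the rare large-seamount bins dominates the uncertainty in β.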
Connock, Martin; Hyde, Chris; Moore, David
2011-10-01
The UK National Institute for Health and Clinical Excellence (NICE) has used its Single Technology Appraisal (STA) programme to assess several drugs for cancer. Typically, the evidence submitted by the manufacturer comes from one short-term randomized controlled trial (RCT) demonstrating improvement in overall survival and/or in delay of disease progression, and these are the pre-eminent drivers of cost effectiveness. We draw attention to key issues encountered in assessing the quality and rigour of the manufacturers' modelling of overall survival and disease progression. Our examples are two recent STAs: sorafenib (Nexavar®) for advanced hepatocellular carcinoma, and azacitidine (Vidaza®) for higher-risk myelodysplastic syndromes (MDS). The choice of parametric model had a large effect on the predicted treatment-dependent survival gain. Logarithmic models (log-Normal and log-logistic) delivered double the survival advantage that was derived from Weibull models. Both submissions selected the logarithmic fits for their base-case economic analyses and justified selection solely on Akaike Information Criterion (AIC) scores. AIC scores in the azacitidine submission failed to match the choice of the log-logistic over Weibull or exponential models, and the modelled survival in the intervention arm lacked face validity. AIC scores for sorafenib models favoured log-Normal fits; however, since there is no statistical method for comparing AIC scores, and differences may be trivial, it is generally advised that the plausibility of competing models should be tested against external data and explored in diagnostic plots. Function fitting to observed data should not be a mechanical process validated by a single crude indicator (AIC). Projective models should show clear plausibility for the patients concerned and should be consistent with other published information. Multiple rather than single parametric functions should be explored and tested with diagnostic plots. 
When trials have survival curves with long tails exhibiting few events then the robustness of extrapolations using information in such tails should be tested.
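The abstract's caution about choosing parametric survival models solely by AIC can be illustrated with a toy comparison. Here AIC = 2k − 2 ln L is computed for exponential and log-normal fits to a deterministic pseudo-sample built from exponential quantiles; the data and the two-model shortlist are simplifying assumptions, not the appraisal datasets.

```python
import math

# Deterministic pseudo-sample: quantiles of an exponential(rate=1) distribution
n = 200
data = [-math.log(1.0 - (i - 0.5) / n) for i in range(1, n + 1)]

def aic(loglik, k):
    return 2 * k - 2 * loglik     # AIC = 2k - 2 ln L

# Exponential fit: MLE rate = 1 / sample mean (k = 1 parameter)
rate = n / sum(data)
ll_exp = sum(math.log(rate) - rate * x for x in data)

# Log-normal fit: MLE mean/variance of the log data (k = 2 parameters)
logs = [math.log(x) for x in data]
mu = sum(logs) / n
var = sum((l - mu) ** 2 for l in logs) / n
ll_ln = sum(-math.log(x * math.sqrt(2 * math.pi * var))
            - (math.log(x) - mu) ** 2 / (2 * var) for x in data)

aic_exp, aic_ln = aic(ll_exp, 1), aic(ll_ln, 2)
```

Here the AIC preference (for the exponential, which generated the data) happens to be correct; as the abstract recommends, in practice such a preference should still be checked against diagnostic plots and external data rather than used as a single crude indicator.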
Exponential inflation with F(R) gravity
NASA Astrophysics Data System (ADS)
Oikonomou, V. K.
2018-03-01
In this paper, we consider an exponential inflationary model in the context of vacuum F(R) gravity. By using well-known reconstruction techniques, we investigate which F(R) gravity can realize the exponential inflation scenario at leading order in terms of the scalar curvature, and we calculate the slow-roll indices and the corresponding observational indices in the context of slow-roll inflation. We also provide some general formulas for the slow-roll and the corresponding observational indices in terms of the e-foldings number. In addition, for the calculation of the slow-roll and observational indices, we use quite general formulas that do not require the assumption that all the slow-roll indices are much smaller than unity. Finally, we investigate the phenomenological viability of the model by comparing it with the latest Planck and BICEP2/Keck Array observational data. As we demonstrate, the model is compatible with the current observational data for a wide range of its free parameters.
NASA Astrophysics Data System (ADS)
Zhang, Fode; Shi, Yimin; Wang, Ruibing
2017-02-01
In the information geometry suggested by Amari (1985) and Amari et al. (1987), a parametric statistical model can be regarded as a differentiable manifold with the parameter space as a coordinate system. Because the q-exponential distribution plays an important role in Tsallis statistics (see Tsallis, 2009), this paper investigates the geometry of the q-exponential distribution with dependent competing risks and accelerated life testing (ALT). A copula function based on the q-exponential function, which can be regarded as a generalized Gumbel copula, is discussed to describe the structure of the dependent random variables. Employing two iterative algorithms, simulation results are given to compare the performance of the estimators and the levels of association under different hybrid progressive censoring schemes (HPCSs).
Hypersurface Homogeneous Cosmological Model in Modified Theory of Gravitation
NASA Astrophysics Data System (ADS)
Katore, S. D.; Hatkar, S. P.; Baxi, R. J.
2016-12-01
We study a hypersurface homogeneous space-time in the framework of the f(R, T) theory of gravitation in the presence of a perfect fluid. Exact solutions of the field equations are obtained for exponential and power-law volumetric expansions. We also solve the field equations by assuming a proportionality relation between the shear scalar (σ) and the expansion scalar (θ). It is observed that in the exponential model, the universe approaches isotropy at large time (late universe). The investigated model is notably accelerating and expanding. The physical and geometrical properties of the investigated model are also discussed.
Self-sustaining charging of identical colliding particles
NASA Astrophysics Data System (ADS)
Siu, Theo; Cotton, Jake; Mattson, Gregory; Shinbrot, Troy
2014-05-01
Recent experiments have demonstrated that identical material samples can charge one another after being brought into symmetric contact. The mechanism for this charging is not known. In this article, we use a simplified one-dimensional lattice model to analyze charging in the context of agitated particles. We find that the electric field from a single weakly polarized grain can feed back on itself by polarizing its neighbors, leading to an exponential growth in polarization. We show that, by incorporating partial neutralization between neighboring polarized particles, either uniform alignment of dipoles or complex charge and polarization waves can be produced. We reproduce a polarized state experimentally using identical colliding particles and raise several issues for future study.
Performance and state-space analyses of systems using Petri nets
NASA Technical Reports Server (NTRS)
Watson, James Francis, III
1992-01-01
The goal of any modeling methodology is to develop a mathematical description of a system that is accurate in its representation and also permits analysis of structural and/or performance properties. Inherently, trade-offs exist between the level of detail in the model and the ease with which analysis can be performed. Petri nets (PNs), a highly graphical modeling methodology for Discrete Event Dynamic Systems, permit representation of shared resources, finite capacities, conflict, synchronization, concurrency, and timing between state changes. By restricting the state transition time delays to the family of exponential density functions, Markov chain analysis of performance problems is possible. One major drawback of PNs is the tendency of the state space to grow rapidly (exponential complexity) relative to growth in the PN constructs. It is the state space, or the Markov chain obtained from it, that is needed in the solution of many problems. The theory of state-space size estimation for PNs is introduced: the problem of state-space size estimation is defined, its complexities are examined, and estimation algorithms are developed. Both top-down and bottom-up approaches are pursued, and the advantages and disadvantages of each are described. Additionally, the author's research in non-exponential transition modeling for PNs is discussed, and an algorithm for approximating non-exponential transitions is developed. Since only basic PN constructs are used in the approximation, theory already developed for PNs remains applicable. Comparison with results from entropy theory shows the transition performance is close to the theoretical optimum. Inclusion of non-exponential transition approximations improves performance results at the expense of increased state-space size; the state-space size estimation theory provides insight and algorithms for evaluating this trade-off.
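The exponential state-space growth mentioned above is easy to exhibit with a tiny reachability computation. The sketch below (a simplified capacity-1 net, not the dissertation's estimation algorithms) breadth-first enumerates the markings of k independent single-slot buffers, giving 2^k reachable states.

```python
from collections import deque

def reachable(transitions, m0):
    """BFS over markings of a capacity-1 Petri net.
    Each transition is a pair (pre_places, post_places)."""
    seen, frontier = {m0}, deque([m0])
    while frontier:
        m = frontier.popleft()
        for pre, post in transitions:
            # Enabled: all pre places marked, all post places empty
            if all(m[p] for p in pre) and all(not m[p] for p in post):
                m2 = list(m)
                for p in pre:
                    m2[p] = 0
                for p in post:
                    m2[p] = 1
                m2 = tuple(m2)
                if m2 not in seen:
                    seen.add(m2)
                    frontier.append(m2)
    return seen

def buffer_net(k):
    # k independent single-slot buffers: a fill and a drain transition each
    ts = []
    for i in range(k):
        ts.append(((), (i,)))   # fill place i
        ts.append(((i,), ()))   # drain place i
    return ts

sizes = [len(reachable(buffer_net(k), (0,) * k)) for k in range(1, 8)]
```

The net grows linearly (k places, 2k transitions) while the state space doubles with each added buffer, which is precisely the trade-off the estimation theory quantifies.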
Chen, Bo-Ching; Lai, Hung-Yu; Juang, Kai-Wei
2012-06-01
To better understand the ability of switchgrass (Panicum virgatum L.), a perennial grass often relegated to marginal agricultural areas with minimal inputs, to remove cadmium, chromium, and zinc by phytoextraction from contaminated sites, the relationship between plant metal content and biomass yield is expressed in different models to predict the amount of metals switchgrass can extract. These models are reliable in assessing the use of switchgrass for phytoremediation of heavy-metal-contaminated sites. In the present study, linear and exponential decay models are more suitable for representing the relationship between plant cadmium and dry weight. The maximum extractions of cadmium using switchgrass, as predicted by the linear and exponential decay models, approached 40 and 34 μg pot⁻¹, respectively. The log-normal model was superior in predicting the relationship between plant chromium and dry weight. The predicted maximum extraction of chromium by switchgrass was about 56 μg pot⁻¹. In addition, the exponential decay and log-normal models were better than the linear model in predicting the relationship between plant zinc and dry weight. The maximum extractions of zinc by switchgrass, as predicted by the exponential decay and log-normal models, were about 358 and 254 μg pot⁻¹, respectively. To maximize removal of Cd, Cr, and Zn, one can adopt the optimal timing of harvest as plant Cd, Cr, and Zn approach 450 and 526 mg kg⁻¹, 266 mg kg⁻¹, and 3022 and 5000 mg kg⁻¹, respectively. Given the well-known agronomic characteristics of cultivation and the high biomass production of switchgrass, it is practicable to use switchgrass for the phytoextraction of heavy metals in situ.
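Under an exponential decay model, where dry weight falls as DW(C) = a·e^(−bC) with tissue metal concentration C, the extracted mass E(C) = C·DW(C) is maximized at C = 1/b, which is the kind of optimal-harvest concentration quoted above. A sketch with invented coefficients (not the study's fitted values):

```python
import math

a, b = 2.0, 1.0 / 450.0   # invented: 2 g/pot biomass at C=0; decay per (mg/kg)

def extraction_mg(C):
    # Tissue metal C (mg/kg) times dry weight a*e^{-bC} (g/pot) -> mg/pot
    return C * a * math.exp(-b * C) / 1000.0

C_grid = range(1, 3001)                     # candidate concentrations, mg/kg
C_star = max(C_grid, key=extraction_mg)     # numerical optimum
C_analytic = 1.0 / b                        # dE/dC = 0  =>  C* = 1/b
```

The grid search and the closed-form optimum agree; the biomass penalty e^(−bC) is what caps extraction even as tissue concentration keeps rising.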
Deng, Nan-jie; Dai, Wei
2013-01-01
Understanding how kinetics in the unfolded state affects protein folding is a fundamentally important yet less well-understood issue. Here we employ three different models to analyze the unfolded landscape and folding kinetics of the miniprotein Trp-cage. The first is a 208 μs explicit solvent molecular dynamics (MD) simulation from D. E. Shaw Research containing tens of folding events. The second is a Markov state model (MSM-MD) constructed from the same ultra-long MD simulation; MSM-MD can be used to generate thousands of folding events. The third is a Markov state model built from temperature replica exchange MD simulations in implicit solvent (MSM-REMD). All the models exhibit multiple folding pathways, and there is a good correspondence between the folding pathways from direct MD and those computed from the MSMs. The unfolded populations interconvert rapidly between extended and collapsed conformations on time scales ≤ 40 ns, compared with the folding time of ≈ 5 μs. The folding rates are independent of where the folding is initiated from within the unfolded ensemble. About 90 % of the unfolded states are sampled within the first 40 μs of the ultra-long MD trajectory, which on average explores ~27 % of the unfolded state ensemble between consecutive folding events. We clustered the folding pathways according to structural similarity into “tubes”, and kinetically partitioned the unfolded state into populations that fold along different tubes. From our analysis of the simulations and a simple kinetic model, we find that when the mixing within the unfolded state is comparable to or faster than folding, the folding waiting times for all the folding tubes are similar and the folding kinetics is essentially single exponential despite the presence of heterogeneous folding paths with non-uniform barriers. When the mixing is much slower than folding, different unfolded populations fold independently leading to non-exponential kinetics. 
A kinetic partition of the Trp-cage unfolded state is constructed which reveals that different unfolded populations have almost the same probability to fold along any of the multiple folding paths. We are investigating whether the results for the kinetics in the unfolded state of the twenty-residue Trp-cage is representative of larger single domain proteins. PMID:23705683
Obstructive sleep apnea alters sleep stage transition dynamics.
Bianchi, Matt T; Cash, Sydney S; Mietus, Joseph; Peng, Chung-Kang; Thomas, Robert
2010-06-28
Enhanced characterization of sleep architecture, compared with routine polysomnographic metrics such as stage percentages and sleep efficiency, may improve the predictive phenotyping of fragmented sleep. One approach involves using stage transition analysis to characterize sleep continuity. We analyzed hypnograms from Sleep Heart Health Study (SHHS) participants using the following stage designations: wake after sleep onset (WASO), non-rapid eye movement (NREM) sleep, and REM sleep. We show that individual patient hypnograms contain an insufficient number of bouts to adequately describe the transition kinetics, necessitating pooling of data. We compared a control group of individuals free of medications, obstructive sleep apnea (OSA), medical co-morbidities, or sleepiness (n = 374) with mild (n = 496) or severe OSA (n = 338). WASO, REM sleep, and NREM sleep bout durations exhibited multi-exponential temporal dynamics. The presence of OSA accelerated the "decay" rate of NREM and REM sleep bouts, resulting in instability manifesting as shorter bouts and increased number of stage transitions. For WASO bouts, previously attributed to a power law process, a multi-exponential decay described the data well. Simulations demonstrated that a multi-exponential process can mimic a power law distribution. OSA alters sleep architecture dynamics by decreasing the temporal stability of NREM and REM sleep bouts. Multi-exponential fitting is superior to routine mono-exponential fitting, and may thus provide improved predictive metrics of sleep continuity. However, because a single night of sleep contains insufficient transitions to characterize these dynamics, extended monitoring of sleep, probably at home, would be necessary for individualized clinical application.
NASA Astrophysics Data System (ADS)
Coutu, S.; Rota, C.; Rossi, L.; Barry, D. A.
2011-12-01
Facades are protected by paints that contain biocides as protection against degradation. These biocides are leached by rainfall (albeit at low concentrations). At the city scale, however, the surface area of building facades is significant, and leached biocides are a potential environmental risk to receiving waters. A city-scale biocide-leaching model was developed based on two main steps. In the first step, laboratory experiments on a single facade were used to calibrate and validate a 1D, two-region phenomenological model of biocide leaching. The same data set was analyzed independently by another research group who found empirically that biocide leachate breakthrough curves were well represented by a sum of two exponentials. Interestingly, the two-region model was found analytically to reproduce this functional form as a special case. The second step in the method is site-specific, and involves upscaling the validated single facade model to a particular city. In this step, (i) GIS-based estimates of facade heights and areas are deduced using the city's cadastral data, (ii) facade flow is estimated using local meteorological data (rainfall, wind direction) and (iii) paint application rates are modeled as a stochastic process based on manufacturers' recommendations. The methodology was applied to Lausanne, Switzerland, a city of about 200,000 inhabitants. Approximately 30% of the annually applied mass of biocides was estimated to be released to the environment.
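The observation that the two-region model reproduces a sum of two exponentials follows from its being a 2×2 linear ODE system: the leachable mass in the surface region evolves as c₁e^(λ₁t) + c₂e^(λ₂t), with λ₁, λ₂ the system eigenvalues. A sketch with invented rate constants (not the calibrated facade parameters):

```python
import math

k_release, k_in, k_out = 0.30, 0.10, 0.05   # invented rates (1/day)

# dm1/dt = -(k_release + k_in)*m1 + k_out*m2 ;  dm2/dt = k_in*m1 - k_out*m2
a11, a12 = -(k_release + k_in), k_out
a21, a22 = k_in, -k_out
tr, det = a11 + a22, a11 * a22 - a12 * a21
disc = math.sqrt(tr * tr - 4.0 * det)
lam1, lam2 = (tr + disc) / 2.0, (tr - disc) / 2.0   # both negative

def m1_at(t, dt=1e-4):
    # Forward-Euler solution for the surface-region mass, m1(0)=m2(0)=1
    m1, m2 = 1.0, 1.0
    for _ in range(int(t / dt)):
        d1, d2 = a11 * m1 + a12 * m2, a21 * m1 + a22 * m2
        m1, m2 = m1 + dt * d1, m2 + dt * d2
    return m1

# Recover c1, c2 from two samples, then predict a third point
e = math.exp
y1, y2 = m1_at(1.0), m1_at(2.0)
D = e(lam1 * 1.0) * e(lam2 * 2.0) - e(lam2 * 1.0) * e(lam1 * 2.0)
c1 = (y1 * e(lam2 * 2.0) - y2 * e(lam2 * 1.0)) / D
c2 = (y2 * e(lam1 * 1.0) - y1 * e(lam1 * 2.0)) / D
pred = c1 * e(lam1 * 5.0) + c2 * e(lam2 * 5.0)
```

The two-exponential form reconstructed from two samples predicts the simulated curve elsewhere, consistent with the empirical double-exponential breakthrough fit found by the independent analysis.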
Boutilier, Michael S H; Sun, Chengzhen; O'Hern, Sean C; Au, Harold; Hadjiconstantinou, Nicolas G; Karnik, Rohit
2014-01-28
Gas transport through intrinsic defects and tears is a critical yet poorly understood phenomenon in graphene membranes for gas separation. We report that independent stacking of graphene layers on a porous support exponentially decreases flow through defects. On the basis of experimental results, we develop a gas transport model that elucidates the separate contributions of tears and intrinsic defects on gas leakage through these membranes. The model shows that the pore size of the porous support and its permeance critically affect the separation behavior, and reveals the parameter space where gas separation can be achieved regardless of the presence of nonselective defects, even for single-layer membranes. The results provide a framework for understanding gas transport in graphene membranes and guide the design of practical, selectively permeable graphene membranes for gas separation.
Polynomial solution of quantum Grassmann matrices
NASA Astrophysics Data System (ADS)
Tierz, Miguel
2017-05-01
We study a model of quantum mechanical fermions with matrix-like index structure (with indices N and L) and quartic interactions, recently introduced by Anninos and Silva. We compute the partition function exactly with q-deformed orthogonal polynomials (Stieltjes-Wigert polynomials), for different values of L and arbitrary N. From the explicit evaluation of the thermal partition function, the energy levels and degeneracies are determined. For a given L, the number of states of different energy is quadratic in N, which implies an exponential degeneracy of the energy levels. We also show that at high-temperature we have a Gaussian matrix model, which implies a symmetry that swaps N and L, together with a Wick rotation of the spectral parameter. In this limit, we also write the partition function, for generic L and N, in terms of a single generalized Hermite polynomial.
{phi} meson production in Au + Au and p + p collisions at {radical}s{sub NN}=200 GeV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, J.; Adler, C.; Aggarwal, M.M.
2004-06-01
We report the STAR measurement of {phi} meson production in Au + Au and p + p collisions at {radical}s{sub NN} = 200 GeV. Using the event mixing technique, the {phi} spectra and yields are obtained at midrapidity for five centrality bins in Au+Au collisions and for non-singly-diffractive p+p collisions. It is found that the {phi} transverse momentum distributions from Au+Au collisions are better fitted with a single exponential, while the p+p spectrum is better described by a double-exponential distribution. The measured nuclear modification factors indicate that {phi} production in central Au+Au collisions is suppressed relative to peripheral collisions when scaled by the number of binary collisions. The measurements versus centrality and the constant {phi}/K{sup -} ratio versus beam species, centrality, and collision energy rule out kaon coalescence as the dominant mechanism for {phi} production.
Magnetic pattern at supergranulation scale: the void size distribution
NASA Astrophysics Data System (ADS)
Berrilli, F.; Scardigli, S.; Del Moro, D.
2014-08-01
The large-scale magnetic pattern observed in the photosphere of the quiet Sun is dominated by the magnetic network. This network, created by photospheric magnetic fields swept into convective downflows, delineates the boundaries of large-scale cells of overturning plasma and exhibits "voids" in magnetic organization. These voids include internetwork fields, which are mixed-polarity sparse magnetic fields that populate the inner part of network cells. To single out voids and to quantify their intrinsic pattern we applied a fast circle-packing-based algorithm to 511 SOHO/MDI high-resolution magnetograms acquired during the unusually long solar activity minimum between cycles 23 and 24. The computed void distribution function shows a quasi-exponential decay behavior in the range 10-60 Mm. The lack of distinct flow scales in this range corroborates the hypothesis of multi-scale motion flows at the solar surface. In addition to the quasi-exponential decay, we have found that the voids depart from a simple exponential decay at about 35 Mm.
Quantifying short-lived events in multistate ionic current measurements.
Balijepalli, Arvind; Ettedgui, Jessica; Cornio, Andrew T; Robertson, Joseph W F; Cheung, Kin P; Kasianowicz, John J; Vaz, Canute
2014-02-25
We developed a generalized technique to characterize polymer-nanopore interactions via single channel ionic current measurements. Physical interactions between analytes, such as DNA, proteins, or synthetic polymers, and a nanopore cause multiple discrete states in the current. We modeled the transitions of the current to individual states with an equivalent electrical circuit, which allowed us to describe the system response. This enabled the estimation of short-lived states that are presently not characterized by existing analysis techniques. Our approach considerably improves the range and resolution of single-molecule characterization with nanopores. For example, we characterized the residence times of synthetic polymers that are three times shorter than those estimated with existing algorithms. Because the molecule's residence time follows an exponential distribution, we recover nearly 20-fold more events per unit time that can be used for analysis. Furthermore, the measurement range was extended from 11 monomers to as few as 8. Finally, we applied this technique to recover a known sequence of single-stranded DNA from previously published ion channel recordings, identifying discrete current states with subpicoampere resolution.
Observational constraints on varying neutrino-mass cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geng, Chao-Qiang; Lee, Chung-Chi; Myrzakulov, R.
We consider generic models of quintessence and investigate the influence of massive neutrino matter with field-dependent masses on the matter power spectrum. In the case of minimally coupled neutrino matter, we examine the effect in tracker models with inverse power-law and double-exponential potentials. We present detailed investigations of a scaling field with a steep exponential potential, non-minimally coupled to massive neutrino matter, and derive constraints on field-dependent neutrino masses from the observational data.
Muñoz-Cuevas, Marina; Fernández, Pablo S; George, Susan; Pin, Carmen
2010-05-01
The dynamic model for the growth of a bacterial population described by Baranyi and Roberts (J. Baranyi and T. A. Roberts, Int. J. Food Microbiol. 23:277-294, 1994) was applied to model the lag period and exponential growth of Listeria monocytogenes under conditions of fluctuating temperature and water activity (a(w)) values. To model the duration of the lag phase, the dependence of the parameter h(0), which quantifies the amount of work done during the lag period, on the previous and current environmental conditions was determined experimentally. This parameter depended not only on the magnitude of the change between the previous and current environmental conditions but also on the current growth conditions. In an exponentially growing population, any change in the environment requiring a certain amount of work to adapt to the new conditions initiated a lag period that lasted until that work was finished. Observations for several scenarios in which exponential growth was halted by a sudden change in the temperature and/or a(w) were in good agreement with predictions. When a population already in a lag period was subjected to environmental fluctuations, the system was reset with a new lag phase. The work to be done during the new lag phase was estimated to be the workload due to the environmental change plus the unfinished workload from the uncompleted previous lag phase.
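The Baranyi-Roberts model referenced above can be sketched in closed form; here h0 quantifies the "work to be done" during lag, so the lag duration is h0/mu. The parameter values in the test case are illustrative, not fitted values from the study.

```python
import math

def baranyi(t, y0, ymax, mu, h0):
    """Baranyi-Roberts growth curve for log cell concentration y(t).
    y0: initial level, ymax: stationary-phase level, mu: max specific
    growth rate, h0: work to be done during the lag phase (lag = h0/mu)."""
    # adjustment function a(t): suppresses growth during the lag period
    a = t + (1.0 / mu) * math.log(math.exp(-mu * t)
                                  + math.exp(-h0)
                                  - math.exp(-mu * t - h0))
    # logistic-style saturation toward ymax
    return y0 + mu * a - math.log(1.0 + (math.exp(mu * a) - 1.0)
                                  / math.exp(ymax - y0))
```

Under a sudden environmental shift, the model's reset of the lag phase corresponds to restarting with a new h0 equal to the new workload plus any unfinished workload.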
Zheng, Lai; Ismail, Karim
2017-05-01
Traffic conflict indicators measure the temporal and spatial proximity of conflict-involved road users. These indicators can reflect the severity of traffic conflicts to a reliable extent. Instead of using the indicator value directly as a severity index, many link functions have been developed to map the conflict indicator to a severity index. However, little information is available about the choice of a particular link function. To guard against link misspecification or subjectivity, a generalized exponential link function was developed. The severity index generated by this link was introduced into a parametric safety continuum model which objectively models the centre and tail regions. An empirical method, together with a full Bayesian estimation method, was adopted to estimate model parameters. The safety implication of the return level was calculated based on the model parameters. The proposed approach was applied to conflict and crash data collected from 21 segments of three freeways located in Guangdong province, China. Pearson's correlation test between return levels and observed crashes showed that a θ value of 1.2 was the best choice of the generalized parameter for the current data set. This provides statistical support for using the generalized exponential link function. With the determined generalized exponential link function, the visualization of the parametric safety continuum was found to be a gyroscope-shaped hierarchy. Copyright © 2017 Elsevier Ltd. All rights reserved.
Scalar field and time varying cosmological constant in f(R,T) gravity for Bianchi type-I universe
NASA Astrophysics Data System (ADS)
Singh, G. P.; Bishi, Binaya K.; Sahoo, P. K.
2016-04-01
In this article, we have analysed the behaviour of scalar field and cosmological constant in $f(R,T)$ theory of gravity. Here, we have considered the simplest form of $f(R,T)$ i.e. $f(R,T)=R+2f(T)$, where $R$ is the Ricci scalar and $T$ is the trace of the energy momentum tensor and explored the spatially homogeneous and anisotropic Locally Rotationally Symmetric (LRS) Bianchi type-I cosmological model. It is assumed that the Universe is filled with two non-interacting matter sources namely scalar field (normal or phantom) with scalar potential and matter contribution due to $f(R,T)$ action. We have discussed two cosmological models according to power law and exponential law of the volume expansion along with constant and exponential scalar potential as sub models. Power law models are compatible with normal (quintessence) and phantom scalar field whereas exponential volume expansion models are compatible with only normal (quintessence) scalar field. The values of cosmological constant in our models are in agreement with the observational results. Finally, we have discussed some physical and kinematical properties of both the models.
Desmet, Gert
2013-11-01
The finite length parallel zone (FPZ)-model is proposed as an alternative model for the axial- or eddy-dispersion caused by the occurrence of local velocity biases or flow heterogeneities in porous media such as those used in liquid chromatography columns. The mathematical plate height expression evolving from the model shows that the A- and C-term band broadening effects that can originate from a given velocity bias should be coupled in an exponentially decaying way instead of harmonically as proposed in Giddings' coupling theory. In the low and high velocity limit both models converge, while a 12% difference can be observed in the (practically most relevant) intermediate range of reduced velocities. Explicit expressions for the A- and C-constants appearing in the exponential decay-based plate height expression have been derived for each of the different possible velocity bias levels (single through-pore and particle level, multi-particle level and trans-column level). These expressions allow the band broadening originating from these different levels to be related directly to the local fundamental transport parameters, hence offering the possibility to include a velocity-dependent and, if needed, retention-factor-dependent transversal dispersion coefficient. Having developed the mathematics for the general case wherein a difference in retention equilibrium is established between the two parallel zones, the effect of any possible local variations in packing density and/or retention capacity on the eddy-dispersion can be explicitly accounted for as well. It is furthermore also shown that, whereas the lumped transport parameter model used in the basic variant of the FPZ-model only provides a first approximation of the true decay constant, the model can be extended by introducing a constant correction factor to correctly account for the continuous transversal dispersion transport in the velocity bias zones. Copyright © 2013 Elsevier B.V. All rights reserved.
Real-time soil sensing based on fiber optics and spectroscopy
NASA Astrophysics Data System (ADS)
Li, Minzan
2005-08-01
Using NIR spectroscopic techniques, correlation analysis and regression analysis for soil parameter estimation were conducted with raw soil samples collected in a cornfield and a forage field. Soil parameters analyzed were soil moisture, soil organic matter, nitrate nitrogen, soil electrical conductivity and pH. Results showed that all soil parameters could be evaluated by NIR spectral reflectance. For soil moisture, a linear regression model was available at low moisture contents below 30 % db, while an exponential model can be used over a wide range of moisture content up to 100 % db. Nitrate nitrogen estimation required a multi-spectral exponential model, and electrical conductivity could be evaluated by a single spectral regression. According to the results mentioned above, a real-time soil sensor system based on fiber optics and spectroscopy was developed. The sensor system was composed of a soil subsoiler with four optical fiber probes, a spectrometer, and a control unit. Two optical fiber probes were used for illumination and the other two optical fiber probes for collecting soil reflectance from visible to NIR wavebands at depths around 30 cm. The spectrometer was used to obtain the spectra of reflected lights. The control unit consisted of a data logging device, a personal computer, and a pulse generator. The experiment showed that clear photo-spectral reflectance was obtained from the underground soil. The soil reflectance was equal to that obtained by the desktop spectrophotometer in laboratory tests. Using the spectral reflectance, the soil parameters, such as soil moisture, pH, EC and SOM, were evaluated.
The discrete Laplace exponential family and estimation of Y-STR haplotype frequencies.
Andersen, Mikkel Meyer; Eriksen, Poul Svante; Morling, Niels
2013-07-21
Estimating haplotype frequencies is important in e.g. forensic genetics, where the frequencies are needed to calculate the likelihood ratio for the evidential weight of a DNA profile found at a crime scene. Estimation is naturally based on a population model, motivating the investigation of the Fisher-Wright model of evolution for haploid lineage DNA markers. An exponential family (a class of probability distributions that is well understood in probability theory such that inference is easily made by using existing software) called the 'discrete Laplace distribution' is described. We illustrate how well the discrete Laplace distribution approximates a more complicated distribution that arises by investigating the well-known population genetic Fisher-Wright model of evolution by a single-step mutation process. It was shown how the discrete Laplace distribution can be used to estimate haplotype frequencies for haploid lineage DNA markers (such as Y-chromosomal short tandem repeats), which in turn can be used to assess the evidential weight of a DNA profile found at a crime scene. This was done by making inference in a mixture of multivariate, marginally independent, discrete Laplace distributions using the EM algorithm to estimate the probabilities of membership of a set of unobserved subpopulations. The discrete Laplace distribution can be used to estimate haplotype frequencies with lower prediction error than other existing estimators. Furthermore, the calculations could be performed on a normal computer. This method was implemented in the freely available open source software R that is supported on Linux, MacOS and MS Windows. Copyright © 2013 Elsevier Ltd. All rights reserved.
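The discrete Laplace distribution named above has a simple closed form on the integers; one common parametrisation is P(X = x) = ((1-p)/(1+p))·p^|x| for 0 < p < 1, sketched below. The choice p = 0.3 in the check is illustrative.

```python
def discrete_laplace_pmf(x, p):
    """Discrete Laplace pmf on the integers: P(X = x) proportional to p**|x|,
    0 < p < 1. It arises as the distribution of the net displacement under a
    symmetric single-step mutation process."""
    return (1.0 - p) / (1.0 + p) * p ** abs(x)

# The pmf is symmetric and sums to 1 over the integers
total = sum(discrete_laplace_pmf(x, 0.3) for x in range(-50, 51))
```

Being an exponential family, its parameters can be estimated with standard generalized-linear-model machinery, which is what makes inference tractable in existing software.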
NASA Astrophysics Data System (ADS)
Andrianov, A. A.; Cannata, F.; Kamenshchik, A. Yu.
2012-11-01
We show that the simple extension of the method of obtaining the general exact solution for the cosmological model with the exponential scalar-field potential to the case when dust is present fails, and we discuss the reasons for this puzzling phenomenon.
Looking for Connections between Linear and Exponential Functions
ERIC Educational Resources Information Center
Lo, Jane-Jane; Kratky, James L.
2012-01-01
Students frequently have difficulty determining whether a given real-life situation is best modeled as a linear relationship or as an exponential relationship. One root of such difficulty is the lack of deep understanding of the very concept of "rate of change." The authors will provide a lesson that allows students to reveal their misconceptions…
A Parametric Model for Barred Equilibrium Beach Profiles
2014-05-10
to shallow water. Bodge (1992) and Komar and McDougal (1994) suggested an exponential form as a preferred solution that exhibited finite slope at the...applications. J. Coast. Res. 7, 53–84. Komar, P.D., McDougal, W.G., 1994. The analysis of beach profiles and nearshore processes using the exponential beach
Local perturbations perturb—exponentially-locally
NASA Astrophysics Data System (ADS)
De Roeck, W.; Schütz, M.
2015-06-01
We elaborate on the principle that for gapped quantum spin systems with local interaction, "local perturbations [in the Hamiltonian] perturb locally [the groundstate]." This principle was established by Bachmann et al. [Commun. Math. Phys. 309, 835-871 (2012)], relying on the "spectral flow technique" or "quasi-adiabatic continuation" [M. B. Hastings, Phys. Rev. B 69, 104431 (2004)] to obtain locality estimates with sub-exponential decay in the distance to the spatial support of the perturbation. We use ideas of Hamza et al. [J. Math. Phys. 50, 095213 (2009)] to obtain similarly a transformation between gapped eigenvectors and their perturbations that is local with exponential decay. This allows us to improve locality bounds on the effect of perturbations on the low lying states in certain gapped models with a unique "bulk ground state" or "topological quantum order." We also give estimates on the exponential decay of correlations in models with impurities where some relevant correlations decay faster than one would naively infer from the global gap of the system, as one also expects in disordered systems with a localized groundstate.
Quantifying Uncertainties in N2O Emission Due to N Fertilizer Application in Cultivated Areas
Philibert, Aurore; Loyce, Chantal; Makowski, David
2012-01-01
Nitrous oxide (N2O) is a greenhouse gas with a global warming potential approximately 298 times greater than that of CO2. In 2006, the Intergovernmental Panel on Climate Change (IPCC) estimated N2O emission due to synthetic and organic nitrogen (N) fertilization at 1% of applied N. We investigated the uncertainty on this estimated value, by fitting 13 different models to a published dataset including 985 N2O measurements. These models were characterized by (i) the presence or absence of the explanatory variable “applied N”, (ii) the function relating N2O emission to applied N (exponential or linear function), (iii) fixed or random background (i.e. in the absence of N application) N2O emission and (iv) fixed or random applied N effect. We calculated ranges of uncertainty on N2O emissions from a subset of these models, and compared them with the uncertainty ranges currently used in the IPCC-Tier 1 method. The exponential models outperformed the linear models, and models including one or two random effects outperformed those including fixed effects only. The use of an exponential function rather than a linear function has an important practical consequence: the emission factor is not constant and increases as a function of applied N. Emission factors estimated using the exponential function were lower than 1% when the amount of N applied was below 160 kg N ha−1. Our uncertainty analysis shows that the uncertainty range currently used by the IPCC-Tier 1 method could be reduced. PMID:23226430
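The practical consequence noted above, a non-constant emission factor under an exponential emission model, can be sketched directly. The coefficients a and b below are made-up illustrative values, not the fitted values from the study.

```python
import math

def n2o_emission(n_applied, a=-1.0, b=0.004):
    """Illustrative exponential emission model: E(N) = exp(a + b*N),
    where exp(a) is the background emission at N = 0."""
    return math.exp(a + b * n_applied)

def emission_factor(n_applied, a=-1.0, b=0.004):
    """Fraction of applied N emitted above background. Under the exponential
    model this grows with N, unlike the constant IPCC Tier 1 factor."""
    return (n2o_emission(n_applied, a, b) - n2o_emission(0.0, a, b)) / n_applied
```

With a linear model the same ratio would be constant in N, which is exactly the distinction the model comparison turns on.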
A queueing theory description of fat-tailed price returns in imperfect financial markets
NASA Astrophysics Data System (ADS)
Lamba, H.
2010-09-01
In a financial market, for agents with long investment horizons or at times of severe market stress, it is often changes in the asset price that act as the trigger for transactions or shifts in investment position. This suggests the use of price thresholds to simulate agent behavior over much longer timescales than are currently used in models of order-books. We show that many phenomena, routinely ignored in efficient market theory, can be systematically introduced into an otherwise efficient market, resulting in models that robustly replicate the most important stylized facts. We then demonstrate a close link between such threshold models and queueing theory, with large price changes corresponding to the busy periods of a single-server queue. The distribution of the busy periods is known to have excess kurtosis and non-exponential decay under various assumptions on the queue parameters. Such an approach may prove useful in the development of mathematical models for rapid deleveraging and panics in financial markets, and the stress-testing of financial institutions.
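The link to queueing theory can be illustrated by simulating the busy periods of an M/M/1 single-server queue: a busy period starts with one customer and ends when the server next goes idle, and its distribution is heavy-tailed relative to an exponential. Rates and sample size below are arbitrary illustrative choices.

```python
import random

def busy_period(lam, mu, rng):
    """Length of one M/M/1 busy period, simulated event by event.
    lam: arrival rate, mu: service rate (lam < mu for stability)."""
    n, t = 1, 0.0
    while n > 0:
        rate = lam + mu
        t += rng.expovariate(rate)          # time to the next event
        if rng.random() < lam / rate:       # arrival with prob lam/(lam+mu)
            n += 1
        else:                               # otherwise a departure
            n -= 1
    return t

rng = random.Random(1)
periods = [busy_period(0.7, 1.0, rng) for _ in range(20000)]
```

The theoretical mean busy period is 1/(mu - lam); the sample maximum far exceeds the mean, reflecting the fat tail that the paper maps onto large price changes.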
Ohashi, Hidenori; Tamaki, Takanori; Yamaguchi, Takeo
2011-12-29
Molecular collisions, which are the microscopic origin of molecular diffusive motion, are affected by both the molecular surface area and the distance between molecules. Their product can be regarded as the free space around a penetrant molecule, defined as the "shell-like free volume", and can be taken as a characteristic of molecular collisions. On the basis of this notion, a new diffusion theory has been developed. The model can predict molecular diffusivity in polymeric systems using only well-defined single-component parameters of molecular volume, molecular surface area, free volume, and pre-exponential factors. In the physical picture of the model, the body that actually moves, and the neighboring molecules it collides with, are characterized by the volume and the surface area of the penetrant molecular core. In the present study, a semiempirical quantum chemical calculation was used to calculate both of these parameters. The model and the newly developed parameters offer fairly good predictive ability. © 2011 American Chemical Society
Cyclic Plasticity Constitutive Model for Uniaxial Ratcheting Behavior of AZ31B Magnesium Alloy
NASA Astrophysics Data System (ADS)
Lin, Y. C.; Liu, Zheng-Hua; Chen, Xiao-Min; Long, Zhi-Li
2015-05-01
Investigating the ratcheting behavior of magnesium alloys is important for reliable structural design. The uniaxial ratcheting behavior of AZ31B magnesium alloy is studied by asymmetric cyclic stress-controlled experiments at room temperature. A modified kinematic hardening model is established to describe the uniaxial ratcheting behavior of the studied alloy. In the modified model, the material parameter m_i is expressed as an exponential function of the maximum equivalent stress. The modified model can be used to predict the ratcheting strain evolution of the studied alloy under single-step and multi-step asymmetric stress-controlled cyclic loadings. Additionally, due to the significant effect of twinning on the plastic deformation of magnesium alloy, the relationship between the material parameter m_i and the linear density of twins is discussed. It is found that there is a linear relationship between the material parameter m_i and the linear density of twins induced by the cyclic loadings.
Applications and Comparisons of Four Time Series Models in Epidemiological Surveillance Data
Young, Alistair A.; Li, Xiaosong
2014-01-01
Public health surveillance systems provide valuable data for reliable prediction of future epidemic events. This paper describes a study that used nine types of infectious disease data collected through a national public health surveillance system in mainland China to evaluate and compare the performances of four time series methods, namely, two decomposition methods (regression and exponential smoothing), autoregressive integrated moving average (ARIMA) and support vector machine (SVM). The data obtained from 2005 to 2011 and in 2012 were used as modeling and forecasting samples, respectively. The performances were evaluated based on three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE). The accuracy of the statistical models in forecasting future epidemic disease proved their effectiveness in epidemiological surveillance. Although the comparisons found that no single method is completely superior to the others, the present study indeed highlighted that the SVM outperforms the ARIMA model and decomposition methods in most cases. PMID:24505382
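The three evaluation metrics named above have standard definitions and are straightforward to compute; a minimal sketch:

```python
def mae(actual, forecast):
    """Mean absolute error."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    """Mean absolute percentage error (actual values must be nonzero)."""
    return 100.0 * sum(abs((a - f) / a)
                       for a, f in zip(actual, forecast)) / len(actual)

def mse(actual, forecast):
    """Mean square error."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)
```

Note that MAPE weights errors relative to the observed counts, so the three metrics can rank competing forecasting methods differently on the same data.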
SIR-B ocean-wave enhancement with fast Fourier transform techniques
NASA Technical Reports Server (NTRS)
Tilley, David G.
1987-01-01
Shuttle Imaging Radar (SIR-B) imagery is Fourier filtered to remove the estimated system-transfer function, reduce speckle noise, and produce ocean scenes with a gray scale that is proportional to wave height. The SIR-B system response to speckled scenes of uniform surfaces yields an estimate of the stationary wavenumber response of the imaging radar, modeled by the 15 even terms of an eighth-order two-dimensional polynomial. Speckle can also be used to estimate the dynamic wavenumber response of the system due to surface motion during the aperture synthesis period, modeled with a single adaptive parameter describing an exponential correlation along track. A Fourier filter can then be devised to correct for the wavenumber response of the remote sensor and scene correlation, with subsequent subtraction of an estimate of the speckle noise component. A linearized velocity bunching model, combined with a surface tilt and hydrodynamic model, is incorporated in the Fourier filter to derive estimates of wave height from the radar intensities corresponding to individual picture elements.
General Mechanism of Two-State Protein Folding Kinetics
Rollins, Geoffrey C.; Dill, Ken A.
2016-01-01
We describe here a general model of the kinetic mechanism of protein folding. In the Foldon Funnel Model, proteins fold in units of secondary structures, which form sequentially along the folding pathway, stabilized by tertiary interactions. The model predicts that the free energy landscape has a volcano shape, rather than a simple funnel, that folding is two-state (single-exponential) when secondary structures are intrinsically unstable, and that each structure along the folding path is a transition state for the previous structure. It shows how sequential pathways are consistent with multiple stochastic routes on funnel landscapes, and it gives good agreement with the nine-order-of-magnitude dependence of folding rates on protein size for a set of 93 proteins, while remaining consistent with the near independence of the folding equilibrium constant on size. This model gives estimates of folding rates of proteomes, leading to a median folding time in Escherichia coli of about 5 s. PMID:25056406
McGee, Monnie; Chen, Zhongxue
2006-01-01
There are many methods of correcting microarray data for non-biological sources of error. Authors routinely supply software or code so that interested analysts can implement their methods. Even with a thorough reading of associated references, it is not always clear how requisite parts of the method are calculated in the software packages. However, it is important to have an understanding of such details, as this understanding is necessary for proper use of the output, or for implementing extensions to the model. In this paper, the calculation of parameter estimates used in Robust Multichip Average (RMA), a popular preprocessing algorithm for Affymetrix GeneChip brand microarrays, is elucidated. The background correction method for RMA assumes that the perfect match (PM) intensities observed result from a convolution of the true signal, assumed to be exponentially distributed, and a background noise component, assumed to have a normal distribution. A conditional expectation is calculated to estimate signal. Estimates of the mean and variance of the normal distribution and the rate parameter of the exponential distribution are needed to calculate this expectation. Simulation studies show that the current estimates are flawed; therefore, new ones are suggested. We examine the performance of preprocessing under the exponential-normal convolution model using several different methods to estimate the parameters.
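The conditional expectation under the normal-exponential convolution has a known closed form; a sketch follows, with noise ~ N(mu, sigma^2) and signal ~ Exp(alpha) (rate parametrisation). The numeric inputs in the check are illustrative, not estimates from real chips.

```python
import math

def phi(z):
    """Standard normal pdf."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi(z):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def normexp_signal(o, mu, sigma, alpha):
    """E[signal | observed = o] under the normal-exponential convolution
    used for RMA-style background correction."""
    a = o - mu - sigma * sigma * alpha
    num = phi(a / sigma) - phi((o - a) / sigma)
    den = Phi(a / sigma) + Phi((o - a) / sigma) - 1.0
    return a + sigma * num / den
```

For intensities well above background the estimate approaches o - mu - sigma^2*alpha, which is why accurate estimates of mu, sigma, and alpha matter: errors shift every corrected intensity.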
Enhanced Response Time of Electrowetting Lenses with Shaped Input Voltage Functions.
Supekar, Omkar D; Zohrabi, Mo; Gopinath, Juliet T; Bright, Victor M
2017-05-16
Adaptive optical lenses based on the electrowetting principle are being rapidly implemented in many applications, such as microscopy, remote sensing, displays, and optical communication. To characterize the response of these electrowetting lenses, the dependence upon direct current (DC) driving voltage functions was investigated in a low-viscosity liquid system. Cylindrical lenses with inner diameters of 2.45 and 3.95 mm were used to characterize the dynamic behavior of the liquids under DC voltage electrowetting actuation. With the increase of the rise time of the input exponential driving voltage, the originally underdamped system response can be damped, enabling a smooth response from the lens. We experimentally determined the optimal rise times for the fastest response from the lenses. We have also performed numerical simulations of the lens actuation with input exponential driving voltage to understand the variation in the dynamics of the liquid-liquid interface with various input rise times. We further enhanced the response time of the devices by shaping the input voltage function with multiple exponential rise times. For the 3.95 mm inner diameter lens, we achieved a response time improvement of 29% when compared to the fastest response obtained using single-exponential driving voltage. The technique shows great promise for applications that require fast response times.
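The effect of slowing the exponential rise of the drive can be illustrated with a lumped second-order underdamped oscillator as a crude stand-in for the lens meniscus; this is not a model of the actual device, and the natural frequency, damping ratio, and rise times below are assumed values.

```python
import math

def peak_response(tau, wn=2.0 * math.pi, zeta=0.1, dt=1e-4, t_end=10.0):
    """Peak displacement of an underdamped second-order system driven by
    u(t) = 1 - exp(-t/tau), integrated with explicit Euler steps."""
    x, v, t, peak = 0.0, 0.0, 0.0, 0.0
    while t < t_end:
        u = 1.0 - math.exp(-t / tau)
        acc = wn * wn * (u - x) - 2.0 * zeta * wn * v
        x += v * dt
        v += acc * dt
        t += dt
        peak = max(peak, x)
    return peak

fast = peak_response(tau=1e-3)  # near-step drive: strong ringing/overshoot
slow = peak_response(tau=1.0)   # slow exponential rise: overshoot suppressed
```

Lengthening the rise time trades a slower approach for a smooth, nearly overshoot-free settling, which is the qualitative behavior exploited by shaping the input voltage.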
Exponential gain of randomness certified by quantum contextuality
NASA Astrophysics Data System (ADS)
Um, Mark; Zhang, Junhua; Wang, Ye; Wang, Pengfei; Kim, Kihwan
2017-04-01
We demonstrate the protocol of exponential gain of randomness certified by quantum contextuality in a trapped ion system. Genuine randomness can be produced by quantum principles and certified by quantum inequalities. Recently, randomness expansion protocols based on Bell-test inequalities and the Kochen-Specker (KS) theorem have been demonstrated. These schemes have been theoretically developed to exponentially expand the randomness and to amplify the randomness from a weak initial random seed. Here, we report experimental evidence of such exponential expansion of randomness. In the experiment, we use three states of a 138Ba + ion: a ground state and two quadrupole states. In the 138Ba + ion system, there is no detection loophole, and we apply a method to rule out certain hidden variable models that obey a kind of extended noncontextuality.
Production of a Biosurfactant from Torulopsis bombicola
Cooper, D. G.; Paddock, D. A.
1984-01-01
Two types of carbon sources—carbohydrate and vegetable oil—are necessary to obtain large yields of biosurfactant from Torulopsis bombicola ATCC 22214. Most of the surfactant is produced in the late exponential phase of growth. It is possible to grow the yeast on a single carbon source and then add the other type of substrate, after the exponential growth phase, and cause a burst of surfactant production. This product is a mixture of glycolipids. The maximum yield is 70 g liter−1, or 35% of the weight of the substrate used. An economic comparison demonstrated that this biosurfactant could be produced significantly more cheaply than any of the previously reported microbial surfactants. PMID:16346455
NASA Astrophysics Data System (ADS)
Liang, L. L.; Arcus, V. L.; Heskel, M.; O'Sullivan, O. S.; Weerasinghe, L. K.; Creek, D.; Egerton, J. J. G.; Tjoelker, M. G.; Atkin, O. K.; Schipper, L. A.
2017-12-01
Temperature is a crucial factor in determining the rates of ecosystem processes such as leaf respiration (R), the flux of plant-respired carbon dioxide (CO2) from leaves to the atmosphere. Generally, respiration rate increases exponentially with temperature as modelled by the Arrhenius equation, but a recent study (Heskel et al., 2016) showed a universally convergent temperature response of R using an empirical exponential/polynomial model whereby the exponent in the Arrhenius model is replaced by a quadratic function of temperature. The exponential/polynomial model has been used elsewhere to describe shoot respiration and plant respiration. What are the principles that underlie these empirical observations? Here, we demonstrate that macromolecular rate theory (MMRT), based on transition state theory for chemical kinetics, is equivalent to the exponential/polynomial model. We re-analyse the data from Heskel et al. (2016) using MMRT to show this equivalence and thus provide an explanation, based on thermodynamics, for the convergent temperature response of R. Using statistical tools, we also show the equivalent explanatory power of MMRT compared with the exponential/polynomial model, and the superiority of both of these models over the Arrhenius function. Three meaningful parameters emerge from the MMRT analysis: the temperature at which the rate of respiration is maximal (the so-called optimum temperature, Topt), the temperature at which the respiration rate is most sensitive to changes in temperature (the inflection temperature, Tinf) and the overall curvature of the log(rate) versus temperature plot (the so-called change in heat capacity for the system, ΔCp‡). The latter term originates from the change in heat capacity between an enzyme-substrate complex and an enzyme transition-state complex in enzyme-catalysed metabolic reactions. From MMRT, we find the average Topt and Tinf of R are 67.0 ± 1.2 °C and 41.4 ± 0.7 °C across global sites.
The average curvature (ΔCp‡) is −1.2 ± 0.1 kJ mol−1 K−1. MMRT extends classic transition state theory to enzyme-catalysed reactions and scales up to more complex processes, including micro-organism growth rates and ecosystem processes.
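A minimal numeric sketch of the exponential/polynomial form described above, ln R = a + b*T + c*T^2, and how Topt and Tinf fall out of it analytically. The coefficients a, b, c here are illustrative placeholders, not the fitted values from the study:

```python
import math

# Illustrative (not fitted) coefficients; c < 0 gives a temperature optimum.
a, b, c = -2.0, 0.134, -0.001

def ln_rate(T):
    """Log respiration rate as a quadratic in temperature (deg C)."""
    return a + b * T + c * T * T

# Topt: maximum of ln R (and hence of R): d(ln R)/dT = b + 2cT = 0.
T_opt = -b / (2.0 * c)

# Tinf: maximum sensitivity dR/dT. Since R' = R*(b + 2cT), setting R'' = 0
# gives (b + 2cT)^2 = -2c; the root on the rising limb (b + 2cT > 0) is:
T_inf = (-b + math.sqrt(-2.0 * c)) / (2.0 * c)
```

With these placeholder coefficients Topt comes out at 67.0 °C and Tinf near 44.6 °C; the fitted data in the abstract give 67.0 and 41.4 °C respectively.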
Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv
2012-12-11
Illumina BeadArray technology includes non-specific negative control features that allow a precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio, which leads to an important loss of information by generating negative values, a background correction method modeling the observed intensities as the sum of an exponentially distributed signal and normally distributed noise has been developed. Nevertheless, Wang and Ye (2012) present a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggest that a gamma distribution would better model the signal density. Hence, the normal-exponential modeling may not be appropriate for Illumina data, and background corrections derived from this model may lead to incorrect estimates. We propose a more flexible modeling based on a gamma-distributed signal and normally distributed background noise and develop the associated background correction, implemented in the R package NormalGamma. Our model proves to be markedly more accurate for Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a more correct fit of the observed intensities. On the other hand, the comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarities, reinforcing the validation of the normal-gamma modeling. The performance of the background corrections based on the normal-gamma and normal-exponential models is compared on two dilution data sets, through testing procedures which represent various experimental designs. Surprisingly, we observe that the implementation of a more accurate parametrisation in the model-based background correction does not increase the sensitivity.
These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement in terms of bias, but at the cost of a loss in precision. This paper addresses the lack of fit of the usual normal-exponential model by proposing a more flexible parametrisation of the signal distribution as well as the associated background correction. This new model proves to be considerably more accurate for Illumina microarrays, but the improvement in terms of modeling does not lead to a higher sensitivity in differential analysis. Nevertheless, this realistic modeling makes way for future investigations, in particular to examine the characteristics of pre-processing strategies.
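A hedged sketch of the normal-gamma convolution idea (not the NormalGamma R package itself): observed intensity = gamma-distributed signal + normal noise. If the noise parameters are known, e.g. from negative-control beads, the gamma parameters can be recovered by the method of moments, using E[signal] = kθ and Var[signal] = kθ². All parameter values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
k_true, theta_true = 2.0, 50.0      # illustrative gamma shape/scale (signal)
mu_noise, sd_noise = 100.0, 10.0    # illustrative background noise parameters

n = 200_000
observed = rng.gamma(k_true, theta_true, n) + rng.normal(mu_noise, sd_noise, n)

# Method-of-moments recovery, assuming the noise mean/variance are known:
sig_mean = observed.mean() - mu_noise       # estimates k*theta
sig_var = observed.var() - sd_noise**2      # estimates k*theta^2
theta_hat = sig_var / sig_mean
k_hat = sig_mean / theta_hat
```

A full implementation would instead maximize the convolution likelihood, but the moment identities above are the intuition behind why the model is identifiable.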
Zhao, Kaihong
2018-12-01
In this paper, we study the n-species impulsive Gilpin-Ayala competition model with discrete and distributed time delays. The existence of a positive periodic solution is proved by employing a fixed point theorem on cones. By constructing an appropriate Lyapunov functional, we also obtain the global exponential stability of the positive periodic solution of this system. As an application, an interesting example is provided to illustrate the validity of our main results.
ERIC Educational Resources Information Center
Casstevens, Thomas W.; And Others
This document consists of five units, all of which deal with applications of mathematics to American politics. The first three involve applications of calculus; the last two, applications of algebra. The first module is geared to teach a student how to: 1) compute estimates of the value of the parameters in negative exponential models; and draw…
NASA Technical Reports Server (NTRS)
Giver, Lawrence P.; Benner, D. C.; Tomasko, M. G.; Fink, U.; Kerola, D.
1990-01-01
Transmission measurements made on near-infrared laboratory methane spectra have previously been fit using a Malkmus band model. The laboratory spectra were obtained in three groups at temperatures averaging 112, 188, and 295 K; band model fitting was done separately for each temperature group. These band model parameters cannot be used directly in scattering atmosphere model computations, so an exponential sum model is being developed which includes pressure and temperature fitting parameters. The goal is to obtain model parameters by least-squares fits at 10 cm⁻¹ intervals from 3800 to 9100 cm⁻¹. These results will be useful in the interpretation of current planetary spectra and also NIMS spectra of Jupiter anticipated from the Galileo mission.
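A minimal sketch of the exponential-sum form for band-averaged transmission, T(u) = Σᵢ wᵢ exp(−kᵢ u), where u is the absorber amount. The weights and absorption coefficients below are illustrative, not fitted methane values; in the scheme described above, the kᵢ would additionally carry pressure and temperature dependence:

```python
import math

weights = [0.5, 0.3, 0.2]    # quadrature weights, must sum to 1
k_terms = [0.1, 1.0, 10.0]   # absorption coefficients per unit absorber

def transmission(u):
    """Band-averaged transmission for absorber amount u."""
    return sum(w * math.exp(-k * u) for w, k in zip(weights, k_terms))
```

By construction T(0) = 1 and T decreases monotonically with u; unlike a band model, this form can be multiplied layer by layer in a scattering atmosphere calculation, which is why the conversion is needed.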
Saucedo-Reyes, Daniela; Carrillo-Salazar, José A; Román-Padilla, Lizbeth; Saucedo-Veloz, Crescenciano; Reyes-Santamaría, María I; Ramírez-Gilly, Mariana; Tecante, Alberto
2018-03-01
High hydrostatic pressure inactivation kinetics of Escherichia coli ATCC 25922 and Salmonella enterica subsp. enterica serovar Typhimurium ATCC 14028 (S. Typhimurium) in a low-acid mamey pulp were obtained at four pressure levels (300, 350, 400, and 450 MPa), different exposure times (0-8 min), and a temperature of 25 ± 2 ℃. Survival curves showed deviations from linearity in the form of a tail (upward concavity). The primary models tested were the Weibull model, the modified Gompertz equation, and the biphasic model. The Weibull model gave the best goodness of fit (adjusted R² > 0.956, root mean square error < 0.290) and the lowest Akaike information criterion value. Exponential-logistic and exponential-decay models, and Bigelow-type and empirical models, were tested as alternative secondary models for the b′(P) and n(P) parameters, respectively. The process validation considered two- and one-step nonlinear regressions for making predictions of the survival fraction; both regression types provided an adequate goodness of fit, and the one-step nonlinear regression clearly reduced fitting errors. The best candidate model according to Akaike information theory, with better accuracy and more reliable predictions, was the Weibull model integrated with the exponential-logistic and exponential-decay secondary models as functions of time and pressure (two-step procedure) or incorporated as one equation (one-step procedure). Both mathematical expressions were used to determine the td parameter, where the desired reductions (taking d = 5, t5, as the criterion of a 5 log10 reduction, 5D) in both microorganisms are attainable at 400 MPa after 5.487 ± 0.488 or 5.950 ± 0.329 min, respectively, for the one- or two-step nonlinear procedure.
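A minimal sketch of the Weibull primary model in the log-linear form log10(N/N0) = −b·tⁿ, and the closed-form time td = (d/b)^(1/n) to reach d log10 reductions. The b and n values below are hypothetical, not the fitted parameters at any of the study's pressures:

```python
# Hypothetical Weibull parameters at some fixed pressure; n < 1 reproduces
# the tailing (upward concavity) reported for the survival curves.
b, n = 1.8, 0.75

def log10_survival(t):
    """log10(N/N0) after holding time t (min)."""
    return -b * t**n

def t_d(d):
    """Holding time needed for d log10 reductions: (d/b)**(1/n)."""
    return (d / b) ** (1.0 / n)

t5 = t_d(5.0)  # time for a 5D (5 log10) reduction
```

Note how n < 1 makes each successive log cycle take longer than the last, which is exactly the tailing behavior the abstract describes.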
An exponential filter model predicts lightness illusions
Zeman, Astrid; Brooks, Kevin R.; Ghebreab, Sennay
2015-01-01
Lightness, or perceived reflectance of a surface, is influenced by surrounding context. This is demonstrated by the Simultaneous Contrast Illusion (SCI), where a gray patch is perceived lighter against a black background and vice versa. Conversely, assimilation is where the lightness of the target patch moves toward that of the bounding areas and can be demonstrated in White's effect. Blakeslee and McCourt (1999) introduced an oriented difference-of-Gaussian (ODOG) model that is able to account for both contrast and assimilation in a number of lightness illusions and that has been subsequently improved using localized normalization techniques. We introduce a model inspired by image statistics that is based on a family of exponential filters, with kernels spanning across multiple sizes and shapes. We include an optional second stage of normalization based on contrast gain control. Our model was tested on a well-known set of lightness illusions that have previously been used to evaluate ODOG and its variants, and model lightness values were compared with typical human data. We investigate whether predictive success depends on filters of a particular size or shape and whether pooling information across filters can improve performance. The best single filter correctly predicted the direction of lightness effects for 21 out of 27 illusions. Combining two filters together increased the best performance to 23, with asymptotic performance at 24 for an arbitrarily large combination of filter outputs. While normalization improved prediction magnitudes, it only slightly improved overall scores in direction predictions. The prediction performance of 24 out of 27 illusions equals that of the best performing ODOG variant, with greater parsimony. Our model shows that V1-style orientation-selectivity is not necessary to account for lightness illusions and that a low-level model based on image statistics is able to account for a wide range of both contrast and assimilation effects. 
PMID:26157381
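A much-reduced sketch of the idea behind the model above: a single 1-D exponential kernel (the paper uses a 2-D bank of kernels of multiple sizes and shapes) already predicts the direction of the Simultaneous Contrast Illusion when lightness is read out as target value minus exponentially weighted surround. The stimulus construction and kernel scale here are illustrative:

```python
import numpy as np

def exp_kernel(size, scale):
    """Normalized exponential kernel exp(-|x|/scale)."""
    x = np.arange(size) - size // 2
    k = np.exp(-np.abs(x) / scale)
    return k / k.sum()

def local_mean(image, scale=8.0):
    k = exp_kernel(len(image), scale)
    return np.convolve(image, k, mode="same")

# SCI stimulus: identical 0.5 gray patch on a black vs a white field.
on_black = np.zeros(101); on_black[45:56] = 0.5
on_white = np.ones(101);  on_white[45:56] = 0.5

# Contrast signal at the patch center: target minus filtered surround.
c_black = on_black[50] - local_mean(on_black)[50]
c_white = on_white[50] - local_mean(on_white)[50]
# c_black > c_white: the patch is predicted lighter on the black background.
```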
Exponential Stellar Disks in Low Surface Brightness Galaxies: A Critical Test of Viscous Evolution
NASA Astrophysics Data System (ADS)
Bell, Eric F.
2002-12-01
Viscous redistribution of mass in Milky Way-type galactic disks is an appealing way of generating an exponential stellar profile over many scale lengths, almost independent of initial conditions, requiring only that the viscous timescale and star formation timescale are approximately equal. However, galaxies with solid-body rotation curves cannot undergo viscous evolution. Low surface brightness (LSB) galaxies have exponential surface brightness profiles, yet have slowly rising, nearly solid-body rotation curves. Because of this, viscous evolution may be inefficient in LSB galaxies: the exponential profiles, instead, would give important insight into initial conditions for galaxy disk formation. Using star formation laws from the literature and tuning the efficiency of viscous processes to reproduce an exponential stellar profile in Milky Way-type galaxies, I test the role of viscous evolution in LSB galaxies. Under the conservative and not unreasonable condition that LSB galaxies are gravitationally unstable for at least a part of their lives, I find that it is impossible to rule out a significant role for viscous evolution. This type of model still offers an attractive way of producing exponential disks, even in LSB galaxies with slowly rising rotation curves.
Exponential Speedup of Quantum Annealing by Inhomogeneous Driving of the Transverse Field
NASA Astrophysics Data System (ADS)
Susa, Yuki; Yamashiro, Yu; Yamamoto, Masayuki; Nishimori, Hidetoshi
2018-02-01
We show, for quantum annealing, that a certain type of inhomogeneous driving of the transverse field erases first-order quantum phase transitions in the p-body interacting mean-field-type model with and without longitudinal random field. Since a first-order phase transition poses a serious difficulty for quantum annealing (adiabatic quantum computing) due to the exponentially small energy gap, the removal of first-order transitions means an exponential speedup of the annealing process. The present method may serve as a simple protocol for the performance enhancement of quantum annealing, complementary to non-stoquastic Hamiltonians.
Observational constraints on tachyonic chameleon dark energy model
NASA Astrophysics Data System (ADS)
Banijamali, A.; Bellucci, S.; Fazlpour, B.; Solbi, M.
2018-03-01
It has been recently shown that the tachyonic chameleon model of dark energy, in which a tachyon scalar field is non-minimally coupled to matter, admits a stable scaling attractor solution that could give rise to the late-time accelerated expansion of the universe and hence alleviate the coincidence problem. In the present work, we use data from Type Ia supernovae (SN Ia) and baryon acoustic oscillations to place constraints on the model parameters. In our analysis we consider exponential and non-exponential forms for the non-minimal coupling function and the tachyonic potential, and show that the scenario is compatible with observations.
Cosmological models with a hybrid scale factor in an extended gravity theory
NASA Astrophysics Data System (ADS)
Mishra, B.; Tripathy, S. K.; Tarai, Sankarsan
2018-03-01
A general formalism to investigate Bianchi type VIh universes is developed in an extended theory of gravity. A minimally coupled geometry and matter field is considered, with a rescaled function f(R,T) substituted in place of the Ricci scalar R in the geometrical action. Dynamical aspects of the models are discussed by using a hybrid scale factor (HSF) that behaves as a power law at an initial epoch and as an exponential at late epochs. The power-law behavior and the exponential behavior appear as two extreme cases of the present model.
Carbon turnover in an agricultural sub-soil
NASA Astrophysics Data System (ADS)
Collins, Chris
2010-05-01
Maize was added to a grassland subsoil (10-50 cm) and the fate of the carbon from the plant material was followed for 520 days, with nine sampling points over an exponential time series. The carbon and delta-13C signature in five soil fractions: POM (particulate organic matter), fine sand, coarse silt, fine silt and clay, were monitored. Over the course of the experiment there was a 57% decline in the total C of the soil, principally from the particulate organic matter, which contained the added maize, equivalent to a half-life of 533 days. A single exponential was the best fit to the data, indicating that the slower-turnover pools proposed in models such as RothC were not observed within the time course of this experiment. Carbon rapidly entered the fine sand and coarse silt fractions, and then passed into the clay fraction. The fine silt fraction was not significantly changed. The maize carbon showed a delay in this pattern, but there was accumulation of maize carbon in the fine sand and fine silt fractions. The largest increases in % carbon as a consequence of the introduction of the maize carbon were in the following order: clay > fine sand > coarse silt > fine silt. The results suggest that all these fractions are actively turned over in this soil and that carbon is most protected in the fine sand and silt fractions, not the clay, as has been observed by other workers. The results are also discussed in the wider context of representative pools for modeling.
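A one-pool, first-order decay sketch relating the reported 533-day half-life to a rate constant, via C(t) = C0·exp(−λt) with λ = ln 2 / t½. Only the half-life is taken from the abstract; the initial amount is normalized to 1:

```python
import math

half_life = 533.0                   # days, from the single-exponential POM fit
lam = math.log(2.0) / half_life     # first-order rate constant (per day)

def carbon_remaining(C0, t):
    """Carbon left after t days of first-order decay."""
    return C0 * math.exp(-lam * t)

frac_520 = carbon_remaining(1.0, 520.0)  # fraction left at the final sampling
```

At the 520-day endpoint a single pool with this half-life retains about 51% of its carbon; the larger 57% total decline reported above reflects that the loss was concentrated in the POM fraction.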
Leider, Jonathon P; Castrucci, Brian C; Harris, Jenine K; Hearne, Shelley
2015-08-06
The relationship between policy networks and policy development among local health departments (LHDs) is a growing area of interest to public health practitioners and researchers alike. In this study, we examine policy activity and ties between public health leadership across large urban health departments. This study uses data from a national profile of local health departments as well as responses from a survey sent to three staff members (local health official, chief of policy, chief science officer) in each of 16 urban health departments in the United States. Network questions related to frequency of contact with health department personnel in other cities. Using exponential random graph models, network density and centrality were examined, as were patterns of communication among those working on several policy areas using exponential random graph models. All 16 LHDs were active in communicating about chronic disease as well as about use of alcohol, tobacco, and other drugs (ATOD). Connectedness was highest among local health officials (density = .55), and slightly lower for chief science officers (d = .33) and chiefs of policy (d = .29). After accounting for organizational characteristics, policy homophily (i.e., when two network members match on a single characteristic) and tenure were the most significant predictors of formation of network ties. Networking across health departments has the potential for accelerating the adoption of public health policies. This study suggests similar policy interests and formation of connections among senior leadership can potentially drive greater connectedness among other staff.
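A minimal sketch of the network density statistic quoted above (fraction of possible directed ties that are present), computed on a small synthetic adjacency matrix rather than the study's survey data. A full exponential random graph model would go further and model tie probability as a function of covariates such as policy homophily and tenure; that estimation step is not shown here:

```python
import numpy as np

def density(adj):
    """Fraction of possible directed ties present (self-ties excluded)."""
    n = adj.shape[0]
    return (adj.sum() - np.trace(adj)) / (n * (n - 1))

# Four hypothetical health officials; adj[i, j] = 1 if i reports contacting j.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [0, 0, 0, 1],
                [1, 0, 0, 0]])
d = density(adj)  # 6 ties out of 12 possible
```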
Yu, Yi-Lin; Yang, Yun-Ju; Lin, Chin; Hsieh, Chih-Chuan; Li, Chiao-Zhu; Feng, Shao-Wei; Tang, Chi-Tun; Chung, Tzu-Tsao; Ma, Hsin-I; Chen, Yuan-Hao; Ju, Da-Tong; Hueng, Dueng-Yuan
2017-01-01
Tumor control rates of pituitary adenomas (PAs) receiving adjuvant CyberKnife stereotactic radiosurgery (CK SRS) are high. However, there is currently no uniform way to estimate the time course of the disease. The aim of this study was to analyze the volumetric responses of PAs after CK SRS and investigate the application of an exponential decay model in calculating an accurate time course and estimation of the eventual outcome. A retrospective review of 34 patients with PAs who received adjuvant CK SRS between 2006 and 2013 was performed. Tumor volume was calculated using the planimetric method. The percent change in tumor volume and the tumor volume rate of change were compared at median 4-, 10-, 20-, and 36-month intervals. Tumor responses were classified as progression for >15% volume increase, regression for >15% decrease, and stabilization for within ±15% of the baseline volume at the time of last follow-up. For each patient, the volumetric change versus time was fitted with an exponential model. The overall tumor control rate was 94.1% in the 36-month (range 18-87 months) follow-up period (mean volume change of −43.3%). Volume regression (mean decrease of −50.5%) was demonstrated in 27 (79%) patients, tumor stabilization (mean change of −3.7%) in 5 (15%) patients, and tumor progression (mean increase of 28.1%) in 2 (6%) patients (P = 0.001). Tumors that eventually regressed or stabilized had a temporary volume increase of 1.07% and 41.5% at 4 months after CK SRS, respectively (P = 0.017). The tumor volume estimated using the exponential fitting equation demonstrated a high positive correlation with the actual volume calculated by magnetic resonance imaging (MRI) (Pearson correlation coefficient = 0.9). Transient progression of PAs post-CK SRS was seen in 62.5% of patients, and it was not predictive of eventual volume regression or progression.
A three-point exponential model is of potential predictive value. An exponential decay model can be used to calculate the time course of tumors that are ultimately controlled. PMID:28121913
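A hedged sketch of a three-point exponential fit of the form V(t) = A + B·rᵗ, solvable in closed form from volumes at three equally spaced times 0, dt, 2dt. The parameterization and the synthetic volumes below are illustrative; the study's actual fitting procedure is not specified in the abstract:

```python
def three_point_exponential(v0, v1, v2, dt):
    """Recover A, B, r of V(t) = A + B * r**t from V(0), V(dt), V(2*dt)."""
    ratio = (v2 - v1) / (v1 - v0)      # equals r**dt
    r = ratio ** (1.0 / dt)
    B = (v1 - v0) / (ratio - 1.0)
    A = v0 - B                          # asymptotic (final) volume
    return A, B, r

# Synthetic check: a tumor shrinking toward A0 = 2.0 cm^3 with r0 = 0.9/month.
A0, B0, r0, dt = 2.0, 3.0, 0.9, 4.0
v = [A0 + B0 * r0**t for t in (0.0, dt, 2 * dt)]
A, B, r = three_point_exponential(*v, dt)
```

The recovered A is the extrapolated final volume, which is what makes a three-point scheme potentially predictive of the eventual outcome.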
AN EMPIRICAL FORMULA FOR THE DISTRIBUTION FUNCTION OF A THIN EXPONENTIAL DISC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Sanjib; Bland-Hawthorn, Joss
2013-08-20
An empirical formula for a Shu distribution function that reproduces a thin disc with exponential surface density to good accuracy is presented. The formula has two free parameters that specify the functional form of the velocity dispersion. Conventionally, this requires the use of an iterative algorithm to produce the correct solution, which is computationally taxing for applications like Markov Chain Monte Carlo model fitting. The formula has been shown to work for flat, rising, and falling rotation curves. Application of this methodology to one of the Dehnen distribution functions is also shown. Finally, an extension of this formula to reproduce velocity dispersion profiles that are an exponential function of radius is also presented. Our empirical formula should greatly aid the efficient comparison of disc models with large stellar surveys or N-body simulations.
Locality of the Thomas-Fermi-von Weizsäcker Equations
NASA Astrophysics Data System (ADS)
Nazar, F. Q.; Ortner, C.
2017-06-01
We establish a pointwise stability estimate for the Thomas-Fermi-von Weizsäcker (TFW) model, which demonstrates that a local perturbation of a nuclear arrangement results also in a local response in the electron density and electrostatic potential. The proof adapts the arguments for existence and uniqueness of solutions to the TFW equations in the thermodynamic limit by Catto et al. (The mathematical theory of thermodynamic limits: Thomas-Fermi type models. Oxford mathematical monographs. The Clarendon Press, Oxford University Press, New York, 1998). To demonstrate the utility of this combined locality and stability result we derive several consequences, including an exponential convergence rate for the thermodynamic limit, partition of the total energy into exponentially localised site energies (and consequently, exponential locality of forces), and generalised and strengthened results on the charge neutrality of local defects.
NASA Astrophysics Data System (ADS)
Vijayakumar, M.; Mahesvaran, K.; Patel, Dinesh K.; Arunkumar, S.; Marimuthu, K.
2014-11-01
Dy3+-doped aluminofluoroborophosphate glasses (BPAxD) have been prepared following the conventional melt-quenching technique and their structural and optical properties explored through XRD, FTIR, optical absorption, excitation, emission and decay measurements. The coexistence of BO3 groups in the borate-rich domain and BO4 groups in the phosphate-rich domain has been confirmed through vibrational energy analysis. Negative bonding parameter (δ) values indicate that the metal-ligand environment in the prepared glasses is ionic in nature. The oscillator strengths and the intensity parameters Ωλ (λ = 2, 4 and 6) are calculated using Judd-Ofelt theory. The radiative properties such as transition probability (A), stimulated emission cross-section (σpE) and branching ratios (β) have been calculated using the JO intensity parameters and compared with reported Dy3+-doped glasses. The concentration effect on Y/B intensity ratios and the CIE chromaticity coordinates were calculated for the generation of white light from the luminescence spectra. The color purity and the correlated color temperature were also calculated and the results are discussed in the present work. The decay of the 4F9/2 excited level is found to be single-exponential at lower concentrations and becomes non-exponential at higher concentrations. The non-exponential behavior arises due to efficient energy transfer between Dy3+ ions through various non-radiative relaxation channels, and the decay of the 4F9/2 excited level has been analyzed with the IH model. Among the prepared glasses, the BPA0.5D glass exhibits higher σpE, βR, σpE×σpE, σpE×Δλeff and η values for the 6H13/2 emission band, which in turn indicates its suitability for white LEDs, laser applications and optical amplifiers.
White light generation in Dy3+-doped fluorosilicate glasses for W-LED applications
NASA Astrophysics Data System (ADS)
Krishnaiah, K. Venkata; Jayasankar, C. K.
2011-05-01
Dysprosium-doped fluorosilicate (SNbKZLF: SiO2-Nb2O5-K2O-ZnF2-LiF) glasses have been prepared and studied through excitation, emission and decay rate analysis. Sharp emission peaks were observed at 485 nm (blue) and 577 nm (yellow) under 387 nm excitation, which are attributed to the 4F9/2 --> 6H15/2 and 4F9/2 --> 6H13/2 transitions, respectively, of Dy3+ ions. The yellow-to-blue intensity ratio increases (0.85 to 1.19) with increasing Dy3+ ion concentration. The decay curves are single-exponential at lower concentrations and become non-exponential at higher concentrations. The non-exponential decay is well fitted by the Inokuti-Hirayama model for S = 6, which indicates that the energy transfer between donor and acceptor ions is of dipole-dipole type. The lifetime of the 4F9/2 level of the Dy3+ ion decreases (0.42 to 0.14 ms), whereas the energy transfer parameter increases (0.11 to 0.99) with increasing Dy3+ ion concentration (0.05 to 4.0 mol %). The chromaticity coordinates have been calculated from the emission spectra and analyzed with the Commission Internationale de l'Eclairage diagram. The chromaticity coordinates fall in the white-light region for all concentrations of Dy3+ ions in the present glasses. The correlated color temperature decreases from 5597 K (close to the daylight value of 5500 K) to 4524 K with increasing Dy2O3 concentration from 0.01 to 4.0 mol %. These results indicate that Dy3+:SNbKZLF glasses can be considered a potential host material for the development of white light emitting diodes.
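A minimal sketch of the Inokuti-Hirayama decay shape used in the two glass studies above, I(t)/I(0) = exp(−t/τ0 − Q·(t/τ0)^(3/S)); for dipole-dipole transfer S = 6, so the transfer term grows as √t. The τ0 and Q values are taken loosely from the ranges quoted in the abstract and are illustrative:

```python
import math

tau0 = 0.42   # ms, low-concentration 4F9/2 lifetime (illustrative)
Q = 0.99      # energy-transfer parameter (illustrative, high-concentration end)
S = 6         # dipole-dipole interaction

def ih_decay(t):
    """Normalized luminescence intensity under the Inokuti-Hirayama model."""
    return math.exp(-t / tau0 - Q * (t / tau0) ** (3.0 / S))
```

With Q = 0 this reduces to a single exponential; with Q > 0 the early decay is faster than exponential while the tail decays relatively more slowly, which is the non-exponential signature reported at higher Dy3+ concentrations.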
NASA Astrophysics Data System (ADS)
Astraatmadja, Tri L.; Bailer-Jones, Coryn A. L.
2016-12-01
Estimating a distance by inverting a parallax is only valid in the absence of noise. As most stars in the Gaia catalog will have non-negligible fractional parallax errors, we must treat distance estimation as a constrained inference problem. Here we investigate the performance of various priors for estimating distances, using a simulated Gaia catalog of one billion stars. We use three minimalist, isotropic priors, as well as an anisotropic prior derived from the observability of stars in a Milky Way model. The two priors that assume a uniform distribution of stars—either in distance or in space density—give poor results: the root mean square fractional distance error, f_rms, grows far in excess of 100% once the fractional parallax error, f_true, is larger than 0.1. A prior assuming an exponentially decreasing space density with increasing distance performs well once its single parameter—the scale length—has been set to an appropriate value: f_rms is roughly equal to f_true for f_true < 0.4, yet does not increase further as f_true increases up to 1.0. The Milky Way prior performs well except toward the Galactic center, due to a mismatch with the (simulated) data. Such mismatches will be inevitable (and remain unknown) in real applications, and can produce large errors. We therefore suggest adopting the simpler exponentially decreasing space density prior, which is also less time-consuming to compute. Including Gaia photometry improves the distance estimation significantly for both the Milky Way and the exponentially decreasing space density prior, yet doing so requires additional assumptions about the physical nature of stars.
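A hedged sketch of the exponentially decreasing space density prior, P(r) ∝ r²·exp(−r/L). Maximizing ln[r² exp(−r/L)] plus the log of a Gaussian parallax likelihood in 1/r gives a cubic for the posterior mode, r³/L − 2r² + (w/s²)r − 1/s² = 0, with parallax w and parallax error s in reciprocal-distance units. The scale length and measurement values below are illustrative:

```python
import numpy as np

def posterior_mode(w, s, L=1.35):
    """Mode of the distance posterior under the r^2 exp(-r/L) prior.

    w, s: parallax and its error in units of 1/distance (e.g. mas vs kpc).
    """
    coeffs = [1.0 / L, -2.0, w / s**2, -1.0 / s**2]
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-9].real
    return min(r for r in real if r > 0)   # mode is the smallest positive root

# Parallax 2.0 mas with 10% error, distances in kpc: the mode sits
# close to, but not exactly at, the naive inverse parallax 1/w = 0.5 kpc.
r_mode = posterior_mode(2.0, 0.2)
```

Because the prior pulls gently outward while the likelihood pulls toward 1/w, the mode stays well-behaved even when the fractional parallax error is large, which is the practical advantage argued for above.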
A demographic study of the exponential distribution applied to uneven-aged forests
Jeffrey H. Gove
2016-01-01
A demographic approach based on a size-structured version of the McKendrick-Von Foerster equation is used to demonstrate a theoretical link between the population size distribution and the underlying vital rates (recruitment, mortality and diameter growth) for the population of individuals whose diameter distribution is negative exponential. This model supports the...
Exponential Potential versus Dark Matter
1993-10-15
A two-parameter exponential potential explains the anomalous kinematics of galaxies and galaxy clusters without need for the myriad ad hoc dark matter models currently in vogue. It also explains much about the scales and structures of galaxies and galaxy clusters while being quite negligible on the scale of the solar system. Keywords: Galaxy; Dark matter; Galaxy cluster; Gravitation; Quantum gravity.
Modeling stochastic noise in gene regulatory systems
Meister, Arwen; Du, Chao; Li, Ye Henry; Wong, Wing Hung
2014-01-01
The Master equation is considered the gold standard for modeling the stochastic mechanisms of gene regulation in molecular detail, but it is too complex to solve exactly in most cases, so approximation and simulation methods are essential. However, there is still a lack of consensus about the best way to carry these out. To help clarify the situation, we review Master equation models of gene regulation, theoretical approximations based on an expansion method due to N.G. van Kampen and R. Kubo, and simulation algorithms due to D.T. Gillespie and P. Langevin. Expansion of the Master equation shows that for systems with a single stable steady-state, the stochastic model reduces to a deterministic model in a first-order approximation. Additional theory, also due to van Kampen, describes the asymptotic behavior of multistable systems. To support and illustrate the theory and provide further insight into the complex behavior of multistable systems, we perform a detailed simulation study comparing the various approximation and simulation methods applied to synthetic gene regulatory systems with various qualitative characteristics. The simulation studies show that for large stochastic systems with a single steady-state, deterministic models are quite accurate, since the probability distribution of the solution has a single peak tracking the deterministic trajectory whose variance is inversely proportional to the system size. In multistable stochastic systems, large fluctuations can cause individual trajectories to escape from the domain of attraction of one steady-state and be attracted to another, so the system eventually reaches a multimodal probability distribution in which all stable steady-states are represented proportional to their relative stability. However, since the escape time scales exponentially with system size, this process can take a very long time in large systems. PMID:25632368
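A minimal sketch of Gillespie's direct method for the simplest gene-expression Master equation: constant production at rate k and first-order degradation at rate g per molecule, whose stationary distribution is Poisson with mean k/g. The rate values and horizon are illustrative, and this is a toy single-species system rather than a multistable regulatory network:

```python
import math
import random

def gillespie_birth_death(k=10.0, g=1.0, t_end=2000.0, seed=1):
    """Time-averaged copy number from an exact stochastic simulation."""
    rng = random.Random(seed)
    t, x = 0.0, 0
    weighted_sum = 0.0                  # integral of x over time
    while t < t_end:
        total = k + g * x               # total propensity
        dt = -math.log(rng.random()) / total   # exponential waiting time
        dt = min(dt, t_end - t)
        weighted_sum += x * dt
        t += dt
        if t >= t_end:
            break
        # Choose the next reaction: birth with probability k/total.
        x += 1 if rng.random() < k / total else -1
    return weighted_sum / t_end

mean_x = gillespie_birth_death()        # should fluctuate around k/g = 10
```

For this single-steady-state system the simulation tracks the deterministic fixed point k/g closely, illustrating the expansion result quoted above; multistability and escape-time effects only appear in systems with multiple attractors.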
Rigby, Robert A; Stasinopoulos, D Mikis
2004-10-15
The Box-Cox power exponential (BCPE) distribution, developed in this paper, provides a model for a dependent variable Y exhibiting both skewness and kurtosis (leptokurtosis or platykurtosis). The distribution is defined by a power transformation Y^ν having a shifted and scaled (truncated) standard power exponential distribution with parameter τ. The distribution has four parameters and is denoted BCPE(μ, σ, ν, τ). The parameters μ, σ, ν and τ may be interpreted as relating to location (median), scale (approximate coefficient of variation), skewness (transformation to symmetry) and kurtosis (power exponential parameter), respectively. Smooth centile curves are obtained by modelling each of the four parameters of the distribution as a smooth non-parametric function of an explanatory variable. A Fisher scoring algorithm is used to fit the non-parametric model by maximizing a penalized likelihood. The first and expected second and cross derivatives of the likelihood with respect to μ, σ, ν and τ, required for the algorithm, are provided. The centiles of the BCPE distribution are easy to calculate, so it is highly suited to centile estimation. This application of the BCPE distribution to smooth centile estimation generalizes the LMS method of centile estimation to data exhibiting kurtosis (as well as skewness) different from that of a normal distribution, and is named here the LMSP method of centile estimation. The LMSP method is applied to modelling the body mass index of Dutch males against age.
Impact of nonzero boresight pointing error on ergodic capacity of MIMO FSO communication systems.
Boluda-Ruiz, Rubén; García-Zambrana, Antonio; Castillo-Vázquez, Beatriz; Castillo-Vázquez, Carmen
2016-02-22
This paper presents a thorough investigation of the impact of nonzero boresight pointing errors on the ergodic capacity of multiple-input/multiple-output (MIMO) free-space optical (FSO) systems with equal gain combining (EGC) reception under different turbulence models, where the subchannels are modeled as statistically independent, but not necessarily identically distributed (i.n.i.d.). Novel closed-form asymptotic expressions at high signal-to-noise ratio (SNR) for the ergodic capacity of MIMO FSO systems are derived when different geometric arrangements of the receive apertures are considered in order to reduce the effect of nonzero inherent boresight displacement, which is inevitably present when more than one receive aperture is used. As a result, the asymptotic ergodic capacity of MIMO FSO systems is evaluated over log-normal (LN), gamma-gamma (GG) and exponentiated Weibull (EW) atmospheric turbulence in order to study different turbulence conditions, different sizes of receive apertures as well as different aperture averaging conditions. It is concluded that the use of single-input/multiple-output (SIMO) and MIMO techniques can significantly increase the ergodic capacity with respect to the direct path link when the inherent boresight displacement takes small values, i.e. when the spacing among receive apertures is not too large. The effect of nonzero additional boresight errors, which is due to the thermal expansion of the building, is evaluated in multiple-input/single-output (MISO) and single-input/single-output (SISO) FSO systems. Simulation results are further included to confirm the analytical results.
Human population and atmospheric carbon dioxide growth dynamics: Diagnostics for the future
NASA Astrophysics Data System (ADS)
Hüsler, A. D.; Sornette, D.
2014-10-01
We analyze the growth rates of human population and of atmospheric carbon dioxide by comparing the relative merits of two benchmark models, the exponential law and the finite-time-singular (FTS) power law. The latter results from positive feedbacks, either direct or mediated by other dynamical variables, as shown in our presentation of a simple endogenous macroeconomic dynamical growth model describing the growth dynamics of coupled processes involving human population (labor in economic terms), capital and technology (proxied by CO2 emissions). Human population in the context of our energy intensive economies constitutes arguably the most important underlying driving variable of the content of carbon dioxide in the atmosphere. Using some of the best databases available, we perform empirical analyses confirming that the human population on Earth grew super-exponentially until the mid-1960s, followed by a decelerated sub-exponential growth, with a tendency to plateau at just exponential growth in the last decade with an average growth rate of 1.0% per year. In contrast, we find that the content of carbon dioxide in the atmosphere continued to accelerate super-exponentially until 1990, with a transition to a progressive deceleration since then, with an average growth rate of approximately 2% per year in the last decade. To go back to atmospheric CO2 contents equal to or smaller than the level of 1990, as has been the broadly advertised goal of international treaties since 1990, requires herculean changes: from a dynamical point of view, the approximately exponential growth must turn not only to negative acceleration but also to negative velocity to reverse the trend.
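The distinction between the two benchmark models can be made concrete through the relative growth rate d(ln N)/dt, which is constant for the exponential law but diverges as t approaches the critical time t_c for the FTS power law. The parameter values below are illustrative, not fitted values from the paper.

```python
import math

def exponential_growth(t, n0, r):
    """Benchmark 1: exponential law, constant relative growth rate r."""
    return n0 * math.exp(r * t)

def fts_power_law(t, a, tc, m):
    """Benchmark 2: finite-time-singular power law, N(t) = a / (tc - t)**m,
    valid for t < tc; the relative growth rate is m / (tc - t)."""
    return a / (tc - t) ** m

def relative_growth_rate(f, t, dt=1e-6):
    """Numerical d(ln f)/dt by central differences."""
    return (math.log(f(t + dt)) - math.log(f(t - dt))) / (2.0 * dt)
```

Super-exponential growth corresponds to a relative growth rate that itself increases with time, which is exactly what the FTS form produces.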
The impact of accelerating faster than exponential population growth on genetic variation.
Reppell, Mark; Boehnke, Michael; Zöllner, Sebastian
2014-03-01
Current human sequencing projects observe an abundance of extremely rare genetic variation, suggesting recent acceleration of population growth. To better understand the impact of such accelerating growth on the quantity and nature of genetic variation, we present a new class of models capable of incorporating faster than exponential growth in a coalescent framework. Our work shows that such accelerated growth affects only the population size in the recent past and thus large samples are required to detect the models' effects on patterns of variation. When we compare models with fixed initial growth rate, models with accelerating growth achieve very large current population sizes and large samples from these populations contain more variation than samples from populations with constant growth. This increase is driven almost entirely by an increase in singleton variation. Moreover, linkage disequilibrium decays faster in populations with accelerating growth. When we instead condition on current population size, models with accelerating growth result in less overall variation and slower linkage disequilibrium decay compared to models with exponential growth. We also find that pairwise linkage disequilibrium of very rare variants contains information about growth rates in the recent past. Finally, we demonstrate that models of accelerating growth may substantially change estimates of present-day effective population sizes and growth times.
High affinity ligands from in vitro selection: Complex targets
Morris, Kevin N.; Jensen, Kirk B.; Julin, Carol M.; Weil, Michael; Gold, Larry
1998-01-01
Human red blood cell membranes were used as a model system to determine if the systematic evolution of ligands by exponential enrichment (SELEX) methodology, an in vitro protocol for isolating high-affinity oligonucleotides that bind specifically to virtually any single protein, could be used with a complex mixture of potential targets. Ligands to multiple targets were generated simultaneously during the selection process, and the binding affinities of these ligands for their targets are comparable to those found in similar experiments against pure targets. A secondary selection scheme, deconvolution-SELEX, facilitates rapid isolation of the ligands to targets of special interest within the mixture. SELEX provides high-affinity compounds for multiple targets in a mixture and might allow a means for dissecting complex biological systems. PMID:9501188
Multiple relaxations of the cluster surface diffusion in a homoepitaxial SrTiO3 layer
NASA Astrophysics Data System (ADS)
Woo, Chang-Su; Chu, Kanghyun; Song, Jong-Hyun; Yang, Chan-Ho
2018-03-01
We examine the surface diffusion process of adatomic clusters on a (001)-oriented SrTiO3 single crystal using reflection high energy electron diffraction (RHEED). We find that the recovery curve of the RHEED intensity acquired after a homoepitaxial half-layer growth can be accurately fit by a double exponential function, indicating the existence of two dominant relaxation mechanisms. The characteristic relaxation times at selected growth temperatures are investigated to determine the diffusion activation barriers of the two mechanisms, 0.67 eV and 0.91 eV. The Monte Carlo simulation of the cluster hopping model suggests that the decrease in the number of dimeric and trimeric clusters during surface diffusion is the origin of the observed relaxation phenomena.
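The step from temperature-dependent relaxation times to activation barriers is a standard Arrhenius estimate; the helpers below sketch it with illustrative prefactors and temperatures, not the authors' actual analysis code.

```python
import math

K_B_EV = 8.617333e-5  # Boltzmann constant in eV/K

def double_exponential(t, a1, tau1, a2, tau2, i_inf=1.0):
    """Two-channel recovery model: I(t) = I_inf - a1*exp(-t/tau1) - a2*exp(-t/tau2)."""
    return i_inf - a1 * math.exp(-t / tau1) - a2 * math.exp(-t / tau2)

def activation_energy(tau_a, temp_a, tau_b, temp_b):
    """Arrhenius estimate from relaxation times at two temperatures:
    E = k_B * ln(tau_a / tau_b) / (1/T_a - 1/T_b), in eV."""
    return K_B_EV * math.log(tau_a / tau_b) / (1.0 / temp_a - 1.0 / temp_b)
```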
Glucose Sensing by Time-Resolved Fluorescence of Sol-Gel Immobilized Glucose Oxidase
Esposito, Rosario; Ventura, Bartolomeo Della; De Nicola, Sergio; Altucci, Carlo; Velotta, Raffaele; Mita, Damiano Gustavo; Lepore, Maria
2011-01-01
A monolithic silica gel matrix with entrapped glucose oxidase (GOD) was constructed as a bioactive element in an optical biosensor for glucose determination. Intrinsic fluorescence of free and immobilised GOD was investigated in the visible range in the presence of different glucose concentrations by time-resolved spectroscopy with a time-correlated single-photon counting detector. A three-exponential model was used for analysing the fluorescence transients. Fractional intensities and mean lifetime were shown to be sensitive to the enzymatic reaction and were used to obtain a calibration curve for glucose concentration determination. The sensing system proposed achieved high-resolution (up to 0.17 mM) glucose determination over a detection range from 0.4 mM to 5 mM. PMID:22163807
Coherent spin transport through a 350 micron thick silicon wafer.
Huang, Biqin; Monsma, Douwe J; Appelbaum, Ian
2007-10-26
We use all-electrical methods to inject, transport, and detect spin-polarized electrons vertically through a 350-micron-thick undoped single-crystal silicon wafer. Spin precession measurements in a perpendicular magnetic field at different accelerating electric fields reveal high spin coherence, with precession angles of at least 13π. The magnetic-field spacing of precession extrema is used to determine the injector-to-detector electron transit time. These transit time values are associated with output magnetocurrent changes (from in-plane spin-valve measurements), which are proportional to final spin polarization. Fitting the results to a simple exponential spin-decay model yields a conduction electron spin lifetime (T1) lower bound in silicon of over 500 ns at 60 K.
High-throughput screening in two dimensions: binding intensity and off-rate on a peptide microarray.
Greving, Matthew P; Belcher, Paul E; Cox, Conor D; Daniel, Douglas; Diehnelt, Chris W; Woodbury, Neal W
2010-07-01
We report a high-throughput two-dimensional microarray-based screen, incorporating both target binding intensity and off-rate, which can be used to analyze thousands of compounds in a single binding assay. Relative binding intensities and time-resolved dissociation are measured for labeled tumor necrosis factor alpha (TNF-alpha) bound to a peptide microarray. The time-resolved dissociation is fitted to a one-component exponential decay model, from which relative dissociation rates are determined for all peptides with binding intensities above background. We show that most peptides with the slowest off-rates on the microarray also have the slowest off-rates when measured by surface plasmon resonance (SPR). 2010 Elsevier Inc. All rights reserved.
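Fitting a one-component exponential decay reduces to a linear regression on log intensities, which is a simple way to extract a relative off-rate per spot; the sketch below assumes clean, strictly positive intensity data and is not the authors' analysis pipeline.

```python
import math

def off_rate(times, intensities):
    """Estimate k_off by least squares on ln(I) = ln(I0) - k*t,
    valid for a one-component exponential decay I(t) = I0 * exp(-k*t)
    with positive intensities."""
    logs = [math.log(i) for i in intensities]
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(logs) / n
    slope = sum((t - tbar) * (y - ybar) for t, y in zip(times, logs)) \
            / sum((t - tbar) ** 2 for t in times)
    return -slope
```

Ranking spots by this estimate is enough for the two-dimensional screen described above, even when absolute rate constants are not needed.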
NASA Astrophysics Data System (ADS)
Toh, George; Jaramillo-Villegas, Jose A.; Glotzbach, Nathan; Quirk, Jonah; Stevenson, Ian C.; Choi, J.; Weiner, Andrew M.; Elliott, D. S.
2018-05-01
We report a measurement of the lifetime of the cesium 7s ²S₁/₂ state using time-correlated single-photon counting spectroscopy in a vapor cell. We excite the atoms using a Doppler-free two-photon transition from the 6s ²S₁/₂ ground state, and detect the 1.47-μm photons from the spontaneous decay of the 7s ²S₁/₂ to the 6p ²P₃/₂ state. We use a gated single-photon detector in an asynchronous mode, allowing us to capture the fluorescence profile for a window much larger than the detector gate length. Analysis of the exponential decay of the photon count yields a 7s ²S₁/₂ lifetime of 48.28 ± 0.07 ns, an uncertainty of 0.14%. These measurements provide sensitive tests of theoretical models of the Cs atom, which play a central role in parity violation measurements.
Krogh-cylinder and infinite-domain models for washout of an inert diffusible solute from tissue.
Secomb, Timothy W
2015-01-01
Models based on the Krogh-cylinder concept are developed to analyze the washout from tissue by blood flow of an inert diffusible solute that permeates blood vessel walls. During the late phase of washout, the outflowing solute concentration decays exponentially with time. This washout decay rate is predicted for a range of conditions. A single capillary is assumed to lie on the axis of a cylindrical tissue region. In the classic "Krogh-cylinder" approach, a no-flux boundary condition is applied on the outside of the cylinder. An alternative "infinite-domain" approach is proposed that allows for solute exchange across the boundary, but with zero net exchange. Both models are analyzed, using finite-element and analytical methods. The washout decay rate depends on blood flow rate, tissue diffusivity and vessel permeability of solute, and assumed boundary conditions. At low blood flow rates, the washout rate can exceed the value for a single well-mixed compartment. The infinite-domain approach predicts slower washout decay rates than the Krogh-cylinder approach. The infinite-domain approach overcomes a significant limitation of the Krogh-cylinder approach, while retaining its simplicity. It provides a basis for developing methods to deduce transport properties of inert solutes from observations of washout decay rates. © 2014 John Wiley & Sons Ltd.
Racing to learn: statistical inference and learning in a single spiking neuron with adaptive kernels
Afshar, Saeed; George, Libin; Tapson, Jonathan; van Schaik, André; Hamilton, Tara J.
2014-01-01
This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a simple spiking neuron model that performs statistical inference and unsupervised learning of spatiotemporal spike patterns. SKAN is the first proposed neuron model to investigate the effects of dynamic synapto-dendritic kernels and demonstrate their computational power even at the single neuron scale. The rule-set defining the neuron is simple: there are no complex mathematical operations such as normalization, exponentiation or even multiplication. The functionalities of SKAN emerge from the real-time interaction of simple additive and binary processes. Like a biological neuron, SKAN is robust to signal and parameter noise, and can utilize both in its operations. At the network scale neurons are locked in a race with each other with the fastest neuron to spike effectively “hiding” its learnt pattern from its neighbors. The robustness to noise, high speed, and simple building blocks not only make SKAN an interesting neuron model in computational neuroscience, but also make it ideal for implementation in digital and analog neuromorphic systems which is demonstrated through an implementation in a Field Programmable Gate Array (FPGA). Matlab, Python, and Verilog implementations of SKAN are available at: http://www.uws.edu.au/bioelectronics_neuroscience/bens/reproducible_research. PMID:25505378
Integrated stoichiometric, thermodynamic and kinetic modelling of steady state metabolism
Fleming, R.M.T.; Thiele, I.; Provan, G.; Nasheuer, H.P.
2010-01-01
The quantitative analysis of biochemical reactions and metabolites is at the frontier of biological sciences. The recent availability of high-throughput technology data sets in biology has paved the way for new modelling approaches at various levels of complexity including the metabolome of a cell or an organism. Understanding the metabolism of single-cell and multi-cell organisms will provide the knowledge for the rational design of growth conditions to produce commercially valuable reagents in biotechnology. Here, we demonstrate how equations representing steady state mass conservation, energy conservation, the second law of thermodynamics, and reversible enzyme kinetics can be formulated as a single system of linear equalities and inequalities, in addition to linear equalities on exponential variables. Even though the feasible set is non-convex, the reformulation is exact and amenable to large-scale numerical analysis, a prerequisite for computationally feasible genome scale modelling. Integrating flux, concentration and kinetic variables in a unified constraint-based formulation is aimed at increasing the quantitative predictive capacity of flux balance analysis. Incorporation of experimental and theoretical bounds on thermodynamic and kinetic variables ensures that the predicted steady state fluxes are both thermodynamically and biochemically feasible. The resulting in silico predictions are tested against fluxomic data for central metabolism in E. coli and compare favourably with in silico predictions by flux balance analysis. PMID:20230840
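The second-law constraint mentioned above can be sketched as a feasibility check: a nonzero steady-state flux must run "downhill", i.e. have the opposite sign to the reaction Gibbs energy. The unit-stoichiometry helper below is an illustrative simplification of the paper's full constraint system, not its implementation.

```python
import math

R_GAS = 8.314  # gas constant, J/(mol K)

def reaction_gibbs(dg0, product_concs, substrate_concs, temp=298.15):
    """DeltaG = DeltaG0 + R*T*ln(Q) for a reaction with unit stoichiometry,
    where Q is the ratio of product to substrate concentrations."""
    q = math.prod(product_concs) / math.prod(substrate_concs)
    return dg0 + R_GAS * temp * math.log(q)

def flux_is_feasible(flux, dg):
    """Second-law check: a nonzero steady-state flux and its reaction
    Gibbs energy must have opposite signs."""
    return flux == 0 or flux * dg < 0
```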
Porting plasma physics simulation codes to modern computing architectures using the
NASA Astrophysics Data System (ADS)
Germaschewski, Kai; Abbott, Stephen
2015-11-01
Available computing power has continued to grow exponentially even after single-core performance saturated in the last decade. The increase has since been driven by more parallelism, both using more cores and having more parallelism in each core, e.g. in GPUs and Intel Xeon Phi. Adapting existing plasma physics codes is challenging, in particular as there is no single programming model that covers current and future architectures. We will introduce the open-source
Tosun, İsmail
2012-01-01
The adsorption isotherm, the adsorption kinetics, and the thermodynamic parameters of ammonium removal from aqueous solution by clinoptilolite were investigated in this study. Experimental data obtained from batch equilibrium tests have been analyzed by four two-parameter (Freundlich, Langmuir, Tempkin and Dubinin-Radushkevich (D-R)) and four three-parameter (Redlich-Peterson (R-P), Sips, Toth and Khan) isotherm models. The D-R and R-P isotherms were the models that best fitted the experimental data over the other two- and three-parameter models applied. The adsorption energy (E) from the D-R isotherm was found to be approximately 7 kJ/mol for the ammonium-clinoptilolite system, thereby indicating that ammonium is adsorbed on clinoptilolite by physisorption. Kinetic parameters were determined by analyzing the nth-order kinetic model, the modified second-order model and the double exponential model, and each model resulted in a coefficient of determination (R²) of above 0.989 with an average relative error lower than 5%. The Double Exponential Model (DEM) showed that the adsorption process develops in two stages: a rapid and a slow phase. Changes in standard free energy (∆G°), enthalpy (∆H°) and entropy (∆S°) of the ammonium-clinoptilolite system were estimated by using the thermodynamic equilibrium coefficients. PMID:22690177
Sorption isotherm characteristics of aonla flakes.
Alam, Md Shafiq; Singh, Amarjit
2011-06-01
The equilibrium moisture content was determined for un-osmosed and osmosed (salt osmosed and sugar osmosed) aonla flakes using the static method at temperatures of 25, 40, 50, 60 and 70 °C over a range of relative humidities from 20 to 90%. The sorption capacity of aonla decreased with an increase in temperature at constant water activity. The sorption isotherms exhibited hysteresis, in which the equilibrium moisture content at a particular equilibrium relative humidity was higher for the desorption curve than for adsorption. The hysteresis effect was more pronounced for un-osmosed and salt osmosed samples in comparison to sugar osmosed samples. Five models, namely the modified Chung-Pfost, modified Halsey, modified Henderson, modified Exponential and Guggenheim-Anderson-de Boer (GAB), were evaluated to determine the best fit for the experimental data. For both the adsorption and desorption processes of aonla fruit, the equilibrium moisture content of un-osmosed and osmosed aonla samples can be predicted well by the GAB model as well as the modified Exponential model. Moreover, the modified Exponential model was found to be the best for describing the sorption behaviour of un-osmosed and salt osmosed samples, while the GAB model was best for sugar osmosed aonla samples.
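For reference, the GAB model evaluated above has a simple closed form; the helper below sketches it, with parameters (monolayer moisture m0, energy constant c, correction factor k) that would in practice be fitted to the sorption data rather than the illustrative values used here.

```python
def gab_moisture(aw, m0, c, k):
    """Guggenheim-Anderson-de Boer (GAB) isotherm: equilibrium moisture
    content as a function of water activity aw (valid for k*aw < 1):
    M = m0 * c * k * aw / ((1 - k*aw) * (1 - k*aw + c*k*aw))."""
    x = k * aw
    return m0 * c * x / ((1.0 - x) * (1.0 - x + c * x))
```

The model is monotone increasing in water activity over its valid range, as a sorption isotherm must be.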
A model of non-Gaussian diffusion in heterogeneous media
NASA Astrophysics Data System (ADS)
Lanoiselée, Yann; Grebenkov, Denis S.
2018-04-01
Recent progress in single-particle tracking has shown evidence of the non-Gaussian distribution of displacements in living cells, both near the cellular membrane and inside the cytoskeleton. Similar behavior has also been observed in granular materials, turbulent flows, gels and colloidal suspensions, suggesting that this is a general feature of diffusion in complex media. A possible interpretation of this phenomenon is that a tracer explores a medium with spatio-temporal fluctuations which result in local changes of diffusivity. We propose and investigate an ergodic, easily interpretable model, which implements the concept of diffusing diffusivity. Depending on the parameters, the distribution of displacements can be either flat or peaked at small displacements with an exponential tail at large displacements. We show that the distribution converges slowly to a Gaussian one. We calculate statistical properties, derive the asymptotic behavior and discuss some implications and extensions.
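The "diffusing diffusivity" idea can be caricatured in a few lines: draw a random local diffusivity, then a Gaussian displacement conditioned on it. With an exponentially distributed diffusivity this mixture is exactly a Laplace distribution, i.e. it has the exponential tails described above. The chosen distributions are an illustrative caricature, not the paper's model.

```python
import math
import random

def superstatistical_displacement(t=1.0, mean_diffusivity=1.0, rng=None):
    """One tracer displacement in a medium with a fluctuating local
    diffusivity: D is drawn from an exponential distribution and then
    X | D ~ Normal(0, 2*D*t). The resulting X is Laplace-distributed,
    with exponential (non-Gaussian) tails."""
    rng = rng or random
    d = rng.expovariate(1.0 / mean_diffusivity)
    return rng.gauss(0.0, math.sqrt(2.0 * d * t))
```

Sampling many displacements shows far more mass beyond three standard deviations than a Gaussian would allow, which is the experimental signature discussed above.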
Partially coherent isodiffracting pulsed beams
NASA Astrophysics Data System (ADS)
Koivurova, Matias; Ding, Chaoliang; Turunen, Jari; Pan, Liuzhan
2018-02-01
We investigate a class of isodiffracting pulsed beams, which are superpositions of transverse modes supported by spherical-mirror laser resonators. By employing modal weights that, for stationary light, produce a Gaussian Schell-model beam, we extend this standard model to pulsed beams. We first construct the two-frequency cross-spectral density function that characterizes the spatial coherence in the space-frequency domain. By assuming a power-exponential spectral profile, we then employ the generalized Wiener-Khintchine theorem for nonstationary light to derive the two-time mutual coherence function that describes the space-time coherence of the ensuing beams. The isodiffracting nature of the laser resonator modes permits all (paraxial-domain) calculations at any propagation distance to be performed analytically. Significant spatiotemporal coupling is revealed in subcycle, single-cycle, and few-cycle domains, where the partial spatial coherence also leads to reduced temporal coherence even though full spectral coherence is assumed.
Experimental realization of a one-way quantum computer algorithm solving Simon's problem.
Tame, M S; Bell, B A; Di Franco, C; Wadsworth, W J; Rarity, J G
2014-11-14
We report an experimental demonstration of a one-way implementation of a quantum algorithm solving Simon's problem, a black-box period-finding problem that has an exponential gap between the classical and quantum runtime. Using an all-optical setup and modifying the bases of single-qubit measurements on a five-qubit cluster state, key representative functions of the logical two-qubit version's black box can be queried and solved. To the best of our knowledge, this work represents the first experimental realization of the quantum algorithm solving Simon's problem. The experimental results are in excellent agreement with the theoretical model, demonstrating the successful performance of the algorithm. With a view to scaling up to larger numbers of qubits, we analyze the resource requirements for an n-qubit version. This work helps highlight how one-way quantum computing provides a practical route to experimentally investigating the quantum-classical gap in the query complexity model.
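For contrast with the quantum implementation, the classical side of Simon's problem can be sketched as brute-force collision finding, which needs on the order of 2^n oracle queries, whereas the quantum algorithm needs O(n). This is an illustrative sketch of the classical baseline, not of the optical experiment.

```python
def find_simon_secret(f, n):
    """Classical brute-force solution of Simon's problem: find the hidden
    n-bit string s (as an int) satisfying f(x) = f(x XOR s) for all x,
    by querying until a collision reveals s. Worst case O(2**n) queries."""
    seen = {}
    for x in range(2 ** n):
        y = f(x)
        if y in seen:
            return seen[y] ^ x  # the two colliding inputs differ by s
        seen[y] = x
    return 0  # no collision found: f is one-to-one, so s = 0
```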
Adaptive guidance for an aero-assisted boost vehicle
NASA Astrophysics Data System (ADS)
Pamadi, Bandu N.; Taylor, Lawrence W., Jr.; Price, Douglas B.
An adaptive guidance system incorporating a dynamic pressure constraint is studied for a single-stage-to-low-earth-orbit (LEO) aero-assist booster with thrust gimbal angle as the control variable. To derive an adaptive guidance law, cubic spline functions are used to represent the ascent profile. The booster flight to LEO is divided into initial and terminal phases. In the initial phase, the ascent profile is continuously updated to maximize the performance of the boost vehicle en route. A linear feedback control is used in the terminal phase to guide the aero-assisted booster onto the desired LEO. The computer simulation of the vehicle dynamics considers a rotating spherical earth, an inverse-square (Newtonian) gravity field and an exponential model for the earth's atmospheric density. This adaptive guidance algorithm is capable of handling large deviations in both atmospheric conditions and modeling uncertainties, while ensuring maximum booster performance.
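The exponential atmospheric density model mentioned in the simulation can be written in one line; the sea-level density and scale height below are typical illustrative values, not those used in the study.

```python
import math

RHO0 = 1.225   # sea-level density in kg/m^3 (illustrative assumption)
H = 7200.0     # density scale height in m (illustrative assumption)

def atmospheric_density(h):
    """Exponential atmosphere model: rho(h) = rho0 * exp(-h / H),
    with altitude h in meters."""
    return RHO0 * math.exp(-h / H)
```

Density drops by a factor of e per scale height, which is what makes the early, dense part of the ascent dominate the dynamic pressure constraint.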
Carrel, M.; Dentz, M.; Derlon, N.; Morgenroth, E.
2018-01-01
Biofilms are ubiquitous bacterial communities that grow in various porous media including soils, trickling filters, and sand filters. In these environments, they play a central role in services ranging from degradation of pollutants to water purification. Biofilms dynamically change the pore structure of the medium through selective clogging of pores, a process known as bioclogging. This affects how solutes are transported and spread through the porous matrix, but the temporal changes to transport behavior during bioclogging are not well understood. To address this uncertainty, we experimentally study the hydrodynamic changes of a transparent 3-D porous medium as it experiences progressive bioclogging. Statistical analyses of the system's hydrodynamics at four time points of bioclogging (0, 24, 36, and 48 h in the exponential growth phase) reveal exponential increases in both the average and the variance of the flow velocity, as well as its correlation length. Measurements for spreading, as mean-squared displacements, are found to be non-Fickian and more intensely superdiffusive with progressive bioclogging, indicating the formation of preferential flow pathways and stagnation zones. A gamma distribution describes well the Lagrangian velocity distributions and provides parameters that quantify changes to the flow, which evolves from a parallel pore arrangement under unclogged conditions toward a more serial arrangement with increasing clogging. Exponentially evolving hydrodynamic metrics agree with an exponential bacterial growth phase and are used to parameterize a correlated continuous time random walk model with a stochastic velocity relaxation. The model accurately reproduces transport observations and can be used to resolve transport behavior at intermediate time points within the exponential growth phase considered. PMID:29780184
Cole-Davidson dynamics of simple chain models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dotson, Taylor C.; McCoy, John Dwane; Adolf, Douglas Brian
2008-10-01
Rotational relaxation functions of the end-to-end vector of short, freely jointed and freely rotating chains were determined from molecular dynamics simulations. The associated response functions were obtained from the one-sided Fourier transform of the relaxation functions. The Cole-Davidson function was used to fit the response functions with extensive use being made of Cole-Cole plots in the fitting procedure. For the systems studied, the Cole-Davidson function provided remarkably accurate fits [as compared to the transform of the Kohlrausch-Williams-Watts (KWW) function]. The only appreciable deviations from the simulation results were in the high frequency limit and were due to ballistic or free rotation effects. The accuracy of the Cole-Davidson function appears to be the result of the transition in the time domain from stretched exponential behavior at intermediate time to single exponential behavior at long time. Such a transition can be explained in terms of a distribution of relaxation times with a well-defined longest relaxation time. Since the Cole-Davidson distribution has a sharp cutoff in relaxation time (while the KWW function does not), it makes sense that the Cole-Davidson would provide a better frequency-domain description of the associated response function than the KWW function does.
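For reference, the Cole-Davidson response function used in these fits has the closed form chi(omega) = 1 / (1 + i*omega*tau)^beta; a minimal sketch:

```python
def cole_davidson(omega, tau, beta):
    """Cole-Davidson frequency-domain response,
    chi(omega) = 1 / (1 + 1j*omega*tau) ** beta.
    beta = 1 recovers the single-exponential (Debye) response;
    0 < beta < 1 broadens the high-frequency wing."""
    return 1.0 / (1.0 + 1j * omega * tau) ** beta
```

Plotting the imaginary part against the real part of chi gives the Cole-Cole plot used in the fitting procedure above.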
TIME-DOMAIN METHODS FOR DIFFUSIVE TRANSPORT IN SOFT MATTER
Fricks, John; Yao, Lingxing; Elston, Timothy C.; Forest, M. Gregory
2015-01-01
Passive microrheology [12] utilizes measurements of noisy, entropic fluctuations (i.e., diffusive properties) of micron-scale spheres in soft matter to infer bulk frequency-dependent loss and storage moduli. Here, we are concerned exclusively with diffusion of Brownian particles in viscoelastic media, for which the Mason-Weitz theoretical-experimental protocol is ideal, and the more challenging inference of bulk viscoelastic moduli is decoupled. The diffusive theory begins with a generalized Langevin equation (GLE) with a memory drag law specified by a kernel [7, 16, 22, 23]. We start with a discrete formulation of the GLE as an autoregressive stochastic process governing microbead paths measured by particle tracking. For the inverse problem (recovery of the memory kernel from experimental data) we apply time series analysis (maximum likelihood estimators via the Kalman filter) directly to bead position data, an alternative to formulas based on mean-squared displacement statistics in frequency space. For direct modeling, we present statistically exact GLE algorithms for individual particle paths as well as statistical correlations for displacement and velocity. Our time-domain methods rest upon a generalization of well-known results for a single-mode exponential kernel [1, 7, 22, 23] to an arbitrary M-mode exponential series, for which the GLE is transformed to a vector Ornstein-Uhlenbeck process. PMID:26412904
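The single-mode exponential-kernel case mentioned above reduces the GLE to an Ornstein-Uhlenbeck process, which sampled at a fixed interval is exactly an AR(1) series. The sketch below (with made-up parameter values, not experimental bead data) simulates that discrete process and recovers the relaxation rate from the lag-1 autocorrelation, a toy version of the inverse problem.

```python
import numpy as np

rng = np.random.default_rng(4)

# An Ornstein-Uhlenbeck velocity process sampled at interval dt is exactly an
# AR(1) series: v_{n+1} = phi*v_n + noise, with phi = exp(-gamma*dt).
gamma, kT_over_m, dt, n = 2.0, 1.0, 0.01, 200_000
phi = np.exp(-gamma * dt)
noise_sd = np.sqrt(kT_over_m * (1.0 - phi ** 2))

v = np.empty(n)
v[0] = 0.0
for i in range(1, n):
    v[i] = phi * v[i - 1] + noise_sd * rng.standard_normal()

# Recover the relaxation rate from the lag-1 autocorrelation (a toy stand-in
# for maximum likelihood estimation of memory-kernel parameters from paths).
phi_hat = np.dot(v[:-1], v[1:]) / np.dot(v[:-1], v[:-1])
gamma_hat = -np.log(phi_hat) / dt
print(round(float(gamma_hat), 2))
```

An M-mode exponential kernel generalizes this scalar AR(1) recursion to a vector Ornstein-Uhlenbeck process, as the paper describes.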
Variable δD values among major biochemicals in plants: Implications for environmental studies
NASA Astrophysics Data System (ADS)
DeBond, Nicole; Fogel, Marilyn L.; Morrill, Penny L.; Benner, Ronald; Bowden, Roxane; Ziegler, Susan
2013-06-01
The stable hydrogen isotope composition (δD) of major plant biochemicals is variable. We present δD values for cellulose, hemicelluloses and lignin of six plant species. The δD value for lignin is consistently lower than that of bulk tissue (by ˜50‰) and cellulose (by ˜100‰). We show that these differences can be used to assess the extent of degradation of organic matter from a single source. A decrease in the δDbulk of decomposing Spartina alterniflora roots and rhizomes from -72‰ to -87‰ was observed over 18 months, reflecting a relative enrichment of lignin content due to the preferential removal of polysaccharides from the detrital material. Similar changes in δ13C were observed previously during the degradation of these plant tissues. These findings indicate that the extent of organic matter degradation should be considered when using stable isotope approaches to assess possible sources of organic matter in soils and sediments. We show that the change in δDbulk of plant detritus is best described by an exponential equation, which is simpler than the multiple exponential decay (multi-G) model which best describes the change in δ13Cbulk of plant detritus. Therefore correcting for isotopic shifts caused by decomposition may be more easily accomplished using δD.
Adaptive kanban control mechanism for a single-stage hybrid system
NASA Astrophysics Data System (ADS)
Korugan, Aybek; Gupta, Surendra M.
2002-02-01
In this paper, we consider a hybrid manufacturing system with two discrete production lines. Here the output of either production line can satisfy the demand for the same type of product without any penalties. The interarrival times for demand occurrences and service completions are exponentially distributed i.i.d. variables. In order to control this type of manufacturing system we suggest a single stage pull type control mechanism with adaptive kanbans and state independent routing of the production information.
Makrinich, Maria; Gupta, Rupal; Polenova, Tatyana; Goldbourt, Amir
The ability of various pulse types, which are commonly applied for distance measurements, to saturate or invert quadrupolar spin polarization has been compared by observing their effect on magnetization recovery curves under magic-angle spinning. A selective central transition inversion pulse yields a bi-exponential recovery for a diamagnetic sample with a spin-3/2, consistent with the existence of two processes: the fluctuations of the electric field gradients with identical single-quantum (W1) and double-quantum (W2) quadrupolar-driven relaxation rates, and spin exchange between the central transition of one spin and satellite transitions of a dipolar-coupled similar spin. Using a phase modulated pulse, developed for distance measurements in quadrupolar spins (Nimerovsky et al., JMR 244, 2014, 107-113) and suggested for achieving the complete saturation of all quadrupolar spin energy levels, a mono-exponential relaxation model fits the data, compatible with elimination of the spin exchange processes. Other pulses, such as an adiabatic pulse lasting one-third of a rotor period and a two-rotor-period long continuous-wave pulse, both used for distance measurements under special experimental conditions, yield good fits to bi-exponential functions with varying coefficients and time constants due to variations in initial conditions. Those values are a measure of the extent of saturation obtained from these pulses. An empirical fit of the recovery curves to a stretched exponential function can provide general recovery times. A stretching parameter very close to unity, as obtained for a phase modulated pulse but not for the other cases, suggests that in this case recovery times and longitudinal relaxation times are similar. The results are experimentally demonstrated for compounds containing 11B (spin-3/2) and 51V (spin-7/2).
We propose that accurate spin-lattice relaxation rates can be measured by a short phase modulated pulse (<1-2 ms), similarly to the "true T1" measured by saturation with an asynchronous pulse train (Yesinowski, JMR 252, 2015, 135-144). Copyright © 2017 Elsevier Inc. All rights reserved.
Yang, Shiju; Li, Chuandong; Huang, Tingwen
2016-03-01
The problem of exponential stabilization and synchronization for fuzzy model of memristive neural networks (MNNs) is investigated by using periodically intermittent control in this paper. Based on the knowledge of memristor and recurrent neural network, the model of MNNs is formulated. Some novel and useful stabilization criteria and synchronization conditions are then derived by using the Lyapunov functional and differential inequality techniques. It is worth noting that the methods used in this paper are also applied to fuzzy model for complex networks and general neural networks. Numerical simulations are also provided to verify the effectiveness of theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.
Analysis of Dibenzothiophene Desulfurization in a Recombinant Pseudomonas putida Strain
Calzada, Javier; Zamarro, María T.; Alcón, Almudena; Santos, Victoria E.; Díaz, Eduardo; García, José L.; Garcia-Ochoa, Felix
2009-01-01
Biodesulfurization was monitored in a recombinant Pseudomonas putida CECT5279 strain. DszB desulfinase activity reached a sharp maximum at the early exponential phase, but it rapidly decreased at later growth phases. A model two-step resting-cell process combining sequentially P. putida cells from the late and early exponential growth phases was designed to significantly increase biodesulfurization. PMID:19047400
Erik A. Lilleskov
2017-01-01
Fungal respiration contributes substantially to ecosystem respiration, yet its field temperature response is poorly characterized. I hypothesized that at diurnal time scales, temperature-respiration relationships would be better described by unimodal than exponential models, and at longer time scales both Q10 and mass-specific respiration at 10 °...
Santori, G; Andorno, E; Morelli, N; Casaccia, M; Bottino, G; Di Domenico, S; Valente, U
2009-05-01
In many Western countries a "minimum volume rule" policy has been adopted as a quality measure for complex surgical procedures. In Italy, the National Transplant Centre set the minimum number of orthotopic liver transplantation (OLT) procedures/y at 25/center. OLT procedures performed in a single center over a reasonably long period may be treated as a time series to evaluate trend, seasonal cycles, and nonsystematic fluctuations. Between January 1, 1987 and December 31, 2006, we performed 563 cadaveric donor OLTs in adult recipients. During 2007, there were another 28 procedures. The greatest numbers of OLTs/y were performed in 2001 (n = 51), 2005 (n = 50), and 2004 (n = 49). A time series analysis performed using R Statistical Software (Foundation for Statistical Computing, Vienna, Austria), a free software environment for statistical computing and graphics, showed an incremental trend after exponential smoothing as well as after seasonal decomposition. The predicted OLT/mo for 2007, calculated with the Holt-Winters exponential smoothing applied to the previous period 1987-2006, helped to identify the months where there was a major difference between predicted and performed procedures. The time series approach may be helpful to establish a minimum volume/y at a single-center level.
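Holt-Winters smoothing as used above combines level, trend, and seasonal components; a minimal sketch of the trend-only variant (Holt's linear method) is below. The monthly counts are hypothetical illustrative numbers, not the center's data, and the smoothing constants are arbitrary.

```python
def holt_forecast(series, alpha=0.5, beta=0.3, horizon=3):
    """Holt's linear-trend exponential smoothing (the Holt-Winters family
    without the seasonal component). Returns point forecasts."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        last_level = level
        # Smooth the level toward the new observation, then update the trend.
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
    return [level + (h + 1) * trend for h in range(horizon)]

# Hypothetical monthly procedure counts (illustrative only).
monthly = [2, 3, 3, 4, 4, 5, 5, 6]
print(holt_forecast(monthly))
```

On a perfectly linear series the method reproduces the line exactly; comparing such forecasts to performed procedures flags the anomalous months.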
Porto, Markus; Roman, H Eduardo
2002-04-01
We consider autoregressive conditional heteroskedasticity (ARCH) processes in which the variance σ²(y) depends linearly on the absolute value of the random variable y, σ²(y) = a + b|y|. While for the standard model, where σ²(y) = a + b y², the corresponding probability distribution function (PDF) P(y) decays as a power law for |y| → ∞, in the linear case it decays exponentially, P(y) ~ exp(-α|y|), with α = 2/b. We extend these results to the more general case σ²(y) = a + b|y|^q, with 0 < q < 2. We find stretched exponential decay for 1 < q < 2 and stretched Gaussian behavior for 0 < q < 1. As an application, we consider the case q = 1 as our starting scheme for modeling the PDF of daily (logarithmic) variations in the Dow Jones stock market index. When the history of the ARCH process is taken into account, the resulting PDF becomes a stretched exponential even for q = 1, with a stretching exponent β = 2/3, in much better agreement with the empirical data.
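A minimal simulation of the linear-variance ARCH process is sketched below, with arbitrary illustrative parameters a and b. Heavier-than-Gaussian (exponential-type) tails show up as positive excess kurtosis in the simulated series.

```python
import numpy as np

rng = np.random.default_rng(0)

def arch_linear(a, b, n):
    """Simulate an ARCH process with linear variance dependence:
    sigma^2 = a + b*|y_{t-1}|, y_t = sigma * xi_t with Gaussian xi."""
    y = np.empty(n)
    y[0] = 0.0
    for t in range(1, n):
        sigma2 = a + b * abs(y[t - 1])
        y[t] = np.sqrt(sigma2) * rng.standard_normal()
    return y

y = arch_linear(a=1.0, b=0.5, n=50_000)

# A normal variance mixture always has positive excess kurtosis, so the
# exponential-tailed PDF should show kurt > 0 (Gaussian would give ~0).
kurt = float(np.mean((y - y.mean()) ** 4) / np.var(y) ** 2 - 3.0)
print(round(kurt, 2))
```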
Statistical modeling of storm-level Kp occurrences
Remick, K.J.; Love, J.J.
2006-01-01
We consider the statistical modeling of the occurrence in time of large Kp magnetic storms as a Poisson process, testing whether or not relatively rare, large Kp events can be considered to arise from a stochastic, sequential, and memoryless process. For a Poisson process, the wait times between successive events occur statistically with an exponential density function. Fitting an exponential function to the durations between successive large Kp events forms the basis of our analysis. Defining these wait times by calculating the differences between times when Kp exceeds a certain value, such as Kp ≥ 5, we find the wait-time distribution is not exponential. Because large storms often have several periods with large Kp values, their occurrence in time is not memoryless; short duration wait times are not independent of each other and are often clumped together in time. If we remove same-storm large Kp occurrences, the resulting wait times are very nearly exponentially distributed and the storm arrival process can be characterized as Poisson. Fittings are performed on wait time data for Kp ≥ 5, 6, 7, and 8. The mean wait times between storms exceeding such Kp thresholds are 7.12, 16.55, 42.22, and 121.40 days, respectively.
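For an exponential density the maximum likelihood estimate of the mean wait is just the sample mean, and the mean equals the standard deviation, which gives a quick diagnostic for Poisson-like behavior. A sketch with synthetic wait times (using the paper's Kp ≥ 7 mean spacing only as an illustrative scale):

```python
import numpy as np

rng = np.random.default_rng(1)

true_mean = 42.22                       # illustrative mean storm spacing (days)
waits = rng.exponential(true_mean, size=5000)

mean_hat = waits.mean()                 # MLE of the exponential mean
std_hat = waits.std(ddof=1)             # for an exponential, std ~= mean
print(round(float(mean_hat), 1), round(float(std_hat), 1))
```

Clumped same-storm occurrences would inflate the standard deviation relative to the mean, which is exactly the departure from exponentiality the analysis detects.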
NASA Technical Reports Server (NTRS)
Lindner, Bernhard Lee; Ackerman, Thomas P.; Pollack, James B.
1990-01-01
CO2 comprises 95 pct. of the composition of the Martian atmosphere. However, the Martian atmosphere also has a high aerosol content. Dust particles vary from less than 0.2 to greater than 3.0. CO2 is an active absorber and emitter in near IR and IR wavelengths; the near IR absorption bands of CO2 provide significant heating of the atmosphere, and the 15 micron band provides rapid cooling. Including both CO2 and aerosol radiative transfer simultaneously in a model is difficult. Aerosol radiative transfer requires a multiple scattering code, while CO2 radiative transfer must deal with complex wavelength structure. As an alternative to the pure atmosphere treatment in most models which causes inaccuracies, a treatment was developed called the exponential sum or k distribution approximation. The chief advantage of the exponential sum approach is that the integration over k space of f(k) can be computed more quickly than the integration of k sub upsilon over frequency. The exponential sum approach is superior to the photon path distribution and emissivity techniques for dusty conditions. This study was the first application of the exponential sum approach to Martian conditions.
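The k-distribution idea above replaces a frequency integral over a jagged absorption spectrum with a short weighted sum over sorted absorption coefficients. A sketch with a mock lognormal spectrum (not Martian CO2 line data) compares the two:

```python
import numpy as np

rng = np.random.default_rng(2)
k_nu = rng.lognormal(mean=0.0, sigma=2.0, size=100_000)   # mock absorption spectrum
u = 0.3                                                   # absorber amount

# Line-by-line reference: average transmission over frequency.
t_lbl = float(np.exp(-k_nu * u).mean())

# Exponential-sum approximation: a few quantiles of the k-distribution f(k),
# integrated over cumulative probability instead of frequency.
n_terms = 16
k_quantiles = np.quantile(k_nu, (np.arange(n_terms) + 0.5) / n_terms)
t_esft = float(np.exp(-k_quantiles * u).mean())
print(round(t_lbl, 3), round(t_esft, 3))
```

Sixteen terms stand in for the ~10^5 frequency points, which is the speed advantage the abstract notes.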
Aston, Elizabeth; Channon, Alastair; Day, Charles; Knight, Christopher G.
2013-01-01
Understanding the effect of population size on the key parameters of evolution is particularly important for populations nearing extinction. There are evolutionary pressures to evolve sequences that are both fit and robust. At high mutation rates, individuals with greater mutational robustness can outcompete those with higher fitness. This is survival-of-the-flattest, and has been observed in digital organisms, theoretically, in simulated RNA evolution, and in RNA viruses. We introduce an algorithmic method capable of determining the relationship between population size, the critical mutation rate at which individuals with greater robustness to mutation are favoured over individuals with greater fitness, and the error threshold. Verification for this method is provided against analytical models for the error threshold. We show that the critical mutation rate for increasing haploid population sizes can be approximated by an exponential function, with much lower mutation rates tolerated by small populations. This is in contrast to previous studies which identified that critical mutation rate was independent of population size. The algorithm is extended to diploid populations in a system modelled on the biological process of meiosis. The results confirm that the relationship remains exponential, but show that both the critical mutation rate and error threshold are lower for diploids, rather than higher as might have been expected. Analyzing the transition from critical mutation rate to error threshold provides an improved definition of critical mutation rate. Natural populations with their numbers in decline can be expected to lose genetic material in line with the exponential model, accelerating and potentially irreversibly advancing their decline, and this could potentially affect extinction, recovery and population management strategy. 
The effect of population size is particularly strong in small populations with 100 individuals or less; the exponential model has significant potential in aiding population management to prevent local (and global) extinction events. PMID:24386200
Nonlinear analogue of the May−Wigner instability transition
Fyodorov, Yan V.; Khoruzhenko, Boris A.
2016-01-01
We study a system of N≫1 degrees of freedom coupled via a smooth homogeneous Gaussian vector field with both gradient and divergence-free components. In the absence of coupling, the system is exponentially relaxing to an equilibrium with rate μ. We show that, while increasing the ratio of the coupling strength to the relaxation rate, the system experiences an abrupt transition from a topologically trivial phase portrait with a single equilibrium into a topologically nontrivial regime characterized by an exponential number of equilibria, the vast majority of which are expected to be unstable. It is suggested that this picture provides a global view on the nature of the May−Wigner instability transition originally discovered by local linear stability analysis. PMID:27274077
Anderson localization of partially incoherent light
DOE Office of Scientific and Technical Information (OSTI.GOV)
Capeta, D.; Radic, J.; Buljan, H.
We study Anderson localization and propagation of partially spatially incoherent wavepackets in linear disordered potentials, motivated by the insight that interference phenomena resulting from multiple scattering are affected by the coherence of the waves. We find that localization is delayed by incoherence: the more incoherent the waves are, the longer they diffusively spread while propagating in the medium. However, if all the eigenmodes of the system are exponentially localized (as in one- and two-dimensional disordered systems), any partially incoherent wavepacket eventually exhibits localization with exponentially decaying tails, after sufficiently long propagation distances. Interestingly, we find that the asymptotic behavior of the incoherent beam is similar to that of a single instantaneous coherent realization of the beam.
Using phenomenological models for forecasting the 2015 Ebola challenge.
Pell, Bruce; Kuang, Yang; Viboud, Cecile; Chowell, Gerardo
2018-03-01
The rising number of novel pathogens threatening the human population has motivated the application of mathematical modeling for forecasting the trajectory and size of epidemics. We summarize the real-time forecasting results of the logistic equation during the 2015 Ebola challenge focused on predicting synthetic data derived from a detailed individual-based model of Ebola transmission dynamics and control. We also carry out a post-challenge comparison of two simple phenomenological models. In particular, we systematically compare the logistic growth model and a recently introduced generalized Richards model (GRM) that captures a range of early epidemic growth profiles ranging from sub-exponential to exponential growth. Specifically, we assess the performance of each model for estimating the reproduction number, generate short-term forecasts of the epidemic trajectory, and predict the final epidemic size. During the challenge the logistic equation consistently underestimated the final epidemic size, peak timing and the number of cases at peak timing with an average mean absolute percentage error (MAPE) of 0.49, 0.36 and 0.40, respectively. Post-challenge, the GRM which has the flexibility to reproduce a range of epidemic growth profiles ranging from early sub-exponential to exponential growth dynamics outperformed the logistic growth model in ascertaining the final epidemic size as more incidence data was made available, while the logistic model underestimated the final epidemic even with an increasing amount of data of the evolving epidemic. Incidence forecasts provided by the generalized Richards model performed better across all scenarios and time points than the logistic growth model with mean RMS decreasing from 78.00 (logistic) to 60.80 (GRM). 
Both models provided reasonable predictions of the effective reproduction number, but the GRM slightly outperformed the logistic growth model with a MAPE of 0.08 compared to 0.10, averaged across all scenarios and time points. Our findings further support the consideration of transmission models that incorporate flexible early epidemic growth profiles in the forecasting toolkit. Such models are particularly useful for quickly evaluating a developing infectious disease outbreak using only case incidence time series of the early phase of an infectious disease outbreak. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
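The generalized Richards model compared above has the form dC/dt = r C^p (1 - (C/K)^a), where p < 1 gives sub-exponential early growth and p = 1 recovers exponential growth. A forward-Euler sketch with made-up parameter values (not the challenge's fitted values):

```python
import numpy as np

def grm_trajectory(r, p, K, a, c0, dt=0.05, steps=4000):
    """Integrate the generalized Richards model dC/dt = r*C**p * (1 - (C/K)**a)
    with forward Euler; C is the cumulative case count."""
    c = np.empty(steps)
    c[0] = c0
    for t in range(1, steps):
        c[t] = c[t - 1] + dt * r * c[t - 1] ** p * (1.0 - (c[t - 1] / K) ** a)
    return c

# p = 0.9 gives early sub-exponential growth; a = 1, p = 1 would recover
# the logistic model whose final-size underestimates are discussed above.
traj = grm_trajectory(r=0.3, p=0.9, K=10_000.0, a=1.0, c0=5.0)
print(round(float(traj[-1])))
```

The trajectory rises monotonically and saturates at the final epidemic size K, the quantity the two models were scored on.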
NASA Astrophysics Data System (ADS)
Jamaluddin, Fadhilah; Rahim, Rahela Abdul
2015-12-01
Markov chains have been in use since 1913 for studying the flow of data over consecutive years and for forecasting. The important feature of a Markov chain is obtaining an accurate Transition Probability Matrix (TPM). However, obtaining a suitable TPM is hard, especially in long-term modeling, due to unavailability of data. This paper aims to enhance the classical Markov chain by introducing an exponential smoothing technique for developing the appropriate TPM.
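One simple way to combine the two ideas, sketched below with hypothetical two-state matrices (the paper's actual scheme may differ), is to exponentially smooth each period's empirically estimated TPM into a running estimate and renormalize the rows:

```python
import numpy as np

def smooth_tpm(tpm_prev, tpm_new, alpha=0.3):
    """Blend this period's empirically estimated transition probability matrix
    with the running estimate via exponential smoothing; renormalize rows so
    each remains a probability distribution."""
    blended = alpha * tpm_new + (1 - alpha) * tpm_prev
    return blended / blended.sum(axis=1, keepdims=True)

# Hypothetical row-stochastic matrices for a two-state chain.
tpm_prev = np.array([[0.8, 0.2],
                     [0.4, 0.6]])
tpm_new = np.array([[0.6, 0.4],
                    [0.5, 0.5]])
tpm = smooth_tpm(tpm_prev, tpm_new)
print(tpm.round(2))
```

Smoothing damps sampling noise in years with little data, which is the scarcity problem the abstract raises.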
NASA Technical Reports Server (NTRS)
Cogley, A. C.; Borucki, W. J.
1976-01-01
When incorporating formulations of instantaneous solar heating or photolytic rates as functions of altitude and sun angle into long range forecasting models, it may be desirable to replace the time integrals by daily average rates that are simple functions of latitude and season. This replacement is accomplished by approximating the integral over the solar day by a pure exponential. This gives a daily average rate as a multiplication factor times the instantaneous rate evaluated at an appropriate sun angle. The accuracy of the exponential approximation is investigated by a sample calculation using an instantaneous ozone heating formulation available in the literature.
Count distribution for mixture of two exponentials as renewal process duration with applications
NASA Astrophysics Data System (ADS)
Low, Yeh Ching; Ong, Seng Huat
2016-06-01
A count distribution is presented by considering a renewal process where the distribution of the duration is a finite mixture of exponential distributions. This distribution is able to model over dispersion, a feature often found in observed count data. The computation of the probabilities and renewal function (expected number of renewals) are examined. Parameter estimation by the method of maximum likelihood is considered with applications of the count distribution to real frequency count data exhibiting over dispersion. It is shown that the mixture of exponentials count distribution fits over dispersed data better than the Poisson process and serves as an alternative to the gamma count distribution.
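Overdispersion from hyperexponential (mixture-of-exponentials) renewal durations can be seen directly by simulation. The sketch below uses arbitrary mixture weights and means, not parameters fit to the paper's data, and checks that the variance-to-mean ratio of the counts exceeds the Poisson value of 1.

```python
import numpy as np

rng = np.random.default_rng(3)

def renewal_counts(weights, means, horizon, n_paths):
    """Simulate counts N(horizon) for a renewal process whose durations are a
    finite mixture of exponentials (a hyperexponential distribution)."""
    counts = np.zeros(n_paths, dtype=int)
    for i in range(n_paths):
        t = 0.0
        while True:
            comp = rng.choice(len(weights), p=weights)   # pick mixture component
            t += rng.exponential(means[comp])            # draw one duration
            if t > horizon:
                break
            counts[i] += 1
    return counts

counts = renewal_counts(weights=[0.7, 0.3], means=[0.2, 3.0],
                        horizon=10.0, n_paths=2000)
dispersion = float(counts.var() / counts.mean())
print(round(dispersion, 2))
```

For a Poisson process the dispersion index is exactly 1; the mixture's duration variability pushes it well above 1, the overdispersion the count distribution is built to model.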
Khan, Junaid Ahmad; Mustafa, M.; Hayat, T.; Sheikholeslami, M.; Alsaedi, A.
2015-01-01
This work deals with the three-dimensional flow of nanofluid over a bi-directional exponentially stretching sheet. The effects of Brownian motion and thermophoretic diffusion of nanoparticles are considered in the mathematical model. The temperature and nanoparticle volume fraction at the sheet are also distributed exponentially. Local similarity solutions are obtained by an implicit finite difference scheme known as Keller-box method. The results are compared with the existing studies in some limiting cases and found in good agreement. The results reveal the existence of interesting Sparrow-Gregg-type hills for temperature distribution corresponding to some range of parametric values. PMID:25785857
Déjardin, P
2013-08-30
The flow conditions in normal mode asymmetric flow field-flow fractionation are determined to approach the high retention limit with the requirement d≪l≪w, where d is the particle diameter, l the characteristic length of the sample exponential distribution and w the channel height. The optimal entrance velocity is determined from the solute characteristics, the channel geometry (exponential to rectangular) and the membrane properties, according to a model providing the velocity fields all over the cell length. In addition, a method is proposed for in situ determination of the channel height. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Errington, Jeffrey Richard
This work focuses on the development of intermolecular potential models for real fluids. United-atom models have been developed for both non-polar and polar fluids. The models have been optimized to the vapor-liquid coexistence properties. Histogram reweighting techniques were used to calculate phase behavior. The Hamiltonian scaling grand canonical Monte Carlo method was developed to enable the determination of thermodynamic properties of several related Hamiltonians from a single simulation. With this method, the phase behavior of variations of the Buckingham exponential-6 potential was determined. Reservoir grand canonical Monte Carlo simulations were developed to simulate molecules with complex architectures and/or stiff intramolecular constraints. The scheme is based on the creation of a reservoir of ideal chains from which structures are selected for insertion during a simulation. New intermolecular potential models have been developed for water, the n-alkane homologous series, benzene, cyclohexane, carbon dioxide, ammonia and methanol. The models utilize the Buckingham exponential-6 potential to model non-polar interactions and point charges to describe polar interactions. With the exception of water, the new models reproduce experimental saturated densities, vapor pressures and critical parameters to within a few percent. In the case of water, we found a set of parameters that describes the phase behavior better than other available point charge models while giving a reasonable description of the liquid structure. The mixture behavior of water-hydrocarbon mixtures has also been examined. The Henry's law constants of methane, ethane, benzene and cyclohexane in water were determined using Widom insertion and expanded ensemble techniques. In addition the high-pressure phase behavior of water-methane and water-ethane systems was studied using the Gibbs ensemble method. 
The results from this study indicate that it is possible to obtain a good description of the phase behavior of pure components using united-atom models. The mixture behavior of non-polar systems, including highly asymmetric components, was in good agreement with experiment. The calculations for the highly non-ideal water-hydrocarbon mixtures reproduced experimental behavior with varying degrees of success. The results indicate that multibody effects, such as polarizability, must be taken into account when modeling mixtures of polar and non-polar components.
NASA Astrophysics Data System (ADS)
Kamimura, Atsushi; Kaneko, Kunihiko
2018-03-01
Explanation of exponential growth in self-reproduction is an important step toward elucidation of the origins of life because optimization of the growth potential across rounds of selection is necessary for Darwinian evolution. To produce another copy with approximately the same composition, the exponential growth rates for all components have to be equal. How such balanced growth is achieved, however, is not a trivial question, because this kind of growth requires orchestrated replication of the components in stochastic and nonlinear catalytic reactions. By considering a mutually catalyzing reaction in two- and three-dimensional lattices, as represented by a cellular automaton model, we show that self-reproduction with exponential growth is possible only when the replication and degradation of one molecular species is much slower than those of the others, i.e., when there is a minority molecule. Here, the synergetic effect of molecular discreteness and crowding is necessary to produce the exponential growth. Otherwise, the growth curves show superexponential growth because of nonlinearity of the catalytic reactions or subexponential growth due to replication inhibition by overcrowding of molecules. Our study emphasizes that the minority molecular species in a catalytic reaction network is necessary for exponential growth at the primitive stage of life.
Prediction of Unsteady Aerodynamic Coefficients at High Angles of Attack
NASA Technical Reports Server (NTRS)
Pamadi, Bandu N.; Murphy, Patrick C.; Klein, Vladislav; Brandon, Jay M.
2001-01-01
The nonlinear indicial response method is used to model the unsteady aerodynamic coefficients in the low speed longitudinal oscillatory wind tunnel test data of the 0.1 scale model of the F-16XL aircraft. Exponential functions are used to approximate the deficiency function in the indicial response. Using one set of oscillatory wind tunnel data and parameter identification method, the unknown parameters in the exponential functions are estimated. The genetic algorithm is used as a least square minimizing algorithm. The assumed model structures and parameter estimates are validated by comparing the predictions with other sets of available oscillatory wind tunnel test data.
NASA Astrophysics Data System (ADS)
Grobbelaar-Van Dalsen, Marié
2015-08-01
This article is a continuation of our earlier work in Grobbelaar-Van Dalsen (Z Angew Math Phys 63:1047-1065, 2012) on the polynomial stabilization of a linear model for the magnetoelastic interactions in a two-dimensional electrically conducting Mindlin-Timoshenko plate. We introduce nonlinear damping that is effective only in a small portion of the interior of the plate. It turns out that the model is uniformly exponentially stable when the function that represents the locally distributed damping behaves linearly near the origin. However, the use of Mindlin-Timoshenko plate theory in the model enforces a restriction on the region occupied by the plate.
NASA Technical Reports Server (NTRS)
1971-01-01
A study of techniques for the prediction of crime in the City of Los Angeles was conducted. Alternative approaches to crime prediction (causal, quasicausal, associative, extrapolative, and pattern-recognition models) are discussed, as is the environment within which predictions were desired for the immediate application. The decision was made to use time series (extrapolative) models to produce the desired predictions. The characteristics of the data and the procedure used to choose equations for the extrapolations are discussed. The usefulness of different functional forms (constant, quadratic, and exponential forms) and of different parameter estimation techniques (multiple regression and multiple exponential smoothing) are compared, and the quality of the resultant predictions is assessed.
Transient Postseismic Relaxation With Burger's Body Viscoelasticity
NASA Astrophysics Data System (ADS)
Hetland, E. A.; Hager, B. H.; O'Connell, R. J.
2002-12-01
Typical models used to investigate postseismic deformation are composed of an elastic layer over a Maxwell viscoelastic region. Geodetic observations made after a number of large earthquakes show a rapid exponential decay in postseismic velocity immediately after the rupture, followed by a more slowly decaying (or constant) velocity at a later time. Models of a Maxwell viscoelastic interior predict a single exponential postseismic velocity relaxation. To account for observed rapid, short-term relaxation decay, surprisingly low viscosities in the lower-crust or upper-mantle have been proposed. To model the difference in short and long time decay rates, the Maxwell element is sometimes modified to have a non-linear rheology, which results in a lower effective viscosity immediately after the rupture, evolving to a higher effective viscosity as the co-seismic stresses relax. Incorporation of models of after-slip in the lower crust on a down-dip extension of the fault have also had some success at modeling the above observations. When real rocks are subjected to a sudden change in stress or strain, e.g., that caused by an earthquake, they exhibit a transient response. The transient deformation is typically accommodated by grain boundary sliding and the longer-time deformation is accommodated by motion of dislocations. Both a short-term transient response and long-term steady creep are exhibited by a Burger's body, a Maxwell element (a spring in series with a viscous dash-pot) in series with a Voigt element (a spring in parallel with a viscous dash-pot). Typically the (transient) viscosity of the Voigt element is 10 - 100 times less than the (steady) viscosity of the Maxwell element. Thus, with a Burger's body, stress relaxation is a superposition of two exponential decays. 
For a model composed of an elastic layer over a viscoelastic region, the coseismic changes in stress (and strain) depend only on the elastic moduli, and are independent of the description of the viscous component of the rheology. In a Burger's body model of viscoelasticity, if the viscosity of the Voigt element is much less than that of the Maxwell element, the initial relaxation time is given by the decay time τ = η_Voigt/G_Maxwell, whereas for a Maxwell rheology the initial relaxation time is given by τ = η_Maxwell/G_Maxwell. For both models, the initial spatial distribution of stresses is the same, which results in identical initial spatial distributions of velocities. Thus it is easy to mistake the transient response of a Burger's body for that of a Maxwell rheology with unrealistically low viscosity. Only later in the seismic cycle do the spatial patterns of velocity differ for the two rheologies.
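The two-exponential superposition described above is easy to sketch numerically. The moduli and viscosities below are generic illustrative values (a single shared shear modulus, transient viscosity 30x smaller than the steady one), not parameters from any geodetic inversion.

```python
import numpy as np

G = 30e9                          # shear modulus (Pa), shared here for simplicity
eta_maxwell = 1e19                # steady-state (Maxwell) viscosity (Pa s)
eta_voigt = eta_maxwell / 30.0    # transient (Voigt) viscosity, 30x smaller

tau_fast = eta_voigt / G          # initial (transient) relaxation time
tau_slow = eta_maxwell / G        # long-term (Maxwell) relaxation time

# Burger's-body stress relaxation as a superposition of two exponential decays
# (equal weights chosen arbitrarily for illustration).
t = np.linspace(0.0, 5 * tau_slow, 1000)
relax = 0.5 * np.exp(-t / tau_fast) + 0.5 * np.exp(-t / tau_slow)

years = 365.25 * 24 * 3600.0
print(round(tau_fast / years, 2), round(tau_slow / years, 2))
```

The fast Voigt decay (months here) is what could be mistaken for a Maxwell rheology with anomalously low viscosity; the slow decade-scale decay carries the long-term signal.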
Color Profile Trends of Dwarf Galaxies
NASA Astrophysics Data System (ADS)
Herrmann, Kimberly A.; LITTLE THINGS Team
2012-01-01
Radial stellar surface brightness profiles of spiral galaxies can be classified into three types: (I) single exponential, (II) truncated: the light falls off with one exponential out to a break radius and then falls off more steeply, and (III) anti-truncated: the light falls off with one exponential out to a break radius and then falls off less steeply. Stellar surface brightness profile breaks are also found in dwarf disk galaxies, but with an additional category: (FI) flat-inside: the light is roughly constant or increasing and then falls off beyond a break. Additionally, Bakos, Trujillo, & Pohlen (2008) showed that for spirals, each profile type has a characteristic color trend with respect to the break location. Furthermore, color trends reveal information about possible stellar population changes at the breaks. Here we show color trends for the four profile types from a large multi-wavelength photometric study of dwarf disk galaxies (the 141 dwarf parent sample of the LITTLE THINGS galaxies). We explore the similarities and differences between spirals and dwarfs and also between different colors. We gratefully acknowledge funding for this research from the National Science Foundation (AST-0707563).
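The profile types above can be expressed as one broken-exponential form; the function below is a common parametrization (names are illustrative), with a Type I profile recovered when the two scale lengths are equal:

```python
import math

def broken_exponential(r, i0, h_inner, h_outer, r_break):
    """Surface-brightness profile (linear intensity units) with scale
    length h_inner inside the break radius and h_outer outside.
    h_outer < h_inner gives a Type II (truncated) profile,
    h_outer > h_inner a Type III (anti-truncated) one."""
    if r <= r_break:
        return i0 * math.exp(-r / h_inner)
    i_break = i0 * math.exp(-r_break / h_inner)  # continuity at the break
    return i_break * math.exp(-(r - r_break) / h_outer)
```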
Estimating piecewise exponential frailty model with changing prior for baseline hazard function
NASA Astrophysics Data System (ADS)
Thamrin, Sri Astuti; Lawi, Armin
2016-02-01
Piecewise exponential models provide a very flexible framework for modelling univariate survival data. They can be used to estimate the effects of different covariates on survival. Although in a strict sense it is a parametric model, a piecewise exponential hazard can approximate any shape of parametric baseline hazard. In the parametric baseline hazard, the hazard function for each individual may depend on a set of risk factors or explanatory variables. However, the model usually does not capture all such variables, even those that are known or measurable, and this residual variation is important to consider. This unknown and unobservable risk factor of the hazard function is often termed the individual's heterogeneity, or frailty. This paper analyses the effects of unobserved population heterogeneity in patients' survival times. The issue of model choice through variable selection is also considered. A sensitivity analysis is conducted to assess the influence of the prior for each parameter. We used the Markov chain Monte Carlo method to compute the Bayesian estimator on kidney infection data. The results obtained show that sex and frailty are substantially associated with survival in this study and that the models are quite sensitive to the choice of two different priors.
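A minimal sketch of the piecewise-constant-hazard likelihood underlying such models (function and argument names are illustrative; the frailty terms and priors discussed in the paper are omitted):

```python
import numpy as np

def piecewise_exp_loglik(times, events, cuts, lambdas):
    """Log-likelihood of a piecewise exponential (piecewise-constant
    hazard) survival model. `cuts` are interval start points beginning
    at 0, `lambdas` the constant hazard in each interval; `events` is 1
    for an observed event, 0 for censoring."""
    ll = 0.0
    edges = np.concatenate([cuts, [np.inf]])
    for t, d in zip(times, events):
        cum = 0.0                       # cumulative hazard up to time t
        for j in range(len(lambdas)):
            lo, hi = edges[j], edges[j + 1]
            if t <= lo:
                break
            cum += lambdas[j] * (min(t, hi) - lo)
            if lo < t <= hi and d:      # event occurred in this interval
                ll += np.log(lambdas[j])
        ll -= cum
    return ll
```

With a single interval this reduces to the ordinary exponential log-density, which makes a convenient sanity check.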
The Impact of Accelerating Faster than Exponential Population Growth on Genetic Variation
Reppell, Mark; Boehnke, Michael; Zöllner, Sebastian
2014-01-01
Current human sequencing projects observe an abundance of extremely rare genetic variation, suggesting recent acceleration of population growth. To better understand the impact of such accelerating growth on the quantity and nature of genetic variation, we present a new class of models capable of incorporating faster than exponential growth in a coalescent framework. Our work shows that such accelerated growth affects only the population size in the recent past and thus large samples are required to detect the models’ effects on patterns of variation. When we compare models with fixed initial growth rate, models with accelerating growth achieve very large current population sizes and large samples from these populations contain more variation than samples from populations with constant growth. This increase is driven almost entirely by an increase in singleton variation. Moreover, linkage disequilibrium decays faster in populations with accelerating growth. When we instead condition on current population size, models with accelerating growth result in less overall variation and slower linkage disequilibrium decay compared to models with exponential growth. We also find that pairwise linkage disequilibrium of very rare variants contains information about growth rates in the recent past. Finally, we demonstrate that models of accelerating growth may substantially change estimates of present-day effective population sizes and growth times. PMID:24381333
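The contrast between constant-rate and accelerating growth can be sketched directly. The exponentially accelerating rate r(s) = r0*exp(a*s) below is an illustrative choice; the paper's model class is more general:

```python
import math

def pop_size_exponential(t, n0, r):
    """Population size t generations after size n0 at constant rate r."""
    return n0 * math.exp(r * t)

def pop_size_accelerating(t, n0, r0, a):
    """Faster-than-exponential growth with rate r(s) = r0 * exp(a * s):
    integrating the rate gives N(t) = n0 * exp(r0 * (exp(a*t) - 1) / a).
    As a -> 0 this reduces to constant-rate exponential growth."""
    return n0 * math.exp(r0 * (math.exp(a * t) - 1.0) / a)
```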
NASA Technical Reports Server (NTRS)
Curreri, Peter A.; Nall, Mark
2013-01-01
The cost of energy is humanity's economic exchange rate with the universe. Space solar power is the first great step for our technological species to utilize the energy of its star. The classic Peter Glaser Solar Power Satellite (SPS) and later designs collect solar energy over a large area in space and beam it back to Earth for use in the electric grid, but even with optimistic launch costs and technology innovation, a clear economic path is not evident using Earth launch of SPS. O'Neill in 1969 solved the transportation cost problem with a model that uses lunar and asteroid materials to build SPS and locates the labor force permanently in space (O'Neill free-space habitats). This solution closes the economic case and predicts large profits after 17-35 years. However, the costs of time have up to now prevented this solution. We discuss a strategy to move forward in SPS, motivated by the need to stop global warming and prevent human self-extinction. There are near-term steps that can be taken to place us on this path. First, we must reevaluate the technologies for the classic model and update the parameters to current technology. As technological capability continues to increase exponentially, we need to identify when the monetary potential-energy hills shrink as the technology matures. But the chance of self-extinction, if humanity remains in a single vulnerable habitat, also increases exponentially with time. The path forward is to identify investment points while assessing the risks of non-action.
Simultaneous water activation and glucose metabolic rate imaging with PET
NASA Astrophysics Data System (ADS)
Verhaeghe, Jeroen; Reader, Andrew J.
2013-02-01
A novel imaging and signal separation strategy is proposed to separate [18F]FDG and multiple [15O]H2O signals from a simultaneously acquired dynamic PET acquisition of the two tracers. The technique is based on the fact that the dynamics of the two tracers are very distinct. By adopting an appropriate bolus injection strategy and by defining tailored sets of basis functions that model either the FDG or the water component, it is possible to separate the FDG and water signals. The basis functions are inspired by the spectral-analysis description of dynamic PET studies and are defined as the convolution of estimated generating functions (GFs) with a set of decaying exponential functions. The GFs are estimated from the overall measured head curve, while the decaying exponential functions are pre-determined. In this work, the time activity curves (TACs) are modelled post-reconstruction, but the model can be incorporated into a global 4D reconstruction strategy. Extensive PET simulation studies are performed considering a single [18F]FDG and 6 [15O]H2O bolus injections for a total acquisition time of 75 min. The proposed method is evaluated at multiple noise levels, and several quantities are estimated: [18F]FDG uptake and blood flow from the [15O]H2O component (requiring a full dynamic analysis of the two components), static images of the [18F]FDG and water components, and [15O]H2O activation. It is shown that the resulting images and parametric values in ROIs are comparable to images obtained from separate imaging, illustrating the feasibility of simultaneous imaging of [18F]FDG and [15O]H2O components. For more information on this article, see medicalphysicsweb.org
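The basis-function construction can be sketched as a discrete convolution; the generating function and rate constants below are placeholders, not values from the study:

```python
import numpy as np

def spectral_basis(gf, betas, dt):
    """Basis functions for the tracer model: the convolution of an
    estimated generating function `gf` (sampled every `dt` time units)
    with pre-determined decaying exponentials exp(-beta * t)."""
    n = len(gf)
    t = np.arange(n) * dt
    return np.array([np.convolve(gf, np.exp(-beta * t))[:n] * dt
                     for beta in betas])

# Placeholder generating function: a unit impulse at t = 0
gf = np.zeros(64)
gf[0] = 1.0
basis = spectral_basis(gf, betas=[0.01, 0.1, 1.0], dt=1.0)
```

Convolving an impulse simply recovers the exponential kernels, which makes the construction easy to verify before substituting an estimated head-curve-derived GF.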
NASA Astrophysics Data System (ADS)
Terazono, Hideyuki; Kim, Hyonchol; Nomura, Fumimasa; Yasuda, Kenji
2016-06-01
We developed a microprocessing-assisted technique to select single-strand DNA aptamers that bind to unknown targets on the cell surface by modifying the conventional systematic evolution of ligands by exponential enrichment (cell-SELEX). Our technique involves 1) the specific selection of target-cell-surface-bound aptamers without leakage of intracellular components by trypsinization and 2) cloning of aptamers by microprocessing-assisted picking of single cells using magnetic beads. After cell-SELEX, the enriched aptamers were conjugated with magnetic beads. The aptamer-magnetic beads conjugates attached to target cells were collected individually by microassisted procedures using microneedles under a microscope. After that, the sequences of the collected magnetic-bead-bound aptamers were identified. As a result, a specific aptamer for the surface of target cells, e.g., human umbilical vein endothelial cells (HUVECs), was chosen and its specificity was examined using other cell types, e.g., HeLa cells. The results indicate that this microprocessing-assisted cell-SELEX method for identifying aptamers is applicable in biological research and clinical diagnostics.
NASA Astrophysics Data System (ADS)
Yao, Weiping; Yang, Chaohui; Jing, Jiliang
2018-05-01
From the viewpoint of holography, we study the behavior of the entanglement entropy in the insulator/superconductor transition with exponential nonlinear electrodynamics (ENE). We find that the entanglement entropy is a good probe of the properties of the holographic phase transition. In both the half space and the belt space, the non-monotonic behavior of the entanglement entropy in the superconducting phase versus the chemical potential is generic in this model. Furthermore, the behavior of the entanglement entropy for the strip geometry shows that the confinement/deconfinement phase transition appears in both the insulator and superconductor phases, and the critical width of this transition depends on the chemical potential and the exponential coupling term. More interestingly, the entanglement entropy in the corresponding insulator phases is independent of the exponential coupling factor but depends on the width of the subsystem A.
Mazaheri, Davood; Shojaosadati, Seyed Abbas; Zamir, Seyed Morteza; Mousavi, Seyyed Mohammad
2018-04-21
In this work, mathematical modeling of ethanol production in solid-state fermentation (SSF) has been carried out based on the variation in the dry weight of the solid medium. This method was previously used for mathematical modeling of enzyme production; however, the model must be modified to predict the production of a volatile compound like ethanol. The experimental results of bioethanol production from a mixture of carob pods and wheat bran by Zymomonas mobilis in SSF were used for model validation. Exponential and logistic kinetic models were used for modeling the growth of the microorganism. In both cases, the model predictions matched the experimental results well during the exponential growth phase, indicating the suitability of the solid-medium weight-variation method for modeling volatile product formation in solid-state fermentation. In addition, the logistic model gave better predictions.
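The two kinetic forms used for the biomass can be written down directly (symbols are generic, not the paper's fitted parameters):

```python
import math

def exponential_growth(t, x0, mu):
    """Unrestricted exponential growth of biomass x at specific rate mu."""
    return x0 * math.exp(mu * t)

def logistic_growth(t, x0, mu, xmax):
    """Logistic growth: exponential at first, saturating at xmax."""
    return xmax / (1.0 + (xmax / x0 - 1.0) * math.exp(-mu * t))
```

Early in fermentation the two curves coincide, which is why both fit the exponential phase; only the logistic form captures the later slowdown, consistent with its better overall predictions.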
The removal of myoplasmic free calcium following calcium release in frog skeletal muscle.
Melzer, W; Ríos, E; Schneider, M F
1986-01-01
Transient changes in intracellular free calcium concentration (delta [Ca2+]) in response to pulse depolarizations were monitored in isolated segments of single frog skeletal muscle fibres cut at both ends and voltage clamped at a holding potential of -90 mV in a double-Vaseline-gap chamber. Calcium transients were monitored optically using the metallochromic indicator dye Antipyrylazo III (APIII), which entered the fibre by diffusion from the solution applied to the cut ends. Optical artifacts due to fibre movement were minimized or eliminated by stretching the fibres to sarcomere lengths at which there was little or no overlap of thick and thin contractile filaments. Remaining movement-independent optical changes intrinsic to the fibre and unrelated to the dye were monitored at 850 nm, where free and dye-bound APIII have no absorbance. These 850 nm signals, scaled by lambda^-1.2, were used to remove intrinsic components from the signals at 700 or 720 nm, wavelengths at which the APIII absorbance increases when calcium is bound. The corrected 700 or 720 nm signals were used to calculate delta [Ca2+]. The decay of delta [Ca2+] following fibre repolarization at the termination of a depolarizing pulse was well described by a single exponential plus a constant. The exponential rate constant for the decay of delta [Ca2+] decreased, and the final 'steady' level that delta [Ca2+] appeared to be approaching increased, with increasing amplitude and/or duration of the depolarizing pulse. Both the decreasing decay rate and the build-up of the 'steady' level can be accounted for using a two-component model for the removal of free calcium from the myoplasm. One component consists of a fixed number of a single type of saturable calcium-binding site in the myoplasm. The second component is a non-saturable, first-order uptake mechanism operating in parallel with the saturable binding sites.
The removal model parameter values were adjusted to fit simultaneously the decay of delta [Ca2+] after pulses of various amplitudes and durations in a given fibre. The basic procedure was to track delta [Ca2+] during each pulse when an undetermined calcium release was occurring, but to calculate the decay of delta [Ca2+] starting 14 ms after repolarization when release was assumed to be negligible. After appropriate selection of parameter values, the model reproduced most aspects of the decay of delta [Ca2+].(ABSTRACT TRUNCATED AT 400 WORDS) PMID:3487641
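The two-component removal scheme can be sketched as a pair of ODEs integrated by forward Euler; all rate constants below are illustrative placeholders, not the fitted fibre parameters:

```python
def removal_step(ca, bound, b_max, k_on, k_off, k_uptake, dt):
    """One Euler step of free-Ca2+ removal: a saturable binding component
    (site capacity b_max, on/off rates k_on/k_off) in parallel with a
    non-saturable first-order uptake mechanism (rate k_uptake)."""
    binding = k_on * ca * (b_max - bound) - k_off * bound
    uptake = k_uptake * ca
    return ca + dt * (-binding - uptake), bound + dt * binding

ca, bound = 1.0, 0.0          # arbitrary concentration units
for _ in range(2000):
    ca, bound = removal_step(ca, bound, b_max=2.0, k_on=1.0,
                             k_off=0.1, k_uptake=0.5, dt=0.01)
```

As the binding sites fill toward b_max, removal is carried increasingly by the uptake term alone, qualitatively reproducing the slower decay and higher 'steady' level seen after large releases.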
NASA Astrophysics Data System (ADS)
Maitra, Rahul; Akinaga, Yoshinobu; Nakajima, Takahito
2017-08-01
A single reference coupled cluster theory that is capable of including the effect of connected triple excitations has been developed and implemented. This is achieved by regrouping the terms appearing in perturbation theory and parametrizing through two different sets of exponential operators: while one of the exponentials, involving general substitution operators, annihilates the ground state but has a non-vanishing effect when it acts on the excited determinant, the other is the regular single and double excitation operator in the sense of conventional coupled cluster theory, which acts on the Hartree-Fock ground state. The two sets of operators are solved as coupled non-linear equations in an iterative manner without a significant increase in computational cost over conventional coupled cluster theory with singles and doubles excitations. A number of physically motivated and computationally advantageous sufficiency conditions are invoked to arrive at the working equations and have been applied to determine the ground state energies of a number of small prototypical systems having weak multi-reference character. With the knowledge of the correlated ground state, we have reconstructed the triple excitation operator and have performed equation of motion with coupled cluster singles, doubles, and triples to obtain the ionization potential and excitation energies of these molecules as well. Our results suggest that this is quite a reasonable scheme to capture the effect of connected triple excitations as long as the ground state remains weakly multi-reference.
Exponential integration algorithms applied to viscoplasticity
NASA Technical Reports Server (NTRS)
Freed, Alan D.; Walker, Kevin P.
1991-01-01
Four linear exponential integration algorithms (two implicit, one explicit, and one predictor/corrector) are applied to a viscoplastic model to assess their capabilities. Viscoplasticity comprises a system of coupled, nonlinear, stiff, first-order ordinary differential equations which are a challenge to integrate by any means. Two of the algorithms (the predictor/corrector and one of the implicit methods) give outstanding results, even for very large time steps.
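The flavor of such algorithms can be seen in the scalar exponential-Euler step, which integrates the locally linearized equation exactly and so stays stable for stiff rates even at large time steps (a generic sketch, not one of the four algorithms from the paper):

```python
import math

def forward_euler_step(y, lam, y_inf, dt):
    """Explicit Euler step for y' = -lam * (y - y_inf); unstable
    when lam * dt > 2."""
    return y + dt * (-lam * (y - y_inf))

def exponential_euler_step(y, lam, y_inf, dt):
    """Exponential-Euler step: exact for piecewise-constant lam and
    y_inf, hence unconditionally stable for this scalar problem."""
    return y_inf + (y - y_inf) * math.exp(-lam * dt)
```

With a stiff rate and a large step, the exponential step decays smoothly toward y_inf while the explicit step overshoots and diverges.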
Estimating time since infection in early homogeneous HIV-1 samples using a poisson model
2010-01-01
Background The occurrence of a genetic bottleneck in HIV sexual or mother-to-infant transmission has been well documented. This results in a majority of new infections being homogeneous, i.e., initiated by a single genetic strain. Early after infection, prior to the onset of the host immune response, the viral population grows exponentially. In this simple setting, an approach for estimating evolutionary and demographic parameters based on comparison of diversity measures is a feasible alternative to the existing Bayesian methods (e.g., BEAST), which are instead based on the simulation of genealogies. Results We have devised a web tool that analyzes genetic diversity in acutely infected HIV-1 patients by comparing it to a model of neutral growth. More specifically, we consider a homogeneous infection (i.e., initiated by a unique genetic strain) prior to the onset of host-induced selection, where we can assume a random accumulation of mutations. Previously, we have shown that such a model successfully describes about 80% of sexual HIV-1 transmissions provided the samples are drawn early enough in the infection. Violation of the model is an indicator of either heterogeneous infections or the initiation of selection. Conclusions When the underlying assumptions of our model (homogeneous infection prior to selection and fast exponential growth) are met, we are in a very particular scenario for which we can use a forward approach (instead of backwards in time as provided by coalescent methods). This allows for more computationally efficient methods to derive the time since the most recent common ancestor. Furthermore, the tool performs statistical tests on the Hamming distance frequency distribution, and outputs summary statistics (mean of the best-fitting Poisson distribution, goodness-of-fit p-value, etc.). The tool runs within minutes and can readily accommodate the tens of thousands of sequences generated through new ultradeep pyrosequencing technologies.
The tool is available on the LANL website. PMID:20973976
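The core statistic — comparing pairwise Hamming distances against a Poisson expectation under neutral exponential growth — can be sketched as follows (a simplified stand-in for the tool's actual tests):

```python
import itertools
import math

def hamming(a, b):
    """Number of mismatched positions between two aligned sequences."""
    return sum(x != y for x, y in zip(a, b))

def pairwise_poisson_mean(seqs):
    """Mean pairwise Hamming distance: the natural estimate of the
    Poisson mean under random accumulation of mutations."""
    dists = [hamming(a, b) for a, b in itertools.combinations(seqs, 2)]
    return sum(dists) / len(dists)

def poisson_pmf(k, mean):
    """Expected frequency of pairwise distance k under the fit."""
    return math.exp(-mean) * mean ** k / math.factorial(k)
```

A goodness-of-fit test would then compare the observed distance histogram against these Poisson frequencies.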
NASA Astrophysics Data System (ADS)
Chowdhary, Girish; Mühlegg, Maximilian; Johnson, Eric
2014-08-01
In model reference adaptive control (MRAC) the modelling uncertainty is often assumed to be parameterised with time-invariant unknown ideal parameters. The convergence of parameters of the adaptive element to these ideal parameters is beneficial, as it guarantees exponential stability, and makes an online learned model of the system available. Most MRAC methods, however, require persistent excitation (PE) of the states to guarantee that the adaptive parameters converge to the ideal values. Enforcing PE may be resource-intensive and often infeasible in practice. This paper presents theoretical analysis and illustrative examples of an adaptive control method that leverages the increasing ability to record and process data online by using specifically selected and online recorded data concurrently with instantaneous data for adaptation. It is shown that when the system uncertainty can be modelled as a combination of known nonlinear bases, simultaneous exponential tracking and parameter error convergence can be guaranteed if the system states are exciting over finite intervals such that rich data can be recorded online; PE is not required. Furthermore, the rate of convergence is directly proportional to the minimum singular value of the matrix containing online recorded data. Consequently, an online algorithm to record and forget data is presented and its effects on the resulting switched closed-loop dynamics are analysed. It is also shown that when radial basis function neural networks (NNs) are used as adaptive elements, the method guarantees exponential convergence of the NN parameters to a compact neighbourhood of their ideal values without requiring PE. Flight test results on a fixed-wing unmanned aerial vehicle demonstrate the effectiveness of the method.
Time-resolved scattering of a single photon by a single atom
Leong, Victor; Seidler, Mathias Alexander; Steiner, Matthias; Cerè, Alessandro; Kurtsiefer, Christian
2016-01-01
Scattering of light by matter has been studied extensively in the past. Yet, the most fundamental process, the scattering of a single photon by a single atom, is largely unexplored. One prominent prediction of quantum optics is the deterministic absorption of a travelling photon by a single atom, provided the photon waveform matches spatially and temporally the time-reversed version of a spontaneously emitted photon. Here we experimentally address this prediction and investigate the influence of the photon's temporal profile on the scattering dynamics using a single trapped atom and heralded single photons. In a time-resolved measurement of atomic excitation we find a 56(11)% increase of the peak excitation by photons with an exponentially rising profile compared with a decaying one. However, the overall scattering probability remains unchanged within the experimental uncertainties. Our results demonstrate that envelope tailoring of single photons enables precise control of the photon–atom interaction. PMID:27897173
NASA Astrophysics Data System (ADS)
Ozawa, T.; Miyagi, Y.
2017-12-01
Shinmoe-dake, located in SW Japan, erupted in January 2011, and lava accumulated in the crater (e.g., Ozawa and Kozono, EPS, 2013). The last Vulcanian eruption occurred in September 2011, and no eruption has occurred since. Miyagi et al. (GRL, 2014) analyzed TerraSAR-X and Radarsat-2 SAR data acquired after the last eruption and found continuous inflation in the crater. The inflation decayed with time but had not terminated by May 2013. Since the time series of the inflation volume change rate fitted well to an exponential function with a constant term, we suggested that lava extrusion had continued over the long term, driven by deflation of a shallow magma source and by magma supply from a deeper source. To investigate the subsequent deformation, we applied InSAR to Sentinel-1 and ALOS-2 SAR data. Inflation decayed further and had almost terminated by the end of 2016, meaning that this deformation continued for more than five years after the last eruption. We have found that the time series of the inflation volume change rate fits a double-exponential function better than a single-exponential function with a constant term. The exponential component with the short time constant settled almost completely within one year of the last eruption. Although an InSAR result from TerraSAR-X data of November 2011 and May 2013 indicated deflation of a shallow source under the crater, such deformation has not been obtained from recent SAR data. This suggests that the short-time-constant component was due to deflation of a shallow magma source with excess pressure. In this study, we found the possibility that the long-term component also decayed exponentially; this factor may be deflation of a deep source or delayed vesiculation.
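The two competing descriptions of the inflation-rate time series can be written down directly (parameter names are generic and the values below are arbitrary, not the fitted ones):

```python
import numpy as np

def single_exp_rate(t, a1, tau1, c):
    """Volume-change rate: one exponential decay plus a constant."""
    return a1 * np.exp(-t / tau1) + c

def double_exp_rate(t, a1, tau1, a2, tau2, c):
    """Volume-change rate: short and long time-constant exponentials
    plus a constant, the form favored by the longer time series."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2) + c
```

Because the double-exponential model nests the single-exponential one (set a2 = 0), a better fit with distinct time constants points to two decaying processes rather than one.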
An approximation method for improving dynamic network model fitting.
Carnegie, Nicole Bohme; Krivitsky, Pavel N; Hunter, David R; Goodreau, Steven M
There has been a great deal of interest recently in the modeling and simulation of dynamic networks, i.e., networks that change over time. One promising model is the separable temporal exponential-family random graph model (ERGM) of Krivitsky and Handcock, which treats the formation and dissolution of ties in parallel at each time step as independent ERGMs. However, the computational cost of fitting these models can be substantial, particularly for large, sparse networks. Fitting cross-sectional models for observations of a network at a single point in time, while still a non-negligible computational burden, is much easier. This paper examines model fitting when the available data consist of independent measures of cross-sectional network structure and the duration of relationships under the assumption of stationarity. We introduce a simple approximation to the dynamic parameters for sparse networks with relationships of moderate or long duration and show that the approximation method works best in precisely those cases where parameter estimation is most likely to fail: networks with very little change at each time step. We consider a variety of cases: Bernoulli formation and dissolution of ties, independent-tie formation and Bernoulli dissolution, independent-tie formation and dissolution, and dependent-tie formation models.
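For the Bernoulli case the approximation has a closed form; the sketch below states it under standard STERGM conventions (treat the exact coefficient formulas as assumptions of this sketch, not a quotation of the paper):

```python
import math

def dissolution_coef(mean_duration):
    """Bernoulli dissolution: geometric tie lifetimes with mean D imply
    a per-step persistence probability 1 - 1/D, i.e. an edges
    coefficient of log(D - 1) on the log-odds scale."""
    return math.log(mean_duration - 1.0)

def formation_coef(density, mean_duration):
    """Stationarity: edges dissolved per step (density / D) must balance
    edges formed among empty dyads; returns the log-odds of formation."""
    p_form = density / (mean_duration * (1.0 - density))
    return math.log(p_form / (1.0 - p_form))
```

A single dyad alternating between these formation and dissolution rates has exactly the target stationary density, which is the consistency property the approximation relies on.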
Adaptive exponential integrate-and-fire model as an effective description of neuronal activity.
Brette, Romain; Gerstner, Wulfram
2005-11-01
We introduce a two-dimensional integrate-and-fire model that combines an exponential spike mechanism with an adaptation equation, based on recent theoretical findings. We describe a systematic method to estimate its parameters with simple electrophysiological protocols (current-clamp injection of pulses and ramps) and apply it to a detailed conductance-based model of a regular spiking neuron. Our simple model correctly predicts the timing of 96% of the spikes (+/-2 ms) of the detailed model in response to injection of noisy synaptic conductances. The model is especially reliable in high-conductance states, typical of cortical activity in vivo, in which intrinsic conductances were found to have a reduced role in shaping spike trains. These results are promising because this simple model has enough expressive power to reproduce qualitatively several electrophysiological classes described in vitro.
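A minimal forward-Euler simulation of the adaptive exponential integrate-and-fire (AdEx) equations; the parameter values are the commonly quoted regular-spiking set, and the step current is an arbitrary test input rather than a protocol from the paper:

```python
import math

def adex_spike_count(i_inj, t_max=200.0, dt=0.01):
    """Simulate the AdEx model C dV/dt = -gL(V-EL) + gL*DT*exp((V-VT)/DT)
    - w + I, tau_w dw/dt = a(V-EL) - w, with reset V -> Vr, w -> w + b.
    Units: mV, ms, nS, pF, pA. Returns the number of spikes."""
    C, gL, EL = 281.0, 30.0, -70.6
    VT, DT = -50.4, 2.0
    tau_w, a, b = 144.0, 4.0, 80.5
    Vr, V_peak = -70.6, 20.0
    V, w, spikes = EL, 0.0, 0
    for _ in range(int(t_max / dt)):
        dV = (-gL * (V - EL) + gL * DT * math.exp((V - VT) / DT)
              - w + i_inj) / C
        dw = (a * (V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_peak:          # spike: reset and increment adaptation
            V = Vr
            w += b
            spikes += 1
    return spikes
```

The exponential term produces the sharp spike upstroke, while the adaptation variable w lengthens successive interspike intervals.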
Weblog patterns and human dynamics with decreasing interest
NASA Astrophysics Data System (ADS)
Guo, J.-L.; Fan, C.; Guo, Z.-H.
2011-06-01
In order to describe the phenomenon that people's interest in an activity is high at the beginning and gradually decreases until reaching a balance, a model describing the attenuation of interest is proposed to reflect the fact that people's interest becomes more stable after a long time. We give a rigorous analysis of this model using non-homogeneous Poisson processes. Our analysis indicates that the interval distribution of arrival times is a mixed distribution with exponential and power-law features: a power law with an exponential cutoff. We then collect blogs on ScienceNet.cn and carry out an empirical study of the interarrival time distribution. The empirical results agree well with the theoretical analysis, obeying a power law with an exponential cutoff, that is, a special kind of Gamma distribution. These empirical results verify the model by providing evidence for a new class of phenomena in human dynamics. It can be concluded that besides power-law distributions, there are other distributions in human dynamics. These findings demonstrate the variety of human behavior dynamics.
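The claimed interval distribution — a power law with an exponential cutoff, which is a Gamma density — can be written and checked directly (the exponent and cutoff values below are arbitrary, not fitted to the blog data):

```python
import math

def interarrival_pdf(tau, alpha, tau0):
    """Power law with exponential cutoff, normalized for 0 < alpha < 1:
    p(tau) = tau^(-alpha) * exp(-tau/tau0)
             / (Gamma(1 - alpha) * tau0^(1 - alpha)),
    i.e. exactly a Gamma density with shape 1 - alpha and scale tau0."""
    norm = math.gamma(1.0 - alpha) * tau0 ** (1.0 - alpha)
    return tau ** (-alpha) * math.exp(-tau / tau0) / norm
```

For tau much smaller than tau0 the power law dominates; beyond tau0 the exponential cutoff takes over, matching the mixed behavior described above.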
Generalization of the event-based Carnevale-Hines integration scheme for integrate-and-fire models.
van Elburg, Ronald A J; van Ooyen, Arjen
2009-07-01
An event-based integration scheme for an integrate-and-fire neuron model with exponentially decaying excitatory synaptic currents and double exponential inhibitory synaptic currents has been introduced by Carnevale and Hines. However, the integration scheme imposes nonphysiological constraints on the time constants of the synaptic currents, which hamper its general applicability. This letter addresses this problem in two ways. First, we provide physical arguments demonstrating why these constraints on the time constants can be relaxed. Second, we give a formal proof showing which constraints can be abolished. As part of our formal proof, we introduce the generalized Carnevale-Hines lemma, a new tool for comparing double exponentials as they naturally occur in many cascaded decay systems, including receptor-neurotransmitter dissociation followed by channel closing. Through repeated application of the generalized lemma, we lift most of the original constraints on the time constants. Thus, we show that the Carnevale-Hines integration scheme for the integrate-and-fire model can be employed for simulating a much wider range of neuron and synapse types than was previously thought.
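The double exponentials in question have the familiar rise-and-decay shape; a quick sketch with the analytic peak time (a standard result, not taken from the letter itself):

```python
import math

def double_exp(t, tau_rise, tau_decay):
    """Unnormalized double-exponential time course, e.g. an inhibitory
    synaptic current: rises on tau_rise, decays on tau_decay."""
    return math.exp(-t / tau_decay) - math.exp(-t / tau_rise)

def peak_time(tau_rise, tau_decay):
    """Time of the maximum, from setting the derivative to zero:
    t_peak = tau_r * tau_d / (tau_d - tau_r) * ln(tau_d / tau_r)."""
    return (tau_rise * tau_decay / (tau_decay - tau_rise)) \
        * math.log(tau_decay / tau_rise)
```

Comparing two such waveforms over a whole interval, rather than at a point, is exactly the kind of question the generalized lemma addresses.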
Exponentiated power Lindley distribution.
Ashour, Samir K; Eltehiwy, Mahmoud A
2015-11-01
A new generalization of the Lindley distribution was recently proposed by Ghitany et al. [1], called the power Lindley distribution. Another generalization of the Lindley distribution was introduced by Nadarajah et al. [2], named the generalized Lindley distribution. This paper proposes a further generalization of the Lindley distribution that subsumes both. We refer to this new generalization as the exponentiated power Lindley distribution. The new distribution is important since it contains as special sub-models some widely known distributions in addition to the above two, such as the Lindley distribution among many others. It also provides more flexibility for analyzing complex real data sets. We study some statistical properties of the new distribution. We discuss maximum likelihood estimation of the distribution parameters. Least-squares estimation is also used to evaluate the parameters. Three algorithms are proposed for generating random data from the proposed distribution. An application of the model to a real data set is analyzed using the new distribution, which shows that the exponentiated power Lindley distribution can be used quite effectively in analyzing real lifetime data.
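Under the usual exponentiation construction — the power Lindley CDF raised to a power theta — the distribution can be sketched together with one of the simplest random-generation routines, inverse transform by bisection. The CDF form here is an assumption based on the cited constructions, so check it against the paper before relying on it:

```python
import math
import random

def epl_cdf(x, alpha, beta, theta):
    """Assumed CDF of the exponentiated power Lindley distribution:
    the power Lindley CDF raised to the power theta."""
    if x <= 0.0:
        return 0.0
    z = beta * x ** alpha
    return (1.0 - (1.0 + z / (beta + 1.0)) * math.exp(-z)) ** theta

def epl_sample(alpha, beta, theta, u=None):
    """Inverse-transform sampling by bisection on the CDF."""
    if u is None:
        u = random.random()
    lo, hi = 0.0, 1.0
    while epl_cdf(hi, alpha, beta, theta) < u:   # bracket the quantile
        hi *= 2.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if epl_cdf(mid, alpha, beta, theta) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Setting theta = 1 and alpha = 1 recovers the ordinary Lindley distribution as a special sub-model.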
Voter model with non-Poissonian interevent intervals
NASA Astrophysics Data System (ADS)
Takaguchi, Taro; Masuda, Naoki
2011-09-01
Recent analysis of social communications among humans has revealed that the interval between interactions for a pair of individuals and for an individual often follows a long-tail distribution. We investigate the effect of such a non-Poissonian nature of human behavior on dynamics of opinion formation. We use a variant of the voter model and numerically compare the time to consensus of all the voters with different distributions of interevent intervals and different networks. Compared with the exponential distribution of interevent intervals (i.e., the standard voter model), the power-law distribution of interevent intervals slows down consensus on the ring. This is because of the memory effect; in the power-law case, the expected time until the next update event on a link is large if the link has not had an update event for a long time. On the complete graph, the consensus time in the power-law case is close to that in the exponential case. Regular graphs bridge these two results such that the slowing down of the consensus in the power-law case as compared to the exponential case is less pronounced as the degree increases.
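The memory effect driving the slowdown can be seen in the mean residual waiting time: it is constant for exponential intervals but grows with the elapsed silence for power-law intervals. A Monte Carlo sketch (a Pareto law stands in for the generic power-law distribution):

```python
import random

random.seed(42)

def mean_residual_time(sampler, t_elapsed, n=200000):
    """Monte Carlo estimate of E[tau - t | tau > t]: the expected wait
    until the next event given no event during the first t_elapsed."""
    total, count = 0.0, 0
    for _ in range(n):
        tau = sampler()
        if tau > t_elapsed:
            total += tau - t_elapsed
            count += 1
    return total / count

def exp_sampler():          # exponential intervals, mean 1 (memoryless)
    return random.expovariate(1.0)

def pareto_sampler():       # power-law intervals, exponent alpha = 3
    return random.paretovariate(3.0)
```

A link that has been silent for a long time therefore has a long expected wait before its next update in the power-law case, which is the mechanism behind the slower consensus on the ring.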
a Fast Segmentation Algorithm for C-V Model Based on Exponential Image Sequence Generation
NASA Astrophysics Data System (ADS)
Hu, J.; Lu, L.; Xu, J.; Zhang, J.
2017-09-01
For island coastline segmentation, a fast segmentation algorithm for the C-V (Chan-Vese) model based on exponential image sequence generation is proposed in this paper. An exponential multi-scale C-V model with level-set inheritance and boundary inheritance is developed. The main research contributions are as follows: 1) the problems of "holes" and "gaps" in the extracted coastline are solved through small-scale shrinkage, low-pass filtering, and area sorting of regions. 2) the initial values of the SDF (Signed Distance Function) and the level set are given by Otsu segmentation, based on the difference in SAR backscatter between land and sea, and lie close to the coastline. 3) the computational complexity of the transition between different scales is substantially reduced by SDF and level-set inheritance. Experimental results show that the method accelerates the formation of the initial level set, shortens the coastline extraction time, removes non-coastline bodies, and improves the identification precision of the main coastline, automating the process of coastline segmentation.
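The Otsu initialization mentioned in contribution 2 can be sketched in a few lines (a generic implementation, not the paper's code):

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's method: the threshold maximizing between-class variance,
    here standing in for the land/sea split used to initialize the
    level set from the backscatter difference."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_var = edges[1], -1.0
    for k in range(1, bins):
        w0, w1 = p[:k].sum(), p[k:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue
        m0 = (p[:k] * centers[:k]).sum() / w0   # class means below/above
        m1 = (p[k:] * centers[k:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2          # between-class variance
        if var > best_var:
            best_var, best_t = var, edges[k]
    return best_t
```

Because land and sea backscatter form two well-separated modes, this threshold lands between them, giving a zero level set already close to the coastline.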
Kinetic and Stochastic Models of 1D Yeast "Prions"
NASA Astrophysics Data System (ADS)
Kunes, Kay
2005-03-01
Mammalian prion proteins (PrP) are of public health interest because of mad cow and chronic wasting diseases. Yeasts have proteins which can undergo similar reconformation and aggregation processes to PrP; yeast "prions" are simpler to study experimentally and to model. Recent in vitro studies of the SUP35 protein (1) showed long aggregates and pure exponential growth of the misfolded form. To explain these data, we have extended a previous model of aggregation kinetics along with our own stochastic approach (2). Both models assume reconformation only upon aggregation, and include aggregate fissioning and an initial nucleation barrier. We find, for sufficiently small nucleation rates or seeding by small dimer concentrations, that we can achieve the requisite exponential growth and long aggregates.
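A minimal numerical sketch of aggregation kinetics with fission (all rates, the monomer concentration, and the initial conditions are illustrative assumptions, not fitted values from the cited experiments): elongation at aggregate ends grows the misfolded mass M, fission multiplies the aggregate count P, and the coupled pair settles onto exponential growth of the misfolded form:

```python
import math

# Illustrative rates: P = number of aggregates, M = misfolded protein mass.
# Elongation grows M at aggregate ends; fission of aggregates grows P.
k_elong, k_fiss, monomer = 1.0, 0.1, 1.0

def integrate(p0, m0, dt, steps):
    """Forward-Euler integration of dP/dt = k_fiss*M, dM/dt = 2*k_elong*c*P."""
    p, m = p0, m0
    traj = []
    for _ in range(steps):
        dp = k_fiss * m
        dm = 2.0 * k_elong * monomer * p
        p, m = p + dp * dt, m + dm * dt
        traj.append(m)
    return traj

traj = integrate(1e-3, 1e-3, 0.01, 2000)
# Asymptotic growth rate should approach sqrt(2 * k_elong * monomer * k_fiss)
rate = (math.log(traj[-1]) - math.log(traj[-501])) / (500 * 0.01)
print(rate, math.sqrt(2.0 * k_elong * monomer * k_fiss))
```

The late-time slope of log M recovers the dominant eigenvalue of the linear system, i.e. the pure exponential growth rate.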
Pendulum Mass Affects the Measurement of Articular Friction Coefficient
Akelman, Matthew R.; Teeple, Erin; Machan, Jason T.; Crisco, Joseph J.; Jay, Gregory D.; Fleming, Braden C.
2012-01-01
Friction measurements of articular cartilage are important to determine the relative tribologic contributions made by synovial fluid or cartilage, and to assess the efficacy of therapies for preventing the development of post-traumatic osteoarthritis. Stanton’s equation is the most frequently used formula for estimating the whole joint friction coefficient (μ) of an articular pendulum, and assumes pendulum energy loss through a mass-independent mechanism. This study examines if articular pendulum energy loss is indeed mass independent, and compares Stanton’s model to an alternative model, which incorporates viscous damping, for calculating μ. Ten loads (25-100% body weight) were applied in a random order to an articular pendulum using the knees of adult male Hartley guinea pigs (n = 4) as the fulcrum. Motion of the decaying pendulum was recorded and μ was estimated using two models: Stanton’s equation, and an exponential decay function incorporating a viscous damping coefficient. μ estimates decreased as mass increased for both models. Exponential decay model fit error values were 82% less than the Stanton model. These results indicate that μ decreases with increasing mass, and that an exponential decay model provides a better fit for articular pendulum data at all mass values. In conclusion, inter-study comparisons of articular pendulum μ values should not be made without recognizing the loads used, as μ values are mass dependent. PMID:23122223
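The two fitting approaches can be contrasted on synthetic amplitude data; the decay constant and cycle count below are illustrative, not the guinea pig measurements:

```python
import numpy as np

# Synthetic peak-amplitude decay of an articular pendulum (radians);
# the decay constant is an assumption for illustration only.
cycles = np.arange(20.0)
theta = 0.20 * np.exp(-0.15 * cycles)

A = np.vstack([np.ones_like(cycles), cycles]).T

# Stanton-style model: linear amplitude decay, theta_n = theta_0 - d * n
coef_lin, *_ = np.linalg.lstsq(A, theta, rcond=None)
sse_lin = float(np.sum((theta - A @ coef_lin) ** 2))

# Viscous-damping model: theta_n = theta_0 * exp(-b * n), fit log-linearly
coef_log, *_ = np.linalg.lstsq(A, np.log(theta), rcond=None)
sse_exp = float(np.sum((theta - np.exp(A @ coef_log)) ** 2))

print(sse_lin, sse_exp)  # the exponential model fits decaying amplitudes far better
```

On amplitudes that truly decay exponentially, the linear (Stanton-style) model necessarily leaves systematic residuals, mirroring the fit-error gap reported above.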
Roy, Debjit; Mandal, Saptarshi; De, Chayan K; Kumar, Kaushalendra; Mandal, Prasun K
2018-04-18
CdSe-based core/gradient alloy shell/shell semiconductor quantum dots (CGASS QDs) have been shown to be optically quite superior compared to core-shell QDs. However, very little is known about CGASS QDs at the single particle level. Photoluminescence blinking dynamics of four differently emitting (blue (λem = 510), green (λem = 532), orange (λem = 591), and red (λem = 619)) single CGASS QDs having average sizes <∼7 nm have been probed in our home-built total internal reflection fluorescence (TIRF) microscope. All four samples possess an average ON-fraction of 0.70-0.85, which hints towards nearly suppressed PL blinking in these gradiently alloyed systems. Suppression of blinking has been so far achieved with QDs having sizes greater than 10 nm and mostly emitting in the red region (λem > 600 nm). In this manuscript, we report nearly suppressed PL blinking behaviour of CGASS QDs with average sizes <∼7 nm and emitting in the entire range of the visible spectrum, i.e. from blue to green to orange to red. The probability density distribution of both ON- and OFF-event durations for all of these CGASS QDs could be fitted well with a modified inverse truncated power law with an additional exponential model equation. It has been found that unlike most of the literature reports, the power law exponent for OFF-event durations is greater than the power law exponent for ON-event durations for all four samples. This suggests that relatively large ON-event durations are interrupted by comparatively small OFF-event durations. This in turn is indicative of a suppressed non-radiative Auger recombination process for these CGASS systems. However, in these four different samples the ON-event truncation time varies inversely with the OFF-event truncation time, which hints that both the ON- and OFF-event truncation processes are dictated by some common factor. 
We have employed 2D joint probability distribution analysis to probe the correlation between the event durations and found that residual memory exists in both the ON- and OFF-event durations. Positively correlated successive ON-ON and OFF-OFF event durations and negatively correlated (anti-correlated) ON-OFF event durations perhaps suggest the involvement of more than one type of trapping process within the blinking framework. The timescale corresponding to the additional exponential term has been assigned to hole trapping for ON-event duration statistics. Similarly, for OFF-event duration statistics, this component suggests hole detrapping. We found that the average duration of the exponential process for the ON-event durations is an order of magnitude higher than that of the OFF-event durations. This indicates that the holes are trapped for a significantly long time. When electron trapping is followed by such a hole trapping, long ON-event durations result. We have observed long ON-event durations, as high as 50 s. The competing charge tunnelling model has been used to account for the observed blinking behaviour in these CGASS QDs. Quite interestingly, the PLQY of all of these differently emitting QDs (an ensemble level property) could be correlated with the truncation time (a property at the single particle level). A respective concomitant increase-decrease of ON-OFF event truncation times with increasing PLQY is also indicative of a varying degree of suppression of the Auger recombination processes in these four different CGASS QDs.
The multiple complex exponential model and its application to EEG analysis
NASA Astrophysics Data System (ADS)
Chen, Dao-Mu; Petzold, J.
The paper presents a novel approach to the analysis of the EEG signal, which is based on a multiple complex exponential (MCE) model. Parameters of the model are estimated using a nonharmonic Fourier expansion algorithm. The central idea of the algorithm is outlined, and the results, estimated on the basis of simulated data, are presented and compared with those obtained by the conventional methods of signal analysis. Preliminary work on various application possibilities of the MCE model in EEG data analysis is described. It is shown that the parameters of the MCE model reflect the essential information contained in an EEG segment. These parameters characterize the EEG signal in a more objective way because they are closer to the recent supposition of the nonlinear character of the brain's dynamic behavior.
Ertas, Gokhan; Onaygil, Can; Akin, Yasin; Kaya, Handan; Aribal, Erkin
2016-12-01
To investigate the accuracy of diffusion coefficients and diffusion coefficient ratios of breast lesions and of glandular breast tissue from mono- and stretched-exponential models for quantitative diagnosis in diffusion-weighted magnetic resonance imaging (MRI). We analyzed 170 pathologically confirmed lesions (85 benign and 85 malignant) imaged using a 3.0T MR scanner. Small regions of interest (ROIs) focusing on the highest signal intensity for lesions and also for glandular tissue of the contralateral breast were obtained. Apparent diffusion coefficient (ADC) and distributed diffusion coefficient (DDC) were estimated by performing nonlinear fittings using mono- and stretched-exponential models, respectively. Coefficient ratios were calculated by dividing the lesion coefficient by the glandular tissue coefficient. The stretched exponential model provides significantly better fits than the monoexponential model (P < 0.001): 65% of the better fits for glandular tissue and 71% of the better fits for lesions. High correlation was found in diffusion coefficients (0.99-0.81) and coefficient ratios (0.94) between the models. The highest diagnostic accuracy was found for the DDC ratio (area under the curve [AUC] = 0.93) when compared with lesion DDC, ADC ratio, and lesion ADC (AUC = 0.91, 0.90, 0.90), but with no statistically significant difference (P > 0.05). At optimal thresholds, the DDC ratio achieves 93% sensitivity, 80% specificity, and 87% overall diagnostic accuracy, while the ADC ratio leads to 89% sensitivity, 78% specificity, and 83% overall diagnostic accuracy. The stretched exponential model fits better with signal intensity measurements from both lesion and glandular tissue ROIs. Although the DDC ratio estimated by using the model shows a higher diagnostic accuracy than the ADC ratio, lesion DDC, and ADC, the difference is not statistically significant. J. Magn. Reson. Imaging 2016;44:1633-1641. © 2016 International Society for Magnetic Resonance in Medicine.
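A sketch of the two signal models on synthetic, noise-free data. The DDC and alpha values are illustrative, and the study itself used nonlinear fitting; the log-linearization shortcut below is only valid on clean data:

```python
import numpy as np

def stretched(b, s0, ddc, alpha):
    """Stretched-exponential signal model: S(b) = S0 * exp(-(b*DDC)**alpha)."""
    return s0 * np.exp(-((b * ddc) ** alpha))

# Synthetic diffusion-weighted signal (illustrative DDC and alpha, not study values)
b = np.linspace(0.0, 3000.0, 13)   # b-values in s/mm^2
s = stretched(b, 1.0, 1.2e-3, 0.75)

s0 = s[0]                # signal at b = 0
bb, ss = b[1:], s[1:]    # drop b = 0 before taking logs

# Mono-exponential: ln S = ln S0 - ADC * b  (linear in b)
adc = -np.polyfit(bb, np.log(ss / s0), 1)[0]

# Stretched-exponential: ln(-ln(S/S0)) = alpha*ln(b) + alpha*ln(DDC)  (linear in ln b)
slope, intercept = np.polyfit(np.log(bb), np.log(-np.log(ss / s0)), 1)
alpha = slope
ddc = np.exp(intercept / alpha)

print(adc, ddc, alpha)   # the stretched fit recovers DDC and alpha on clean data
```

The extra stretching exponent alpha is what lets the model absorb the multi-compartment signal decay that a single ADC cannot.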
Bayesian view of single-qubit clocks, and an energy versus accuracy tradeoff
NASA Astrophysics Data System (ADS)
Gopalkrishnan, Manoj; Kandula, Varshith; Sriram, Praveen; Deshpande, Abhishek; Muralidharan, Bhaskaran
2017-09-01
We bring a Bayesian approach to the analysis of clocks. Using exponential distributions as priors for clocks, we analyze how well one can keep time with a single qubit freely precessing under a magnetic field. We find that, at least with a single qubit, quantum mechanics does not allow exact timekeeping, in contrast to classical mechanics, which does. We find the design of the single-qubit clock that leads to maximum accuracy. Further, we find an energy versus accuracy tradeoff: the energy cost is at least k_B T times the improvement in accuracy as measured by the entropy reduction in going from the prior distribution to the posterior distribution. We propose a physical realization of the single-qubit clock using charge transport across a capacitively coupled quantum dot.
Accounting for inherent variability of growth in microbial risk assessment.
Marks, H M; Coleman, M E
2005-04-15
Risk assessments of pathogens need to account for the growth of small numbers of cells under varying conditions. In order to determine the possible risks that occur when there are small numbers of cells, stochastic models of growth are needed that capture the distribution of the number of cells over replicate trials of the same scenario or environmental conditions. This paper provides a simple stochastic growth model, accounting only for inherent cell-growth variability and assuming constant growth kinetic parameters, for an initial, small number of cells assumed to be transforming from a stationary to an exponential phase. Two basic sets of microbial assumptions are considered: serial, where cells are assumed to transform through a lag phase before entering the exponential phase of growth; and parallel, where lag and exponential phases are assumed to develop in parallel. The model is based on first determining the distribution of the time when growth commences, and then modelling the conditional distribution of the number of cells. For the latter distribution, it is found that a Weibull distribution provides a simple approximation to the conditional distribution of the relative growth, so that the model developed in this paper can be easily implemented in risk assessments using commercial software packages.
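The serial assumption can be sketched as a simple stochastic simulation: each initial cell draws an exponential lag, then divides as a pure-birth (Yule) process, and replicate trials of the same scenario give a distribution of cell numbers. All rates, times, and counts below are illustrative assumptions:

```python
import random

def grow_one_cell(rng, lag_rate, div_rate, t_end):
    """Serial model: a cell passes through an exponentially distributed lag,
    then divides as a Yule (pure-birth) process; returns progeny at t_end."""
    t = rng.expovariate(lag_rate)          # lag duration
    n = 1
    while True:
        t += rng.expovariate(n * div_rate)  # waiting time to the next division
        if t > t_end:
            return n
        n += 1

rng = random.Random(1)
# Replicate trials of the same scenario, each starting from 5 cells
totals = [sum(grow_one_cell(rng, 0.5, 1.0, 4.0) for _ in range(5))
          for _ in range(200)]
mean_total = sum(totals) / len(totals)
print(mean_total, min(totals), max(totals))
```

Even with fixed kinetic parameters, the random lag and division times spread the replicate totals widely, which is exactly the inherent variability the model is meant to capture.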
Infinite-disorder critical points of models with stretched exponential interactions
NASA Astrophysics Data System (ADS)
Juhász, Róbert
2014-09-01
We show that an interaction decaying as a stretched exponential function of distance, J(l) ~ exp(-c l^a), is able to alter the universality class of short-range systems having an infinite-disorder critical point. To do so, we study the low-energy properties of the random transverse-field Ising chain with the above form of interaction by a strong-disorder renormalization group (SDRG) approach. We find that the critical behavior of the model is controlled by infinite-disorder fixed points different from those of the short-range model if 0 < a < 1/2. In this range, the critical exponents calculated analytically by a simplified SDRG scheme are found to vary with a, while, for a > 1/2, the model belongs to the same universality class as its short-range variant. The entanglement entropy of a block of size L increases logarithmically with L at the critical point but, unlike in the short-range model, the prefactor is dependent on disorder in the range 0 < a < 1/2. Numerical results obtained by an improved SDRG scheme are found to be in agreement with the analytical predictions. The same fixed points are expected to describe the critical behavior of, among others, the random contact process with stretched exponentially decaying activation rates.
Global exponential stability for switched memristive neural networks with time-varying delays.
Xin, Youming; Li, Yuxia; Cheng, Zunshui; Huang, Xia
2016-08-01
This paper considers the problem of exponential stability for switched memristive neural networks (MNNs) with time-varying delays. Different from most of the existing papers, we model a memristor as a continuous system, and view switched MNNs as switched neural networks with uncertain time-varying parameters. Based on average dwell time technique, mode-dependent average dwell time technique and multiple Lyapunov-Krasovskii functional approach, two conditions are derived to design the switching signal and guarantee the exponential stability of the considered neural networks, which are delay-dependent and formulated by linear matrix inequalities (LMIs). Finally, the effectiveness of the theoretical results is demonstrated by two numerical examples. Copyright © 2016 Elsevier Ltd. All rights reserved.
Deneke, Carlus; Lipowsky, Reinhard; Valleriani, Angelo
2013-01-01
Experimental studies on mRNA stability have established several, qualitatively distinct decay patterns for the amount of mRNA within the living cell. Furthermore, a variety of different and complex biochemical pathways for mRNA degradation have been identified. The central aim of this paper is to bring together both the experimental evidence about the decay patterns and the biochemical knowledge about the multi-step nature of mRNA degradation in a coherent mathematical theory. We first introduce a mathematical relationship between the mRNA decay pattern and the lifetime distribution of individual mRNA molecules. This relationship reveals that the mRNA decay patterns at steady state expression level must obey a general convexity condition, which applies to any degradation mechanism. Next, we develop a theory, formulated as a Markov chain model, that recapitulates some aspects of the multi-step nature of mRNA degradation. We apply our theory to experimental data for yeast and explicitly derive the lifetime distribution of the corresponding mRNAs. Thereby, we show how to extract single-molecule properties of an mRNA, such as the age-dependent decay rate and the residual lifetime. Finally, we analyze the decay patterns of the whole translatome of yeast cells and show that yeast mRNAs can be grouped into three broad classes that exhibit three distinct decay patterns. This paper provides both a method to accurately analyze non-exponential mRNA decay patterns and a tool to validate different models of degradation using decay data. PMID:23408982
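The link between multi-step degradation and non-exponential decay patterns can be illustrated with the simplest Markov chain of this kind, where an mRNA must pass through k sequential steps with equal rates (an Erlang lifetime). Compared with a single-step pathway of the same mean lifetime, the decay pattern is delayed at early times and steeper later. The rates below are illustrative, not fitted values:

```python
import math

def erlang_survival(t, k, rate):
    """Survival function of a k-step degradation pathway with equal step
    rates (Erlang lifetime): P(lifetime > t)."""
    return math.exp(-rate * t) * sum((rate * t) ** j / math.factorial(j)
                                     for j in range(k))

# Multi-step (k = 3) vs single-step lifetime with the same mean (= 3/rate)
rate = 1.0
mean = 3.0 / rate
for t in (0.5, 1.0, 3.0, 6.0):
    multi = erlang_survival(t, 3, rate)
    single = math.exp(-t / mean)
    print(t, round(multi, 3), round(single, 3))
```

The crossover of the two survival curves is one signature of an age-dependent decay rate: young molecules decay more slowly than a memoryless model with the same mean would predict, old molecules faster.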
NASA Astrophysics Data System (ADS)
Weiss, J. R.; Saunders, A.; Qiu, Q.; Foster, J. H.; Gomez, D.; Bevis, M. G.; Smalley, R., Jr.; Cimbaro, S.; Lenzano, L. E.; Barón, J.; Baez, J. C.; Echalar, A.; Avery, J.; Wright, T. J.
2017-12-01
We use a large regional network of continuous GPS sites to investigate postseismic deformation following the Mw 8.8 Maule and Mw 8.1 Pisagua earthquakes in Chile. Geodetic observations of surface displacements associated with megathrust earthquakes aid our understanding of the subduction zone earthquake cycle, including postseismic processes such as afterslip and viscoelastic relaxation. The observations also help place constraints on the rheology and structure of the crust and upper mantle. We first empirically model the data and find that, while single-term logarithmic functions adequately fit the postseismic time series, they do a poor job of characterizing the rapid displacements in the days to weeks following the earthquakes. Combined exponential-logarithmic functions better capture the inferred near-field transition between afterslip and viscous relaxation; however, displacements are best fit by three-term exponential functions with characteristic decay times of 15, 250, and 1500 days. Viscoelastic modeling of the velocity field and time series following the Maule earthquake suggests that the rheology is complex but is consistent with a 100-km-thick asthenospheric channel of viscosity 10^18 Pa s sandwiched between a 40-km-thick elastic lid and a strong viscoelastic upper mantle. Variations in lid thickness of up to 40 km may be present, and in some locations rapid deformation within the first months to years following the Maule event requires an even lower effective viscosity or a significant contribution from afterslip. We investigate this further by jointly inverting the GPS data for the time evolution of afterslip and viscous flow in the mantle wedge surrounding the Maule event.
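With the three characteristic decay times fixed, fitting a three-term exponential transient to a postseismic time series reduces to a linear least-squares problem for the amplitudes. The sketch below uses synthetic, noise-free displacements with made-up amplitudes, not the actual GPS data:

```python
import numpy as np

taus = np.array([15.0, 250.0, 1500.0])   # decay times (days) from the empirical fits

def postseismic(t, amps):
    """Displacement as a sum of exponential transients: d(t) = sum_i a_i*(1 - e^(-t/tau_i))."""
    return (amps * (1.0 - np.exp(-t[:, None] / taus))).sum(axis=1)

# Synthetic daily GPS displacements (amplitudes in mm are illustrative)
t = np.arange(1.0, 2000.0, 5.0)
true_amps = np.array([40.0, 120.0, 200.0])
d = postseismic(t, true_amps)

# With decay times fixed, the amplitudes enter linearly: solve by least squares
G = 1.0 - np.exp(-t[:, None] / taus)
est, *_ = np.linalg.lstsq(G, d, rcond=None)
print(est)   # recovers the amplitudes on noise-free data
```

In practice the decay times themselves would also be estimated (a nonlinear problem), but separating the linear amplitude solve is a common and numerically stable intermediate step.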
New method to calculate the N2 evolution from mixed venous blood during the N2 washout.
Han, D; Jeng, D R; Cruz, J C; Flores, X F; Mallea, J M
2001-08-01
To model the normalized phase III slope (Sn) from N2 expirograms of the multibreath N2 washout is a challenge to researchers. Experimental measurements show that Sn increases with the number of breaths. Previously, we predicted Sn by setting the concentration (atm) of mixed venous blood (Fbi,N2) to a constant value of 0.3 after the fifth breath to calculate the amount of N2 transferred from the blood to the alveoli. As a consequence, the predicted curve of the Sn values showed a maximum before the quasi-steady state was reached. In this paper, we present a way of calculating the amount of N2 transferred from the blood to the alveoli by setting Fbi,N2 in the following way: in the first six breaths, Fbi,N2 is kept constant at the initial value of 0.8 because the circulation time needs at least 30 s to alter it. Thereafter, a single exponential function with respect to the number of breaths is used: Fbi = 0.8 exp[0.112(6-n)], in which n is the breath number. The predicted Sn values were compared with experimental data from the literature. The assumption of an exponential decay in the N2 evolved from mixed venous blood is important in determining the shape of the Sn curve, but new experimental data are needed to determine the validity of the model. We conclude that this new approach to calculating the N2 evolution from the blood is more meaningful physiologically.
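The assumed mixed venous N2 concentration schedule is a direct transcription of the formula above:

```python
import math

def mixed_venous_n2(n):
    """Mixed venous N2 concentration (atm) at breath n, per the model:
    constant 0.8 for the first six breaths, then a single exponential decay."""
    if n <= 6:
        return 0.8
    return 0.8 * math.exp(0.112 * (6 - n))

for n in (1, 6, 7, 20):
    print(n, round(mixed_venous_n2(n), 4))
```

The piecewise form encodes the physiological argument: circulation time keeps Fbi,N2 at its initial value for roughly the first six breaths, after which washout from the blood proceeds exponentially in breath number.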
Westö, Johan; May, Patrick J C
2018-05-02
Receptive field (RF) models are an important tool for deciphering neural responses to sensory stimuli. The two currently popular RF models are multi-filter linear-nonlinear (LN) models and context models. Models are, however, never correct, and they rely on assumptions to keep them simple enough to be interpretable. As a consequence, different models describe different stimulus-response mappings, which may or may not be good approximations of real neural behavior. In the current study, we take up two tasks: first, we introduce new ways to estimate context models with realistic nonlinearities, that is, with logistic and exponential functions. Second, we evaluate context models and multi-filter LN models in terms of how well they describe recorded data from complex cells in cat primary visual cortex. Our results, based on single-spike information and correlation coefficients, indicate that context models outperform corresponding multi-filter LN models of equal complexity (measured in terms of number of parameters), with the best increase in performance being achieved by the novel context models. Consequently, our results suggest that the multi-filter LN-model framework is suboptimal for describing the behavior of complex cells: the context-model framework is clearly superior while still providing interpretable quantifications of neural behavior.
Zhang, Liping; Zheng, Yanling; Wang, Kai; Zhang, Xueliang; Zheng, Yujian
2014-06-01
In this paper, by using a particle swarm optimization algorithm to solve the optimal parameter estimation problem, an improved Nash nonlinear grey Bernoulli model termed PSO-NNGBM(1,1) is proposed. To test the forecasting performance, the optimized model is applied for forecasting the incidence of hepatitis B in Xinjiang, China. Four models, traditional GM(1,1), grey Verhulst model (GVM), original nonlinear grey Bernoulli model (NGBM(1,1)) and Holt-Winters exponential smoothing method, are also established for comparison with the proposed model under the criteria of mean absolute percentage error and root mean square percent error. The prediction results show that the optimized NNGBM(1,1) model is more accurate and performs better than the traditional GM(1,1), GVM, NGBM(1,1) and Holt-Winters exponential smoothing method. Copyright © 2014. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Lahmiri, S.; Boukadoum, M.
2015-10-01
Accurate forecasting of stock market volatility is an important issue in portfolio risk management. In this paper, an ensemble system for stock market volatility forecasting is presented. It is composed of three different models that hybridize the exponential generalized autoregressive conditional heteroscedasticity (EGARCH) process and an artificial neural network trained with the backpropagation algorithm (BPNN) to forecast stock market volatility under normal, t-Student, and generalized error distribution (GED) assumptions separately. The goal is to design an ensemble system in which each single hybrid model is capable of capturing normality, excess skewness, or excess kurtosis in the data, so as to achieve complementarity. The performance of each EGARCH-BPNN model and of the ensemble system is evaluated by the closeness of the volatility forecasts to realized volatility. Based on the mean absolute error and the mean squared error, the experimental results show that the proposed ensemble model, which captures normality, skewness, and kurtosis in the data, is more accurate than the individual EGARCH-BPNN models in forecasting S&P 500 intra-day volatility based on one- and five-minute time horizon data.
NASA Astrophysics Data System (ADS)
Susanti, Ana; Suhartono; Jati Setyadi, Hario; Taruk, Medi; Haviluddin; Pamilih Widagdo, Putut
2018-03-01
Money currency availability in Bank Indonesia can be examined through the inflow and outflow of currency. The objective of this research is to forecast the inflow and outflow of currency in each Representative Office (RO) of BI in East Java by using a hybrid of exponential smoothing, based on the state space approach, and a calendar variation model. The hybrid model is expected to generate more accurate forecasts. Two studies are discussed in this research. The first concerns the hybrid model applied to simulation data containing trend, seasonal, and calendar variation patterns. The second concerns the application of the hybrid model to forecasting the inflow and outflow of currency in each RO of BI in East Java. The first set of results indicates that the exponential smoothing model cannot capture the calendar variation pattern, yielding RMSE values 10 times the standard deviation of the error. The second set of results indicates that the hybrid model captures the trend, seasonal, and calendar variation patterns, yielding RMSE values approaching the standard deviation of the error. In the applied study, the hybrid model gives more accurate forecasts for five variables: the inflow of currency in Surabaya, Malang, and Jember, and the outflow of currency in Surabaya and Kediri. Otherwise, the time series regression model performs better for three variables: the outflow of currency in Malang and Jember, and the inflow of currency in Kediri.
Exponentially growing tearing modes in Rijnhuizen Tokamak Project plasmas.
Salzedas, F; Schüller, F C; Oomens, A A M
2002-02-18
The local measurement of the island width w, around the resonant surface, allowed a direct test of the extended Rutherford model [P. H. Rutherford, PPPL Report-2277 (1985)] describing the evolution of radiation-induced tearing modes prior to disruptions of tokamak plasmas. It is found that this model accounts very well for the observed exponential growth and supports radiation losses as being the main driving mechanism. The model implies that the effective perpendicular electron heat conductivity in the island is smaller than the global one. Comparison of the local measurements of w with the perturbed magnetic field B showed that w proportional to B^{1/2} was valid for widths up to 18% of the minor radius.
NASA Astrophysics Data System (ADS)
Adame, J.; Warzel, S.
2015-11-01
In this note, we use ideas of Farhi et al. [Int. J. Quantum. Inf. 6, 503 (2008) and Quantum Inf. Comput. 11, 840 (2011)] who link a lower bound on the run time of their quantum adiabatic search algorithm to an upper bound on the energy gap above the ground-state of the generators of this algorithm. We apply these ideas to the quantum random energy model (QREM). Our main result is a simple proof of the conjectured exponential vanishing of the energy gap of the QREM.
Disentangling the f(R)-duality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Broy, Benedict J.; Pedro, Francisco G.; Westphal, Alexander
2015-03-16
Motivated by UV realisations of Starobinsky-like inflation models, we study generic exponential plateau-like potentials to understand whether an exact f(R)-formulation may still be obtained when the asymptotic shift-symmetry of the potential is broken for larger field values. Potentials which break the shift symmetry with rising exponentials at large field values only allow for corresponding f(R)-descriptions with a leading order term R^n with 1
Exponentially Stabilizing Robot Control Laws
NASA Technical Reports Server (NTRS)
Wen, John T.; Bayard, David S.
1990-01-01
New class of exponentially stabilizing laws for joint-level control of robotic manipulators introduced. In case of set-point control, approach offers simplicity of proportional/derivative control architecture. In case of tracking control, approach provides several important alternatives to computed-torque method, in terms of computational requirements and convergence. New control laws modified in simple fashion to obtain asymptotically stable adaptive control when robot model and/or payload mass properties unknown.
Kim, Ghiseok; Kim, Geon Hee; Ahn, Chi-Kook; Yoo, Yoonkyu; Cho, Byoung-Kwan
2013-01-01
An infrared lifetime thermal imaging technique for the measurement of lettuce seed viability was evaluated. Thermal emission signals from mid-infrared images of healthy seeds and seeds aged for 24, 48, and 72 h were obtained and reconstructed using regression analysis. The emission signals were fitted with a two-term exponential model that had two amplitudes and two time variables as lifetime parameters. The lifetime thermal decay parameters were significantly different for seeds with different aging times. Single-seed viability was visualized using thermal lifetime images constructed from the calculated lifetime parameter values. The time-dependent thermal signal decay characteristics, along with the decay amplitude and delay time images, can be used to distinguish aged lettuce seeds from normal seeds. PMID:23529120
Charge Transport in Carbon Nanotubes-Polymer Composite Photovoltaic Cells
Ltaief, Adnen; Bouazizi, Abdelaziz; Davenas, Joel
2009-01-01
We investigate the dark and illuminated current density-voltage (J/V) characteristics of poly(2-methoxy-5-(2'-ethylhexyloxy)-1,4-phenylenevinylene) (MEH-PPV)/single-walled carbon nanotube (SWNT) composite photovoltaic cells. Using an exponential band tail model, the conduction mechanism has been analysed for polymer-only devices and composite devices in terms of the space charge limited current (SCLC) conduction mechanism, and we determine the power parameters and the threshold voltages. Devices elaborated from MEH-PPV:SWNTs (1:1) composites showed a photoresponse with an open-circuit voltage Voc of 0.4 V, a short-circuit current density Jsc of 1 µA/cm², and a fill factor FF of 43%. We modelled the organic photovoltaic devices with an equivalent circuit, from which we calculated the series and shunt resistances.
Coherent forward broadening in cold atom clouds
NASA Astrophysics Data System (ADS)
Sutherland, R. T.; Robicheaux, F.
2016-02-01
It is shown that homogeneous line-broadening in a diffuse cold atom cloud is proportional to the resonant optical depth of the cloud. Furthermore, it is demonstrated how the strong directionality of the coherent interactions causes the cloud's spectra to depend strongly on its shape, even when the cloud is held at constant densities. These two numerical observations can be predicted analytically by extending the single-photon wave-function model. Lastly, elongating a cloud along the line of laser propagation causes the excitation probability distribution to deviate from the exponential decay predicted by the Beer-Lambert law to the extent where the atoms at the back of the cloud are more excited than the atoms at the front. These calculations are conducted at the low densities relevant to recent experiments.
Testing predictions of the quantum landscape multiverse 2: the exponential inflationary potential
NASA Astrophysics Data System (ADS)
Di Valentino, Eleonora; Mersini-Houghton, Laura
2017-03-01
The 2015 Planck data release tightened the region of the allowed inflationary models. Inflationary models with convex potentials have now been ruled out since they produce a large tensor to scalar ratio. Meanwhile the same data offers interesting hints on possible deviations from the standard picture of CMB perturbations. Here we revisit the predictions of the theory of the origin of the universe from the landscape multiverse for the case of exponential inflation, for two reasons: firstly to check the status of the anomalies associated with this theory, in the light of the recent Planck data; secondly, to search for a counterexample whereby new physics modifications may bring convex inflationary potentials, thought to have been ruled out, back into the region of potentials allowed by data. Using the exponential inflation as an example of convex potentials, we find that the answer to both tests is positive: modifications to the perturbation spectrum and to the Newtonian potential of the universe originating from the quantum entanglement, bring the exponential potential, back within the allowed region of current data; and, the series of anomalies previously predicted in this theory, is still in good agreement with current data. Hence our finding for this convex potential comes at the price of allowing for additional thermal relic particles, equivalently dark radiation, in the early universe.
Single-qubit unitary gates by graph scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blumer, Benjamin A.; Underwood, Michael S.; Feder, David L.
2011-12-15
We consider the effects of plane-wave states scattering off finite graphs as an approach to implementing single-qubit unitary operations within the continuous-time quantum walk framework of universal quantum computation. Four semi-infinite tails are attached at arbitrary points of a given graph, representing the input and output registers of a single qubit. For a range of momentum eigenstates, we enumerate all of the graphs with up to n=9 vertices for which the scattering implements a single-qubit gate. As n increases, the number of new unitary operations increases exponentially, and for n>6 the majority correspond to rotations about axes distributed roughly uniformly across the Bloch sphere. Rotations by both rational and irrational multiples of π are found.
NASA Astrophysics Data System (ADS)
Straub, K. M.; Ganti, V. K.; Paola, C.; Foufoula-Georgiou, E.
2010-12-01
Stratigraphy preserved in alluvial basins houses the most complete record of information necessary to reconstruct past environmental conditions. Indeed, the character of the sedimentary record is inextricably related to the surface processes that formed it. In this presentation we explore how the signals of surface processes are recorded in stratigraphy through the use of physical and numerical experiments. We focus on linking surface processes to stratigraphy in 1D by quantifying the probability distributions of processes that govern the evolution of depositional systems to the probability distribution of preserved bed thicknesses. In this study we define a bed as a package of sediment bounded above and below by erosional surfaces. In a companion presentation we document heavy-tailed statistics of erosion and deposition from high-resolution temporal elevation data recorded during a controlled physical experiment. However, the heavy tails in the magnitudes of erosional and depositional events are not preserved in the experimental stratigraphy. Similar to many bed thickness distributions reported in field studies we find that an exponential distribution adequately describes the thicknesses of beds preserved in our experiment. We explore the generation of exponential bed thickness distributions from heavy-tailed surface statistics using 1D numerical models. These models indicate that when the full distribution of elevation fluctuations (both erosional and depositional events) is symmetrical, the resulting distribution of bed thicknesses is exponential in form. Finally, we illustrate that a predictable relationship exists between the coefficient of variation of surface elevation fluctuations and the scale-parameter of the resulting exponential distribution of bed thicknesses.
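One way to see how symmetric, heavy-tailed surface fluctuations can still yield one-sided bed thicknesses is a toy 1D stack model. This is an illustrative sketch, not the authors' numerical model: deposition pushes a package onto a stack, erosion removes material from the top (truncating or deleting earlier packages), and the packages that survive are the preserved beds.

```python
import random

random.seed(42)  # fixed seed so the toy run is reproducible

def preserved_beds(steps):
    """Stack model of 1D stratigraphy: positive steps deposit a package,
    negative steps erode previously deposited packages from the top down.
    Surviving package thicknesses are the preserved beds."""
    stack = []  # thicknesses of preserved packages, bottom to top
    for s in steps:
        if s > 0:
            stack.append(s)              # depositional event: new package
        else:
            erode = -s
            while erode > 0 and stack:
                if stack[-1] <= erode:
                    erode -= stack.pop() # package fully removed by erosion
                else:
                    stack[-1] -= erode   # package truncated by an erosional surface
                    erode = 0.0
    return stack

# symmetric heavy-tailed elevation fluctuations: Pareto-tail magnitudes
# (alpha = 1.5, an assumption for illustration) with a random sign
steps = [random.choice((-1, 1)) * (random.paretovariate(1.5) - 1.0)
         for _ in range(20000)]
beds = preserved_beds(steps)
mean_b = sum(beds) / len(beds) if beds else 0.0
```

Because every surviving package has been clipped by later erosion, the bed-thickness population is much lighter-tailed than the fluctuations that produced it, which is the qualitative point of the abstract.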
Not spreading in reverse: The dewetting of a liquid film into a single drop
Edwards, Andrew M. J.; Ledesma-Aguilar, Rodrigo; Newton, Michael I.; Brown, Carl V.; McHale, Glen
2016-01-01
Wetting and dewetting are both fundamental modes of motion of liquids on solid surfaces. They are critically important for processes in biology, chemistry, and engineering, such as drying, coating, and lubrication. However, recent progress in wetting, which has led to new fields such as superhydrophobicity and liquid marbles, has not been matched by dewetting. A significant problem has been the inability to study the model system of a uniform film dewetting from a nonwetting surface to a single macroscopic droplet—a barrier that does not exist for the reverse wetting process of a droplet spreading into a film. We report the dewetting of a dielectrophoresis-induced film into a single equilibrium droplet. The emergent picture of the full dewetting dynamics is of an initial regime, where a liquid rim recedes at constant speed and constant dynamic contact angle, followed by a relatively short exponential relaxation of a spherical cap shape. This sharply contrasts with the reverse wetting process, where a spreading droplet follows a smooth sequence of spherical cap shapes. Complementary numerical simulations and a hydrodynamic model reveal a local dewetting mechanism driven by the equilibrium contact angle, where contact line slip dominates the dewetting dynamics. Our conclusions can be used to understand a wide variety of processes involving liquid dewetting, such as drop rebound, condensation, and evaporation. In overcoming the barrier to studying single film-to-droplet dewetting, our results provide new approaches to fluid manipulation and uses of dewetting, such as inducing films of prescribed initial shapes and slip-controlled liquid retraction.
A universal procedure for primer labelling of amplicons.
Neilan, B A; Wilton, A N; Jacobs, D
1997-01-01
Detection and visualisation of nucleic acids is integral to genome analyses. Exponential amplification procedures have provided the means for the manipulation of nucleic acid sequences, which were otherwise inaccessible. We describe the development and application of a universal method for the labelling of any PCR product using a single end-labelled primer. Amplification was performed in a single reaction with the resulting amplicon labelled to a high specific activity. The method was adapted to a wide range of PCRs and significantly reduced the expense of such analyses.
Probing Gamma-ray Emission of Geminga and Vela with Non-stationary Models
NASA Astrophysics Data System (ADS)
Chai, Yating; Cheng, Kwong-Sang; Takata, Jumpei
2016-06-01
It is generally believed that the high energy emissions from isolated pulsars are emitted by relativistic electrons/positrons accelerated in outer magnetospheric accelerators (outergaps) via a curvature radiation mechanism, which produces a simple exponential cut-off spectrum. However, many gamma-ray pulsars detected by the Fermi LAT (Large Area Telescope) cannot be fitted by a simple exponential cut-off spectrum; a sub-exponential cut-off is more appropriate. It is proposed that realistic outergaps are non-stationary, and that the observed spectrum is a superposition of different stationary states that are controlled by the currents injected from the inner and outer boundaries. The Vela and Geminga pulsars have the largest fluxes among all targets observed, which allows us to carry out very detailed phase-resolved spectral analysis. We divided the Vela and Geminga light curves into 19 and 33 phase bins, respectively (the off-pulse of Vela was not included). We find that most phase-resolved spectra still cannot be fitted by a simple exponential spectrum: in fact, a sub-exponential spectrum is necessary. We conclude that non-stationary states exist even down to very fine phase bins.
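The distinction between the two cut-off shapes can be written as a single spectral form, K * E**(-gamma) * exp(-(E/Ec)**b), where b = 1 is the simple exponential cut-off and b < 1 is sub-exponential. The symbols and values below are illustrative, not fitted Fermi LAT parameters:

```python
import math

def cutoff_spectrum(E, K, gamma, E_c, b):
    """Power law with (sub-)exponential cutoff: K * E^-gamma * exp(-(E/E_c)^b).
    b = 1 gives the simple exponential cutoff; b < 1 is sub-exponential."""
    return K * E ** (-gamma) * math.exp(-((E / E_c) ** b))

E_c = 2.0  # cutoff energy, illustrative units
# well above the cutoff, the sub-exponential (b = 0.5) tail decays more slowly
simple = cutoff_spectrum(10.0, 1.0, 1.5, E_c, 1.0)
subexp = cutoff_spectrum(10.0, 1.0, 1.5, E_c, 0.5)
```

At E = E_c the two forms coincide; beyond the cutoff the sub-exponential curve lies above the simple exponential one, which is the excess that simple cut-off fits fail to capture.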
Anomalous T2 relaxation in normal and degraded cartilage.
Reiter, David A; Magin, Richard L; Li, Weiguo; Trujillo, Juan J; Pilar Velasco, M; Spencer, Richard G
2016-09-01
To compare the ordinary monoexponential model with three anomalous relaxation models (the stretched Mittag-Leffler, stretched exponential, and biexponential functions) using both simulated and experimental cartilage relaxation data. Monte Carlo simulations were used to examine both the ability to identify a given model under high signal-to-noise ratio (SNR) conditions and the accuracy and precision of parameter estimates under the more modest SNR that would be encountered clinically. Experimental transverse relaxation data were analyzed from normal and enzymatically degraded cartilage samples under high SNR and rapid echo sampling to compare each model. Both simulation and experimental results showed improvement in signal representation with the anomalous relaxation models. The stretched exponential model consistently showed the lowest mean squared error in experimental data and closely represents the signal decay over multiple decades of the decay time (e.g., 1-10 ms, 10-100 ms, and >100 ms). The stretched exponential parameter αse showed an inverse correlation with biochemically derived cartilage proteoglycan content. Experimental results obtained at high field suggest potential application of αse as a measure of matrix integrity. Simulations reflecting more clinical imaging conditions indicate the ability to robustly estimate αse and distinguish between normal and degraded tissue, highlighting its potential as a biomarker for human studies. Magn Reson Med 76:953-962, 2016. © 2015 Wiley Periodicals, Inc.
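A sketch of why the stretched exponential S0 * exp(-(t/tau)**alpha) can represent multi-scale decay better than a monoexponential: fit both by grid search to synthetic biexponential data. This is an illustration under assumed parameter values, not the paper's cartilage data or fitting procedure.

```python
import math

def grid_fit(ts, ys, taus, alphas):
    """Fit S0 * exp(-(t/tau)**alpha) by grid search over (tau, alpha),
    with the amplitude S0 solved in closed form by least squares."""
    best = None
    for tau in taus:
        for alpha in alphas:
            basis = [math.exp(-((t / tau) ** alpha)) for t in ts]
            denom = sum(b * b for b in basis)
            s0 = sum(b * y for b, y in zip(basis, ys)) / denom
            mse = sum((y - s0 * b) ** 2 for y, b in zip(ys, basis)) / len(ys)
            if best is None or mse < best[0]:
                best = (mse, s0, tau, alpha)
    return best

# synthetic biexponential decay, a stand-in for multi-scale T2 relaxation
ts = [0.5 * i for i in range(1, 81)]
ys = [0.6 * math.exp(-t / 2.0) + 0.4 * math.exp(-t / 15.0) for t in ts]

taus = [0.5 * k for k in range(1, 61)]
mono = grid_fit(ts, ys, taus, [1.0])  # monoexponential: alpha fixed at 1
stretched = grid_fit(ts, ys, taus, [0.05 * k for k in range(1, 20)] + [1.0])
```

Because the stretched-exponential search space contains alpha = 1, its best fit can only match or beat the monoexponential; on two-scale data it beats it clearly, mirroring the lower mean squared error reported above.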
Modeling the degradation kinetics of ascorbic acid.
Peleg, Micha; Normand, Mark D; Dixon, William R; Goulette, Timothy R
2018-06-13
Most published reports on ascorbic acid (AA) degradation during food storage and heat preservation suggest that it follows first-order kinetics. Deviations from this pattern include Weibullian decay and an exponential drop approaching finite nonzero retention. Almost invariably, the degradation rate constant's temperature-dependence followed the Arrhenius equation, and hence the simpler exponential model too. A formula and a freely downloadable interactive Wolfram Demonstration to convert the Arrhenius model's energy of activation, Ea, to the exponential model's c parameter, or vice versa, are provided. The AA's isothermal and non-isothermal degradation can be simulated with freely downloadable interactive Wolfram Demonstrations in which the model's parameters can be entered and modified by moving sliders on the screen. Where the degradation is known a priori to follow first- or other fixed-order kinetics, one can use the endpoints method, and in principle the successive points method too, to estimate the reaction's kinetic parameters from considerably fewer AA concentration determinations than in the traditional manner. Freeware to do the calculations by either method has recently been made available on the Internet. Once obtained in this way, the kinetic parameters can be used to reconstruct the entire degradation curves and predict those at different temperature profiles, isothermal or dynamic. Comparison of the predicted concentration ratios with experimental ones offers a way to validate or refute the kinetic model and the assumptions on which it is based.
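One plausible form of the Ea-to-c conversion is sketched below. The assumption here (mine, not taken from the paper, whose exact formula may differ) is that the exponential model k(T) = k_ref * exp(c * (T - T_ref)) is matched to the Arrhenius temperature sensitivity d(ln k)/dT = Ea / (R * T**2) at a reference temperature T_ref, giving c = Ea / (R * T_ref**2):

```python
R = 8.314  # J/(mol K), universal gas constant

def arrhenius_to_c(Ea, T_ref):
    """c parameter of k(T) = k_ref*exp(c*(T - T_ref)), obtained by matching
    the Arrhenius sensitivity d(ln k)/dT = Ea/(R*T^2) at T_ref (an assumption)."""
    return Ea / (R * T_ref ** 2)

def c_to_arrhenius(c, T_ref):
    """Inverse conversion: recover the energy of activation from c."""
    return c * R * T_ref ** 2

Ea = 75_000.0   # J/mol, illustrative activation energy
T_ref = 298.15  # K, illustrative reference temperature
c = arrhenius_to_c(Ea, T_ref)
```

The two functions are exact inverses, so the conversion can be run in either direction, as the abstract describes for the Wolfram Demonstration.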
Comparative Analyses of Creep Models of a Solid Propellant
NASA Astrophysics Data System (ADS)
Zhang, J. B.; Lu, B. J.; Gong, S. F.; Zhao, S. P.
2018-05-01
Creep experiments on solid propellant samples under five different stresses were carried out at 293.15 K and 323.15 K. To express the creep properties of this solid propellant, five viscoelastic models are considered: the three-parameter solid, three-parameter fluid, four-parameter solid, four-parameter fluid, and exponential models. On the basis of least-squares fitting of all the model parameters at the different stresses, a nonlinear fitting procedure is used to analyze the creep properties. The study shows that the four-parameter solid model best expresses the creep behavior of the propellant samples. However, the three-parameter solid and exponential models cannot reproduce the initial value of the creep process well, while the modified four-parameter models agree well with the acceleration characteristics of the creep process.
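As an illustration of this family of models, the creep compliance of the four-parameter fluid (Burgers) model can be evaluated directly. The abstract does not give the authors' exact model equations, so the textbook form and the parameter values below are assumptions for the example: instantaneous elasticity, viscous flow, and delayed (Kelvin) elasticity under constant stress.

```python
import math

def burgers_creep(t, sigma, E1, eta1, E2, eta2):
    """Creep strain of the four-parameter (Burgers) fluid under constant
    stress sigma: elastic jump sigma/E1, viscous term sigma*t/eta1,
    and delayed elasticity sigma*(1 - exp(-E2*t/eta2))/E2."""
    return sigma * (1.0 / E1 + t / eta1 + (1.0 - math.exp(-E2 * t / eta2)) / E2)

# illustrative material parameters, not fitted propellant values
sigma, E1, eta1, E2, eta2 = 1.0, 2.0, 50.0, 5.0, 10.0
strain = [burgers_creep(0.5 * i, sigma, E1, eta1, E2, eta2) for i in range(100)]
```

The curve starts at the instantaneous elastic strain sigma/E1 and rises monotonically, the qualitative shape a creep fit has to reproduce.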
Zhou, Jingwen; Xu, Zhenghong; Chen, Shouwen
2013-04-01
The thuringiensin abiotic degradation processes in aqueous solution under different conditions, with a pH range of 5.0-9.0 and a temperature range of 10-40°C, were systematically investigated by an exponential decay model and a radial basis function (RBF) neural network model, respectively. The half-lives of thuringiensin calculated by the exponential decay model ranged from 2.72 d to 16.19 d under the different conditions mentioned above. Furthermore, an RBF model with an accuracy of 0.1 and a SPREAD value of 5 was employed to model the degradation processes. The results showed that the model could simulate and predict the degradation processes well. Both the half-lives and the prediction data showed that thuringiensin is an easily degradable antibiotic, which could be an important factor in the evaluation of its safety. Copyright © 2012 Elsevier Ltd. All rights reserved.
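The half-lives quoted above relate to the exponential decay model through the standard first-order formula t_half = ln(2) / k. A minimal sketch (the rate constants are derived here from the reported half-life range, not taken from the paper's tables):

```python
import math

def half_life(k):
    """Half-life of first-order (exponential) decay C(t) = C0*exp(-k*t)."""
    return math.log(2.0) / k

def rate_from_half_life(t_half):
    """Inverse: rate constant implied by a measured half-life."""
    return math.log(2.0) / t_half

# the reported half-life range of 2.72-16.19 d corresponds to these rate constants
k_fast = rate_from_half_life(2.72)   # 1/d, fastest-degrading condition
k_slow = rate_from_half_life(16.19)  # 1/d, slowest-degrading condition
```

Shorter half-lives map to larger rate constants, so the 2.72 d condition degrades thuringiensin roughly six times faster than the 16.19 d condition.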
Zuthi, Mst Fazana Rahman; Guo, Wenshan; Ngo, Huu Hao; Nghiem, Duc Long; Hai, Faisal I; Xia, Siqing; Li, Jianxin; Li, Jixiang; Liu, Yi
2017-08-01
This study aimed to develop a practical semi-empirical mathematical model of membrane fouling that accounts for cake formation on the membrane and its pore blocking as the major processes of membrane fouling. In the developed model, the concentration of mixed liquor suspended solid is used as a lumped parameter to describe the formation of cake layer including the biofilm. The new model considers the combined effect of aeration and backwash on the foulants' detachment from the membrane. New exponential coefficients are also included in the model to describe the exponential increase of transmembrane pressure that typically occurs after the initial stage of an MBR operation. The model was validated using experimental data obtained from a lab-scale aerobic sponge-submerged membrane bioreactor (MBR), and the simulation of the model agreed well with the experimental findings. Copyright © 2017 Elsevier Ltd. All rights reserved.
Sodium 22+ washout from cultured rat cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kino, M.; Nakamura, A.; Hopp, L.
1986-10-01
The washout of Na⁺ isotopes from tissues and cells is quite complex and not well defined. To further gain insight into this process, we have studied ²²Na⁺ washout from cultured Wistar rat skin fibroblasts and vascular smooth muscle cells (VSMCs). In these preparations, ²²Na⁺ washout is described by a general three-exponential function. The exponential factor of the fastest component (k1) and the initial exchange rate constant (kie) of cultured fibroblasts decrease in magnitude in response to incubation in K⁺-deficient medium or in the presence of ouabain and increase in magnitude when the cells are incubated in a Ca²⁺-deficient medium. As the magnitude of the kie declines (in the presence of ouabain) to the level of the exponential factor of the middle component (k2), ²²Na⁺ washout is adequately described by a two-exponential function. When the kie is further diminished (in the presence of both ouabain and phloretin) to the range of the exponential factor of the slowest component (k3), the washout of ²²Na⁺ is apparently monoexponential. Calculations of the cellular Na⁺ concentrations, based on the ²²Na⁺ activity in the cells at the initiation of the washout experiments, and the medium specific activity agree with atomic absorption spectrometry measurements of the cellular concentration of this ion. Thus, all three components of ²²Na⁺ washout from cultured rat cells are of cellular origin. Using the exponential parameters, compartmental analyses of two models (in parallel and in series) with three cellular Na⁺ pools were performed. The results indicate that, independent of the model chosen, the relative size of the largest Na⁺ pool is 92-93% in fibroblasts and approximately 96% in VSMCs. This pool is most likely to represent the cytosol.
Time prediction of failure a type of lamps by using general composite hazard rate model
NASA Astrophysics Data System (ADS)
Riaman; Lesmana, E.; Subartini, B.; Supian, S.
2018-03-01
This paper discusses estimation of a basic survival model to obtain the predicted average failure time of a lamp. The estimate is for a parametric model, the general composite hazard rate model. The random failure-time model used as the basis is the exponential distribution, which has a constant hazard function. As a case study, we discuss estimation of a survival model with a composite hazard function, using an exponential model as its basis. The model is estimated by estimating its parameters through construction of the survival function and the empirical cumulative distribution function. The fitted model is then used to predict the average failure time for this type of lamp. By grouping the data into several intervals, taking the average failure value in each interval, and then calculating the model's average failure time on each interval, a p-value of 0.3296 is obtained from the test.
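For the exponential basis model with constant hazard lambda, the maximum-likelihood estimate and the predicted mean failure time have closed forms: lambda_hat = n / sum(t_i) and mean = 1 / lambda_hat. A sketch with hypothetical failure times (not the paper's lamp data):

```python
import math

def exp_mle_rate(failure_times):
    """MLE of the constant hazard rate lambda for an exponential
    failure-time model: lambda_hat = n / sum(t_i)."""
    return len(failure_times) / sum(failure_times)

def survival(t, lam):
    """Survival function of the exponential model: S(t) = exp(-lambda*t)."""
    return math.exp(-lam * t)

# illustrative lamp failure times in hours (hypothetical data)
times = [120.0, 340.0, 95.0, 410.0, 230.0, 180.0, 305.0, 150.0]
lam = exp_mle_rate(times)
mean_ttf = 1.0 / lam  # predicted average failure time under the model
```

For the exponential model the predicted mean failure time is simply the sample mean, which is why the constant-hazard case is a convenient basis for the composite model described above.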
NASA Astrophysics Data System (ADS)
Eibl, Matthias; Karpf, Sebastian; Hakert, Hubertus; Weng, Daniel; Pfeiffer, Tom; Kolb, Jan Philip; Huber, Robert
2017-07-01
Newly developed microscopy methods aim to give researchers in biomolecular science a better understanding of processes occurring at the cellular level. Two-photon excited fluorescence (TPEF) microscopy in particular is a readily applied and widespread modality. Compared to one-photon fluorescence imaging, it is possible to image not only the surface but also deeper-lying structures. Together with fluorescence lifetime imaging (FLIM), which provides information on the chemical composition of a specimen, deeper insights on a molecular level can be gained. However, the need for elaborate light sources for TPEF and speed limitations for FLIM hinder even wider application. In this contribution, we present a way to overcome these limitations by combining a robust and inexpensive fiber laser for nonlinear excitation with a fast analog digitization method for rapid FLIM imaging. The applied sub-nanosecond pulsed laser source is perfectly suited for fiber delivery, as the typically limiting nonlinear effects such as self-phase and cross-phase modulation (SPM, XPM) are negligible. Furthermore, compared to the typically applied femtosecond pulses, our longer pulses produce many more fluorescence photons per single shot. In this paper, we show that this higher number of fluorescence photons per pulse, combined with high-analog-bandwidth detection, makes it possible not only to use a single pulse per pixel for TPEF imaging but also to resolve the exponential time decay for FLIM. To evaluate our system, we acquired FLIM images of a dye solution with single-exponential behavior to assess the accuracy of our lifetime determination, as well as FLIM images of a plant stem at a pixel rate of 1 MHz to show the speed performance of our single-pulse two-photon FLIM (SP-FLIM) system.
Santori, G; Fontana, I; Bertocchi, M; Gasloli, G; Valente, U
2010-05-01
Following the example of many Western countries, where a "minimum volume rule" policy has been adopted as a quality parameter for complex surgical procedures, the Italian National Transplant Centre set the minimum number of kidney transplantation procedures/y at 30/center. The number of procedures performed in a single center over a large period may be treated as a time series to evaluate trends, seasonal cycles, and nonsystematic fluctuations. Between January 1, 1983, and December 31, 2007, we performed 1376 procedures in adult or pediatric recipients from living or cadaveric donors. The greatest numbers of cases/y were performed in 1998 (n = 86) followed by 2004 (n = 82), 1996 (n = 75), and 2003 (n = 73). A time series analysis performed using R Statistical Software (Foundation for Statistical Computing, Vienna, Austria), a free software environment for statistical computing and graphics, showed a whole incremental trend after exponential smoothing as well as after seasonal decomposition. However, starting from 2005, we observed a decreased trend in the series. The number of kidney transplants expected to be performed for 2008 by using the Holt-Winters exponential smoothing applied to the period 1983 to 2007 suggested 58 procedures, while in that year there were 52. The time series approach may be helpful to establish a minimum volume/y at a single-center level. Copyright (c) 2010 Elsevier Inc. All rights reserved.
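The trend-following core of the Holt-Winters forecast used above can be sketched as Holt's linear method (simple exponential smoothing of a level and a trend). The smoothing constants and the series below are illustrative, not the center's transplant data:

```python
def holt_forecast(series, alpha=0.5, beta=0.5):
    """One-step-ahead forecast with Holt's linear-trend exponential
    smoothing (the non-seasonal core of Holt-Winters)."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1.0 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1.0 - beta) * trend
    return level + trend

# a perfectly linear illustrative series: the forecast should extend the trend
counts = [30, 35, 40, 45, 50]
next_year = holt_forecast(counts)
```

On exactly linear data the level tracks each observation and the trend stays constant, so the forecast extends the line; on real counts the smoothing constants trade responsiveness against noise suppression.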
NASA Astrophysics Data System (ADS)
Abas, Norzaida; Daud, Zalina M.; Yusof, Fadhilah
2014-11-01
A stochastic rainfall model is presented for the generation of hourly rainfall data in an urban area in Malaysia. In view of the high temporal and spatial variability of rainfall within the tropical rain belt, the Spatial-Temporal Neyman-Scott Rectangular Pulse model was used. The model, which is governed by the Neyman-Scott process, employs a reasonable number of parameters to represent the physical attributes of rainfall. A common approach is to attach each attribute to a mathematical distribution. With respect to rain cell intensity, this study proposes the use of a mixed exponential distribution. The performance of the proposed model was compared to a model that employs the Weibull distribution. Hourly and daily rainfall data from four stations in the Damansara River basin in Malaysia were used as input to the models, and simulations of hourly series were performed for an independent site within the basin. The performance of the models was assessed based on how closely the statistical characteristics of the simulated series resembled the statistics of the observed series. The findings based on graphical comparison revealed that the statistical characteristics of the simulated series for both models compared reasonably well with the observed series. However, a further assessment using the AIC, BIC and RMSE showed that the proposed model yields better results. The results of this study indicate that for tropical climates, the proposed model, using a mixed exponential distribution, is the best choice for generation of synthetic data for ungauged sites or for sites with insufficient data within the limit of the fitted region.
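The two-component mixed exponential distribution proposed for rain cell intensity has a simple closed-form CDF and mean, and is easy to sample by first picking a component. The mixing weight and rates below are illustrative, not fitted Damansara basin parameters:

```python
import math
import random

def mixed_exp_cdf(x, p, lam1, lam2):
    """CDF of a two-component mixed exponential:
    F(x) = 1 - p*exp(-lam1*x) - (1-p)*exp(-lam2*x)."""
    return 1.0 - p * math.exp(-lam1 * x) - (1.0 - p) * math.exp(-lam2 * x)

def mixed_exp_sample(p, lam1, lam2, rng):
    """Draw one intensity: choose a component with probability p, then
    sample that component's exponential distribution."""
    lam = lam1 if rng.random() < p else lam2
    return rng.expovariate(lam)

p, lam1, lam2 = 0.7, 2.0, 0.25          # illustrative parameter values
mean_theory = p / lam1 + (1.0 - p) / lam2  # mixture mean: p/lam1 + (1-p)/lam2
rng = random.Random(1)
sample_mean = sum(mixed_exp_sample(p, lam1, lam2, rng)
                  for _ in range(50000)) / 50000
```

Mixing a frequent low-intensity component with a rare high-intensity one gives the heavier-than-exponential tail that makes the mixed exponential attractive for rain cell intensities.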
Sampling through time and phylodynamic inference with coalescent and birth–death models
Volz, Erik M.; Frost, Simon D. W.
2014-01-01
Many population genetic models have been developed for the purpose of inferring population size and growth rates from random samples of genetic data. We examine two popular approaches to this problem, the coalescent and the birth–death-sampling model (BDM), in the context of estimating population size and birth rates in a population growing exponentially according to the birth–death branching process. For sequences sampled at a single time, we found the coalescent and the BDM gave virtually indistinguishable results in terms of the growth rates and fraction of the population sampled, even when sampling from a small population. For sequences sampled at multiple time points, we find that the birth–death model estimators are subject to large bias if the sampling process is misspecified. Since BDMs incorporate a model of the sampling process, we show how much of the statistical power of BDMs arises from the sequence of sample times and not from the genealogical tree. This motivates the development of a new coalescent estimator, which is augmented with a model of the known sampling process and is potentially more precise than the coalescent that does not use sample time information.
NASA Astrophysics Data System (ADS)
Małoszewski, P.; Zuber, A.
1982-06-01
Three new lumped-parameter models have been developed for the interpretation of environmental radioisotope data in groundwater systems. Two of these models combine other simpler models, i.e. the piston flow model is combined either with the exponential model (exponential distribution of transit times) or with the linear model (linear distribution of transit times). The third model is based on a new solution to the dispersion equation which more adequately represents the real systems than the conventional solution generally applied so far. The applicability of models was tested by the reinterpretation of several known case studies (Modry Dul, Cheju Island, Rasche Spring and Grafendorf). It has been shown that two of these models, i.e. the exponential-piston flow model and the dispersive model, give better fitting than other simpler models. Thus, the obtained values of turnover times are more reliable, whereas the additional fitting parameter gives some information about the structure of the system. In the examples considered, in spite of a lower number of fitting parameters, the new models gave practically the same fitting as the multiparameter finite state mixing-cell models. It has been shown that in the case of a constant tracer input a prior physical knowledge of the groundwater system is indispensable for determining the turnover time. The piston flow model commonly used for age determinations by the ¹⁴C method is an approximation applicable only in cases of low dispersion. In some cases the stable-isotope method aids in the interpretation of systems containing mixed waters of different ages. However, when the ¹⁴C method is used for mixed-water systems a serious mistake may arise by neglecting the different bicarbonate contents in particular water components.
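The exponential-piston flow model's transit-time distribution (zero during a piston-flow delay, then an exponential tail) can be written down and checked numerically. The formula below is the standard EPM form with ratio parameter eta, and the values of T and eta are illustrative, not from the case studies:

```python
import math

def epm_pdf(t, T, eta):
    """Transit-time distribution of the exponential-piston flow model:
    zero during the piston-flow delay T*(1 - 1/eta), then an exponential
    tail with rate eta/T; eta = 1 recovers the pure exponential model."""
    t0 = T * (1.0 - 1.0 / eta)
    if t < t0:
        return 0.0
    return (eta / T) * math.exp(-eta * t / T + eta - 1.0)

T, eta = 10.0, 2.0  # mean transit (turnover) time and EPM ratio, illustrative
dt = 0.005
grid = [i * dt for i in range(int(20 * T / dt))]
total = sum(epm_pdf(t, T, eta) for t in grid) * dt     # should integrate to ~1
mean = sum(t * epm_pdf(t, T, eta) for t in grid) * dt  # should recover ~T
```

Whatever the split between piston-flow delay and exponential tail, the distribution's mean stays equal to the turnover time T; eta only reshapes how that time is distributed.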