Black, Dolores Archuleta; Robinson, William H.; Wilcox, Ian Zachary; ...
2015-08-07
Single event effects (SEE) are a reliability concern for modern microelectronics. Bit corruptions can be caused by single event upsets (SEUs) in the storage cells or by sampling single event transients (SETs) from a logic path. Consequently, an accurate prediction of soft error susceptibility from SETs requires good models to convert collected charge into compact descriptions of the current injection process. This paper describes a simple, yet effective, method to model the current waveform resulting from a charge collection event for SET circuit simulations. The model uses two double-exponential current sources in parallel, and the results illustrate why a conventional model based on one double-exponential source can be incomplete. Furthermore, a small set of logic cells with varying input conditions, drive strengths, and output loadings is simulated to extract the parameters for the dual double-exponential current sources. As a result, the parameters are based upon both the node capacitance and the restoring current (i.e., drive strength) of the logic cell.
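The dual-source idea described above can be sketched numerically: two double-exponential current pulses, one fast (prompt collection) and one slow (diffusion), summed in parallel. A minimal Python illustration; all parameter values below are hypothetical, not the paper's extracted ones.

```python
import math

def double_exp(t, i0, tau_r, tau_f):
    """Single double-exponential pulse: I(t) = i0 * (exp(-t/tau_f) - exp(-t/tau_r)).
    tau_r is the (short) rise time constant, tau_f the (long) fall time constant."""
    if t < 0:
        return 0.0
    return i0 * (math.exp(-t / tau_f) - math.exp(-t / tau_r))

def dual_double_exp(t, fast, slow):
    """Sum of two double-exponential sources in parallel: a fast prompt
    component plus a slow diffusion tail. fast/slow are (i0, tau_r, tau_f)."""
    return double_exp(t, *fast) + double_exp(t, *slow)
```

The fast source shapes the peak while the slow source carries the late-time charge, which is the behaviour a single double exponential cannot capture.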
Abusam, A; Keesman, K J
2009-01-01
The double exponential settling model is the widely accepted model for wastewater secondary settling tanks. However, this model does not accurately estimate solids concentrations in the settler underflow stream, mainly because sludge compression and consolidation processes are not considered. In activated sludge systems, accurate estimation of the solids in the underflow stream will facilitate the calibration process and can lead to correct estimates of the kinetic parameters related to biomass growth in particular. Using principles of compaction and consolidation, as in soil mechanics, a dynamic model of the sludge consolidation processes taking place in secondary settling tanks is developed and incorporated into the commonly used double exponential settling model. The modified double exponential model is calibrated and validated using data obtained from a full-scale wastewater treatment plant. Good agreement between predicted and measured data confirmed the validity of the modified model.
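The double-exponential settling function referred to here is commonly written in the Takács form, with a hindered-settling term and a low-concentration correction term. A minimal Python sketch; the default parameter values are illustrative textbook-style numbers, not values calibrated to the plant data in the paper.

```python
import math

def takacs_settling_velocity(X, v0=474.0, v0_max=250.0,
                             r_h=5.76e-4, r_p=2.86e-3, X_min=12.0):
    """Double-exponential (Takács-type) settling velocity [m/d] as a
    function of solids concentration X [g/m^3]:
        v = v0 * (exp(-r_h*(X - X_min)) - exp(-r_p*(X - X_min)))
    clipped to the range [0, v0_max]. Parameters are illustrative."""
    v = v0 * (math.exp(-r_h * (X - X_min)) - math.exp(-r_p * (X - X_min)))
    return max(0.0, min(v0_max, v))
```

The velocity vanishes at the non-settleable minimum concentration, rises through the flocculent range, and decays again at high concentrations, which is the regime the compression/consolidation extension targets.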
NASA Astrophysics Data System (ADS)
Elmegreen, Bruce G.
2016-10-01
Exponential radial profiles are ubiquitous in spiral and dwarf Irregular galaxies, but the origin of this structural form is not understood. This talk will review the observations of exponential and double exponential disks, considering both the light and the mass profiles, and the contributions from stars and gas. Several theories for this structure will also be reviewed, including primordial collapse, bar and spiral torques, clump torques, galaxy interactions, disk viscosity and other internal processes of angular momentum exchange, and stellar scattering off of clumpy structure. The only process currently known that can account for this structure in the most theoretically difficult case is stellar scattering off disk clumps. Stellar orbit models suggest that such scattering can produce exponentials even in isolated dwarf irregulars that have no bars or spirals, little shear or viscosity, and profiles that go out too far for the classical Mestel case of primordial collapse with specific angular momentum conservation.
Tosun, İsmail
2012-01-01
The adsorption isotherm, the adsorption kinetics, and the thermodynamic parameters of ammonium removal from aqueous solution by clinoptilolite were investigated in this study. Experimental data obtained from batch equilibrium tests have been analyzed by four two-parameter (Freundlich, Langmuir, Tempkin and Dubinin-Radushkevich (D-R)) and four three-parameter (Redlich-Peterson (R-P), Sips, Toth and Khan) isotherm models. The D-R and R-P isotherms were the models that best fitted the experimental data over the other two- and three-parameter models applied. The adsorption energy (E) from the D-R isotherm was found to be approximately 7 kJ/mol for the ammonium-clinoptilolite system, thereby indicating that ammonium is adsorbed on clinoptilolite by physisorption. Kinetic parameters were determined by analyzing the nth-order kinetic model, the modified second-order model and the double exponential model, and each model resulted in a coefficient of determination (R2) above 0.989 with an average relative error lower than 5%. The Double Exponential Model (DEM) showed that the adsorption process develops in two stages, a rapid and a slow phase. Changes in standard free energy (∆G°), enthalpy (∆H°) and entropy (∆S°) of the ammonium-clinoptilolite system were estimated by using the thermodynamic equilibrium coefficients. PMID:22690177
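The Double Exponential Model mentioned above treats uptake as the sum of a rapid and a slow first-order phase approaching equilibrium. A minimal sketch, with all parameter values hypothetical:

```python
import math

def dem_uptake(t, qe, a1, k1, a2, k2):
    """Double Exponential Model of adsorption kinetics:
        q(t) = qe - a1*exp(-k1*t) - a2*exp(-k2*t)
    With k1 >> k2, the first term is the rapid phase and the
    second the slow phase; q(t) -> qe as t -> infinity."""
    return qe - a1 * math.exp(-k1 * t) - a2 * math.exp(-k2 * t)
```

With a1 + a2 = qe the uptake starts from zero, rises quickly while the fast term decays, then creeps toward equilibrium on the slow time scale.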
Generalization of the event-based Carnevale-Hines integration scheme for integrate-and-fire models.
van Elburg, Ronald A J; van Ooyen, Arjen
2009-07-01
An event-based integration scheme for an integrate-and-fire neuron model with exponentially decaying excitatory synaptic currents and double exponential inhibitory synaptic currents has been introduced by Carnevale and Hines. However, the integration scheme imposes nonphysiological constraints on the time constants of the synaptic currents, which hamper its general applicability. This letter addresses this problem in two ways. First, we provide physical arguments demonstrating why these constraints on the time constants can be relaxed. Second, we give a formal proof showing which constraints can be abolished. As part of our formal proof, we introduce the generalized Carnevale-Hines lemma, a new tool for comparing double exponentials as they naturally occur in many cascaded decay systems, including receptor-neurotransmitter dissociation followed by channel closing. Through repeated application of the generalized lemma, we lift most of the original constraints on the time constants. Thus, we show that the Carnevale-Hines integration scheme for the integrate-and-fire model can be employed for simulating a much wider range of neuron and synapse types than was previously thought.
Observational constraints on varying neutrino-mass cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geng, Chao-Qiang; Lee, Chung-Chi; Myrzakulov, R.
We consider generic models of quintessence and investigate the influence of massive neutrino matter with field-dependent masses on the matter power spectrum. In the case of minimally coupled neutrino matter, we examine the effect in tracker models with inverse power-law and double exponential potentials. We present detailed investigations of a scaling field with a steep exponential potential, non-minimally coupled to massive neutrino matter, and derive constraints on field-dependent neutrino masses from the observational data.
Parameter estimation and order selection for an empirical model of VO2 on-kinetics.
Alata, O; Bernard, O
2007-04-27
In humans, VO2 on-kinetics are noisy numerical signals that reflect the pulmonary oxygen exchange kinetics at the onset of exercise. They are empirically modelled as a sum of an offset and delayed exponentials. The number of delayed exponentials, i.e. the order of the model, is commonly supposed to be 1 for low-intensity exercises and 2 for high-intensity exercises. As no ground truth has ever been provided to validate these postulates, physiologists still need statistical methods to verify their hypotheses about the number of exponentials of the VO2 on-kinetics, especially in the case of high-intensity exercises. Our objectives are first to develop accurate methods for estimating the parameters of the model at a fixed order, and then to propose statistical tests for selecting the appropriate order. In this paper, we evaluate, on simulated data, the performance of simulated annealing for estimating model parameters and of information criteria for selecting the order. The simulated data are generated with both single-exponential and double-exponential models and corrupted by additive white Gaussian noise. Performance is reported at various signal-to-noise ratios (SNRs). For parameter estimation, results show that the confidence of the estimated parameters improves as the SNR of the response to be fitted increases. For model selection, results show that information criteria are appropriate statistical criteria for selecting the number of exponentials.
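The empirical model described above, an offset plus delayed exponentials, is easy to write down directly. A minimal sketch; amplitudes, time delays and time constants below are hypothetical.

```python
import math

def vo2(t, a0, phases):
    """Empirical VO2 on-kinetics: offset a0 plus delayed exponential terms.
    phases: list of (Ai, TDi, tau_i); each term contributes
    Ai*(1 - exp(-(t - TDi)/tau_i)) only for t >= TDi.
    The model order is simply len(phases)."""
    total = a0
    for amp, td, tau in phases:
        if t >= td:
            total += amp * (1.0 - math.exp(-(t - td) / tau))
    return total
```

Order selection then amounts to deciding whether a one-element or two-element `phases` list better explains the noisy response, which is what the information criteria in the paper arbitrate.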
The mechanism of double-exponential growth in hyper-inflation
NASA Astrophysics Data System (ADS)
Mizuno, T.; Takayasu, M.; Takayasu, H.
2002-05-01
Analyzing historical data of price indices, we find an extraordinary growth phenomenon in several examples of hyper-inflation, in which price changes are approximated nicely by double-exponential functions of time. In order to explain such behavior we introduce a general coarse-graining technique from physics, the Monte Carlo renormalization group method, to the price dynamics. Starting from a microscopic stochastic equation describing dealers' actions in open markets, we obtain a macroscopic noiseless equation of price consistent with the observation. The effect of auto-catalytic shortening of the characteristic time caused by mob psychology is shown to be responsible for the double-exponential behavior.
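A double-exponential growth law of the kind described can be written p(t) = p0·exp(c·(e^{bt} − 1)), so that p(0) = p0 and the log-price itself grows exponentially, i.e. the effective growth rate accelerates. A small sketch with hypothetical coefficients:

```python
import math

def double_exp_price(t, p0=1.0, c=0.05, b=0.1):
    """Double-exponential price path: log(p/p0) = c*(exp(b*t) - 1).
    Ordinary inflation is exponential (log-linear); hyper-inflation of
    this type is exponential in the log itself. c, b are illustrative."""
    return p0 * math.exp(c * (math.exp(b * t) - 1.0))
```

The defining signature is that the period-to-period growth factor is itself growing, unlike a constant-rate exponential.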
In vivo growth of 60 non-screening detected lung cancers: a computed tomography study.
Mets, Onno M; Chung, Kaman; Zanen, Pieter; Scholten, Ernst T; Veldhuis, Wouter B; van Ginneken, Bram; Prokop, Mathias; Schaefer-Prokop, Cornelia M; de Jong, Pim A
2018-04-01
Current pulmonary nodule management guidelines are based on nodule volume doubling time, which assumes exponential growth behaviour. However, this assumption has never been validated in vivo in the routine-care target population. This study evaluates growth patterns of untreated solid and subsolid lung cancers of various histologies in a non-screening setting. Growth behaviour of pathology-proven lung cancers from two academic centres that were imaged at least three times before diagnosis (n=60) was analysed using dedicated software. Random-intercept random-slope mixed-models analysis was applied to test which growth pattern most accurately described lung cancer growth. Individual growth curves were plotted per pathology subgroup and nodule type. We confirmed that growth in both subsolid and solid lung cancers is best explained by an exponential model. However, subsolid lesions generally progress more slowly than solid ones. Baseline lesion volume was not related to growth, indicating that smaller lesions do not grow more slowly than larger ones. By showing that lung cancer conforms to exponential growth, we provide the first experimental basis in the routine-care setting for the assumption made in volume doubling time analysis. Copyright ©ERS 2018.
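The volume doubling time underpinning these guidelines follows directly from the exponential-growth assumption: VDT = Δt·ln 2 / ln(V2/V1) for two volume measurements Δt apart. A minimal sketch:

```python
import math

def volume_doubling_time(v1, v2, dt_days):
    """Nodule volume doubling time in days, assuming exponential growth
    between two measurements v1 (earlier) and v2 (later) taken dt_days apart:
        VDT = dt * ln(2) / ln(v2 / v1)
    Requires v2 > v1 (a growing lesion)."""
    return dt_days * math.log(2) / math.log(v2 / v1)
```

If the volume exactly doubles over the interval, the VDT equals the interval; a quadrupling halves it.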
Numerical Calculation of the Spectrum of the Severe (1%) Lightning Current and Its First Derivative
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, C G; Ong, M M; Perkins, M P
2010-02-12
Recently, the direct-strike lightning environment for the stockpile-to-target sequence was updated [1]. In [1], the severe (1%) lightning current waveforms for first and subsequent return strokes are defined based on Heidler's waveform. This report presents numerical calculations of the spectra of those 1% lightning current waveforms and their first derivatives. First, the 1% lightning current models are repeated here for convenience. Then, the numerical method for calculating the spectra is presented and tested. The test uses a double-exponential waveform and its first derivative, which we fit to the previous 1% direct-strike lightning environment from [2]. Finally, the resulting spectra are given and are compared with those of the double-exponential waveform and its first derivative.
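The double-exponential test waveform is a convenient benchmark precisely because its Fourier transform is available in closed form. A minimal Python sketch; the I0, α, β values below are generic textbook-style lightning coefficients, not the report's fitted 1% parameters.

```python
import math

def dexp_current(t, i0=218e3, alpha=11354.0, beta=647265.0):
    """Double-exponential return-stroke current for t >= 0:
        I(t) = i0 * (exp(-alpha*t) - exp(-beta*t))
    alpha sets the decay, beta (>> alpha) the rise. Values illustrative."""
    return i0 * (math.exp(-alpha * t) - math.exp(-beta * t)) if t >= 0 else 0.0

def dexp_spectrum_mag(f, i0=218e3, alpha=11354.0, beta=647265.0):
    """|Fourier transform| of the double exponential, analytically:
        I(f) = i0 * (1/(alpha + j*2*pi*f) - 1/(beta + j*2*pi*f))
    Useful as ground truth for validating a numerical spectrum code."""
    w = 2.0 * math.pi * f
    return abs(i0 * (1.0 / complex(alpha, w) - 1.0 / complex(beta, w)))
```

A numerical (FFT-based) spectrum of the sampled waveform can then be checked point-by-point against `dexp_spectrum_mag`.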
Exponential quantum spreading in a class of kicked rotor systems near high-order resonances
NASA Astrophysics Data System (ADS)
Wang, Hailong; Wang, Jiao; Guarneri, Italo; Casati, Giulio; Gong, Jiangbin
2013-11-01
Long-lasting exponential quantum spreading was recently found in a simple but very rich dynamical model, namely, an on-resonance double-kicked rotor model [J. Wang, I. Guarneri, G. Casati, and J. B. Gong, Phys. Rev. Lett. 107, 234104 (2011)]. The underlying mechanism, unrelated to the chaotic motion in the classical limit but resting on quasi-integrable motion in a pseudoclassical limit, is identified for one special case. By presenting a detailed study of the same model, this work offers a framework to explain long-lasting exponential quantum spreading under much more general conditions. In particular, we adopt the so-called “spinor” representation to treat the kicked-rotor dynamics under high-order resonance conditions and then exploit the Born-Oppenheimer approximation to understand the dynamical evolution. It is found that the existence of a flat band (or an effectively flat band) is one important feature behind why and how the exponential dynamics emerges. It is also found that a quantitative prediction of the exponential spreading rate based on an interesting and simple pseudoclassical map may be inaccurate. In addition to general interests regarding the question of how exponential behavior in quantum systems may persist for a long time scale, our results should motivate further studies toward a better understanding of high-order resonance behavior in δ-kicked quantum systems.
Li, Hui; Liu, Liying; Lin, Zhili; Wang, Qiwei; Wang, Xiao; Feng, Lishuang
2018-01-22
A new double closed-loop control system with mean-square exponential stability is proposed to optimize the detection accuracy and dynamic response characteristics of the integrated optical resonance gyroscope (IORG). The influence mechanism of optical nonlinear effects on system detection sensitivity is investigated to optimize the demodulation gain, the maximum sensitivity and the linear working region of the gyro system. In particular, we analyze the effect of optical parameter fluctuation on the parameter uncertainty of the system, and investigate how laser locking-frequency noise influences the closed-loop detection accuracy of angular velocity. A stochastic disturbance model of the double closed-loop IORG is established that takes into consideration unfavorable factors such as optical nonlinearity, external disturbances, optical parameter fluctuation and unavoidable system noise. A robust control algorithm is also designed to guarantee the mean-square exponential stability of the system with a prescribed H∞ performance, in order to improve the detection accuracy and dynamic performance of the IORG. The experimental results demonstrate that the IORG has a dynamic response time of less than 76 μs, a long-term bias stability of 7.04°/h with an integration time of 10 s over a one-hour test, and a corresponding bias stability of 1.841°/h based on Allan deviation, which validates the effectiveness and usefulness of the proposed detection scheme.
A new look at atmospheric carbon dioxide
NASA Astrophysics Data System (ADS)
Hofmann, David J.; Butler, James H.; Tans, Pieter P.
Carbon dioxide is increasing in the atmosphere and is of considerable concern in global climate change because of its greenhouse gas warming potential. The rate of increase has accelerated since measurements began at Mauna Loa Observatory in 1958, where carbon dioxide increased from less than 1 part per million per year (ppm yr⁻¹) prior to 1970 to more than 2 ppm yr⁻¹ in recent years. Here we show that the anthropogenic component (atmospheric value reduced by the pre-industrial value of 280 ppm) of atmospheric carbon dioxide has been increasing exponentially with a doubling time of about 30 years since the beginning of the industrial revolution (~1800). Even during the 1970s, when fossil fuel emissions dropped sharply in response to the "oil crisis" of 1973, the anthropogenic atmospheric carbon dioxide level continued increasing exponentially at Mauna Loa Observatory. Since the growth rate (time derivative) of an exponential has the same characteristic lifetime as the function itself, the carbon dioxide growth rate is also doubling at the same rate. This explains the observation that the linear growth rate of carbon dioxide has more than doubled in the past 40 years. The accelerating growth rate is simply the outcome of exponential growth in carbon dioxide with a nearly constant doubling time of about 30 years (about 2%/yr) and appears to have tracked human population since the pre-industrial era.
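The model described, a fixed pre-industrial baseline plus an anthropogenic component doubling every ~30 years, is a two-line formula. A sketch with an illustrative (not fitted) amplitude:

```python
def co2_anthropogenic(year, amp=1.0, t0=1800.0, t_double=30.0):
    """Anthropogenic CO2 component (ppm above the 280 ppm pre-industrial
    baseline), doubling every t_double years from t0. amp is an
    illustrative scale factor, not a value fitted to Mauna Loa data."""
    return amp * 2.0 ** ((year - t0) / t_double)

def co2_total(year, **kw):
    """Total atmospheric CO2 (ppm): baseline plus anthropogenic component."""
    return 280.0 + co2_anthropogenic(year, **kw)
```

Because the derivative of an exponential is exponential with the same time constant, the growth rate produced by this model also doubles every 30 years, matching the abstract's argument.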
How bootstrap can help in forecasting time series with more than one seasonal pattern
NASA Astrophysics Data System (ADS)
Cordeiro, Clara; Neves, M. Manuela
2012-09-01
The search for the future is an appealing challenge in time series analysis. The diversity of forecasting methodologies is inevitable and still in expansion. Exponential smoothing methods are the launch platform for modelling and forecasting in time series analysis. Recently this methodology has been combined with bootstrapping, revealing good performance. The Boot.EXPOS algorithm, which combines exponential smoothing and bootstrap methodologies, has shown promising results for forecasting time series with one seasonal pattern. For the case of more than one seasonal pattern, the double seasonal Holt-Winters methods and the corresponding exponential smoothing methods were developed. The new challenge was to combine these seasonal methods with the bootstrap and carry over a resampling scheme similar to that used in the Boot.EXPOS procedure. The performance of this partnership is illustrated on some well-known data sets available in software.
Difference in Dwarf Galaxy Surface Brightness Profiles as a Function of Environment
NASA Astrophysics Data System (ADS)
Lee, Youngdae; Park, Hong Soo; Kim, Sang Chul; Moon, Dae-Sik; Lee, Jae-Joon; Kim, Dong-Jin; Cha, Sang-Mok
2018-05-01
We investigate surface brightness profiles (SBPs) of dwarf galaxies in field, group, and cluster environments. With deep BV I images from the Korea Microlensing Telescope Network Supernova Program, SBPs of 38 dwarfs in the NGC 2784 group are fitted by a single-exponential or double-exponential model. We find that 53% of the dwarfs are fitted with single-exponential profiles (“Type I”), while 47% of the dwarfs show double-exponential profiles; 37% of all dwarfs have smaller sizes for the outer part than the inner part (“Type II”), while 10% have a larger outer than inner part (“Type III”). We compare these results with those in the field and in the Virgo cluster, where the SBP types of 102 field dwarfs are compiled from a previous study and the SBP types of 375 cluster dwarfs are measured using SDSS r-band images. As a result, the distributions of SBP types are different in the three environments. Common SBP types for the field, the NGC 2784 group, and the Virgo cluster are Type II, Type I and II, and Type I and III profiles, respectively. After comparing the sizes of dwarfs in different environments, we suggest that since the sizes of some dwarfs are changed due to environmental effects, SBP types are capable of being transformed and the distributions of SBP types in the three environments are different. We discuss possible environmental mechanisms for the transformation of SBP types. Based on data collected at KMTNet Telescopes and SDSS.
Choo, Richard; Klotz, Laurence; Deboer, Gerrit; Danjoux, Cyril; Morton, Gerard C
2004-08-01
To assess the prostate specific antigen (PSA) doubling time of untreated, clinically localized, low-to-intermediate grade prostate carcinoma. A prospective single-arm cohort study has been in progress since November 1995 to assess the feasibility of a watchful-observation protocol with selective delayed intervention for clinically localized, low-to-intermediate grade prostate adenocarcinoma. The PSA doubling time was estimated from a linear regression of ln(PSA) against time, assuming a simple exponential growth model. As of March 2003, 231 patients had at least 6 months of follow-up (median 45 months) and at least three PSA measurements (median 8, range 3-21). The distribution of the doubling time was: <2 years, 26 patients; 2-5 years, 65; 5-10 years, 42; 10-20 years, 26; 20-50 years, 16; >50 years, 56. The median doubling time was 7.0 years; 42% of men had a doubling time of >10 years. The doubling time of untreated, clinically localized, low-to-intermediate grade prostate cancer varies widely.
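The estimation procedure stated in the abstract, a least-squares line through ln(PSA) versus time under an exponential growth model, can be sketched directly: the doubling time is ln 2 divided by the fitted slope.

```python
import math

def psa_doubling_time(times_yr, psa):
    """Doubling time (years) from the least-squares slope of ln(PSA)
    against time, assuming simple exponential growth PSA(t) = PSA0 * e^(s*t).
    A minimal sketch without the error handling a clinical tool would need."""
    n = len(times_yr)
    logs = [math.log(p) for p in psa]
    tbar = sum(times_yr) / n
    ybar = sum(logs) / n
    slope = sum((t - tbar) * (y - ybar) for t, y in zip(times_yr, logs)) / \
            sum((t - tbar) ** 2 for t in times_yr)
    return math.log(2) / slope
```

On a noiseless series doubling every 7 years the estimator recovers exactly 7; with noisy clinical PSA values the regression averages the noise, which is why at least three measurements were required.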
Demand forecasting of electricity in Indonesia with limited historical data
NASA Astrophysics Data System (ADS)
Dwi Kartikasari, Mujiati; Rohmad Prayogi, Arif
2018-03-01
Demand forecasting of electricity is an important activity for electricity providers, giving a picture of future electricity demand. Prediction of electricity demand can be done using time series models. In this paper, the double moving average model, Holt's exponential smoothing model, and the grey model GM(1,1) are used to predict electricity demand in Indonesia under the condition of limited historical data. The result shows that the grey model GM(1,1) has the smallest values of MAE (mean absolute error), MSE (mean squared error), and MAPE (mean absolute percentage error).
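The GM(1,1) grey model that wins the comparison is well suited to short positive series: it accumulates the data, fits a first-order grey differential equation by least squares, and forecasts from the resulting exponential. A minimal sketch of the standard formulation, without the input validation a production forecaster would need:

```python
import math

def gm11_forecast(x0, steps=1):
    """GM(1,1) grey model: fit on a short positive series x0 (>= 4 values
    recommended) and forecast `steps` values ahead."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]              # accumulated (AGO) series
    z = [0.5 * (x1[i] + x1[i - 1]) for i in range(1, n)]  # mean-generated sequence
    y = x0[1:]
    # Least squares for the grey equation x0[k] = -a*z[k] + b
    # via the 2x2 normal equations.
    m = n - 1
    szz = sum(v * v for v in z)
    sz, sy = sum(z), sum(y)
    szy = sum(v * w for v, w in zip(z, y))
    det = szz * m - sz * sz
    a = -(m * szy - sz * sy) / det
    b = (szz * sy - sz * szy) / det

    def x1_hat(k):  # fitted accumulated series, k = 0-based index
        return (x0[0] - b / a) * math.exp(-a * k) + b / a

    # De-accumulate to recover forecasts of the original series.
    return [x1_hat(n + i) - x1_hat(n + i - 1) for i in range(steps)]
```

On near-exponential data the one-step-ahead forecast tracks the true continuation closely, which is consistent with its strong MAE/MSE/MAPE showing on a short demand series.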
Organic/inorganic hybrid synaptic transistors gated by proton conducting methylcellulose films
NASA Astrophysics Data System (ADS)
Wan, Chang Jin; Zhu, Li Qiang; Wan, Xiang; Shi, Yi; Wan, Qing
2016-01-01
The idea of building a brain-inspired cognitive system has been around for several decades. Recently, electric-double-layer transistors gated by ion conducting electrolytes were reported as the promising candidates for synaptic electronics and neuromorphic system. In this letter, indium-zinc-oxide transistors gated by proton conducting methylcellulose electrolyte films were experimentally demonstrated with synaptic plasticity including paired-pulse facilitation and spatiotemporal-correlated dynamic logic. More importantly, a model based on proton-related electric-double-layer modulation and stretched-exponential decay function was proposed, and the theoretical results are in good agreement with the experimentally measured synaptic behaviors.
Topological Defects in Double Exchange Materials and Anomalous Hall Resistance.
NASA Astrophysics Data System (ADS)
Calderón, M. J.; Brey, L.
2000-03-01
Recently it has been proposed that the anomalous Hall effect observed in Double Exchange materials is due to Berry phase effects caused by carrier hopping in a nontrivial spin background (J. Ye et al., Phys. Rev. Lett. 83, 3737 (1999)). In order to study this possibility we have performed Monte Carlo simulations of the Double Exchange model and have computed, as a function of temperature, the number of topological defects in the system and the internal gauge magnetic field associated with these defects. In the simplest Double Exchange model the gauge magnetic field is random, and its average value is zero. The inclusion of spin-orbit coupling in the problem privileges the direction opposite to the magnetization, and an anomalous Hall resistance (AHR) effect arises. We have computed the AHR and obtained its temperature dependence. In agreement with previous experiments we find that the AHR increases exponentially at low temperature and presents a maximum at a temperature slightly higher than the critical temperature.
Impact of inhomogeneity on SH-type wave propagation in an initially stressed composite structure
NASA Astrophysics Data System (ADS)
Saha, S.; Chattopadhyay, A.; Singh, A. K.
2018-02-01
The present analysis examines the influence of distinct forms of inhomogeneity, in a composite structure comprised of double superficial layers lying over a half-space, on the phase velocity of an SH-type wave propagating through it. Propagation of the SH-type wave in the said structure has been examined for four distinct cases of inhomogeneity, viz. when inhomogeneity in the double superficial layers is due to exponential variation in density only (Case I); due to exponential variation in rigidity only (Case II); due to exponential variation in rigidity, density and initial stress (Case III); and due to linear variation in rigidity, density and initial stress (Case IV). Closed-form expressions of the dispersion relation have been accomplished for all four aforementioned cases through extensive application of Debye asymptotic analysis. The deduced dispersion relations for all the cases are found to be in good agreement with the classical Love-wave equation. Numerical computation has been carried out to graphically demonstrate the effect of the inhomogeneity parameters, initial stress parameters and the width ratio associated with the double superficial layers on the dispersion curve for each of the four aforesaid cases. A meticulous examination of the distinct cases of inhomogeneity and initial stress, in the context of the considered problem, has been carried out in a comparative approach.
Understanding Exponential Growth: As Simple as a Drop in a Bucket.
ERIC Educational Resources Information Center
Goldberg, Fred; Shuman, James
1984-01-01
Provides procedures for a simple laboratory activity on exponential growth and its characteristic doubling time. The equipment needed consists of a large plastic bucket, an eyedropper, a stopwatch, an assortment of containers and graduated cylinders, and a supply of water. (JN)
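The arithmetic behind the bucket activity is worth making explicit: for exponential growth N(t) = N0·e^{rt} the doubling time is ln 2 / r, and a bucket filled by repeated doubling is only half full one doubling time before it overflows. A quick sketch:

```python
import math

def doubling_time(rate):
    """Doubling time for exponential growth N(t) = N0 * exp(rate * t)."""
    return math.log(2) / rate

def time_to_fill(capacity_drops, start=1, factor=2):
    """Number of doubling steps until a bucket overflows if the drop
    count is multiplied by `factor` at each step."""
    n, steps = start, 0
    while n < capacity_drops:
        n *= factor
        steps += 1
    return steps
```

Starting from one drop, a 1024-drop bucket fills in 10 doublings yet is still half empty after 9, the counterintuitive punchline such activities are built around.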
U-shaped, double-tapered, fiber-optic sensor for effective biofilm growth monitoring.
Zhong, Nianbing; Zhao, Mingfu; Li, Yishan
2016-02-01
To monitor biofilm growth on polydimethylsiloxane in a photobioreactor effectively, the biofilm cells and liquids were separated and measured using a sensor with two U-shaped, double-tapered, fiber-optic probes (Sen. and Ref. probes). The probes' Au-coated hemispherical tips enabled double-pass evanescent field absorption. The Sen. probe sensed the cells and liquids inside the biofilm. The polyimide-silica hybrid-film-coated Ref. probe separated the liquids from the biofilm cells and analyzed the liquid concentration. The biofilm structure and active biomass were also examined to confirm the effectiveness of the measurement using a simulation model. The sensor was found to effectively respond to the biofilm growth from the adsorption phase through the exponential phase at thicknesses of 0-536 μm.
NASA Astrophysics Data System (ADS)
Shaharuz Zaman, Azmanira; Aziz, Ahmad Sukri Abd; Ali, Zaileha Md
2017-09-01
The double slip effect on magnetohydrodynamic boundary layer flow over an exponentially stretching sheet with suction/blowing, radiation, chemical reaction and a heat source is presented in this analysis. By using a similarity transformation, the governing partial differential equations of momentum, energy and concentration are transformed into non-linear ordinary differential equations. These equations are solved using the Runge-Kutta-Fehlberg method with a shooting technique in the MAPLE software environment. The effects of the various parameters on the velocity, temperature and concentration profiles are graphically presented and discussed.
Cross-Conjugated Nanoarchitectures
2013-08-23
The compounds were further evaluated by Lippert–Mataga analysis of the fluorescence solvatochromism and by measurement of quantum yields and fluorescence lifetimes. [Tabulated photophysical data were garbled in extraction; the recoverable footnotes state that values were calculated from Lippert–Mataga plots and that two lifetime entries required double-exponential fits: τ1 = 21.5 ns (73%) with τ2 = 3.7 ns (27%), and τ1 = 0.85 ns.]
Atmospheric Transmittance/Radiance: Computer Code LOWTRAN 6
1983-08-01
[Abstract garbled in extraction; recoverable fragments include a citation for the refractive index of air (Metrologia 2:12, 1966), a layer-by-layer transmittance/radiance sum over an optical path traversing N atmospheric layers, and an expression for cirrus normal transmittance of the form τ = exp[−(0.14 L)] (eq. 49), which closely duplicates the double-exponential model of Davis.]
Cyberinfrastructure for the NSF Ocean Observatories Initiative
NASA Astrophysics Data System (ADS)
Orcutt, J. A.; Vernon, F. L.; Arrott, M.; Chave, A.; Krueger, I.; Schofield, O.; Glenn, S.; Peach, C.; Nayak, A.
2007-12-01
The Internet today is vastly different than the Internet that we knew even five years ago, and the changes that will be evident five years from now, when the NSF Ocean Observatories Initiative (OOI) prototype has been installed, are nearly unpredictable. Much of this progress is based on the exponential growth in capabilities of consumer electronics and information technology; the reality of this exponential behavior is rarely appreciated. For example, the number of transistors on a square cm of silicon will continue to double every 18 months, the density of disk storage will double every year, and network bandwidth will double every eight months. Today's desktop 2TB RAID will be 64TB, and the 10Gbps Regional Scale Network fiber optic connection will be running at 1.8Tbps. The same exponential behavior characterizes the future of genome sequencing. The first two sequences of composites of individuals' genomes cost tens of millions of dollars in 2001. Dr. Craig Venter just published a more accurate complete human genome (his own) at a cost on the order of $100,000. The J. Craig Venter Institute has provided support for the X Prize for Genomics, offering $10M to the first successful sequencing of a human genome for $1,000. It is anticipated that the prize will be won within five years. Major advances in technology that are broadly viewed as disruptive or revolutionary, rather than evolutionary, will often depend upon the exploitation of exponential expansions in capability. Applications of these ideas to the OOI will be discussed. Specifically, the agile ability to scale cyberinfrastructure commensurate with the exponential growth of sensors, networks and computational capability and demand will be described.
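The projections in this abstract all follow from one line of compound-doubling arithmetic, N(t) = N0·2^{t/T_d}. A quick check of the two capacity claims:

```python
def capability(initial, months, doubling_months):
    """Projected capability after `months`, doubling every `doubling_months`:
        N(t) = N0 * 2^(t / T_d)"""
    return initial * 2.0 ** (months / doubling_months)
```

Over five years (60 months), 2 TB doubling yearly gives 2·2^5 = 64 TB, and 10 Gbps doubling every 8 months gives 10·2^7.5 ≈ 1810 Gbps ≈ 1.8 Tbps, exactly the figures quoted.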
Superionic state in double-layer capacitors with nanoporous electrodes.
Kondrat, S; Kornyshev, A
2011-01-19
In recent experiments (Chmiola et al 2006 Science 313 1760; Largeot et al 2008 J. Am. Chem. Soc. 130 2730) an anomalous increase of the capacitance with a decrease of the pore size of a carbon-based porous electric double-layer capacitor has been observed. We explain this effect by image forces which exponentially screen out the electrostatic interactions of ions in the interior of a pore. Packing of ions of the same sign becomes easier and is mainly limited by steric interactions. We call this state 'superionic' and suggest a simple model to describe it. The model reveals the possibility of a voltage-induced first order transition between a cation(anion)-deficient phase and a cation(anion)-rich phase which manifests itself in a jump of capacitance as a function of voltage.
Ray-theory approach to electrical-double-layer interactions.
Schnitzer, Ory
2015-02-01
A novel approach is presented for analyzing the double-layer interaction force between charged particles in electrolyte solution, in the limit where the Debye length is small compared with both interparticle separation and particle size. The method, developed here for two planar convex particles of otherwise arbitrary geometry, yields a simple asymptotic approximation limited to neither small zeta potentials nor the "close-proximity" assumption underlying Derjaguin's approximation. Starting from the nonlinear Poisson-Boltzmann formulation, boundary-layer solutions describing the thin diffuse-charge layers are asymptotically matched to a WKBJ expansion valid in the bulk, where the potential is exponentially small. The latter expansion describes the bulk potential as superposed contributions conveyed by "rays" emanating normally from the boundary layers. On a special curve generated by the centers of all circles maximally inscribed between the two particles, the bulk stress-associated with the ray contributions interacting nonlinearly-decays exponentially with distance from the center of the smallest of these circles. The force is then obtained by integrating the traction along this curve using Laplace's method. We illustrate the usefulness of our theory by comparing it, alongside Derjaguin's approximation, with numerical simulations in the case of two parallel cylinders at low potentials. By combining our result and Derjaguin's approximation, the interaction force is provided at arbitrary interparticle separations. Our theory can be generalized to arbitrary three-dimensional geometries, nonideal electrolyte models, and other physical scenarios where exponentially decaying fields give rise to forces.
Transfer potentials shape and equilibrate monetary systems
NASA Astrophysics Data System (ADS)
Fischer, Robert; Braun, Dieter
2003-04-01
We analyze a monetary system of random money transfer on the basis of double-entry bookkeeping. Without boundary conditions, we do not reach a price equilibrium and violate textbook formulas of the economists' quantity theory (MV = PQ). To match the resulting quantity of money with the model assumption of a constant price, we have to impose boundary conditions. They either restrict specific transfers globally or impose transfers locally. Both connect through a general framework of transfer potentials. We show that either restricted or imposed transfers can shape Gaussian, tent-shaped exponential, Boltzmann-exponential, Pareto, or periodic equilibrium distributions. We derive the master equation and find its general time-dependent approximate solution. An equivalent of quantity theory for random money transfer under the boundary conditions of transfer potentials is given.
Characteristic length of the knotting probability revisited
NASA Astrophysics Data System (ADS)
Uehara, Erica; Deguchi, Tetsuo
2015-09-01
We present a self-avoiding polygon (SAP) model for circular DNA in which the radius of impermeable cylindrical segments corresponds to the screening length of double-stranded DNA surrounded by counter ions. For the model we evaluate the probability for a generated SAP with N segments having a given knot K through simulation. We call it the knotting probability of a knot K with N segments for the SAP model. We show that when N is large the most significant factor in the knotting probability is given by the exponentially decaying part exp(-N/N_K), where the estimates of the parameter N_K are consistent with the same value for all the different knots we investigated. We thus call it the characteristic length of the knotting probability. We give formulae expressing the characteristic length as a function of the cylindrical radius r_ex, i.e. the screening length of double-stranded DNA.
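The dominant large-N factor described above can be evaluated directly. A minimal sketch, where the characteristic length N_K is an illustrative placeholder rather than a value from the paper:

```python
import math

def knotting_factor(n_segments, n_k):
    """Leading exponential factor exp(-N/N_K) of the knotting probability."""
    return math.exp(-n_segments / n_k)

# Illustrative: with characteristic length N_K = 2.5e5 segments,
# the factor at N = 5e5 is exp(-2) ≈ 0.135
print(round(knotting_factor(5e5, 2.5e5), 3))
```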
Broadband Spectral Modeling of the Extreme Gigahertz-peaked Spectrum Radio Source PKS B0008-421
NASA Astrophysics Data System (ADS)
Callingham, J. R.; Gaensler, B. M.; Ekers, R. D.; Tingay, S. J.; Wayth, R. B.; Morgan, J.; Bernardi, G.; Bell, M. E.; Bhat, R.; Bowman, J. D.; Briggs, F.; Cappallo, R. J.; Deshpande, A. A.; Ewall-Wice, A.; Feng, L.; Greenhill, L. J.; Hazelton, B. J.; Hindson, L.; Hurley-Walker, N.; Jacobs, D. C.; Johnston-Hollitt, M.; Kaplan, D. L.; Kudrayvtseva, N.; Lenc, E.; Lonsdale, C. J.; McKinley, B.; McWhirter, S. R.; Mitchell, D. A.; Morales, M. F.; Morgan, E.; Oberoi, D.; Offringa, A. R.; Ord, S. M.; Pindor, B.; Prabu, T.; Procopio, P.; Riding, J.; Srivani, K. S.; Subrahmanyan, R.; Udaya Shankar, N.; Webster, R. L.; Williams, A.; Williams, C. L.
2015-08-01
We present broadband observations and spectral modeling of PKS B0008-421 and identify it as an extreme gigahertz-peaked spectrum (GPS) source. PKS B0008-421 is characterized by the steepest known spectral slope below the turnover, close to the theoretical limit of synchrotron self-absorption, and the smallest known spectral width of any GPS source. Spectral coverage of the source spans from 0.118 to 22 GHz, which includes data from the Murchison Widefield Array and the wide bandpass receivers on the Australia Telescope Compact Array. We have implemented a Bayesian inference model fitting routine to fit the data with internal free-free absorption (FFA), single- and double-component FFA in an external homogeneous medium, FFA in an external inhomogeneous medium, or single- and double-component synchrotron self-absorption models, all with and without a high-frequency exponential break. We find that without the inclusion of a high-frequency break these models cannot accurately fit the data, with significant deviations above and below the peak in the radio spectrum. The addition of a high-frequency break provides acceptable spectral fits for the inhomogeneous FFA and double-component synchrotron self-absorption models, with the inhomogeneous FFA model statistically favored. The requirement of a high-frequency spectral break implies that the source has ceased injecting fresh particles. Additional support for the inhomogeneous FFA model as being responsible for the turnover in the spectrum is given by the consistency between the physical parameters derived from the model fit and the implications of the exponential spectral break, such as the necessity of the source being surrounded by a dense ambient medium to maintain the peak frequency near the gigahertz region. This implies that PKS B0008-421 should display an internal H I column density greater than 10^20 cm^-2.
The discovery of PKS B0008-421 suggests that the next generation of low radio frequency surveys could reveal a large population of GPS sources that have ceased activity, and that a portion of the ultra-steep-spectrum source population could be composed of these GPS sources in a relic phase.
Sanchez-Salas, Rafael; Olivier, Fabien; Prapotnich, Dominique; Dancausa, José; Fhima, Mehdi; David, Stéphane; Secin, Fernando P; Ingels, Alexandre; Barret, Eric; Galiano, Marc; Rozet, François; Cathelineau, Xavier
2016-01-01
Prostate-specific antigen (PSA) doubling time relies on an exponential kinetic pattern. This pattern has never been validated in the setting of intermittent androgen deprivation (IAD). The objective is to analyze the prognostic significance for PCa of recurrent patterns in PSA kinetics in patients undergoing IAD. A retrospective study was conducted on 377 patients treated with IAD. The on-treatment period (ONTP) consisted of gonadotropin-releasing hormone agonist injections combined with an oral androgen receptor antagonist. The off-treatment period (OFTP) began when PSA was lower than 4 ng/ml. The ONTP resumed when PSA was higher than 20 ng/ml. PSA values of each OFTP were fitted with three basic patterns: exponential (PSA(t) = λ·e^(αt)), linear (PSA(t) = a·t), and power law (PSA(t) = a·t^c). Univariate and multivariate Cox regression models analyzed predictive factors for oncologic outcomes. Only 45% of the analyzed OFTPs were exponential. Linear and power-law PSA kinetics represented 7.5% and 7.7%, respectively. The remaining fraction of analyzed OFTPs (40%) exhibited complex kinetics. Exponential PSA kinetics during the first OFTP was significantly associated with worse oncologic outcome. The estimated 10-year cancer-specific survival (CSS) was 46% for exponential versus 80% for nonexponential PSA kinetic patterns. The corresponding 10-year probability of castration-resistant prostate cancer (CRPC) was 69% and 31% for the two patterns, respectively. Limitations include the retrospective design and mixed indications for IAD. PSA kinetics fitted the exponential pattern in approximately half of the OFTPs. An exponential PSA kinetic in the first OFTP was associated with a shorter time to CRPC and worse CSS. © 2015 Wiley Periodicals, Inc.
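The exponential and power-law patterns above can each be fitted by ordinary least squares after a suitable transformation (log for the exponential, log-log for the power law). A minimal sketch on noiseless synthetic data; this is not the authors' fitting code, and the parameter values are illustrative:

```python
import math

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Synthetic off-treatment PSA series following PSA(t) = lam * exp(alpha * t)
ts = [1.0, 2.0, 3.0, 4.0, 5.0]
psa = [0.5 * math.exp(0.3 * t) for t in ts]

# Exponential pattern: log(PSA) is linear in t
alpha, log_lam = linear_fit(ts, [math.log(p) for p in psa])
# Power-law pattern: log(PSA) is linear in log(t)
c, log_a = linear_fit([math.log(t) for t in ts], [math.log(p) for p in psa])

print(round(alpha, 3), round(math.exp(log_lam), 3))  # recovers 0.3 and 0.5
```

Comparing residuals of the competing fits is one simple way to classify an OFTP as exponential, linear, or power law.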
Inferring metabolic networks using the Bayesian adaptive graphical lasso with informative priors.
Peterson, Christine; Vannucci, Marina; Karakas, Cemal; Choi, William; Ma, Lihua; Maletić-Savatić, Mirjana
2013-10-01
Metabolic processes are essential for cellular function and survival. We are interested in inferring a metabolic network in activated microglia, a major neuroimmune cell in the brain responsible for the neuroinflammation associated with neurological diseases, based on a set of quantified metabolites. To achieve this, we apply the Bayesian adaptive graphical lasso with informative priors that incorporate known relationships between covariates. To encourage sparsity, the Bayesian graphical lasso places double exponential priors on the off-diagonal entries of the precision matrix. The Bayesian adaptive graphical lasso allows each double exponential prior to have a unique shrinkage parameter. These shrinkage parameters share a common gamma hyperprior. We extend this model to create an informative prior structure by formulating tailored hyperpriors on the shrinkage parameters. By choosing parameter values for each hyperprior that shift probability mass toward zero for nodes that are close together in a reference network, we encourage edges between covariates with known relationships. This approach can improve the reliability of network inference when the sample size is small relative to the number of parameters to be estimated. When applied to the data on activated microglia, the inferred network includes both known relationships and associations of potential interest for further investigation.
An Optimization of Inventory Demand Forecasting in University Healthcare Centre
NASA Astrophysics Data System (ADS)
Bon, A. T.; Ng, T. K.
2017-01-01
The healthcare industry has become an important field nowadays, as it concerns people's health. Forecasting demand for health services is accordingly an important step in managerial decision making for all healthcare organizations. A case study was conducted in a University Health Centre to collect historical demand data for Panadol 650 mg over 68 months, from January 2009 until August 2014. The aim of the research is to optimize overall inventory demand through forecasting techniques. A quantitative (time series) forecasting model was used in the case study to forecast future data as a function of past data. The data pattern must be identified before applying the forecasting techniques; here the data exhibit a trend, so ten forecasting techniques were applied using the Risk Simulator software and the best technique was identified as the one with the least forecasting error. The ten techniques are single moving average, single exponential smoothing, double moving average, double exponential smoothing, regression, Holt-Winters additive, seasonal additive, Holt-Winters multiplicative, seasonal multiplicative, and the autoregressive integrated moving average (ARIMA). According to the forecasting accuracy measurement, the best forecasting technique is regression analysis.
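Of the listed techniques, double exponential smoothing (Holt's linear method) is the one designed for trended data like the demand series described here. A minimal sketch with illustrative smoothing constants and a toy demand series; this is not the Risk Simulator implementation:

```python
def double_exponential_smoothing(series, alpha=0.5, beta=0.3):
    """Holt's linear method: smooth the level and the trend separately.

    Returns the one-step-ahead forecasts and the next-period forecast.
    """
    level, trend = series[0], series[1] - series[0]
    forecasts = []
    for y in series[1:]:
        forecasts.append(level + trend)
        new_level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    return forecasts, level + trend

demand = [120, 128, 135, 141, 150, 158]   # toy monthly demand with a trend
_, next_month = double_exponential_smoothing(demand)
print(round(next_month, 1))
```

The forecast extrapolates both the smoothed level and the smoothed trend, which single exponential smoothing cannot do.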
Recursive least squares estimation and its application to shallow trench isolation
NASA Astrophysics Data System (ADS)
Wang, Jin; Qin, S. Joe; Bode, Christopher A.; Purdy, Matthew A.
2003-06-01
In recent years, run-to-run (R2R) control technology has received tremendous interest in semiconductor manufacturing. One class of widely used run-to-run controllers is based on exponentially weighted moving average (EWMA) statistics to estimate process deviations. Using an EWMA filter to smooth the control action on a linear process has been shown to provide good results in a number of applications. However, for a process with severe drift, the EWMA controller is insufficient even when large weights are used. This problem becomes more severe when there is measurement delay, which is almost inevitable in the semiconductor industry. In order to control drifting processes, a predictor-corrector controller (PCC) and a double-EWMA controller have been developed. Chen and Guo (2001) show that both the PCC and the double-EWMA controller are in effect integral-double-integral (I-II) controllers, which are able to control drifting processes. However, since the offset is often within the noise of the process, the second integrator can actually cause jittering. Moreover, tuning the second filter is not as intuitive as tuning a single EWMA filter. In this work, we examine an alternative approach, recursive least squares (RLS), to estimate and control the drifting process. EWMA and double EWMA are shown to be the least squares estimates for a locally constant mean model and a locally constant linear trend model, respectively. Recursive least squares with an exponential forgetting factor is then applied to a shallow trench isolation etch process to predict the future etch rate. The etch process, which is a critical process in flash memory manufacturing, is known to suffer from significant etch rate drift due to chamber seasoning. In order to handle the metrology delay, we propose a new time update scheme. RLS with the new time update method gives very good results: the estimation error variance is smaller than that from EWMA, and the mean square error decreases by more than 10% compared to EWMA.
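The EWMA statistic at the heart of these controllers is a one-line recursion, and the double EWMA adds a second filter that tracks the drift. A minimal sketch of one common trend-tracking form with illustrative weights; it is not the controllers studied in the paper:

```python
def ewma(observations, weight=0.3):
    """Single EWMA estimate of the process level."""
    estimate = observations[0]
    for y in observations[1:]:
        estimate = weight * y + (1 - weight) * estimate
    return estimate

def double_ewma(observations, w1=0.3, w2=0.2):
    """Double EWMA: a second filter accumulates a drift estimate."""
    smooth, drift = observations[0], 0.0
    for y in observations[1:]:
        prev = smooth
        smooth = w1 * y + (1 - w1) * (smooth + drift)
        drift = w2 * (smooth - prev) + (1 - w2) * drift
    return smooth + drift   # prediction for the next run

# Drifting etch-rate-like data: the single EWMA lags, the double EWMA keeps up.
runs = [100 + 2 * k for k in range(10)]
print(round(ewma(runs), 2), round(double_ewma(runs), 2))
```

On drifting data the single EWMA systematically underestimates the next run, which is exactly the weakness the abstract describes.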
NASA Astrophysics Data System (ADS)
Sumi, Ayako; Olsen, Lars Folke; Ohtomo, Norio; Tanaka, Yukio; Sawamura, Sadashi
2003-02-01
We have carried out spectral analysis of measles notifications in several communities in Denmark, the UK, and the USA. The results confirm that each power spectral density (PSD) shows exponential characteristics, which are universally observed in the PSD of time series generated by nonlinear dynamical systems. The exponential gradient increases with the population size. For almost all communities, many spectral lines observed in each PSD can be fully assigned to linear combinations of several fundamental periods, suggesting that the measles data are substantially noise-free. The optimum least-squares fitting curve calculated using these fundamental periods essentially reproduces the underlying variation of the measles data, and an extension of the curve can be used to predict measles epidemics. For the communities with large population sizes, some PSD patterns obtained from segment time series analysis show a close resemblance to the PSD patterns at the initial stages of a period-doubling bifurcation process for the so-called susceptible/exposed/infectious/recovered (SEIR) model with seasonal forcing. The meaning of the relationship between the exponential gradient and the population size is discussed.
Readout models for BaFBr0.85I0.15:Eu image plates
NASA Astrophysics Data System (ADS)
Stoeckl, M.; Solodov, A. A.
2018-06-01
The linearity of the photostimulated luminescence process makes repeated image-plate scanning a viable technique for extracting additional dynamic range. In order to obtain a response estimate, two semi-empirical models for the readout fading of an image plate are introduced; they relate the depth distribution of activated photostimulated luminescence centers within an image plate to the recorded signal. Model parameters are estimated from image-plate scan series with BAS-MS image plates and the Typhoon FLA 7000 scanner for the hard x-ray image-plate diagnostic, over a collection of experiments providing x-ray energy spectra whose approximate shape is a double exponential.
Extracting volatility signal using maximum a posteriori estimation
NASA Astrophysics Data System (ADS)
Neto, David
2016-11-01
This paper outlines a methodology to estimate a denoised volatility signal for foreign exchange rates using a hidden Markov model (HMM). For this purpose a maximum a posteriori (MAP) estimation is performed. A double exponential prior is used for the state variable (the log-volatility) in order to allow sharp jumps in realizations and then log-returns marginal distributions with heavy tails. We consider two routes to choose the regularization and we compare our MAP estimate to realized volatility measure for three exchange rates.
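With a Gaussian likelihood, a double exponential (Laplace) prior such as the one above makes the MAP estimate a soft-thresholding operation, which is what lets sharp jumps survive while small fluctuations are shrunk to zero. A minimal sketch of that one-dimensional building block, not the paper's full HMM estimator:

```python
def soft_threshold(y, lam):
    """MAP estimate of x given y ~ N(x, 1) and Laplace prior p(x) ∝ exp(-lam*|x|).

    Minimizing (y - x)**2 / 2 + lam * |x| gives the soft-thresholding rule.
    """
    if y > lam:
        return y - lam
    if y < -lam:
        return y + lam
    return 0.0

# Small observations are shrunk to exactly zero; large ones keep a sharp jump.
print([soft_threshold(y, 1.0) for y in (-3.0, -0.5, 0.2, 2.5)])
# → [-2.0, 0.0, 0.0, 1.5]
```

This sparsity-inducing behavior is the reason a Laplace prior on the log-volatility state allows abrupt jumps in the denoised signal.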
NASA Technical Reports Server (NTRS)
Ponchak, George E.; Jordan, Jennifer L.; Chevalier, Christine T.
2006-01-01
The characteristics of a double exponentially tapered slot antenna (DETSA) as a function of the radius to which the DETSA is conformed in the longitudinal direction are presented. It is shown through measurements and simulations that the radiation pattern of the conformed antenna rotates in the direction through which the antenna is curved, and that diffraction affects the radiation pattern if the radius of curvature is too small or the frequency too high. The gain of the antenna degrades by only 1 dB if the radius of curvature is large, and by more than 2 dB for smaller radii. The main effect of curving the antenna is an increased cross-polarization in the E-plane.
Comparing Exponential and Exponentiated Models of Drug Demand in Cocaine Users
Strickland, Justin C.; Lile, Joshua A.; Rush, Craig R.; Stoops, William W.
2016-01-01
Drug purchase tasks provide rapid and efficient measurement of drug demand. Zero values (i.e., prices with zero consumption) present a quantitative challenge when using exponential demand models that exponentiated models may resolve. We aimed to replicate and advance the utility of using an exponentiated model by demonstrating construct validity (i.e., association with real-world drug use) and generalizability across drug commodities. Participants (N = 40 cocaine-using adults) completed Cocaine, Alcohol, and Cigarette Purchase Tasks evaluating hypothetical consumption across changes in price. Exponentiated and exponential models were fit to these data using different treatments of zero consumption values, including retaining zeros or replacing them with 0.1, 0.01, or 0.001. Excellent model fits were observed with the exponentiated model. Means and precision fluctuated with different replacement values when using the exponential model, but were consistent for the exponentiated model. The exponentiated model provided the strongest correlation between derived demand intensity (Q0) and self-reported free consumption in all instances (Cocaine r = .88; Alcohol r = .97; Cigarette r = .91). Cocaine demand elasticity was positively correlated with alcohol and cigarette elasticity. Exponentiated parameters were associated with real-world drug use (e.g., weekly cocaine use), whereas these correlations were less consistent for exponential parameters. Our findings show that selection of zero replacement values affects demand parameters and their association with drug-use outcomes when using the exponential model, but not the exponentiated model. This work supports the adoption of the exponentiated demand model by replicating improved fit and consistency, in addition to demonstrating construct validity and generalizability. PMID:27929347
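One widely used exponentiated demand form writes consumption as Q(C) = Q0 · 10^(k·(e^(−α·Q0·C) − 1)), which stays defined at zero price and at zero consumption, resolving the zero-value problem described above. A minimal sketch with illustrative parameter values, not fitted values from the study:

```python
import math

def exponentiated_demand(price, q0, alpha, k):
    """Exponentiated demand curve: Q0 * 10**(k * (exp(-alpha*Q0*price) - 1)).

    At price 0 the exponent vanishes and consumption equals Q0 exactly,
    so no zero-replacement trick is needed.
    """
    return q0 * 10 ** (k * (math.exp(-alpha * q0 * price) - 1))

# Illustrative parameters: demand intensity Q0, elasticity alpha, span k
q0, alpha, k = 10.0, 0.01, 2.0
for price in (0.0, 5.0, 50.0):
    print(price, round(exponentiated_demand(price, q0, alpha, k), 3))
```

Consumption starts at Q0 at zero price and decays smoothly as price rises, with α governing the rate of that decay.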
Quasiclassical treatment of the Auger effect in slow ion-atom collisions
NASA Astrophysics Data System (ADS)
Frémont, F.
2017-09-01
A quasiclassical model based on the resolution of Hamilton's equations of motion is used to obtain evidence for Auger electron emission following double-electron capture in 150-keV Ne10+ + He collisions. The electron-electron interaction is taken into account during the collision by using a pure Coulomb potential. To make sure that the helium target is stable before the collision, phenomenological potentials for the electron-nucleus interactions that simulate the Heisenberg principle are included in addition to the Coulomb potential. First, single- and double-electron capture are determined and compared with previous experiments and theories. Then, the evolution with integration time is calculated for autoionizing and nonautoionizing double capture. In contrast with single capture, the number of electrons originating from autoionization slowly increases with integration time. A fit of the calculated cross sections by means of an exponential function indicates that the average lifetime is 4.4 × 10^-3 a.u., in very good agreement with the average lifetime deduced from experiments and a classical model introduced to calculate individual angular momentum distributions. The present calculation demonstrates the ability of classical models to treat the Auger effect, which is a purely quantum effect.
A note on large gauge transformations in double field theory
Naseer, Usman
2015-06-03
Here, we give a detailed proof of the conjecture by Hohm and Zwiebach in double field theory. Our result implies that their proposal for large gauge transformations in terms of the Jacobian matrix for coordinate transformations is, as required, equivalent to the standard exponential map associated with the generalized Lie derivative along a suitable parameter.
NASA Astrophysics Data System (ADS)
Mu, G. Y.; Mi, X. Z.; Wang, F.
2018-01-01
High-temperature low-cycle fatigue tests of TC4 and TC11 titanium alloys were carried out under strain control. The relationships between cyclic stress and life and between strain and life are analyzed. A high-temperature low-cycle fatigue life prediction model for the two titanium alloys is established using the Manson-Coffin method. The relationship between the number of reversals to failure and the plastic strain range is nonlinear in double logarithmic coordinates, whereas the Manson-Coffin method assumes a linear relation; the Manson-Coffin method therefore inevitably introduces some prediction error. To address this problem, a new method based on an exponential function is proposed. The results show that the fatigue life of the two titanium alloys can be predicted accurately and effectively by both methods, with prediction accuracy within a ±1.83-times scatter band. The new exponential-function method proves more effective and accurate than the Manson-Coffin method for both alloys, giving better fatigue life predictions with a smaller standard deviation and scatter band. The life prediction results of the two methods are better for the TC4 alloy than for the TC11 alloy.
Doubling Time for Nonexponential Families of Functions
ERIC Educational Resources Information Center
Gordon, Sheldon P.
2010-01-01
One special characteristic of any exponential growth or decay function f(t) = Ab[superscript t] is its unique doubling time or half-life, each of which depends only on the base "b". The half-life is used to characterize the rate of decay of any radioactive substance or the rate at which the level of a medication in the bloodstream decays as it is…
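For f(t) = A·b^t the doubling time is t_d = ln 2 / ln b (and the half-life is −ln 2 / ln b when 0 < b < 1), independent of A and of the starting time, which is exactly the property the article contrasts with nonexponential families. A minimal sketch:

```python
import math

def doubling_time(b):
    """Doubling time of f(t) = A * b**t, for growth base b > 1."""
    return math.log(2) / math.log(b)

def half_life(b):
    """Half-life of f(t) = A * b**t, for decay base 0 < b < 1."""
    return -math.log(2) / math.log(b)

print(doubling_time(2))   # base 2 doubles in exactly 1 time unit
print(half_life(0.5))     # base 1/2 halves in exactly 1 time unit
```

The result depends only on the base b, so the same doubling time holds no matter where on the curve you start measuring.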
Analytical model for release calculations in solid thin-foils ISOL targets
NASA Astrophysics Data System (ADS)
Egoriti, L.; Boeckx, S.; Ghys, L.; Houngbo, D.; Popescu, L.
2016-10-01
A detailed analytical model has been developed to simulate isotope-release curves from thin-foil ISOL targets. It involves separate modeling of diffusion and effusion inside the target. Diffusion has been modeled using both Fick's first and second laws. Effusion, from the surface of the target material to the end of the ionizer, was simulated with the Monte Carlo code MolFlow+. The calculated delay-time distribution for this process was then fitted using a double-exponential function. The release curve obtained from the convolution of diffusion and effusion shows good agreement with experimental data from two different target geometries used at ISOLDE. Moreover, the experimental yields are well reproduced when combining the release fraction with the calculated in-target production.
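The release curve described above is the convolution of a diffusion release profile with the effusion delay-time distribution; with the delay distribution fitted by a double exponential, the convolution can be sketched discretely. All parameter values below are illustrative, not the paper's fitted values:

```python
import math

def double_exp_delay(t, w, tau1, tau2):
    """Double-exponential delay-time density (illustrative fit form)."""
    return w * math.exp(-t / tau1) / tau1 + (1 - w) * math.exp(-t / tau2) / tau2

def release_curve(diffusion, dt, w, tau1, tau2):
    """Discrete convolution of a diffusion profile with the effusion delay."""
    n = len(diffusion)
    kernel = [double_exp_delay(i * dt, w, tau1, tau2) * dt for i in range(n)]
    return [sum(diffusion[j] * kernel[i - j] for j in range(i + 1))
            for i in range(n)]

dt = 0.1
diffusion = [math.exp(-i * dt) for i in range(50)]   # toy diffusion profile
curve = release_curve(diffusion, dt, w=0.6, tau1=0.5, tau2=5.0)
print(round(curve[0], 3))  # → 0.128
```

The convolution delays and broadens the diffusion profile, which is the qualitative effect of effusion on the measured release curve.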
Analytical model of coincidence resolving time in TOF-PET
NASA Astrophysics Data System (ADS)
Wieczorek, H.; Thon, A.; Dey, T.; Khanin, V.; Rodnyi, P.
2016-06-01
The coincidence resolving time (CRT) of scintillation detectors is the parameter determining noise reduction in time-of-flight PET. We derive an analytical CRT model based on the statistical distribution of photons for two different prototype scintillators. For the first one, characterized by a single exponential decay, CRT is proportional to the decay time and inversely proportional to the number of photons, with a square-root dependence on the trigger level. For the second scintillator prototype, characterized by exponential rise and decay, CRT is proportional to the square root of the product of rise time and decay time divided by twice the number of photons, and it is nearly independent of the trigger level. This theory is verified by measurements of scintillation time constants, light yield, and CRT on scintillator sticks. Trapping effects are taken into account by defining an effective decay time. We show that, in terms of signal-to-noise ratio, CRT is as important as patient dose, imaging time, or PET system sensitivity. The noise reduction effect of better timing resolution is verified and visualized by Monte Carlo simulation of a NEMA image quality phantom.
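The two scaling laws stated above can be written down directly. A minimal sketch with the proportionality constants set to 1 (which the paper does not do) and illustrative values for the time constants and photon count:

```python
import math

def crt_single_decay(tau_decay, n_photons, trigger_level):
    """CRT ∝ tau_d / N * sqrt(trigger level), single exponential decay."""
    return tau_decay / n_photons * math.sqrt(trigger_level)

def crt_rise_decay(tau_rise, tau_decay, n_photons):
    """CRT ∝ sqrt(tau_r * tau_d / (2N)), exponential rise and decay."""
    return math.sqrt(tau_rise * tau_decay / (2 * n_photons))

# Illustrative numbers: 40 ns decay, 0.1 ns rise, 5000 detected photons
print(crt_single_decay(40e-9, 5000, 1))
print(crt_rise_decay(0.1e-9, 40e-9, 5000))
```

The scalings make the trade-offs explicit: the single-decay CRT improves linearly with photon number, while the rise-and-decay CRT improves only with its square root.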
Systematic strategies for the third industrial accident prevention plan in Korea.
Kang, Young-sig; Yang, Sung-hwan; Kim, Tae-gu; Kim, Day-sung
2012-01-01
To minimize industrial accidents, it is critical to evaluate a firm's priorities among prevention factors and strategies, since such evaluation provides decisive information for preventing industrial accidents and maintaining safety management. This paper therefore proposes the evaluation of priorities through statistical testing of prevention factors with a cause analysis in a cause-and-effect model. A priority matrix criterion is proposed for ranking and for the objectivity of the questionnaire results. The paper uses the regression analysis (RA), exponential smoothing method (ESM), double exponential smoothing method (DESM), autoregressive integrated moving average (ARIMA) model, and the proposed analytical function method (PAFM) to analyze trends in accident data, leading to accurate prediction. The questionnaire results of workers and managers were standardized for manufacturing and construction companies with fewer than 300 employees, located in the central Korean metropolitan areas where fatal accidents have occurred. Finally, a strategy is provided for constructing safety management for the third industrial accident prevention plan, together with a forecasting method for occupational accident rates and fatality rates per 10,000 people.
Bernard, Olivier; Alata, Olivier; Francaux, Marc
2006-03-01
Modeling in the time domain, the non-steady-state O2 uptake on-kinetics of high-intensity exercises with empirical models is commonly performed with gradient-descent-based methods. However, these procedures may impair the confidence of the parameter estimation when the modeling functions are not continuously differentiable and when the estimation corresponds to an ill-posed problem. To cope with these problems, an implementation of simulated annealing (SA) methods was compared with the GRG2 algorithm (a gradient-descent method known for its robustness). Forty simulated Vo2 on-responses were generated to mimic the real time course for transitions from light- to high-intensity exercises, with a signal-to-noise ratio equal to 20 dB. They were modeled twice with a discontinuous double-exponential function using both estimation methods. GRG2 significantly biased two estimated kinetic parameters of the first exponential (the time delay td1 and the time constant tau1) and impaired the precision (i.e., standard deviation) of the baseline A0, td1, and tau1 compared with SA. SA significantly improved the precision of the three parameters of the second exponential (the asymptotic increment A2, the time delay td2, and the time constant tau2). Nevertheless, td2 was significantly biased by both procedures, and the large confidence intervals of the whole second component parameters limit their interpretation. To compare both algorithms on experimental data, 26 subjects each performed two transitions from 80 W to 80% maximal O2 uptake on a cycle ergometer and O2 uptake was measured breath by breath. More than 88% of the kinetic parameter estimations done with the SA algorithm produced the lowest residual sum of squares between the experimental data points and the model. Repeatability coefficients were better with GRG2 for A1 although better with SA for A2 and tau2. 
Our results demonstrate that the implementation of SA significantly improves the estimation of most of these kinetic parameters, but a large inaccuracy remains in estimating the parameter values of the second exponential.
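As a minimal sketch of this estimation problem (not the authors' implementation), the block below fits a discontinuous double-exponential on-kinetics model to synthetic noiseless data with a bare-bones simulated annealing loop; the model form, parameter values, and annealing schedule are all illustrative assumptions:

```python
import math
import random

def vo2_model(t, A0, A1, td1, tau1, A2, td2, tau2):
    """Discontinuous double-exponential on-kinetics model (assumed form)."""
    y = A0
    if t >= td1:
        y += A1 * (1.0 - math.exp(-(t - td1) / tau1))
    if t >= td2:
        y += A2 * (1.0 - math.exp(-(t - td2) / tau2))
    return y

def sse(params, data):
    """Residual sum of squares between model and data points."""
    return sum((y - vo2_model(t, *params)) ** 2 for t, y in data)

def anneal(data, x0, steps=8000, t0=1.0, seed=1):
    """Minimal simulated-annealing minimiser (illustrative, not GRG2-grade)."""
    rng = random.Random(seed)
    x, fx = list(x0), sse(x0, data)
    best, fbest = list(x), fx
    for k in range(steps):
        temp = t0 * (1.0 - k / steps) + 1e-9
        cand = [v + rng.gauss(0, 0.05 * max(abs(v), 1.0)) for v in x]
        if cand[3] <= 1e-6 or cand[6] <= 1e-6:  # keep time constants positive
            continue
        fc = sse(cand, data)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if fc < fx or rng.random() < math.exp(-(fc - fx) / temp):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
    return best, fbest

# synthetic noiseless on-response with known (made-up) parameters
true = (0.8, 1.6, 15.0, 25.0, 0.4, 110.0, 70.0)
data = [(t, vo2_model(t, *true)) for t in range(0, 360, 5)]
start = (1.0, 1.0, 10.0, 20.0, 0.2, 100.0, 60.0)
fit, loss = anneal(data, start)
```

Because SA is stochastic, the fit depends on the schedule and step sizes; in practice several runs with different seeds would be compared, mirroring the paper's use of repeated estimations.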
NASA Astrophysics Data System (ADS)
Chvetsov, Alexei V.; Sandison, George A.; Schwartz, Jeffrey L.; Rengan, Ramesh
2015-11-01
The main objective of this article is to improve the stability of reconstruction algorithms for the estimation of radiobiological parameters using serial tumor imaging data acquired during radiation therapy. Serial images of tumor response to radiation therapy represent a complex summation of several exponential processes, such as treatment-induced cell inactivation, tumor growth, and cell loss. Accurate assessment of treatment response requires separation of these processes because they define the radiobiological determinants of treatment response and, correspondingly, tumor control probability. However, the estimation of radiobiological parameters from imaging data is an inverse ill-posed problem: a sum of several exponentials produces a Fredholm integral equation of the first kind, which is ill posed. Therefore, the stability of the reconstruction of radiobiological parameters presents a problem even for the simplest models of tumor response. To study the stability of the parameter reconstruction problem, we used a set of serial CT imaging data for head and neck cancer and the simplest case of a two-level cell population model of tumor response. Inverse reconstruction was performed using a simulated annealing algorithm to minimize a least-squares objective function. Results show that the reconstructed values of cell surviving fractions and cell doubling time exhibit significant nonphysical fluctuations if no stabilization algorithms are applied. However, after applying a stabilization algorithm based on variational regularization, the reconstruction produces statistical distributions for surviving fractions and doubling time that are comparable to published in vitro data. This algorithm is an advance over our previous work, where only cell surviving fractions were reconstructed.
We conclude that variational regularization allows for an increase in the number of free parameters in our model, which enables the development of more advanced parameter reconstruction algorithms.
Exponential propagators for the Schrödinger equation with a time-dependent potential.
Bader, Philipp; Blanes, Sergio; Kopylov, Nikita
2018-06-28
We consider the numerical integration of the Schrödinger equation with a time-dependent Hamiltonian given as the sum of the kinetic energy and a time-dependent potential. Commutator-free (CF) propagators are exponential propagators that have been shown to be highly efficient for general time-dependent Hamiltonians. We propose new CF propagators that are tailored for Hamiltonians of the said structure, showing a considerably improved performance. We obtain new fourth- and sixth-order CF propagators as well as a novel sixth-order propagator that incorporates a double commutator depending only on coordinates, so this term can be considered cost-free. The algorithms require the computation of the action of exponentials on a vector, similar to the well-known exponential midpoint propagator, and this is carried out using the Lanczos method. We illustrate the performance of the new methods on several numerical examples.
Banushkina, Polina V; Krivov, Sergei V
2013-12-10
The free-energy landscape can provide a quantitative description of folding dynamics, if determined as a function of an optimally chosen reaction coordinate. Here, we construct the optimal coordinate and the associated free-energy profile for all-helical proteins HP35 and its norleucine (Nle/Nle) double mutant, based on realistic equilibrium folding simulations [Piana et al. Proc. Natl. Acad. Sci. U.S.A. 2012, 109, 17845]. From the obtained profiles, we directly determine such basic properties of folding dynamics as the configurations of the minima and transition states (TS), the formation of secondary structure and hydrophobic core during the folding process, the value of the pre-exponential factor and its relation to the transition path times, the relation between the autocorrelation times in TS and minima. We also present an investigation of the accuracy of the pre-exponential factor estimation based on the transition-path times. Four different estimations of the pre-exponential factor for both proteins give k0−1 values of approximately a few tens of nanoseconds. Our analysis gives detailed information about folding of the proteins and can serve as a rigorous common language for extensive comparison between experiment and simulation.
Improvements to Shortwave Absorption in the GFDL General Circulation Model Radiation Code
NASA Astrophysics Data System (ADS)
Freidenreich, S.
2015-12-01
The multiple-band shortwave radiation parameterization used in the GFDL general circulation models is being revised to better simulate the disposition of the solar flux in comparison with line-by-line+doubling-adding reference calculations based on the HITRAN 2012 catalog. For clear skies, a notable deficiency of the older formulation is an underestimate of atmospheric absorption. The two main reasons for this are the neglect of H2O absorption for wavenumbers < 2500 cm-1 and of the O2 continuum. Further contributions to this underestimate come from neglecting the effects of CH4, N2O and stratospheric H2O absorption. These issues are addressed in the revised formulation and result in the globally averaged shortwave absorption increasing from 74 to 78 Wm-2. The number of spectral bands considered remains the same (18), but the number of pseudomonochromatic intervals (based mainly on the exponential-sum-fit technique) for the determination of H2O absorption is increased from 38 to 74, allowing for more accuracy in its simulation. Also, CO2 absorption is now determined by the exponential-sum-fit technique, replacing an algebraic absorptivity expression in the older parameterization; this improves the simulation of heating in the stratosphere. Improvements to the treatment of multiple scattering are currently being tested. This involves replacing the current algorithm, the two-stream delta-Eddington method, with a four-stream algorithm. Initial results show that in most, but not all, cases these produce better agreement with the reference doubling-adding results.
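The exponential-sum-fit (ESFT) idea mentioned above can be sketched as follows: the band-mean transmission along an absorber amount u is represented as a weighted sum of pseudomonochromatic exponentials. The (w_i, k_i) pairs below are invented for illustration and are not GFDL- or HITRAN-derived values:

```python
import math

def esft_transmission(u, terms):
    """Band-mean transmission as an exponential-sum fit:
    T(u) = sum_i w_i * exp(-k_i * u), with sum_i w_i = 1.
    Each (w_i, k_i) pair acts as one pseudomonochromatic interval."""
    return sum(w * math.exp(-k * u) for w, k in terms)

# three illustrative intervals spanning weak to strong absorption
terms = [(0.6, 0.01), (0.3, 0.5), (0.1, 10.0)]
assert abs(sum(w for w, _ in terms) - 1.0) < 1e-12

path = [0.0, 0.1, 1.0, 10.0, 100.0]   # absorber amounts (arbitrary units)
trans = [esft_transmission(u, terms) for u in path]
```

Adding more (w, k) terms, as in the revision from 38 to 74 intervals, lets the sum track the true band transmission more closely at the cost of more monochromatic radiative transfer calculations.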
Profiler - A Fast and Versatile New Program for Decomposing Galaxy Light Profiles
NASA Astrophysics Data System (ADS)
Ciambur, Bogdan C.
2016-12-01
I introduce Profiler, a user-friendly program designed to analyse the radial surface brightness profiles of galaxies. With an intuitive graphical user interface, Profiler can accurately model galaxies of a broad range of morphological types, with various parametric functions routinely employed in the field (Sérsic, core-Sérsic, exponential, Gaussian, Moffat, and Ferrers). In addition to these, Profiler can employ the broken exponential model for disc truncations or anti-truncations, and two special cases of the edge-on disc model: along the disc's major or minor axis. The convolution of (circular or elliptical) models with the point spread function is performed in 2D, and offers a choice between Gaussian, Moffat or a user-provided profile for the point spread function. Profiler is optimised to work with galaxy light profiles obtained from isophotal measurements, which allow for radial gradients in the geometric parameters of the isophotes, and are thus often better at capturing the total light than 2D image-fitting programs. Additionally, the 1D approach is generally less computationally expensive and more stable. I demonstrate Profiler's features by decomposing three case-study galaxies: the cored elliptical galaxy NGC 3348, the nucleated dwarf Seyfert I galaxy Pox 52, and NGC 2549, a double-barred galaxy with an edge-on, truncated disc.
Understanding quantum tunneling using diffusion Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Inack, E. M.; Giudici, G.; Parolini, T.; Santoro, G.; Pilati, S.
2018-03-01
In simple ferromagnetic quantum Ising models characterized by an effective double-well energy landscape, the characteristic tunneling time of path-integral Monte Carlo (PIMC) simulations has been shown to scale as the incoherent quantum-tunneling time, i.e., as 1/Δ², where Δ is the tunneling gap. Since incoherent quantum tunneling is employed by quantum annealers (QAs) to solve optimization problems, this result suggests that there is no quantum advantage in using QAs with respect to quantum Monte Carlo (QMC) simulations. A counterexample is the recently introduced shamrock model (Andriyash and Amin, arXiv:1703.09277), where topological obstructions cause an exponential slowdown of the PIMC tunneling dynamics with respect to incoherent quantum tunneling, leaving open the possibility of a quantum speedup, even for stoquastic models. In this work we investigate the tunneling time of projective QMC simulations based on the diffusion Monte Carlo (DMC) algorithm without guiding functions, showing that it scales as 1/Δ, i.e., even more favorably than the incoherent quantum-tunneling time, both in a simple ferromagnetic system and in the more challenging shamrock model. However, a careful comparison between the DMC ground-state energies and the exact solution available for the transverse-field Ising chain indicates an exponential scaling of the computational cost required to keep a fixed relative error as the system size increases.
Unfolding of Ubiquitin Studied by Picosecond Time-Resolved Fluorescence of the Tyrosine Residue
Noronha, Melinda; Lima, João C.; Bastos, Margarida; Santos, Helena; Maçanita, António L.
2004-01-01
The photophysics of the single tyrosine in bovine ubiquitin (UBQ) was studied by picosecond time-resolved fluorescence spectroscopy, as a function of pH and along thermal and chemical unfolding, with the following results: First, at room temperature (25°C) and below pH 1.5, native UBQ shows single-exponential decays. From pH 2 to 7, triple-exponential decays were observed and the three decay times were attributed to the presence of tyrosine, a tyrosine-carboxylate hydrogen-bonded complex, and excited-state tyrosinate. Second, at pH 1.5, the water-exposed tyrosine of either thermally or chemically unfolded UBQ decays as a sum of two exponentials. The double-exponential decays were interpreted and analyzed in terms of excited-state intramolecular electron transfer from the phenol to the amide moiety, occurring in one of the three rotamers of tyrosine in UBQ. The values of the rate constants indicate the presence of different unfolded states and an increase in the mobility of the tyrosine residue during unfolding. Finally, from the pre-exponential coefficients of the fluorescence decays, the unfolding equilibrium constants (KU) were calculated, as a function of temperature or denaturant concentration. Despite the presence of different unfolded states, both thermal and chemical unfolding data of UBQ could be fitted to a two-state model. The thermodynamic parameters Tm = 54.6°C, ΔHTm = 56.5 kcal/mol, and ΔCp = 890 cal/(mol·K) were determined from the unfolding equilibrium constants calculated accordingly, and compared to values obtained by differential scanning calorimetry, also under the assumption of a two-state transition: Tm = 57.0°C, ΔHm = 51.4 kcal/mol, and ΔCp = 730 cal/(mol·K). PMID:15454455
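The last step, obtaining KU from the pre-exponential coefficients, can be sketched numerically under the two-state assumption that the amplitudes are proportional to the folded and unfolded populations; the amplitudes below are hypothetical (at Tm the two populations are equal, so KU = 1 and ΔG = 0):

```python
import math

R = 1.987e-3  # gas constant in kcal/(mol*K)

def unfolding_K(a_unfolded, a_folded):
    """Two-state unfolding constant from decay pre-exponentials, assumed
    proportional to the unfolded and folded populations."""
    return a_unfolded / a_folded

def delta_G(K, T_kelvin):
    """Standard free energy of unfolding from the equilibrium constant."""
    return -R * T_kelvin * math.log(K)

# hypothetical amplitudes at the midpoint temperature: equal populations
K = unfolding_K(0.5, 0.5)
dG = delta_G(K, 54.6 + 273.15)  # Tm reported for UBQ
```

Repeating this at each temperature yields KU(T), from which Tm, ΔHTm and ΔCp can be extracted by a van 't Hoff-type analysis as in the abstract.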
Plasmids as stochastic model systems
NASA Astrophysics Data System (ADS)
Paulsson, Johan
2003-05-01
Plasmids are self-replicating gene clusters present in, on average, 2-100 copies per bacterial cell. To reduce random fluctuations and thereby avoid extinction, they ubiquitously autoregulate their own synthesis using negative feedback loops. Here I use van Kampen's Ω-expansion for a two-dimensional model of negative feedback including plasmids and their replication inhibitors. This analytically summarizes the standard perspective on replication control -- including the effects of sensitivity amplification, exponential time-delays and noisy signaling. I further review the two most common molecular sensitivity mechanisms: multistep control and cooperativity. Finally, I discuss more controversial sensitivity schemes, such as noise-enhanced sensitivity, the exploitation of small-number combinatorics and double-layered feedback loops to suppress noise in disordered environments.
NMR investigation of the short-chain ionic surfactant-water systems.
Popova, M V; Tchernyshev, Y S; Michel, D
2004-02-03
The structure and dynamics of surfactant molecules [CH3(CH2)7COOK] in heavy water solutions were investigated by 1H and 2H NMR. A double-exponential attenuation of the spin-echo amplitude in a Carr-Purcell-Meiboom-Gill experiment was found, which we attribute to surfactant in both bound and monomeric states. At high concentrations, the NMR self-diffusion measurements also showed a double-exponential decay of the spin-echo signal versus the square of the dc magnetic field gradient. The slow component of the diffusion process is caused by micellar aggregates, while the fast component results from the self-diffusion of the monomers through the micelles. The self-diffusion studies indicate that the form of the micelles changes with increasing total surfactant concentration. The critical temperature range for self-association is reflected in the 1H transverse relaxation.
Nuclear counting filter based on a centered Skellam test and a double exponential smoothing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coulon, Romain; Kondrasovs, Vladimir; Dumazert, Jonathan
2015-07-01
Online nuclear counting represents a challenge due to the stochastic nature of radioactivity. The count data have to be filtered in order to provide a precise and accurate estimation of the count rate, with a response time compatible with the application in view. An innovative filter addressing this issue is presented in this paper. It is a nonlinear filter based on a Centered Skellam Test (CST), giving a local maximum-likelihood estimation of the signal under a Poisson distribution assumption. This nonlinear approach smooths the counting signal while maintaining a fast response when abrupt changes in activity occur. The filter has been improved by the implementation of Brown's double Exponential Smoothing (BES). The filter has been validated and compared with other state-of-the-art smoothing filters. The CST-BES filter shows a significant improvement over all tested smoothing filters.
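The BES stage can be sketched with a generic textbook implementation of Brown's double exponential smoothing (not the authors' filter code; the count series is illustrative):

```python
def brown_des(series, alpha):
    """Brown's double exponential smoothing: two cascaded single
    exponential smoothers; the one-step estimate is the level 2*s1 - s2
    plus a trend term alpha/(1 - alpha) * (s1 - s2)."""
    s1 = s2 = series[0]
    out = []
    for x in series:
        s1 = alpha * x + (1 - alpha) * s1      # first smoothing pass
        s2 = alpha * s1 + (1 - alpha) * s2     # second smoothing pass
        level = 2 * s1 - s2
        trend = alpha / (1 - alpha) * (s1 - s2)
        out.append(level + trend)
    return out

# illustrative count-rate series with an abrupt activity change
counts = [100, 102, 98, 101, 99, 100, 150, 152, 149, 151]
smoothed = brown_des(counts, alpha=0.3)
```

Unlike a plain moving average, the trend term lets the estimate catch up quickly after a step change in activity, which is the behavior the CST-BES combination exploits.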
A U-shaped linear ultrasonic motor using longitudinal vibration transducers with double feet.
Liu, Yingxiang; Liu, Junkao; Chen, Weishan; Shi, Shengjun
2012-05-01
A U-shaped linear ultrasonic motor using longitudinal vibration transducers with double feet was proposed in this paper. The proposed motor contains one horizontal transducer and two vertical transducers. The horizontal transducer includes two exponentially shaped horns located at its leading ends, and each vertical transducer contains one exponentially shaped horn. The horns of the horizontal transducer and the vertical transducers intersect at the tip ends, where the driving feet are located. Longitudinal vibrations are superimposed in the motor and generate elliptical motions at the driving feet. The two vibration modes of the motor are discussed, and the motion trajectories of the driving feet are deduced. By adjusting the structural parameters, the resonance frequencies of the two vibration modes were degenerated. A prototype motor was fabricated and measured. Typical output of the prototype is a no-load speed of 854 mm/s and a maximum thrust force of 40 N at a voltage of 200 V(rms).
Kim, Tae-gu; Kang, Young-sig; Lee, Hyung-won
2011-01-01
To begin a zero accident campaign for industry, the first thing is to estimate the industrial accident rate and the zero accident time systematically. This paper considers the social and technical change of the business environment after beginning the zero accident campaign through quantitative time series analysis methods. These methods include sum of squared errors (SSE), regression analysis method (RAM), exponential smoothing method (ESM), double exponential smoothing method (DESM), auto-regressive integrated moving average (ARIMA) model, and the proposed analytic function method (AFM). The program is developed to estimate the accident rate, zero accident time and achievement probability of an efficient industrial environment. In this paper, MFC (Microsoft Foundation Class) software of Visual Studio 2008 was used to develop a zero accident program. The results of this paper will provide major information for industrial accident prevention and be an important part of stimulating the zero accident campaign within all industrial environments.
NASA Astrophysics Data System (ADS)
Ismail, A.; Hassan, Noor I.
2013-09-01
Cancer is one of the principal causes of death in Malaysia. This study was performed to determine the pattern of the rate of cancer deaths at a public hospital in Malaysia over an 11-year period (2001 to 2011), to determine the best-fitted Univariate Model for forecasting the rate of cancer deaths, and to forecast the rates for the next two years (2012 to 2013). The medical records of patients with cancer who died at this hospital over the 11-year period were reviewed, with a total of 663 cases. The cancers were classified according to the 10th Revision of the International Classification of Diseases (ICD-10). Data collected included the socio-demographic background of patients such as registration number, age, gender, ethnicity, ward and diagnosis. Data entry and analysis were accomplished using SPSS 19.0 and Minitab 16.0. The five Univariate Models used were the Naïve with Trend Model, Average Percent Change Model (APCM), Single Exponential Smoothing, Double Exponential Smoothing and Holt's Method. Over the 11 years, Malay patients had the highest percentage of cancer deaths at this hospital (88.10%) compared to other ethnic groups, with males (51.30%) higher than females. Lung and breast cancer accounted for the most cancer deaths in each gender. About 29.60% of the patients who died due to cancer were aged 61 years and above. The best Univariate Model for forecasting the rate of cancer deaths was the Single Exponential Smoothing technique with an alpha of 0.10. The forecast shows a horizontal, or flat, trend: the forecasted mortality rate remains at 6.84% from January 2012 to December 2013. All government and private sectors and non-governmental organizations need to highlight cancer issues, especially lung and breast cancers, to the public through campaigns using mass media, electronic media, posters and pamphlets in an attempt to decrease the rate of cancer deaths in Malaysia.
Design and implementation of the NaI(Tl)/CsI(Na) detectors output signal generator
NASA Astrophysics Data System (ADS)
Zhou, Xu; Liu, Cong-Zhan; Zhao, Jian-Ling; Zhang, Fei; Zhang, Yi-Fei; Li, Zheng-Wei; Zhang, Shuo; Li, Xu-Fang; Lu, Xue-Feng; Xu, Zhen-Ling; Lu, Fang-Jun
2014-02-01
We designed and implemented a signal generator that simulates the output of the pre-amplifiers of the NaI(Tl)/CsI(Na) detectors onboard the Hard X-ray Modulation Telescope (HXMT). Developed on an FPGA (Field Programmable Gate Array) in VHDL, with a random constituent added, the generator produces double-exponential random pulse signals. The statistical distribution of the signal amplitude is programmable, and the time intervals between adjacent signals statistically follow a negative exponential distribution.
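A software sketch of the same signal logic, assuming a generic double-exponential pulse shape and Poisson event arrivals; the time constants and rate below are placeholders, not HXMT values:

```python
import math
import random

def pulse_shape(t, tau_rise=0.02, tau_fall=0.5):
    """Double-exponential pulse, exp(-t/tau_fall) - exp(-t/tau_rise),
    zero before the trigger; time constants are illustrative."""
    if t < 0:
        return 0.0
    return math.exp(-t / tau_fall) - math.exp(-t / tau_rise)

def event_times(rate, n, seed=7):
    """Arrival times whose gaps are exponentially distributed,
    i.e. a Poisson event stream as in nuclear counting."""
    rng = random.Random(seed)
    t, out = 0.0, []
    for _ in range(n):
        t += rng.expovariate(rate)
        out.append(t)
    return out

times = event_times(rate=100.0, n=1000)
gaps = [b - a for a, b in zip(times, times[1:])]
mean_gap = sum(gaps) / len(gaps)  # should be close to 1/rate
```

Summing `pulse_shape` contributions at each arrival time, with amplitudes drawn from a chosen distribution, reproduces the programmable-amplitude pulse train the hardware generator produces.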
NASA Astrophysics Data System (ADS)
Baidillah, Marlin R.; Takei, Masahiro
2017-06-01
A nonlinear normalization model, called the exponential model, has been developed for electrical capacitance tomography (ECT) with external electrodes under gap permittivity conditions. The exponential normalization is proposed based on the inherently nonlinear relationship between the mixture permittivity and the measured capacitance due to the gap permittivity of the inner wall. The parameters of the exponential equation are derived from an exponential fitting curve based on simulation, and a scaling function is added to adjust for the experimental system conditions. The exponential normalization model was applied to two-dimensional low- and high-contrast dielectric distribution phantoms in simulation and experimental studies. The proposed normalization model has been compared with other normalization models, i.e. the Parallel, Series, Maxwell and Böttcher models. Based on the comparison of image reconstruction results, the exponential model reliably predicts the nonlinear normalization of the measured capacitance for both low- and high-contrast dielectric distributions.
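The exact exponential normalization fitted in the paper is not reproduced here; the sketch below contrasts the linear parallel-model normalization with an assumed exponential form whose shape parameter gamma is a hypothetical stand-in for the fitted curve:

```python
import math

def parallel_norm(c, c_low, c_high):
    """Conventional parallel-model normalization: linear in capacitance."""
    return (c - c_low) / (c_high - c_low)

def exponential_norm(c, c_low, c_high, gamma=2.0):
    """Assumed exponential-form normalization: maps the same endpoints
    to 0 and 1 but bends in between; gamma is a hypothetical fit
    parameter standing in for the paper's simulation-derived curve."""
    x = parallel_norm(c, c_low, c_high)
    return (math.exp(gamma * x) - 1.0) / (math.exp(gamma) - 1.0)

c_low, c_high = 1.2, 4.8   # illustrative empty/full capacitances (pF)
```

Both mappings agree at the calibration endpoints; the exponential form differs in between, which is where a linear normalization misrepresents the nonlinear capacitance-permittivity relationship.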
Finite-time singularity signature of hyperinflation
NASA Astrophysics Data System (ADS)
Sornette, D.; Takayasu, H.; Zhou, W.-X.
2003-07-01
We present a novel analysis extending the recent work of Mizuno et al. (Physica A 308 (2002) 411) on the hyperinflations of Germany (1920/1/1-1923/11/1), Hungary (1945/4/30-1946/7/15), Brazil (1969-1994), Israel (1969-1985), Nicaragua (1969-1991), Peru (1969-1990) and Bolivia (1969-1985). On the basis of a generalization of Cagan's model of inflation based on the mechanism of “inflationary expectation” of positive feedbacks between realized growth rate and people's expected growth rate, we find that hyperinflations can be characterized by a power law singularity culminating at a critical time tc. Mizuno et al.'s double-exponential function can be seen as a discrete time-step approximation of our more general non-linear ODE formulation of the price dynamics which exhibits a finite-time singular behavior. This extension of Cagan's model, which makes natural the appearance of a critical time tc, has the advantage of providing a well-defined end of the clearly unsustainable hyperinflation regime. We find an excellent and reliable agreement between theory and data for Germany, Hungary, Peru and Bolivia. For Brazil, Israel and Nicaragua, the super-exponential growth seems to be already contaminated significantly by the existence of a cross-over to a stationary regime.
Castellanos-Barliza, Jeiner; León Peláez, Juan Diego
2011-03-01
Several factors control decomposition in terrestrial ecosystems, such as humidity, temperature, litter quality and microbial activity. We investigated the effects of rainfall and of soil plowing prior to the establishment of Acacia mangium plantations, using the litterbag technique over a six-month period in forest plantations in the Bajo Cauca region, Colombia. The annual decomposition constants (k) of the simple exponential model ranged from 1.24 to 1.80, while the k1 and k2 constants of the double exponential model ranged from 0.88 to 1.81 and from 0.58 to 7.01, respectively. At the end of the study, the mean residual dry matter (RDM) was 47% of the initial value for the three sites. We found a slow N, Ca and Mg release pattern from the A. mangium leaf litter, whereas phosphorus (P) showed a dominant immobilization phase, suggesting its low availability in these soils. Chemical leaf litter quality parameters (e.g. N and P concentrations, C/N and N/P ratios, and phenol content) showed an important influence on decomposition rates. The results of this study indicate that rainfall plays an important role in the decomposition process, but soil plowing does not.
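For orientation, the simple exponential model's k can be recovered from a single residual-mass observation; using the reported mean RDM of 47% after roughly six months gives a k inside the reported 1.24-1.80 range (a back-of-the-envelope check, not the authors' fitting procedure):

```python
import math

def decay_k(days, pct_remaining):
    """Annual decomposition constant k of the simple exponential model
    X/X0 = exp(-k*t), estimated from one litterbag observation."""
    t_years = days / 365.0
    return -math.log(pct_remaining / 100.0) / t_years

# 47% of initial dry matter remaining after a six-month study
k = decay_k(182.5, 47.0)
```

In practice k (and k1, k2 of the double exponential model) is fitted to the whole time series of litterbag retrievals, not a single endpoint.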
Modelling the effect of hydration on skin conductivity.
Davies, L; Chappell, P; Melvin, T
2017-08-01
Electrical signals are recorded from and sent into the body via the skin in a number of applications. In practice, skin is often hydrated with liquids having different conductivities, so a model was produced to determine the relationship between skin impedance and conductivity. A model representing the skin was subjected to a variety of electrical signals. The parts of the model representing the stratum corneum were given different conductivities to represent different levels of hydration. The overall impedance and conductivity of the cells did not vary at frequencies below 40 kHz. Above 40 kHz, increased conductivity caused the overall impedance to decrease. The variation in impedance with conductivity between 5 and 50 mS m-1 can be modelled quadratically, while the variation between 5 and 5000 mS m-1 can be modelled with a double exponential decay.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Šefl, Martin, E-mail: martin.sefl@gmail.com; Kyriakou, Ioanna; Emfietzoglou, Dimitris, E-mail: demfietz@cc.uoi.gr
Purpose: To study theoretically the impact on cell survival of the radionuclide uptake rate inside tumor cells for a single administration of a radiopharmaceutical. Methods: The instantaneous-uptake model of O'Donoghue ["The impact of tumor cell proliferation in radioimmunotherapy," Cancer 73, 974-980 (1994)] for a proliferating cell population irradiated by an exponentially decreasing dose-rate is here extended to allow for the monoexponential uptake of the radiopharmaceutical by the targeted cells. The time derivative of the survival curve is studied in detail, deducing an expression for the minimum of the surviving fraction and the biologically effective dose (BED). Results: Surviving fractions are calculated over a parameter range that is clinically relevant and broad enough to establish general trends. Specifically, results are presented for the therapy radionuclides Y-90, I-131, and P-32, assuming uptake half-times of 1-24 h, extrapolated initial dose-rates of 0.5-1 Gy h−1, and a biological clearance half-life of seven days. Representative radiobiological parameters for radiosensitive and rapidly proliferating tumor cells are used, with cell doubling time equal to 2 days and α-coefficient equal to 0.3 and 0.5 Gy−1. It is shown that neglecting the uptake phase of the radiopharmaceutical (i.e., assuming instantaneous uptake) results in a sizeable over-estimation of cell-kill (i.e., under-estimation of cell survival) even for uptake half-times of only a few hours. The differences between the exponential-uptake model and the instantaneous-uptake model become larger for high peak dose-rates, slow uptakes, and (slightly) for long-lived radionuclides. Moreover, the sensitivity of the survival curve to the uptake model was found to be higher for the tumor cells with the larger α-coefficient.
Conclusions: The exponential-uptake rate of the radiopharmaceutical inside targeted cells appears to have a considerable effect on the survival of a proliferating cell population and might need to be considered in radiobiological models of tumor cell-kill in radionuclide therapy.
Effects of resonant magnetic perturbation on the triggering and the evolution of double-tearing mode
NASA Astrophysics Data System (ADS)
Wang, L.; Lin, W. B.; Wang, X. Q.
2018-02-01
The effects of resonant magnetic perturbation on the triggering and the evolution of the double-tearing mode are investigated by using nonlinear magnetohydrodynamics simulations in a slab geometry. It is found that the double-tearing mode can be destabilized by boundary magnetic perturbation. Moreover, the mode has three typical development stages before it reaches saturation: the linear stable stage, the linear-growth stage, and the exponential-growth stage. The onset and growth of the double-tearing mode significantly depend on the boundary magnetic perturbations, particularly in the early development stage of the mode. The influences of the magnetic perturbation amplitude on the mode for different separations of the two rational surfaces are also discussed.
Seo, Nieun; Chung, Yong Eun; Park, Yung Nyun; Kim, Eunju; Hwang, Jinwoo; Kim, Myeong-Jin
2018-07-01
To compare the ability of diffusion-weighted imaging (DWI) parameters acquired from three different models for the diagnosis of hepatic fibrosis (HF). Ninety-five patients underwent DWI using nine b values at 3 T magnetic resonance. The hepatic apparent diffusion coefficient (ADC) from a mono-exponential model, the true diffusion coefficient (Dt), pseudo-diffusion coefficient (Dp) and perfusion fraction (f) from a biexponential model, and the distributed diffusion coefficient (DDC) and intravoxel heterogeneity index (α) from a stretched exponential model were compared with the pathological HF stage. For the stretched exponential model, parameters were also obtained using a dataset of six b values (DDC#, α#). The diagnostic performances of the parameters for HF staging were evaluated with Obuchowski measures and receiver operating characteristic (ROC) analysis. The measurement variability of DWI parameters was evaluated using the coefficient of variation (CoV). Diagnostic accuracy for HF staging was highest for DDC# (Obuchowski measures, 0.770 ± 0.03), and it was significantly higher than that of ADC (0.597 ± 0.05, p < 0.001), Dt (0.575 ± 0.05, p < 0.001) and f (0.669 ± 0.04, p = 0.035). The parameters from stretched exponential DWI and Dp showed higher areas under the ROC curve (AUCs) for determining significant fibrosis (≥F2) and cirrhosis (F = 4) than the other parameters. However, Dp showed significantly higher measurement variability (CoV, 74.6%) than DDC# (16.1%, p < 0.001) and α# (15.1%, p < 0.001). Stretched exponential DWI is a promising method for HF staging with good diagnostic performance and fewer b-value acquisitions, allowing a shorter acquisition time. • Stretched exponential DWI provides a precise and accurate model for HF staging. • Stretched exponential DWI parameters are more reliable than Dp from the bi-exponential DWI model. • Acquisition of six b values is sufficient to obtain accurate DDC and α.
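The stretched exponential model and a fit from a handful of b values can be sketched as follows; the b values and tissue parameters are illustrative, and the log-log linearization used here is a simplification of the nonlinear least-squares fitting normally applied to noisy data:

```python
import math

def stretched_signal(b, s0, ddc, alpha):
    """Stretched exponential DWI model: S(b) = S0 * exp(-(b*DDC)**alpha)."""
    return s0 * math.exp(-((b * ddc) ** alpha))

def fit_stretched(bvals, signals, s0):
    """Recover DDC and alpha by linearising the model:
    ln(-ln(S/S0)) = alpha*ln(b) + alpha*ln(DDC),
    then ordinary least squares on (ln b, ln(-ln(S/S0)))."""
    xs = [math.log(b) for b in bvals]
    ys = [math.log(-math.log(s / s0)) for s in signals]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    alpha = slope
    ddc = math.exp(intercept / alpha)
    return ddc, alpha

bvals = [25, 50, 100, 200, 400, 800]       # six nonzero b values (s/mm^2)
true_ddc, true_alpha = 1.2e-3, 0.8         # illustrative tissue parameters
sig = [stretched_signal(b, 1.0, true_ddc, true_alpha) for b in bvals]
ddc, alpha = fit_stretched(bvals, sig, 1.0)
```

On noiseless synthetic data the linearization recovers DDC and α exactly, which illustrates why a six-b-value acquisition can suffice for the two-parameter stretched model.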
NASA Technical Reports Server (NTRS)
Wolf, M.; Noel, G. T.; Stirn, R. J.
1977-01-01
Difficulties in relating observed current-voltage characteristics of individual silicon solar cells to their physical and material parameters were underscored by the unexpected large changes in the current-voltage characteristics telemetered back from solar cells on the ATS-1 spacecraft during their first year in synchronous orbit. Depletion region recombination was studied in cells exhibiting a clear double-exponential dark characteristic by subjecting the cells to proton irradiation. A significant change in the saturation current, an effect included in the Sah, Noyce, Shockley formulation of diode current resulting from recombination in the depletion region, was caused by the introduction of shallow levels in the depletion region by the proton irradiation. This saturation current is not attributable only to diffusion current from outside the depletion region and only its temperature dependence can clarify its origin. The current associated with the introduction of deep-lying levels did not change significantly in these experiments.
NASA Astrophysics Data System (ADS)
Kawabata, Kiyoshi
2016-12-01
This work shows that it is possible to calculate numerical values of the Chandrasekhar H-function for isotropic scattering with at least 15-digit accuracy by making use of the double exponential formula (DE-formula) of Takahashi and Mori (Publ. RIMS, Kyoto Univ. 9:721, 1974) instead of the Gauss-Legendre quadrature employed in the numerical scheme of Kawabata and Limaye (Astrophys. Space Sci. 332:365, 2011), while simultaneously taking precautionary measures to minimize the effects of loss of significant digits, particularly in cases of near-conservative scattering, and of errors in the values returned by the library functions supplied by the compilers in use. The results of our calculations are presented for 18 selected values of the single scattering albedo π0 and 22 values of an angular variable μ, the cosine of the zenith angle θ specifying the direction of radiation incident on or emergent from semi-infinite media.
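The Takahashi-Mori DE-formula referred to is the tanh-sinh rule: the substitution x = tanh((π/2)·sinh t) maps (−1, 1) to the real line, after which a plain trapezoidal sum in t converges double-exponentially in the number of nodes. A minimal sketch (this is not the paper's H-function code; the step size, truncation, and test integrand are illustrative):

```python
import math

def de_quad(f, h=0.08, n=50):
    # Double exponential (tanh-sinh) rule for the integral of f over (-1, 1):
    # x = tanh((pi/2) * sinh(t)), trapezoidal sum over t = k*h, |k| <= n
    total = 0.0
    for k in range(-n, n + 1):
        t = k * h
        u = 0.5 * math.pi * math.sinh(t)
        x = math.tanh(u)
        w = 0.5 * math.pi * math.cosh(t) / math.cosh(u) ** 2  # dx/dt
        total += f(x) * w
    return h * total

# smooth test integrand with a known value: integral of 1/(1+x^2) over (-1,1) = pi/2
approx = de_quad(lambda x: 1.0 / (1.0 + x * x))
```

With only 101 nodes this already reaches near machine precision for analytic integrands, which is what makes 15-digit H-function tables feasible.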
Artificial Neural Network versus Linear Models Forecasting Doha Stock Market
NASA Astrophysics Data System (ADS)
Yousif, Adil; Elfaki, Faiz
2017-12-01
The purpose of this study is to characterize the instability of the Doha stock market and to develop forecasting models. Linear time series models are used and compared with a nonlinear Artificial Neural Network (ANN), namely the Multilayer Perceptron (MLP) technique. The aim is to establish the most useful model based on daily and monthly data collected from the Qatar Exchange for the period from January 2007 to January 2015. Models are proposed for the general index of the Qatar stock exchange as well as for several individual sectors. With the help of these models, the Doha stock market index and the various sectors were predicted. The study used various time series techniques to analyze data trends and produce appropriate results. After applying several models, such as the quadratic trend model, the double exponential smoothing model, and ARIMA, it was concluded that ARIMA (2,2) was the most suitable linear model for the daily general index. However, the ANN model was found to be more accurate than the time series models.
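The double exponential smoothing model mentioned above maintains a level and a trend term and forecasts one step ahead from both. A small sketch of Holt's version (the series and smoothing constants are illustrative, not Qatar Exchange data):

```python
def double_exponential_smoothing(series, alpha=0.5, beta=0.5):
    # Holt's linear method: level l_t and trend b_t are updated as
    #   l_t = alpha*y_t + (1-alpha)*(l_{t-1} + b_{t-1})
    #   b_t = beta*(l_t - l_{t-1}) + (1-beta)*b_{t-1}
    level, trend = series[0], series[1] - series[0]
    forecasts = [series[0]]
    for y in series[1:]:
        forecasts.append(level + trend)      # one-step-ahead forecast
        last_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
    return forecasts, level + trend          # in-sample fits, next-period forecast

# on a perfectly linear "index" series the method locks onto the trend exactly
series = [100.0, 102.0, 104.0, 106.0, 108.0]
fits, next_value = double_exponential_smoothing(series)
```

On this linear series every one-step forecast is exact and the next-period forecast continues the trend (110.0); on real market data the method only tracks level and trend, which is why the study compares it against ARIMA and the ANN.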
Biases in simulation of the rice phenology models when applied in warmer climates
NASA Astrophysics Data System (ADS)
Zhang, T.; Li, T.; Yang, X.; Simelton, E.
2015-12-01
Current model inter-comparison studies highlight the differences in projections between crop models when they are applied to warmer climates, but these studies do not show how the accuracy of the models would change in these projections, because adequate observations under largely diverse growing season temperatures (GST) are often unavailable. Here, we investigate the potential changes in the accuracy of rice phenology models when these models are applied to a significantly warmer climate. We collected phenology data from 775 trials with 19 cultivars in 5 Asian countries (China, India, the Philippines, Bangladesh and Thailand). Each cultivar encompasses phenology observations under diverse GST regimes. For a given rice cultivar in different trials, the GST difference reaches 2.2 to 8.2°C, which allows us to calibrate the models under lower GST and validate them under higher GST (i.e., warmer climates). Four common phenology models, representing the major algorithms for simulating rice phenology, were tested in three model calibration experiments. The results suggest that the bilinear and beta models produced gradually increasing phenology bias (Figure) and double the yield bias per percent increase in phenology bias, whereas the growing-degree-day (GDD) and exponential models maintained a comparatively constant bias when applied in warmer climates (Figure). Moreover, the bias of phenology estimated by the bilinear and beta models did not decrease with increasing GST even when all data were used to calibrate the models. This suggests that variations in phenology bias are primarily attributable to intrinsic properties of the respective phenology model rather than to the calibration dataset. We therefore conclude that the GDD and exponential models offer a better chance of correctly predicting rice phenology, and thus production, under warmer climates, supporting effective agricultural adaptation to and mitigation of climate change.
Growth and differentiation of human lens epithelial cells in vitro on matrix
NASA Technical Reports Server (NTRS)
Blakely, E. A.; Bjornstad, K. A.; Chang, P. Y.; McNamara, M. P.; Chang, E.; Aragon, G.; Lin, S. P.; Lui, G.; Polansky, J. R.
2000-01-01
PURPOSE: To characterize the growth and maturation of nonimmortalized human lens epithelial (HLE) cells grown in vitro. METHODS: HLE cells, established from 18-week prenatal lenses, were maintained on bovine corneal endothelial (BCE) extracellular matrix (ECM) in medium supplemented with basic fibroblast growth factor (FGF-2). The identity, growth, and differentiation of the cultures were characterized by karyotyping, cell morphology, and growth kinetics studies, reverse transcription-polymerase chain reaction (RT-PCR), immunofluorescence, and Western blot analysis. RESULTS: HLE cells had a male, human diploid (2N = 46) karyotype. The population-doubling time of exponentially growing cells was 24 hours. After 15 days in culture, cell morphology changed, and lentoid formation was evident. Reverse transcription-polymerase chain reaction (RT-PCR) indicated expression of alphaA- and betaB2-crystallin, fibroblast growth factor receptor 1 (FGFR1), and major intrinsic protein (MIP26) in exponential growth. Western analyses of protein extracts show positive expression of three immunologically distinct classes of crystallin proteins (alphaA-, alphaB-, and betaB2-crystallin) with time in culture. By Western blot analysis, expression of p57(KIP2), a known marker of terminally differentiated fiber cells, was detectable in exponential cultures, and levels increased after confluence. MIP26 and gamma-crystallin protein expression was detected in confluent cultures, by using immunofluorescence, but not in exponentially growing cells. CONCLUSIONS: HLE cells can be maintained for up to 4 months on ECM derived from BCE cells in medium containing FGF-2. With time in culture, the cells demonstrate morphologic characteristics of, and express protein markers for, lens fiber cell differentiation. This in vitro model will be useful for investigations of radiation-induced cataractogenesis and other studies of lens toxicity.
NASA Technical Reports Server (NTRS)
Wolf, M.; Noel, G. T.; Stirn, R. J.
1976-01-01
A theoretical analysis is presented of certain peculiarities of the current-voltage characteristics of silicon solar cells, involving high values of the empirical constant A in the diode equation for a p-n junction. An attempt was made in a lab experiment to demonstrate that the saturation current which is associated with the exponential term exp(qV/A2kT) of the I-V characteristic, with A2 roughly equal to 2, originates in the space charge region and that it can be increased, as observed on ATS-1 cells, by the introduction of additional defects through low-energy proton irradiation. It was shown that the proton irradiation introduces defects into the space charge region which give rise to a recombination current from this region, although the I-V characteristic is, in this case, dominated by an exponential term which has A = 1.
On the Prony series representation of stretched exponential relaxation
NASA Astrophysics Data System (ADS)
Mauro, John C.; Mauro, Yihong Z.
2018-09-01
Stretched exponential relaxation is a ubiquitous feature of homogeneous glasses. The stretched exponential decay function can be derived from the diffusion-trap model, which predicts certain critical values of the fractional stretching exponent, β. In practical implementations of glass relaxation models, it is computationally convenient to represent the stretched exponential function as a Prony series of simple exponentials. Here, we perform a comprehensive mathematical analysis of the Prony series approximation of stretched exponential relaxation, including optimized coefficients for certain critical values of β. The fitting quality of the Prony series is analyzed as a function of the number of terms in the series. With a sufficient number of terms, the Prony series can accurately capture the time evolution of the stretched exponential function, including its "fat tail" at long times. However, it is unable to capture the divergence of the first derivative of the stretched exponential function in the limit of zero time. We also present a frequency-domain analysis of the Prony series representation of the stretched exponential function and discuss its physical implications for the modeling of glass relaxation behavior.
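A Prony series replaces φ(t) = exp(−(t/τ)^β) with a finite sum Σᵢ wᵢ·exp(−t/τᵢ). If the relaxation times τᵢ are fixed on a logarithmic grid, the weights follow from a single linear least-squares solve. A rough sketch (the grid, term count, and β value are illustrative; the paper's optimized coefficients are not reproduced here):

```python
import numpy as np

beta = 3.0 / 5.0                  # one of the diffusion-trap critical exponents
t = np.logspace(-3, 2, 400)       # times spanning the decay (tau = 1)
target = np.exp(-t ** beta)       # stretched exponential to be approximated

# fix log-spaced Prony relaxation times, solve for the weights linearly
taus = np.logspace(-4, 3, 14)
A = np.exp(-t[:, None] / taus[None, :])   # design matrix of simple exponentials
w, *_ = np.linalg.lstsq(A, target, rcond=None)

max_err = np.max(np.abs(A @ w - target))  # worst-case approximation error
```

Fourteen terms spanning seven decades already track the "fat tail" closely over this window; no finite sum, however, can reproduce the divergent derivative at t = 0 noted in the abstract, since every term has finite slope there.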
DOE Office of Scientific and Technical Information (OSTI.GOV)
Degroote, M.; Henderson, T. M.; Zhao, J.
We present a similarity transformation theory based on a polynomial form of a particle-hole pair excitation operator. In the weakly correlated limit, this polynomial becomes an exponential, leading to coupled cluster doubles. In the opposite, strongly correlated limit, the polynomial becomes an extended Bessel expansion and yields the projected BCS wavefunction. In between, we interpolate using a single parameter. The effective Hamiltonian is non-hermitian, and this Polynomial Similarity Transformation Theory follows the philosophy of traditional coupled cluster, left-projecting the transformed Hamiltonian onto subspaces of the Hilbert space in which the wave function variance is forced to be zero. Similarly, the interpolation parameter is obtained by minimizing the next residual in the projective hierarchy. We rationalize and demonstrate how and why coupled cluster doubles is ill suited to the strongly correlated limit whereas the Bessel expansion remains well behaved. The model provides accurate wave functions with energy errors that, in its best variant, are smaller than 1% across all interaction strengths. The numerical cost is polynomial in system size and the theory can be straightforwardly applied to any realistic Hamiltonian.
Calvi, Andrea; Ferrari, Alberto; Sbuelz, Luca; Goldoni, Andrea; Modesti, Silvio
2016-05-19
Multi-walled carbon nanotubes (CNTs) have been grown in situ on a SiO2 substrate and used as gas sensors. For this purpose, the voltage response of the CNTs as a function of time has been used to detect H2 and CO2 at various concentrations by supplying a constant current to the system. The analysis of both the adsorption and desorption curves has revealed two different exponential behaviours for each curve. The study of the characteristic times, obtained from fitting the data, has allowed us to separately identify chemisorption and physisorption processes on the CNTs.
Exponential order statistic models of software reliability growth
NASA Technical Reports Server (NTRS)
Miller, D. R.
1985-01-01
Failure times of a software reliability growth process are modeled as order statistics of independent, nonidentically distributed exponential random variables. The Jelinski-Moranda, Goel-Okumoto, Littlewood, Musa-Okumoto logarithmic, and power law models are all special cases of exponential order statistic models, but there are many additional examples as well. Various characterizations, properties, and examples of this class of models are developed and presented.
Wong, Oi Lei; Lo, Gladys G.; Chan, Helen H. L.; Wong, Ting Ting; Cheung, Polly S. Y.
2016-01-01
Background: The purpose of this study is to statistically assess whether the bi-exponential intravoxel incoherent motion (IVIM) model characterizes the diffusion-weighted imaging (DWI) signal of malignant breast tumors better than the mono-exponential Gaussian diffusion model. Methods: 3 T DWI data of 29 malignant breast tumors were retrospectively included. Linear least-square mono-exponential fitting and segmented least-square bi-exponential fitting were used for apparent diffusion coefficient (ADC) and IVIM parameter quantification, respectively. The F-test and Akaike Information Criterion (AIC) were used to statistically assess the preference for the mono-exponential or bi-exponential model using region-of-interest (ROI)-averaged and voxel-wise analysis. Results: For ROI-averaged analysis, 15 tumors were significantly better fitted by the bi-exponential function and 14 tumors exhibited mono-exponential behavior. The calculated ADC, D (true diffusion coefficient) and f (pseudo-diffusion fraction) showed no significant differences between mono-exponential and bi-exponential preferable tumors. Voxel-wise analysis revealed that 27 tumors contained more voxels exhibiting mono-exponential DWI decay, while only 2 tumors presented more bi-exponential decay voxels. ADC was consistently and significantly larger than D for both ROI-averaged and voxel-wise analysis. Conclusions: Although the presence of the IVIM effect in malignant breast tumors could be suggested, statistical assessment shows that bi-exponential fitting does not necessarily better represent the DWI signal decay in breast cancer under a clinically typical acquisition protocol and signal-to-noise ratio (SNR). Our study indicates the importance of statistically examining breast cancer DWI signal characteristics in practice. PMID:27709078
McNair, James N; Newbold, J Denis
2012-05-07
Most ecological studies of particle transport in streams that focus on fine particulate organic matter or benthic invertebrates use the Exponential Settling Model (ESM) to characterize the longitudinal pattern of particle settling on the bed. The ESM predicts that if particles are released into a stream, the proportion that have not yet settled will decline exponentially with transport time or distance and will be independent of the release elevation above the bed. To date, no credible basis in fluid mechanics has been established for this model, nor has it been rigorously tested against more-mechanistic alternative models. One alternative is the Local Exchange Model (LEM), which is a stochastic advection-diffusion model that includes both longitudinal and vertical spatial dimensions and is based on classical fluid mechanics. The LEM predicts that particle settling will be non-exponential in the near field but will become exponential in the far field, providing a new theoretical justification for far-field exponential settling that is based on plausible fluid mechanics. We review properties of the ESM and LEM and compare these with available empirical evidence. Most evidence supports the prediction of both models that settling will be exponential in the far field but contradicts the ESM's prediction that a single exponential distribution will hold for all transport times and distances. Copyright © 2012 Elsevier Ltd. All rights reserved.
Controllability of a multichannel system
NASA Astrophysics Data System (ADS)
Ivanov, Sergei A.; Wang, Jun Min
2018-02-01
We consider a system consisting of K coupled acoustic channels with different sound velocities cj. The channels interact at every point via the pressure and its time derivatives. Using the moment approach and the theory of exponential families with vector coefficients, we establish two controllability results: the system is exactly controllable if (i) the control uj in the j-th channel acts longer than the double travel time of a wave from the start to the end of the j-th channel; or (ii) all controls uj act for at least the maximal double travel time.
1978-06-01
(HDL). The locus of electrical centers of hydrated ions in contact with the electrode surface is known as the outer Helmholtz plane (OHP). The potential varies linearly across the compact layer and then undergoes a more gradual exponential decay in the diffuse double layer. The difference in potential between the OHP and the bulk electrolyte, i.e., the contribution of the diffuse double layer, is called the electrokinetic or zeta potential. [Figure: schematic of the inner Helmholtz plane (IHP), OHP, and diffuse (Gouy-Chapman) layer adjacent to the bulk electrolyte.]
Psychophysics of time perception and intertemporal choice models
NASA Astrophysics Data System (ADS)
Takahashi, Taiki; Oono, Hidemi; Radford, Mark H. B.
2008-03-01
Intertemporal choice and the psychophysics of time perception have been attracting attention in econophysics and neuroeconomics. Several models have been proposed for intertemporal choice: exponential discounting; general hyperbolic discounting (exponential discounting with logarithmic time perception following the Weber-Fechner law, a q-exponential discount model based on Tsallis's statistics); simple hyperbolic discounting; and Stevens' power law-exponential discounting (exponential discounting with Stevens' power time perception). In order to examine the fit of the models to behavioral data, we estimated the parameters and AICc (Akaike Information Criterion with small sample correction) of the intertemporal choice models by assessing the points of subjective equality (indifference points) at seven delays. Our results show that the order of goodness-of-fit for both group and individual data was [Weber-Fechner discounting (general hyperbola) > Stevens' power law discounting > simple hyperbolic discounting > exponential discounting], indicating that human time perception in intertemporal choice may follow the Weber-Fechner law. Implications of the results for neuropsychopharmacological treatments of addiction and for the biophysical processing underlying temporal discounting and time perception are discussed.
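The discount functions compared above can each be written in a line or two; the Tsallis q-exponential form interpolates between the simple hyperbola (q = 0) and exponential discounting (q → 1). A sketch under illustrative parameter values (k and the delay are not taken from the study's data):

```python
import math

def exponential_discount(delay, k):
    return math.exp(-k * delay)

def hyperbolic_discount(delay, k):
    # Mazur's simple hyperbola
    return 1.0 / (1.0 + k * delay)

def q_exponential_discount(delay, k, q):
    # Tsallis q-exponential discounting: [1 + (1-q)*k*D]^(-1/(1-q));
    # q = 0 recovers the simple hyperbola, q -> 1 the exponential
    return (1.0 + (1.0 - q) * k * delay) ** (-1.0 / (1.0 - q))

d, k = 10.0, 0.1
factors = {
    "exponential": exponential_discount(d, k),
    "q-exponential (q=0.5)": q_exponential_discount(d, k, 0.5),
    "hyperbolic": hyperbolic_discount(d, k),
}
```

At any positive delay the exponential discounts most steeply and the hyperbola least, with the q-exponential in between, which is the ordering the goodness-of-fit comparison exploits.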
Ferromagnetic resonance in coupled permalloy double films separated by a Cu interlayer
NASA Astrophysics Data System (ADS)
Maksymowicz, A. Z.; Whiting, J. S. S.; Watson, M. L.; Chambers, A.
1991-03-01
Ferromagnetic resonance (FMR) at 16 GHz was used to study the magnetic coupling between two layers of permalloy separated by a nonmagnetic Cu layer. Samples with the same thickness (600 Å) of both permalloy layers were deposited from e-gun sources onto glass substrates in UHV. The thickness d of the Cu interlayer was varied from 5 to 37 Å. The exchange coupling energy (E = -K M1·M2) model was used to describe the interaction between the two magnetic layers. It was found from the ferromagnetic resonance data in the perpendicular configuration that K(d) follows an exponential law, K = K0 e^(-d/q), where q = 9.3 Å.
An Analysis of Wave Interactions in Swept-Wing Flows
NASA Technical Reports Server (NTRS)
Reed, H. L.
1984-01-01
Crossflow instabilities dominate disturbance growth in the leading-edge region of swept wings. Streamwise vortices in a boundary layer strongly influence the behavior of other disturbances. Amplification of crossflow vortices near the leading edge produces a residual spanwise nonuniformity in the mid-chord regions where Tollmien-Schlichting (T-S) waves are strongly amplified. Should the T-S wave undergo double-exponential growth because of this effect, the usual transition prediction methods would fail. The crossflow/Tollmien-Schlichting wave interaction was modeled as a secondary instability. The effects of suction are included, and different stability criteria are examined. The results are applied to laminar flow control wings characteristic of energy-efficient aircraft designs.
Inouye, David I.; Ravikumar, Pradeep; Dhillon, Inderjit S.
2016-01-01
We develop Square Root Graphical Models (SQR), a novel class of parametric graphical models that provides multivariate generalizations of univariate exponential family distributions. Previous multivariate graphical models (Yang et al., 2015) did not allow positive dependencies for the exponential and Poisson generalizations. However, in many real-world datasets, variables clearly have positive dependencies. For example, the airport delay time in New York—modeled as an exponential distribution—is positively related to the delay time in Boston. With this motivation, we give an example of our model class derived from the univariate exponential distribution that allows for almost arbitrary positive and negative dependencies with only a mild condition on the parameter matrix—a condition akin to the positive definiteness of the Gaussian covariance matrix. Our Poisson generalization allows for both positive and negative dependencies without any constraints on the parameter values. We also develop parameter estimation methods using node-wise regressions with ℓ1 regularization and likelihood approximation methods using sampling. Finally, we demonstrate our exponential generalization on a synthetic dataset and a real-world dataset of airport delay times. PMID:27563373
A mixing evolution model for bidirectional microblog user networks
NASA Astrophysics Data System (ADS)
Yuan, Wei-Guo; Liu, Yun
2015-08-01
Microblogs have been widely used as a new form of online social networking. Based on user profile data collected from Sina Weibo, we find that the number of bidirectional friends of a microblog user approximately follows a lognormal distribution. We then build two microblog user networks with real bidirectional relationships, both of which have not only small-world and scale-free properties but also some special properties, such as a double power-law degree distribution, disassortativity, and hierarchical and rich-club structure. Moreover, by detecting the community structures of the two real networks, we find that both of their community-size distributions follow an exponential distribution. Based on this empirical analysis, we present a novel evolving network model with mixed connection rules, including lognormal-fitness preferential attachment and random attachment, nearest-neighbor interconnection within the same community, and global random association between different communities. The simulation results show that our model is consistent with the real network in many topological features.
Zeng, Qiang; Shi, Feina; Zhang, Jianmin; Ling, Chenhan; Dong, Fei; Jiang, Biao
2018-01-01
Purpose: To present a new modified tri-exponential model for diffusion-weighted imaging (DWI) to detect the strictly diffusion-limited compartment, and to compare it with the conventional bi- and tri-exponential models. Methods: Multi-b-value DWI with 17 b-values up to 8,000 s/mm2 was performed on six volunteers. The corrected Akaike information criterion (AICc) and squared prediction errors (SPE) were calculated to compare the three models. Results: The mean f0 values ranged from 11.9–18.7% in white matter ROIs and 1.2–2.7% in gray matter ROIs. In all white matter ROIs, the AICcs of the modified tri-exponential model were the lowest (p < 0.05 for five ROIs), indicating that the new model has the best fit among the three; the SPEs of the bi-exponential model were the highest (p < 0.05), suggesting that the bi-exponential model is unable to predict the signal intensity at ultra-high b-values. The mean ADCvery-slow values were extremely low in white matter (1–7 × 10−6 mm2/s), but not in gray matter (251–445 × 10−6 mm2/s), indicating that the conventional tri-exponential model fails to represent a special compartment. Conclusions: The strictly diffusion-limited compartment may be an important component in white matter. The new model fits better than the other two models and may provide additional information. PMID:29535599
Computerized Method for the Generation of Molecular Transmittance Functions in the Infrared Region.
1979-12-31
The exponent of the double exponential function was 'bumpy' for some cases. Since the nature of the transmittance does not predict this behavior, the transmittance tau is recomputed for the original data using the piecewise-analytical transmission function, and standard deviations between the actual and recomputed tau values are reported.
ERIC Educational Resources Information Center
Smith, Gary R.
This publication contains two miniunits to help students in grades 7-12 build skills for the future. The exercises can also be adapted for use in grades 4-6. Each of the miniunits contains several exercises to build specific skills. Miniunit One, "The Arithmetic of Growth," deals with two concepts--exponential growth and doubling time. These two…
Global exponential stability of octonion-valued neural networks with leakage delay and mixed delays.
Popa, Călin-Adrian
2018-06-08
This paper discusses octonion-valued neural networks (OVNNs) with leakage delay, time-varying delays, and distributed delays, for which the states, weights, and activation functions belong to the normed division algebra of octonions. The octonion algebra is a nonassociative and noncommutative generalization of the complex and quaternion algebras, but does not belong to the category of Clifford algebras, which are associative. In order to avoid the nonassociativity of the octonion algebra and also the noncommutativity of the quaternion algebra, the Cayley-Dickson construction is used to decompose the OVNNs into 4 complex-valued systems. By using appropriate Lyapunov-Krasovskii functionals, with double and triple integral terms, the free weighting matrix method, and simple and double integral Jensen inequalities, delay-dependent criteria are established for the exponential stability of the considered OVNNs. The criteria are given in terms of complex-valued linear matrix inequalities, for two types of Lipschitz conditions which are assumed to be satisfied by the octonion-valued activation functions. Finally, two numerical examples illustrate the feasibility, effectiveness, and correctness of the theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.
Power law versus exponential state transition dynamics: application to sleep-wake architecture.
Chu-Shore, Jesse; Westover, M Brandon; Bianchi, Matt T
2010-12-02
Despite the common experience that interrupted sleep has a negative impact on waking function, the features of human sleep-wake architecture that best distinguish sleep continuity versus fragmentation remain elusive. In this regard, there is growing interest in characterizing sleep architecture using models of the temporal dynamics of sleep-wake stage transitions. In humans and other mammals, the state transitions defining sleep and wake bout durations have been described with exponential and power law models, respectively. However, sleep-wake stage distributions are often complex, and distinguishing between exponential and power law processes is not always straightforward. Although mono-exponential distributions are distinct from power law distributions, multi-exponential distributions may in fact resemble power laws by appearing linear on a log-log plot. To characterize the parameters that may allow these distributions to mimic one another, we systematically fitted multi-exponential-generated distributions with a power law model, and power law-generated distributions with multi-exponential models. We used the Kolmogorov-Smirnov method to investigate goodness of fit for the "incorrect" model over a range of parameters. The "zone of mimicry" of parameters that increased the risk of mistakenly accepting power law fitting resembled empiric time constants obtained in human sleep and wake bout distributions. Recognizing this uncertainty in model distinction impacts interpretation of transition dynamics (self-organizing versus probabilistic), and the generation of predictive models for clinical classification of normal and pathological sleep architecture.
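The "zone of mimicry" described above can be demonstrated directly: bout durations drawn from a mixture of a few exponentials with well-separated time constants produce a survival curve that looks nearly linear on a log-log plot, the signature usually read as a power law. A toy sketch (the mixture weights and scales are illustrative, not empirical sleep data):

```python
import bisect, math, random

random.seed(42)

# wake-bout-like durations from a three-component exponential mixture
scales, weights = [0.5, 5.0, 50.0], [0.6, 0.3, 0.1]
n = 20000
durations = sorted(random.expovariate(1.0 / s)
                   for s in random.choices(scales, weights=weights, k=n))

def survival(x):
    # empirical fraction of bouts longer than x (durations is sorted)
    return 1.0 - bisect.bisect_left(durations, x) / n

# sample log S(x) against log x over a mid range; an approximately straight
# line here is exactly what lets the mixture mimic a power law
pts = []
for i in range(5, 25):
    x = 0.1 * 10 ** (i / 10)
    s = survival(x)
    if s > 0:
        pts.append((math.log(x), math.log(s)))

mx = sum(px for px, _ in pts) / len(pts)
my = sum(py for _, py in pts) / len(pts)
slope = (sum((px - mx) * (py - my) for px, py in pts)
         / sum((px - mx) ** 2 for px, _ in pts))
```

The fitted log-log slope is a moderate negative number, as a power law would give, even though the generating process is purely multi-exponential; this is why the abstract cautions against accepting a power-law fit from linearity on a log-log plot alone.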
Recurrence intervals for the closure of the Dutch Maeslant surge barrier
NASA Astrophysics Data System (ADS)
van den Brink, Henk W.; de Goederen, Sacha
2017-09-01
The Dutch Maeslant Barrier, a movable surge barrier in the mouth of the river Rhine, closes when there is a surge in the North Sea and the water level in the river at Rotterdam exceeds 3 m above mean sea level. An important aspect of the failure probability is that the barrier might get damaged during a closure and that, within the time needed for repair, a second critical storm surge may occur. With an estimated closure frequency of once in 10 years, the question arises of how often the barrier has to be closed twice within one month. Instead of tackling this problem by applying statistical models to the (short) observational series, we solve it by combining the surge model WAQUA/DCSMv5 with the output of all seasonal forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF) in the period 1981-2015, which together culminate in a pseudo-observational series of more than 6000 years. We show that the Poisson process model leads to wrong results, as it neglects the temporal correlations that are present on daily, weekly and monthly timescales. By counting the number of double events over a threshold of 2.5 m and assuming that the number of events is exponentially related to the threshold, it is found that two closures occur on average once in 150 years within a month, and once in 330 years within a week. The large uncertainty in these recurrence intervals, more than a factor of two, is caused by the sensitivity of the results to the Gumbel parameters of the observed record, which are used for bias correction. Sea level rise has a significant impact on the recurrence time for both single and double closures. The recurrence time of single closures halves with every 18 cm of mean sea level rise (assuming that other influences remain unchanged), and that of double closures halves with every 10 cm of rise. This implies a 3-14 times higher probability of a double closure for a 15-40 cm sea level rise in 2050 (according to the KNMI climate scenarios).
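A back-of-envelope Poisson baseline, the very model the study shows to be inadequate, makes the role of temporal correlation concrete. For an uncorrelated process with closure rate λ per year, each closure is followed by another within a window w (years) with probability ≈ λw, so double closures occur at rate ≈ λ²w. A sketch of this arithmetic (the rate is the abstract's once-in-10-years estimate):

```python
# Poisson (uncorrelated) baseline for double closures:
#   rate of double events  ~ lam**2 * w   per year
#   recurrence interval    ~ 1 / (lam**2 * w)
lam = 0.1                  # about one closure per 10 years
month = 1.0 / 12.0         # window widths in years
week = 7.0 / 365.25

recurrence_month = 1.0 / (lam ** 2 * month)   # ~1200 years
recurrence_week = 1.0 / (lam ** 2 * week)     # ~5200 years
```

The uncorrelated baseline predicts a monthly-window double closure only about once in 1200 years, whereas the pseudo-observational series gives once in 150 years: the clustering of North Sea storms makes double closures roughly an order of magnitude more frequent than a Poisson model suggests.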
Hosseinzadeh, M; Ghoreishi, M; Narooei, K
2016-06-01
In this study, the hyperelastic models of demineralized and deproteinized bovine cortical femur bone were investigated and appropriate models were developed. Using uniaxial compression test data, the strain energy versus stretch was calculated and appropriate hyperelastic strain energy functions were fitted to the data in order to calculate the material parameters. To obtain the mechanical behavior under other loading conditions, the hyperelastic strain energy equations were investigated for pure shear and equi-biaxial tension loadings. The results showed that the Mooney-Rivlin and Ogden models cannot accurately predict the mechanical response of demineralized and deproteinized bovine cortical femur bone, while the general exponential-exponential and general exponential-power law models show good agreement with the experimental results. To investigate the sensitivity of the hyperelastic models, a variation of 10% in the material parameters was applied, and the results indicated acceptable stability for the general exponential-exponential and general exponential-power law models. Finally, the uniaxial tension and compression of cortical femur bone were studied using the finite element method in a VUMAT user subroutine of the ABAQUS software, and the computed stress-stretch curves showed good agreement with the experimental data. Copyright © 2016 Elsevier Ltd. All rights reserved.
A Stochastic Super-Exponential Growth Model for Population Dynamics
NASA Astrophysics Data System (ADS)
Avila, P.; Rekker, A.
2010-11-01
A super-exponential growth model with environmental noise is studied analytically. A super-exponential growth rate is a property of dynamical systems exhibiting endogenous nonlinear positive feedback, i.e., of self-reinforcing systems. Environmental noise acts on the growth rate multiplicatively and is assumed to be Gaussian white noise in the Stratonovich interpretation. An analysis of the stochastic super-exponential growth model, with derivations of exact analytical formulae for the conditional probability density and the mean value of the population abundance, is presented. Interpretations and various applications of the results are discussed.
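The deterministic skeleton of such a self-reinforcing system, dN/dt = r·N^(1+p), already shows the super-exponential character: the closed-form solution N(t) = N0·(1 − p·r·N0^p·t)^(−1/p) blows up at the finite time t* = 1/(p·r·N0^p). A sketch checking an Euler integration against this closed form (the noise term of the stochastic model is omitted here; the parameter values are illustrative):

```python
r, p, n0 = 0.5, 1.0, 1.0   # growth rate, feedback exponent, initial abundance

def closed_form(t):
    # N(t) = N0 * (1 - p*r*N0**p * t)**(-1/p), valid for t < t* = 1/(p*r*N0**p)
    return n0 * (1.0 - p * r * n0 ** p * t) ** (-1.0 / p)

# explicit Euler integration of dN/dt = r * N**(1+p) up to t = 1 (here t* = 2)
dt, n = 1e-5, n0
for _ in range(100000):
    n += dt * r * n ** (1.0 + p)
```

With these parameters the abundance doubles by t = 1 and diverges at t = 2, in contrast to plain exponential growth, which never reaches a singularity; the stochastic analysis in the paper layers multiplicative Stratonovich noise on top of this skeleton.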
Force Measurements of Single and Double Barrier DBD Plasma Actuators in Quiescent Air
NASA Technical Reports Server (NTRS)
Hoskinson, Alan R.; Hershkowitz, Noah; Ashpis, David E.
2008-01-01
We have performed measurements of the force induced by both single (one electrode insulated) and double (both electrodes insulated) dielectric barrier discharge plasma actuators in quiescent air. We have shown that, for single barrier actuators, as the electrode diameter decreases below the values previously studied, the induced force increases exponentially rather than linearly. This behavior has been experimentally verified using two different measurement techniques: stagnation probe measurements of the induced flow velocity and direct measurement of the force using an electronic balance. In addition, we have shown that the induced force is independent of the material used for the exposed electrode. The same techniques have shown that the induced force of a double barrier actuator increases with decreasing narrow-electrode diameter.
Failure prediction using machine learning and time series in optical network.
Wang, Zhilong; Zhang, Min; Wang, Danshi; Song, Chuang; Liu, Min; Li, Jin; Lou, Liqi; Liu, Zhuo
2017-08-07
In this paper, we propose a performance monitoring and failure prediction method in optical networks based on machine learning. The primary algorithms of this method are the support vector machine (SVM) and double exponential smoothing (DES). With a focus on risk-aware models in optical networks, the proposed protection plan primarily investigates how to predict the risk of an equipment failure. To the best of our knowledge, this important problem has not yet been fully considered. Experimental results showed that the average prediction accuracy of our method was 95% when predicting the optical equipment failure state. This finding means that our method can forecast an equipment failure risk with high accuracy. Therefore, our proposed DES-SVM method can effectively improve traditional risk-aware models to protect services from possible failures and enhance the optical network stability.
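The abstract does not spell out which DES variant is used; a common choice is Holt's two-parameter (double) exponential smoothing, sketched below with illustrative smoothing constants.

```python
def holt_des(series, alpha, beta, horizon):
    """Holt's two-parameter (double) exponential smoothing.

    Returns the h-step-ahead forecasts made from the end of `series`.
    The initialization and smoothing constants are illustrative choices,
    not necessarily those used in the paper.
    """
    level = series[1]
    trend = series[1] - series[0]
    for x in series[2:]:
        prev = level
        level = alpha * x + (1.0 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1.0 - beta) * trend
    return [level + h * trend for h in range(1, horizon + 1)]

# on a perfectly linear series the forecasts continue the line exactly
obs = [3.0 + 2.0 * t for t in range(10)]   # 3, 5, 7, ..., 21
forecasts = holt_des(obs, alpha=0.5, beta=0.3, horizon=3)
```

This trend-tracking property is what makes DES a natural feature extractor for slowly drifting equipment-health indicators before an SVM classifier is applied.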
Cell Size Regulation in Bacteria
NASA Astrophysics Data System (ADS)
Amir, Ariel
2014-05-01
Various bacteria such as the canonical gram negative Escherichia coli or the well-studied gram positive Bacillus subtilis divide symmetrically after they approximately double their volume. Their size at division is not constant, but is typically distributed over a narrow range. Here, we propose an analytically tractable model for cell size control, and calculate the cell size and interdivision time distributions, as well as the correlations between these variables. We suggest ways of extracting the model parameters from experimental data, and show that existing data for E. coli supports partial size control, and a particular explanation: a cell attempts to add a constant volume from the time of initiation of DNA replication to the next initiation event. This hypothesis accounts for the experimentally observed correlations between mother and daughter cells as well as the exponential dependence of size on growth rate.
Fernández, José M; Nieto, M Aurora; López-de-Sá, Esther G; Gascó, Gabriel; Méndez, Ana; Plaza, César
2014-06-01
Semi-arid soils cover a significant area of Earth's land surface and typically contain large amounts of inorganic C. Determining the effects of biochar additions on CO2 emissions from semi-arid soils is therefore essential for evaluating the potential of biochar as a climate change mitigation strategy. Here, we measured the CO2 that evolved from semi-arid calcareous soils amended with biochar at rates of 0 and 20 t ha(-1) in a full factorial combination with three different fertilizers (mineral fertilizer, municipal solid waste compost, and sewage sludge) applied at four rates (equivalent to 0, 75, 150, and 225 kg potentially available N ha(-1)) during 182 days of aerobic incubation. A double exponential model, which describes cumulative CO2 emissions from two active soil C compartments with different turnover rates (one relatively stable and the other more labile), was found to fit all the experimental datasets very well. In general, the organic fertilizers increased the size and decomposition rate of the stable and labile soil C pools. In contrast, biochar addition had no effect on any of the double exponential model parameters and did not interact with the effects ascribed to the type and rate of fertilizer. After 182 days of incubation, soil organic and microbial biomass C contents tended to increase with increasing application rates of organic fertilizer, especially of compost, whereas increasing the rate of mineral fertilizer tended to suppress microbial biomass. Biochar was found to increase both organic and inorganic C contents in soil and not to interact with the effects of type and rate of fertilizer on C fractions. As a whole, our results suggest that the use of biochar as an enhancer of semi-arid soils, either alone or combined with mineral and organic fertilizers, is unlikely to increase abiotic and biotic soil CO2 emissions. Copyright © 2014 Elsevier B.V. All rights reserved.
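The two-pool double exponential form referred to above can be written out as follows; the pool sizes and rate constants are illustrative, not the fitted values from the study.

```python
import math

def double_exp_co2(t, c_labile, k_labile, c_stable, k_stable):
    """Cumulative CO2 evolved from two soil C pools with different turnover.

    A common two-pool form (parameter names are illustrative):
        C(t) = C_l * (1 - exp(-k_l * t)) + C_s * (1 - exp(-k_s * t))
    i.e. a fast (labile) and a slow (stable) first-order compartment.
    """
    return (c_labile * (1.0 - math.exp(-k_labile * t))
            + c_stable * (1.0 - math.exp(-k_stable * t)))

# illustrative parameters: small fast pool, large slow pool, 182-day incubation
curve = [double_exp_co2(t, 200.0, 0.15, 1500.0, 0.002) for t in range(0, 183)]
```

In practice the four parameters would be estimated by nonlinear least squares against the measured cumulative CO2 series for each treatment.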
Rectal temperature-based death time estimation in infants.
Igari, Yui; Hosokai, Yoshiyuki; Funayama, Masato
2016-03-01
In determining the time of death in infants based on rectal temperature, the same methods used in adults are generally applied. However, whether the methods for adults are suitable for infants is unclear. In this study, we examined the following three methods in 20 infant death cases: computer simulation of rectal temperature based on the infinite cylinder model (Ohno's method), computer-based double exponential approximation based on Marshall and Hoare's double exponential model with Henssge's parameter determination (Henssge's method), and computer-based collinear approximation based on extrapolation of the rectal temperature curve (collinear approximation). The interval between the last time the infant was seen alive and the time that he/she was found dead was defined as the death time interval and compared with the estimated time of death. With Ohno's method, 7 cases were within the death time interval, and the average deviation in the other 12 cases was approximately 80 min. The results of both Henssge's method and collinear approximation were apparently inferior to those of Ohno's method. The corrective factor is set within the range of 0.7-1.3 in Henssge's method, and a modified program was newly developed to make it possible to change the corrective factors. Modification A, in which the upper limit of the corrective factor range was set to the maximum value for each body weight, produced the best results: 8 cases were within the death time interval, and the average deviation in the other 12 cases was approximately 80 min. There was a possibility that the influence of thermal insulation on the actual infants was stronger than that previously reported by Henssge. We conclude that Ohno's method and Modification A are useful for death time estimation in infants. However, it is important to accept the estimated time of death with a certain latitude, considering other circumstances. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
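Henssge's parameterization of the Marshall-Hoare model is commonly quoted, for ambient temperatures up to about 23 °C, as Q(t) = 1.25 e^{Bt} − 0.25 e^{5Bt} with Q the normalized temperature ratio. A sketch with a bisection solve for the post-mortem interval is shown below; the corrective-factor logic of Modification A is not reproduced, and all numbers are illustrative.

```python
import math

T0 = 37.2  # assumed initial rectal temperature (deg C), Henssge's standard value

def henssge_q(t, mass_kg, corrective=1.0):
    """Normalized ratio Q(t) = (Tr - Ta)/(T0 - Ta) of the Marshall-Hoare
    double exponential with Henssge's parameters (ambient <= 23 deg C form).
    t is in hours; `corrective` is Henssge's body-condition corrective factor."""
    b = -1.2815 * (corrective * mass_kg) ** -0.625 + 0.0284
    return 1.25 * math.exp(b * t) - 0.25 * math.exp(5.0 * b * t)

def estimate_pmi(t_rectal, t_ambient, mass_kg, corrective=1.0):
    """Solve Q(t) = (Tr - Ta)/(T0 - Ta) for t (hours) by bisection.
    Q(t) decreases monotonically from 1 toward 0."""
    q_target = (t_rectal - t_ambient) / (T0 - t_ambient)
    lo, hi = 0.0, 200.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if henssge_q(mid, mass_kg, corrective) > q_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# round trip for an illustrative 5 kg infant, 20 deg C ambient, 5 h post mortem
tr = 20.0 + henssge_q(5.0, 5.0) * (T0 - 20.0)
pmi = estimate_pmi(tr, 20.0, 5.0)
```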
Xu, Junzhong; Li, Ke; Smith, R. Adam; Waterton, John C.; Zhao, Ping; Ding, Zhaohua; Does, Mark D.; Manning, H. Charles; Gore, John C.
2016-01-01
Background Diffusion-weighted MRI (DWI) signal attenuation is often not mono-exponential (i.e., non-Gaussian diffusion) at stronger diffusion weighting. Several non-Gaussian diffusion models have been developed and may provide new information or higher sensitivity compared with the conventional apparent diffusion coefficient (ADC) method. However, the relative merits of these models for detecting tumor therapeutic response are not fully clear. Methods Conventional ADC and three widely used non-Gaussian models (bi-exponential, stretched exponential, and statistical model) were implemented and compared for assessing SW620 human colon cancer xenografts responding to barasertib, an agent known to induce apoptosis via polyploidy. The Bayesian Information Criterion (BIC) was used for model selection among the three non-Gaussian models. Results Tumor volume, histology, conventional ADC, and all three non-Gaussian DWI models showed significant differences between control and treatment groups after four days of treatment. However, only the non-Gaussian models detected significant changes after two days of treatment. For any treatment or control group, over 65.7% of tumor voxels indicated that the bi-exponential model is strongly or very strongly preferred. Conclusion Non-Gaussian DWI model-derived biomarkers are capable of detecting earlier chemotherapeutic response of tumors compared with conventional ADC and tumor volume. The bi-exponential model provides better fitting than the statistical and stretched exponential models for the tumor and treatment models used in the current work. PMID:27919785
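The competing signal models and a least-squares BIC comparison can be sketched as follows; the b-values and diffusivities are illustrative, not the study's fitted values.

```python
import math

def mono_exp(b, s0, adc):
    """Conventional mono-exponential (ADC) signal model."""
    return s0 * math.exp(-b * adc)

def bi_exp(b, s0, f, d_fast, d_slow):
    """Bi-exponential model: fast fraction f, slow fraction 1-f."""
    return s0 * (f * math.exp(-b * d_fast) + (1.0 - f) * math.exp(-b * d_slow))

def stretched_exp(b, s0, ddc, alpha):
    """Stretched exponential model with heterogeneity index alpha."""
    return s0 * math.exp(-((b * ddc) ** alpha))

def bic(rss, n, k):
    """Bayesian Information Criterion for a least-squares fit with k parameters."""
    return n * math.log(rss / n) + k * math.log(n)

# synthetic bi-exponential signal (illustrative parameters)
bvals = [0, 250, 500, 1000, 1500, 2000, 2500]   # s/mm^2
signal = [bi_exp(b, 1.0, 0.3, 2.5e-3, 0.3e-3) for b in bvals]

# best mono-exponential fit via log-linear least squares
ys = [math.log(s) for s in signal]
n = len(bvals)
xbar = sum(bvals) / n
ybar = sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(bvals, ys))
         / sum((x - xbar) ** 2 for x in bvals))
adc_fit = -slope
s0_fit = math.exp(ybar - slope * xbar)
rss_mono = sum((mono_exp(b, s0_fit, adc_fit) - s) ** 2
               for b, s in zip(bvals, signal))
```

Because the data are truly bi-exponential, the mono-exponential fit leaves a clear residual, which is the kind of misfit the BIC comparison penalizes.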
1996-09-16
approaches are: • Adaptive filtering • Single exponential smoothing (Brown, 1963) • The Box-Jenkins methodology (ARIMA modeling) (Box and Jenkins, 1976) • Linear exponential smoothing: Holt's two-parameter approach (Holt et al., 1960) • Winters' three-parameter method (Winters, 1960). However, there are two crucial disadvantages: the most important point in ARIMA modeling is model identification. As shown in
Small-Scale, Local Area, and Transitional Millimeter Wave Propagation for 5G Communications
NASA Astrophysics Data System (ADS)
Rappaport, Theodore S.; MacCartney, George R.; Sun, Shu; Yan, Hangsong; Deng, Sijia
2017-12-01
This paper studies radio propagation mechanisms that impact handoffs, air interface design, beam steering, and MIMO for 5G mobile communication systems. Knife edge diffraction (KED) and a creeping wave linear model are shown to predict diffraction loss around typical building objects from 10 to 26 GHz, and human blockage measurements at 73 GHz are shown to fit a double knife-edge diffraction (DKED) model which incorporates antenna gains. Small-scale spatial fading of millimeter wave received signal voltage amplitude is generally Ricean-distributed for both omnidirectional and directional receive antenna patterns under both line-of-sight (LOS) and non-line-of-sight (NLOS) conditions in most cases, although the log-normal distribution fits measured data better for the omnidirectional receive antenna pattern in the NLOS environment. Small-scale spatial autocorrelations of received voltage amplitudes are shown to fit sinusoidal exponential and exponential functions for LOS and NLOS environments, respectively, with small decorrelation distances of 0.27 cm to 13.6 cm (smaller than the size of a handset) that are favorable for spatial multiplexing. Local area measurements using cluster and route scenarios show how the received signal changes as the mobile moves and transitions from LOS to NLOS locations, with reasonably stationary signal levels within clusters. Wideband mmWave power levels are shown to fade from 0.4 dB/ms to 40 dB/s, depending on travel speed and surroundings.
Exercise modality effect on oxygen uptake off-transient kinetics at maximal oxygen uptake intensity.
Sousa, Ana; Rodríguez, Ferran A; Machado, Leandro; Vilas-Boas, J Paulo; Fernandes, Ricardo J
2015-06-01
What is the central question of this study? Do the mechanical differences between swimming, rowing, running and cycling have a potential effect on the oxygen uptake (V̇O2) off-kinetics after an exercise sustained until exhaustion at 100% of maximal oxygen uptake (V̇O2max) intensity? What is the main finding and its importance? The mechanical differences between exercise modes had a potential effect and contributed to distinct amplitude of the fast component (higher in running compared with cycling) and time constant (higher in swimming compared with rowing and cycling) in the V̇O2 off-kinetic patterns at 100% of V̇O2max intensity. This suggests that swimmers, unlike rowers and cyclists, would benefit more from a longer duration of training intervals after each set of exercise performed at V̇O2max intensity. The kinetics of oxygen uptake (V̇O2) during recovery (off-transient kinetics) for different exercise modes is largely unexplored, hampering the prescription of training and recovery to enhance performance. The purpose of this study was to compare the V̇O2 off-transient kinetics response between swimmers, rowers, runners and cyclists during their specific mode of exercise at 100% of maximal oxygen uptake (V̇O2max) intensity and to examine the on-off symmetry. Groups of swimmers, rowers, runners and cyclists (n = 8 per group) performed (i) an incremental exercise protocol to assess the velocity or power associated with V̇O2max (vV̇O2max or wV̇O2max, respectively) and (ii) a square-wave exercise transition from rest to vV̇O2max or wV̇O2max sustained until volitional exhaustion. Pulmonary exchange parameters were measured using a telemetric portable gas analyser (K4b(2); Cosmed, Rome, Italy), and the on- and off-transient kinetics were analysed through a double-exponential approach. For all exercise modes, both transient periods were symmetrical in shape once they had both been adequately fitted by a double-exponential model.
However, differences were found in the off-kinetic parameters between exercise modes; the amplitude of the fast component of the V̇O2 off-response was higher in running compared with cycling (48 ± 5 and 36 ± 7 ml kg(-1) min(-1) , respectively; P < 0.001), and the time constant of the same phase was higher in swimming compared with rowing and cycling (63 ± 5, 56 ± 5 and 55 ± 3 s, respectively; P < 0.001). Although both phases were well described by a double-exponential model, the differences between exercise modes had a potential effect and contributed to distinct V̇O2 off-transient kinetic patterns at 100% of V̇O2max intensity. © 2015 The Authors. Experimental Physiology © 2015 The Physiological Society.
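A double-exponential off-transient of the kind fitted here can be sketched as below. Time delays are omitted for brevity, and the amplitudes and time constants are illustrative values loosely echoing the reported running figures, not the measured fits.

```python
import math

def vo2_off(t, baseline, a_fast, tau_fast, a_slow, tau_slow):
    """Double-exponential V'O2 off-transient (recovery) response.
    Units: t in s, amplitudes in ml/kg/min; delays omitted for brevity."""
    return (baseline + a_fast * math.exp(-t / tau_fast)
            + a_slow * math.exp(-t / tau_slow))

# illustrative: fast amplitude 48 ml/kg/min with tau ~60 s, small slow component
vals = [vo2_off(t, 5.0, 48.0, 60.0, 6.0, 300.0) for t in range(0, 601, 30)]

# time for the above-baseline excess to halve (a simple summary of recovery speed)
half_time = next(t for t in range(0, 2000)
                 if vo2_off(t, 5.0, 48.0, 60.0, 6.0, 300.0) - 5.0 <= 0.5 * (48.0 + 6.0))
```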
Srinivasan, Prakash; Sarmah, Ajit K; Rohan, Maheswaran
2014-08-01
Single first-order (SFO) kinetic model is often used to derive the dissipation endpoints of an organic chemical in soil. This model is used due to its simplicity and requirement by regulatory agencies. However, using the SFO model for all types of decay pattern could lead to under- or overestimation of dissipation endpoints when the deviation from first-order is significant. In this study the performance of three biphasic kinetic models - bi-exponential decay (BEXP), first-order double exponential decay (FODED), and first-order two-compartment (FOTC) models was evaluated using dissipation datasets of sulfamethoxazole (SMO) antibiotic in three different soils under varying concentration, depth, temperature, and sterile conditions. Corresponding 50% (DT50) and 90% (DT90) dissipation times for the antibiotics were numerically obtained and compared against those obtained using the SFO model. The fit of each model to the measured values was evaluated based on an array of statistical measures such as coefficient of determination (R(2)adj), root mean square error (RMSE), chi-square (χ(2)) test at 1% significance, Bayesian Information Criteria (BIC) and % model error. Box-whisker residual plots were also used to compare the performance of each model to the measured datasets. The antibiotic dissipation was successfully predicted by all four models. However, the nonlinear biphasic models improved the goodness-of-fit parameters for all datasets. Deviations from datasets were also often less evident with the biphasic models. The fits of FOTC and FODED models for SMO dissipation datasets were identical in most cases, and were found to be superior to the BEXP model. Among the biphasic models, the FOTC model was found to be the most suitable for obtaining the endpoints and could provide a mechanistic explanation for SMO dissipation in the soils. Copyright © 2014 Elsevier B.V. All rights reserved.
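Since the biphasic models have no simple closed-form DT50/DT90, the endpoints are obtained numerically, e.g. by bisection on the decay curve; the parameters below are illustrative, not the fitted sulfamethoxazole values.

```python
import math

def sfo(t, c0, k):
    """Single first-order (SFO): C(t) = C0 * exp(-k t); DT50 = ln(2)/k."""
    return c0 * math.exp(-k * t)

def bexp(t, c0, f, k1, k2):
    """Bi-exponential (BEXP): two first-order fractions f and 1-f."""
    return c0 * (f * math.exp(-k1 * t) + (1.0 - f) * math.exp(-k2 * t))

def dt_x(decay, frac_remaining, t_max=10000.0, iters=100):
    """Time at which `decay(t)` falls to frac_remaining * decay(0), by bisection
    (valid for monotonically decreasing decay curves)."""
    target = frac_remaining * decay(0.0)
    lo, hi = 0.0, t_max
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if decay(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# DT50/DT90 for an illustrative biphasic dissipation curve (days)
dt50 = dt_x(lambda t: bexp(t, 100.0, 0.6, 0.2, 0.01), 0.5)
dt90 = dt_x(lambda t: bexp(t, 100.0, 0.6, 0.2, 0.01), 0.1)
# sanity check against the SFO closed form
dt50_sfo = dt_x(lambda t: sfo(t, 100.0, 0.05), 0.5)
```

The same `dt_x` routine works for the FODED and FOTC forms as well, since all of them are monotone decreasing in t.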
Historical Patterns of Change: The Lessons of the 1980s.
ERIC Educational Resources Information Center
Geiger, Roger L.
This paper seeks to assess the current state of academic research in light of long-term trends in the development of science. It presents three perspectives on the growth of scientific research: (1) Derek de Solla Price's (1963) hypothesis that science has exhibited exponential growth, roughly doubling every 15 years since the 17th century; (2)…
Exponential model for option prices: Application to the Brazilian market
NASA Astrophysics Data System (ADS)
Ramos, Antônio M. T.; Carvalho, J. A.; Vasconcelos, G. L.
2016-03-01
In this paper we report an empirical analysis of the Ibovespa index of the São Paulo Stock Exchange and its respective option contracts. We compare the empirical data on the Ibovespa options with two option pricing models, namely the standard Black-Scholes model and an empirical model that assumes that the returns are exponentially distributed. It is found that at times near the option expiration date the exponential model performs better than the Black-Scholes model, in the sense that it fits the empirical data better than does the latter model.
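For illustration, the Black-Scholes benchmark can be compared against a Monte Carlo price computed under exponentially (Laplace) distributed log-returns. This is a stand-in for the idea of the paper's exponential model, not its actual pricing formula, and all parameters are illustrative.

```python
import math
import random

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(s, k, r, sigma, t):
    """Standard Black-Scholes European call price."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma * sigma) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

def laplace_mc_call(s, k, r, scale, t, n, rng):
    """Monte Carlo call price when the log-return over horizon t is Laplace
    (two-sided exponential) with the given scale -- an illustrative stand-in
    for exponentially distributed returns, not the paper's pricing formula."""
    disc = math.exp(-r * t)
    total = 0.0
    for _ in range(n):
        u = rng.random() - 0.5
        sgn = 1.0 if u >= 0.0 else -1.0
        ret = -scale * sgn * math.log(1.0 - 2.0 * abs(u))  # Laplace sample
        total += max(s * math.exp(ret) - k, 0.0)
    return disc * total / n

bs_price = black_scholes_call(100.0, 100.0, 0.0, 0.2, 1.0)
# match the Laplace scale to a 20% return standard deviation: b = sigma/sqrt(2)
mc_price = laplace_mc_call(100.0, 100.0, 0.0, 0.2 / math.sqrt(2.0),
                           1.0, 20000, random.Random(7))
```

The fatter tails of the Laplace distribution push the at-the-money price slightly above the Gaussian benchmark, which is the qualitative effect the empirical comparison probes.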
Possible stretched exponential parametrization for humidity absorption in polymers.
Hacinliyan, A; Skarlatos, Y; Sahin, G; Atak, K; Aybar, O O
2009-04-01
Polymer thin films have irregular transient current characteristics under constant voltage. In hydrophilic and hydrophobic polymers, the irregularity is also known to depend on the humidity absorbed by the polymer sample. Different stretched exponential models are studied and it is shown that the absorption of humidity as a function of time can be adequately modelled by a class of these stretched exponential absorption models.
Time series forecasting of future claims amount of SOCSO's employment injury scheme (EIS)
NASA Astrophysics Data System (ADS)
Zulkifli, Faiz; Ismail, Isma Liana; Chek, Mohd Zaki Awang; Jamal, Nur Faezah; Ridzwan, Ahmad Nur Azam Ahmad; Jelas, Imran Md; Noor, Syamsul Ikram Mohd; Ahmad, Abu Bakar
2012-09-01
The Employment Injury Scheme (EIS) provides protection to employees who are injured in accidents while working, while commuting between home and the workplace, during an authorized recess, or while travelling in connection with their work. The main purpose of this study is to forecast the claims amount of the EIS for the years 2011 to 2015 using appropriate models. These models were tested on actual EIS data from 1972 to 2010. Three different forecasting models were chosen for comparison: the Naïve with Trend Model, the Average Percent Change Model and the Double Exponential Smoothing Model. The best model is selected based on the smallest error measures, using the Mean Squared Error (MSE) and the Mean Absolute Percentage Error (MAPE). From the results, the model that best fits the EIS forecast is the Average Percent Change Model. Furthermore, the results also show that the claims amount of the EIS for the years 2011 to 2015 continues to trend upwards from 2010.
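The model-selection step by smallest error measure can be sketched as follows; the claims figures and candidate forecasts below are invented for illustration, not SOCSO data.

```python
def mse(actual, predicted):
    """Mean squared error."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def mape(actual, predicted):
    """Mean absolute percentage error (actual values must be nonzero)."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

def pick_best(actual, candidates):
    """Name of the candidate forecast series with the smallest MSE."""
    return min(candidates, key=lambda name: mse(actual, candidates[name]))

# illustrative hold-out claims series and two competing model outputs
claims = [100.0, 110.0, 125.0, 140.0]
models = {
    "naive_trend": [98.0, 112.0, 120.0, 150.0],
    "avg_pct_change": [101.0, 109.0, 126.0, 141.0],
}
best = pick_best(claims, models)
```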
An organizational metamodel for hospital emergency departments.
Kaptan, Kubilay
2014-10-01
I introduce an organizational model describing the response of the hospital emergency department. The hybrid simulation/analytical model (called a "metamodel") can estimate a hospital's capacity and dynamic response in real time and incorporate the influence of damage to structural and nonstructural components on the organizational ones. The waiting time is the main parameter of response and is used to evaluate the disaster resilience of health care facilities. Waiting time behavior is described by using a double exponential function and its parameters are calibrated based on simulated data. The metamodel covers a large range of hospital configurations and takes into account hospital resources in terms of staff and infrastructures, operational efficiency, and the possible existence of an emergency plan; maximum capacity; and behavior both in saturated and overcapacitated conditions. The sensitivity of the model to different arrival rates, hospital configurations, and capacities and the technical and organizational policies applied during and before a disaster were investigated. This model becomes an important tool in the decision process either for the engineering profession or for policy makers.
Universality in stochastic exponential growth.
Iyer-Biswas, Srividya; Crooks, Gavin E; Scherer, Norbert F; Dinner, Aaron R
2014-07-11
Recent imaging data for single bacterial cells reveal that their mean sizes grow exponentially in time and that their size distributions collapse to a single curve when rescaled by their means. An analogous result holds for the division-time distributions. A model is needed to delineate the minimal requirements for these scaling behaviors. We formulate a microscopic theory of stochastic exponential growth as a Master Equation that accounts for these observations, in contrast to existing quantitative models of stochastic exponential growth (e.g., the Black-Scholes equation or geometric Brownian motion). Our model, the stochastic Hinshelwood cycle (SHC), is an autocatalytic reaction cycle in which each molecular species catalyzes the production of the next. By finding exact analytical solutions to the SHC and the corresponding first passage time problem, we uncover universal signatures of fluctuations in exponential growth and division. The model makes minimal assumptions, and we describe how more complex reaction networks can reduce to such a cycle. We thus expect similar scalings to be discovered in stochastic processes resulting in exponential growth that appear in diverse contexts such as cosmology, finance, technology, and population growth.
Multiple relaxations of the cluster surface diffusion in a homoepitaxial SrTiO3 layer
NASA Astrophysics Data System (ADS)
Woo, Chang-Su; Chu, Kanghyun; Song, Jong-Hyun; Yang, Chan-Ho
2018-03-01
We examine the surface diffusion process of adatomic clusters on a (001)-oriented SrTiO3 single crystal using reflection high energy electron diffraction (RHEED). We find that the recovery curve of the RHEED intensity acquired after a homoepitaxial half-layer growth can be accurately fit into a double exponential function, indicating the existence of two dominant relaxation mechanisms. The characteristic relaxation times at selected growth temperatures are investigated to determine the diffusion activation barriers of 0.67 eV and 0.91 eV, respectively. The Monte Carlo simulation of the cluster hopping model suggests that the decrease in the number of dimeric and trimeric clusters during surface diffusion is the origin of the observed relaxation phenomena.
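Extracting an activation barrier from relaxation times measured at two growth temperatures follows from the Arrhenius form τ(T) = τ0 exp(Ea/kB T). A sketch with illustrative numbers (the 0.67 eV value echoes the paper; the prefactor and temperatures are assumptions):

```python
import math

KB_EV = 8.617e-5  # Boltzmann constant in eV/K

def activation_energy(tau1, t1, tau2, t2):
    """Arrhenius barrier from relaxation times at two temperatures:
    tau(T) = tau0 * exp(Ea / (kB T))  =>
    Ea = kB * ln(tau1/tau2) / (1/t1 - 1/t2)."""
    return KB_EV * math.log(tau1 / tau2) / (1.0 / t1 - 1.0 / t2)

def tau(t_kelvin, tau0, ea):
    """Arrhenius relaxation time for a given prefactor and barrier."""
    return tau0 * math.exp(ea / (KB_EV * t_kelvin))

# round-trip check with an illustrative 0.67 eV barrier
ea_fit = activation_energy(tau(900.0, 1e-9, 0.67), 900.0,
                           tau(1000.0, 1e-9, 0.67), 1000.0)
```

With more than two temperatures, Ea is instead obtained from the slope of ln(τ) versus 1/T by least squares.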
Early stages of Ostwald ripening
NASA Astrophysics Data System (ADS)
Shneidman, Vitaly A.
2013-07-01
The Becker-Döring (BD) nucleation equation is known to predict a narrow double-exponential front (DEF) in the distribution of growing particles over sizes, which is due to early transient effects. When mass conservation is included, nucleation is eventually exhausted while independent growth is replaced by ripening. Despite the enormous difference in the associated time scales, and the resulting demand on numerics, within the generalized BD model the early DEF is shown to be crucial for the selection of the unique self-similar Lifshitz-Slyozov-Wagner asymptotic regime. Being preserved till the latest stages of growth, the DEF provides a universal part of the initial conditions for the ripening problem, regardless of the mass exchange mechanism between the nucleus and the matrix.
NASA Astrophysics Data System (ADS)
Cincotti, Silvano; Ponta, Linda; Raberto, Marco; Scalas, Enrico
2005-05-01
In this paper, empirical analyses and computational experiments are presented on high-frequency data for a double-auction (book) market. The main objective of the paper is to generalize the order waiting-time process in order to properly model the empirical evidence. The empirical study is performed on the best bid and best ask data of 7 U.S. financial markets, for 30-stock time series. In particular, statistical properties of trading waiting times have been analyzed and the quality of fits is evaluated by suitable statistical tests, i.e., by comparing empirical distributions with theoretical models. Starting from the statistical studies on real data, attention has been focused on the reproducibility of such results in an artificial market. The computational experiments have been performed within the Genoa Artificial Stock Market. In the market model, heterogeneous agents trade one risky asset in exchange for cash. Agents have zero intelligence and issue random limit or market orders depending on their budget constraints. The price is cleared by means of a limit order book. The order generation is modelled with a renewal process. Based on empirical trading estimation, the distribution of waiting times between two consecutive orders is modelled by a mixture of exponential processes. Results show that the empirical waiting-time distribution can be considered as a generalization of a Poisson process. Moreover, the renewal process can approximate real data, and its implementation in the artificial stock market can reproduce the trading activity in a realistic way.
A hybrid MD-kMC algorithm for folding proteins in explicit solvent.
Peter, Emanuel Karl; Shea, Joan-Emma
2014-04-14
We present a novel hybrid MD-kMC algorithm that is capable of efficiently folding proteins in explicit solvent. We apply this algorithm to the folding of a small protein, Trp-Cage. Different kMC move sets that capture different possible rate-limiting steps are implemented. The first uses secondary structure formation as the relevant rate event (a combination of dihedral rotations and hydrogen-bond formation and breakage). The second uses tertiary structure formation events through formation of contacts via translational moves. Both methods fold the protein, but via different mechanisms and with different folding kinetics. The first method leads to folding via a structured helical state, with kinetics fit by a single exponential. The second method leads to folding via a collapsed loop, with kinetics poorly fit by single or double exponentials. In both cases, folding times are faster than experimentally reported values. The secondary and tertiary move sets are integrated in a third MD-kMC implementation, which now leads to folding of the protein via both pathways, with single- and double-exponential fits to the rates, and to folding rates in good agreement with experimental values. The competition between secondary and tertiary structure leads to a longer search for the helix-rich intermediate in the case of the first pathway, and to the emergence of a kinetically trapped, long-lived molten-globule collapsed state in the case of the second pathway. The algorithm presented not only captures experimentally observed folding intermediates and kinetics, but yields insights into the relative roles of local and global interactions in determining folding mechanisms and rates.
Characteristics of a Linearly Tapered Slot Antenna (LTSA) Conformed Longitudinally Around a Cylinder
NASA Technical Reports Server (NTRS)
Jordan, Jennifer L.; Ponchak, George E.; Tavassolian, Negar; Tentzeris, Manos M.
2007-01-01
The family of tapered slot antennas (TSAs) is suitable for numerous applications. Their ease of fabrication, wide bandwidth, and high gain make them desirable for military and commercial systems. Fabrication on thin, flexible substrates allows a TSA to be conformed over a given body, such as an aircraft wing or a piece of clothing for wearable networks. Previously, a Double Exponentially Tapered Slot Antenna (DETSA) was conformed around an exponential curvature, which showed that the main beam skews towards the direction of curvature. This paper presents a Linearly Tapered Slot Antenna (LTSA) conformed longitudinally around a cylinder. Measured and simulated radiation patterns and the direction of maximum H co-polarization (Hco) as a function of the cylinder radius are presented.
Phenomenology of stochastic exponential growth
NASA Astrophysics Data System (ADS)
Pirjol, Dan; Jafarpour, Farshid; Iyer-Biswas, Srividya
2017-06-01
Stochastic exponential growth is observed in a variety of contexts, including molecular autocatalysis, nuclear fission, population growth, inflation of the universe, viral social media posts, and financial markets. Yet literature on modeling the phenomenology of these stochastic dynamics has predominantly focused on one model, geometric Brownian motion (GBM), which can be described as the solution of a Langevin equation with linear drift and linear multiplicative noise. Using recent experimental results on stochastic exponential growth of individual bacterial cell sizes, we motivate the need for a more general class of phenomenological models of stochastic exponential growth, which are consistent with the observation that the mean-rescaled distributions are approximately stationary at long times. We show that this behavior is not consistent with GBM, instead it is consistent with power-law multiplicative noise with positive fractional powers. Therefore, we consider this general class of phenomenological models for stochastic exponential growth, provide analytical solutions, and identify the important dimensionless combination of model parameters, which determines the shape of the mean-rescaled distribution. We also provide a prescription for robustly inferring model parameters from experimentally observed stochastic growth trajectories.
Chowell, Gerardo; Viboud, Cécile
2016-10-01
The increasing use of mathematical models for epidemic forecasting has highlighted the importance of designing models that capture the baseline transmission characteristics in order to generate reliable epidemic forecasts. Improved models for epidemic forecasting could be achieved by identifying signature features of epidemic growth, which could inform the design of models of disease spread and reveal important characteristics of the transmission process. In particular, it is often taken for granted that the early growth phase of different growth processes in nature follow early exponential growth dynamics. In the context of infectious disease spread, this assumption is often convenient to describe a transmission process with mass action kinetics using differential equations and generate analytic expressions and estimates of the reproduction number. In this article, we carry out a simulation study to illustrate the impact of incorrectly assuming an exponential-growth model to characterize the early phase (e.g., 3-5 disease generation intervals) of an infectious disease outbreak that follows near-exponential growth dynamics. Specifically, we assess the impact on: 1) goodness of fit, 2) bias on the growth parameter, and 3) the impact on short-term epidemic forecasts. Designing transmission models and statistical approaches that more flexibly capture the profile of epidemic growth could lead to enhanced model fit, improved estimates of key transmission parameters, and more realistic epidemic forecasts.
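The contrast between exponential and near-exponential early growth is often made with the generalized-growth model C'(t) = r C(t)^p, whose closed-form solution is sketched below with illustrative parameters (p = 1 recovers exponential growth; p < 1 is sub-exponential).

```python
import math

def generalized_growth(t, c0, r, p):
    """Closed-form solution of C'(t) = r * C(t)**p for 0 < p <= 1:
        p = 1:  C(t) = C0 * exp(r t)
        p < 1:  C(t) = (C0**(1-p) + r (1-p) t)**(1/(1-p))
    Parameters here are illustrative, not fitted outbreak values."""
    if p == 1.0:
        return c0 * math.exp(r * t)
    return (c0 ** (1.0 - p) + r * (1.0 - p) * t) ** (1.0 / (1.0 - p))

# cumulative-case curves over 10 generation intervals
exp_curve = [generalized_growth(t, 1.0, 0.5, 1.0) for t in range(11)]
sub_curve = [generalized_growth(t, 1.0, 0.5, 0.8) for t in range(11)]
```

Fitting the exponential form to data generated by the sub-exponential curve is exactly the misspecification whose consequences for parameter bias and forecasts the simulation study quantifies.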
Bajzer, Željko; Gibbons, Simon J.; Coleman, Heidi D.; Linden, David R.
2015-01-01
Noninvasive breath tests for gastric emptying are important techniques for understanding the changes in gastric motility that occur in disease or in response to drugs. Mice are often used as an animal model; however, the gamma variate model currently used for data analysis does not always fit the data appropriately. The aim of this study was to determine appropriate mathematical models to better fit mouse gastric emptying data including when two peaks are present in the gastric emptying curve. We fitted 175 gastric emptying data sets with two standard models (gamma variate and power exponential), with a gamma variate model that includes stretched exponential and with a proposed two-component model. The appropriateness of the fit was assessed by the Akaike Information Criterion. We found that extension of the gamma variate model to include a stretched exponential improves the fit, which allows for a better estimation of T1/2 and Tlag. When two distinct peaks in gastric emptying are present, a two-component model is required for the most appropriate fit. We conclude that use of a stretched exponential gamma variate model and when appropriate a two-component model will result in a better estimate of physiologically relevant parameters when analyzing mouse gastric emptying data. PMID:26045615
Buckling of a stiff thin film on an elastic graded compliant substrate.
Chen, Zhou; Chen, Weiqiu; Song, Jizhou
2017-12-01
The buckling of a stiff film on a compliant substrate has attracted much attention due to its wide applications such as thin-film metrology, surface patterning and stretchable electronics. An analytical model is established for the buckling of a stiff thin film on a semi-infinite elastic graded compliant substrate subjected to in-plane compression. The critical compressive strain and buckling wavelength for the sinusoidal mode are obtained analytically for the case with the substrate modulus decaying exponentially. The rigorous finite element analysis (FEA) is performed to validate the analytical model and investigate the postbuckling behaviour of the system. The critical buckling strain for the period-doubling mode is obtained numerically. The influences of various material parameters on the results are investigated. These results are helpful to provide physical insights on the buckling of elastic graded substrate-supported thin film.
Interaction quantum quenches in the one-dimensional Fermi-Hubbard model
NASA Astrophysics Data System (ADS)
Heidrich-Meisner, Fabian; Bauer, Andreas; Dorfner, Florian; Riegger, Luis; Orso, Giuliano
2016-05-01
We discuss the nonequilibrium dynamics in two interaction quantum quenches in the one-dimensional Fermi-Hubbard model. First, we study the decay of the Néel state as a function of interaction strength. We observe a fast charge dynamics over which double occupancies are built up, while the long-time decay of the staggered moment is controlled by spin excitations, corroborated by the analysis of the entanglement dynamics. Second, we investigate the formation of Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) correlations in a spin-imbalanced system in quenches from the noninteracting case to attractive interactions. Even though the quench puts the system at a finite energy density, peaks at the characteristic FFLO quasimomenta are visible in the quasi-momentum distribution function, albeit with an exponential decay of s-wave pairing correlations. We also discuss the imprinting of FFLO correlations onto repulsively bound pairs and their rapid decay in ramps. Supported by the DFG (Deutsche Forschungsgemeinschaft) via FOR 1807.
Constraining f(T) teleparallel gravity by big bang nucleosynthesis: f(T) cosmology and BBN.
Capozziello, S; Lambiase, G; Saridakis, E N
2017-01-01
We use Big Bang Nucleosynthesis (BBN) observational data on the primordial abundance of light elements to constrain f(T) gravity. The three most studied viable f(T) models, namely the power law, the exponential, and the square-root exponential, are considered, and the BBN bounds are adopted in order to extract constraints on their free parameters. For the power-law model, we find that the constraints are in agreement with those obtained using late-time cosmological data. For the exponential and the square-root exponential models, we show that they always satisfy the BBN bounds for reliable regions of parameter space. We conclude that viable f(T) models can successfully satisfy the BBN constraints.
NASA Astrophysics Data System (ADS)
Maitra, Rahul; Akinaga, Yoshinobu; Nakajima, Takahito
2017-08-01
A single reference coupled cluster theory that is capable of including the effect of connected triple excitations has been developed and implemented. This is achieved by regrouping the terms appearing in perturbation theory and parametrizing through two different sets of exponential operators: while one of the exponentials, involving general substitution operators, annihilates the ground state but has a non-vanishing effect when it acts on an excited determinant, the other is the regular single and double excitation operator in the sense of conventional coupled cluster theory, which acts on the Hartree-Fock ground state. The two sets of operators are solved as coupled non-linear equations in an iterative manner, without significant increase in computational cost over conventional coupled cluster theory with singles and doubles excitations. A number of physically motivated and computationally advantageous sufficiency conditions are invoked to arrive at the working equations, which have been applied to determine the ground state energies of a number of small prototypical systems having weak multi-reference character. With the knowledge of the correlated ground state, we have reconstructed the triple excitation operator and have performed equation-of-motion coupled cluster with singles, doubles, and triples to obtain the ionization potentials and excitation energies of these molecules as well. Our results suggest that this is quite a reasonable scheme to capture the effect of connected triple excitations as long as the ground state remains weakly multi-reference.
Bridging the Gap between Curriculum Planning Policies and Pre-Service Teachers' Needs
ERIC Educational Resources Information Center
Castro-Garces, Angela Yicely; Arboleda, Argemiro Arboleda
2017-01-01
The challenge and satisfaction of being a teacher is doubled when one has the precious task of being a teacher trainer, as our practices replicate exponentially, touching the lives of people we do not even get to meet. Accordingly, this article presents the analysis of a process that brought tensions to a teacher training program because of the…
The Exponential Growth of Mathematics and Technology at the University of Portsmouth
ERIC Educational Resources Information Center
McCabe, Michael
2009-01-01
The number of students studying university mathematics in the UK has been increasing gradually and linearly since 2002. At the University of Portsmouth, the number of students studying mathematics doubled from 30 to 60 between 2002 and 2007, then increased by 240% in just 1 year to over 140 in 2008. This article explains how learning technology has…
Li, Huailiang; Yang, Yigang; Wang, Qibiao; Tuo, Xianguo; Julian Henderson, Mark; Courtois, Jérémie
2017-12-01
The fluence rate of cosmic-ray-induced neutrons (CRINs) varies with many environmental factors. While many current simulation and experimental studies have focused mainly on the altitude variation, the specific way in which CRINs vary with geomagnetic cutoff rigidity (which is related to latitude and longitude) has not been well characterized. In this article, a double-exponential fitting function, F = (A1·e^(−A2·CR) + A3)·e^(B1·Al), where CR is the geomagnetic cutoff rigidity and Al the altitude, is proposed to evaluate the CRINs' fluence rate. The fit attains R^2 values up to 0.9954, and the CRINs' fluence rate at an arbitrary location (latitude, longitude and altitude) can be easily evaluated from the proposed function. Field measurements of the CRINs' fluence rate and H*(10) rate on Mt. Emei and Mt. Bowa were carried out using FHT-762 and LB 6411 neutron probes, respectively, and the evaluation results show that the fitting function agrees well with the measurements.
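A sketch of fitting the abstract's double-exponential form F = (A1·e^(−A2·CR) + A3)·e^(B1·Al) to synthetic (CR, Al) data; the data, noise level, and "true" coefficients are made up for illustration, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def flux(X, A1, A2, A3, B1):
    # CR: cutoff rigidity (GV); Al: altitude (km); illustrative units
    CR, Al = X
    return (A1 * np.exp(-A2 * CR) + A3) * np.exp(B1 * Al)

rng = np.random.default_rng(1)
CR = rng.uniform(1.0, 15.0, 200)
Al = rng.uniform(0.0, 3.0, 200)
true = (2.0, 0.25, 0.5, 0.8)                 # assumed coefficients
F = flux((CR, Al), *true) * (1 + 0.02 * rng.normal(size=CR.size))

popt, _ = curve_fit(flux, (CR, Al), F, p0=[1.0, 0.1, 0.1, 0.5])
print(popt)  # recovered (A1, A2, A3, B1)
```

With the fitted coefficients, the function gives the fluence rate at any (CR, Al) pair, which is the convenience the abstract emphasizes.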
The matrix exponential in transient structural analysis
NASA Technical Reports Server (NTRS)
Minnetyan, Levon
1987-01-01
The primary usefulness of the presented theory lies in its ability to represent the effects of high frequency linear response accurately, without requiring very small time steps in the analysis of dynamic response. The matrix exponential contains a series approximation to the dynamic model. However, unlike the usual analysis procedure, which truncates the high frequency response, the approximation in the exponential matrix solution is in the time domain. Truncating the series solution for the matrix exponential makes the solution inaccurate after a certain time; up to that time, however, the solution is extremely accurate, including all high frequency effects. By taking finite time increments, the exponential matrix solution can compute the response very accurately. Use of the exponential matrix in structural dynamics is demonstrated by simulating the free vibration response of multi-degree-of-freedom models of cantilever beams.
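The time-stepping idea above can be sketched as exact propagation of a linear state-space model, x(t + dt) = e^(A·dt) x(t). The 2x2 undamped oscillator below is an illustrative stand-in for a modal model of a cantilever beam, not the paper's example.

```python
import numpy as np
from scipy.linalg import expm

# Undamped oscillator x'' + omega^2 x = 0 in first-order form x' = A x.
omega = 2.0 * np.pi                 # 1 Hz natural frequency (illustrative)
A = np.array([[0.0, 1.0],
              [-omega**2, 0.0]])

dt = 0.05
Phi = expm(A * dt)                  # state-transition matrix: exact for
                                    # linear systems at this finite step

x = np.array([1.0, 0.0])            # unit displacement, zero velocity
for _ in range(int(round(1.0 / dt))):   # advance one full period
    x = Phi @ x

print(x)  # returns to (approximately) the initial state
```

Because the propagation is exact for the linear model, the step size is limited only by output resolution, not by the highest frequency present, which is the point the abstract makes.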
Modeling of magnitude distributions by the generalized truncated exponential distribution
NASA Astrophysics Data System (ADS)
Raschke, Mathias
2015-01-01
The probability distribution of the magnitude can be modeled by an exponential distribution according to the Gutenberg-Richter relation. Two alternatives are the truncated exponential distribution (TED) and the cutoff exponential distribution (CED). The TED is frequently used in seismic hazard analysis although it has a weak point: when two TEDs with equal parameters except the upper bound magnitude are mixed, the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters except the upper bound magnitude. This weakness is a principal problem, as seismic regions are constructed scientific objects and not natural units. We overcome it by generalizing the above-mentioned exponential distributions into the generalized truncated exponential distribution (GTED), in which identical exponential distributions are mixed by the probability distribution of the correct cutoff points. This distribution model is flexible in the vicinity of the upper bound magnitude and is equal to the exponential distribution for smaller magnitudes. Additionally, the exponential distributions TED and CED are special cases of the GTED. We discuss the possible ways of estimating its parameters and introduce the normalized spacing for this purpose. Furthermore, we present methods for geographic aggregation and differentiation of the GTED and demonstrate the potential and universality of our simple approach by applying it to empirical data. The considerable improvement by the GTED in contrast to the TED is indicated by a large difference between the corresponding values of the Akaike information criterion.
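A toy numerical sketch of the mixing construction: the Gutenberg-Richter TED density on [m0, m_max], and a GTED-style mixture of TEDs over a discrete distribution of cutoff points. The beta value, cutoffs, and weights are illustrative assumptions, not the paper's estimates.

```python
import numpy as np
from scipy.integrate import trapezoid

def ted_pdf(m, beta, m0, m_max):
    # truncated exponential (Gutenberg-Richter) density on [m0, m_max]
    z = 1.0 - np.exp(-beta * (m_max - m0))
    f = beta * np.exp(-beta * (m - m0)) / z
    return np.where((m >= m0) & (m <= m_max), f, 0.0)

beta, m0 = 2.3, 4.0                       # beta = b*ln(10) for b ~ 1
cutoffs = np.array([7.0, 7.5, 8.0])       # possible upper bound magnitudes
weights = np.array([0.2, 0.5, 0.3])       # their assumed probabilities

def gted_pdf(m):
    # mixture of identical TEDs over the cutoff-point distribution
    return sum(w * ted_pdf(m, beta, m0, mm) for w, mm in zip(weights, cutoffs))

m = np.linspace(m0, 8.0, 2001)
i_ted = trapezoid(ted_pdf(m, beta, m0, 8.0), m)
i_gted = trapezoid(gted_pdf(m), m)
print(i_ted, i_gted)  # both densities integrate to ~1
```

The mixture equals the plain exponential law well below the cutoffs and only softens the truncation near the upper bound, which is the flexibility the GTED provides.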
NASA Astrophysics Data System (ADS)
Oberlack, Martin; Nold, Andreas; Sanjon, Cedric Wilfried; Wang, Yongqi; Hau, Jan
2016-11-01
Classical hydrodynamic stability theory for laminar shear flows, whether considering long-term stability or transient growth, is based on the normal-mode ansatz, i.e., on an exponential function in space (stream-wise direction) and time. Recently, it became clear that the normal-mode ansatz and the resulting Orr-Sommerfeld equation are based on essentially three fundamental symmetries of the linearized Euler and Navier-Stokes equations: translation in space and time and scaling of the dependent variable. Further, the Kelvin mode of linear shear flows seemed to be an exception in this context, as it admits a fourth symmetry resulting in the classical Kelvin mode, which is rather different from a normal mode. However, very recently it was discovered that most of the classical canonical shear flows, such as linear shear, Couette, plane and round Poiseuille, Taylor-Couette, Lamb-Oseen vortex or asymptotic suction boundary layer, admit more symmetries. This, in turn, led to new problem-specific non-modal ansatz functions. In contrast to the exponential growth rate in time of the modal ansatz, the new non-modal ansatz functions usually lead to an algebraic growth or decay rate, while for the asymptotic suction boundary layer a double-exponential growth or decay is observed.
Numerically stable formulas for a particle-based explicit exponential integrator
NASA Astrophysics Data System (ADS)
Nadukandi, Prashanth
2015-05-01
Numerically stable formulas are presented for the closed-form analytical solution of the X-IVAS scheme in 3D. This scheme is a state-of-the-art particle-based explicit exponential integrator developed for the particle finite element method. Algebraically, this scheme involves two steps: (1) the solution of tangent curves for piecewise linear vector fields defined on simplicial meshes and (2) the solution of line integrals of piecewise linear vector-valued functions along these tangent curves. Hence, the stable formulas presented here have general applicability, e.g. exact integration of trajectories in particle-based (Lagrangian-type) methods, flow visualization and computer graphics. The Newton form of the polynomial interpolation definition is used to express exponential functions of matrices which appear in the analytical solution of the X-IVAS scheme. The divided difference coefficients in these expressions are defined in a piecewise manner, i.e. in a prescribed neighbourhood of removable singularities their series approximations are computed. An optimal series approximation of divided differences is presented which plays a critical role in this methodology. At least ten significant decimal digits in the formula computations are guaranteed to be exact using double-precision floating-point arithmetic. The worst case scenarios occur in the neighbourhood of removable singularities found in fourth-order divided differences of the exponential function.
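The series-near-singularity idea can be illustrated on the simplest such quantity, the first divided difference of the exponential, phi_1(x) = (e^x - 1)/x: the naive quotient loses digits near the removable singularity x = 0, so a truncated Taylor series is used in a prescribed neighbourhood. The switch threshold and series length below are illustrative choices in the spirit of the abstract, not its exact formulas.

```python
import math
import numpy as np

def phi1(x, switch=1e-2):
    # phi_1(x) = (exp(x) - 1)/x, evaluated stably:
    # series branch near x = 0, direct formula elsewhere.
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < switch
    # truncated Taylor series: sum_{i=0..6} x^i / (i+1)!
    series = sum(x**i / math.factorial(i + 1) for i in range(7))
    safe_x = np.where(small, 1.0, x)        # avoid 0/0 in the direct branch
    direct = np.expm1(safe_x) / safe_x
    return np.where(small, series, direct)

print(phi1(np.array([0.0, 1e-9, 1.0])))
```

For this scalar case `np.expm1` alone already fixes the cancellation; the piecewise-series approach matters for the higher-order divided differences of matrix arguments that the X-IVAS formulas require.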
Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard
2016-10-01
In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including a quasi-likelihood, robust standard errors estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed the presence of significant inherent overdispersion (p-value < 0.001). However, the flexible piecewise exponential model showed the smallest overdispersion parameter (3.2, versus 21.3 for the non-flexible piecewise exponential models). We showed that there were no major differences between methods. However, using flexible piecewise regression modelling, with either a quasi-likelihood or robust standard errors, was the best approach, as it deals with both overdispersion due to model misspecification and true or inherent overdispersion.
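A minimal numerical sketch of what overdispersion looks like in count data, assuming a gamma-mixed Poisson (i.e., negative-binomial) mechanism for the extra variance; the simulated data and the simple Pearson-style dispersion statistic are illustrative, not the paper's score test.

```python
import numpy as np

rng = np.random.default_rng(42)
n, mu = 5000, 10.0

# Pure Poisson counts: Pearson dispersion statistic / df should be ~1.
y_poisson = rng.poisson(mu, size=n)
dispersion_poisson = np.sum((y_poisson - mu)**2 / mu) / (n - 1)

# Gamma-distributed rate heterogeneity (mean mu) inflates the variance,
# so the same statistic rises well above 1: overdispersion.
rates = mu * rng.gamma(shape=2.0, scale=0.5, size=n)
y_over = rng.poisson(rates)
dispersion = np.sum((y_over - mu)**2 / mu) / (n - 1)

print(dispersion_poisson, dispersion)
```

A quasi-likelihood correction amounts to scaling the Poisson standard errors by the square root of such a dispersion estimate, which is one of the remedies the abstract compares.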
Investigation of non-Gaussian effects in the Brazilian option market
NASA Astrophysics Data System (ADS)
Sosa-Correa, William O.; Ramos, Antônio M. T.; Vasconcelos, Giovani L.
2018-04-01
An empirical study of the Brazilian option market is presented in light of three option pricing models, namely the Black-Scholes model, the exponential model, and a model based on a power law distribution, the so-called q-Gaussian distribution or Tsallis distribution. It is found that the q-Gaussian model performs better than the Black-Scholes model in about one third of the option chains analyzed. But among these cases, the exponential model performs better than the q-Gaussian model in 75% of the time. The superiority of the exponential model over the q-Gaussian model is particularly impressive for options close to the expiration date, where its success rate rises above ninety percent.
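The Black-Scholes baseline the study compares against is the standard closed-form European call price; the inputs below are illustrative numbers, not Brazilian market data.

```python
import math

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    # Black-Scholes European call: S*N(d1) - K*exp(-rT)*N(d2)
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

price = bs_call(S=100.0, K=100.0, T=0.25, r=0.05, sigma=0.2)
print(price)
```

The exponential and q-Gaussian alternatives studied in the paper replace the lognormal return distribution implicit in this formula with fatter-tailed laws, which is why they can outperform it near expiration.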
The Spatial Structure of Planform Migration - Curvature Relation of Meandering Rivers
NASA Astrophysics Data System (ADS)
Guneralp, I.; Rhoads, B. L.
2005-12-01
Planform dynamics of meandering rivers have been of fundamental interest to fluvial geomorphologists and engineers because of the intriguing complexity of these dynamics, the role of planform change in floodplain development and landscape evolution, and the economic and social consequences of bank erosion and channel migration. Improved understanding of the complex spatial structure of planform change and capacity to predict these changes are important for effective stream management, engineering and restoration. The planform characteristics of a meandering river channel are integral to its planform dynamics. Active meandering rivers continually change their positions and shapes as a consequence of hydraulic forces exerted on the channel banks and bed, but as the banks and bed change through sediment transport, so do the hydraulic forces. Thus far, this complex feedback between form and process is incompletely understood, despite the fact that the characteristics and the dynamics of meandering rivers have been studied extensively. Current theoretical models aimed at predicting planform dynamics relate rates of meander migration to local and upstream planform curvature where weighting of the influence of curvature on migration rate decays exponentially over distance. This theoretical relation, however, has not been rigorously evaluated empirically. Furthermore, although models based on exponential-weighting of curvature effects yield fairly realistic predictions of meander migration, such models are incapable of reproducing complex forms of bend development, such as double heading or compound looping. This study presents the development of a new methodology based on parametric cubic spline interpolation for the characterization of channel planform and the planform curvature of meandering rivers. 
The use of continuous mathematical functions overcomes the reliance on bend-averaged values or piece-wise discrete approximations of planform curvature, a major limitation of previous studies. Continuous curvature series can be related to measured rates of lateral migration to explore empirically the relationship between spatially extended curvature and local bend migration. The methodology is applied to a study reach along a highly sinuous section of the Embarras River in central Illinois, USA, which contains double-headed asymmetrical loops. To identify patterns of channel planform and rates of lateral migration for this reach, geographical information systems analysis of historical aerial photography over the period from 1936 to 1998 was conducted. Results indicate that parametric cubic spline interpolation provides excellent characterization of the complex planforms and planform curvatures of meandering rivers. The findings also indicate that the spatial structure of the migration rate-curvature relation may be more complex than a simple exponential distance-decay function. The study represents a first step toward unraveling the spatial structure of planform evolution of meandering rivers and toward developing models of planform dynamics that accurately relate spatially extended patterns of channel curvature to local rates of lateral migration. Such knowledge is vital for improving the capacity to accurately predict planform change of meandering rivers.
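The curvature-from-spline step can be sketched as follows: fit parametric cubic splines x(s), y(s) to digitized centerline points and evaluate kappa = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2). A circle of known radius stands in for a channel centerline here, since its curvature (1/R) is known exactly; the point spacing and test geometry are illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic "centerline": a circle of radius R, so true curvature = 1/R.
theta = np.linspace(0.0, 2.0 * np.pi, 200)
R = 50.0
x, y = R * np.cos(theta), R * np.sin(theta)

s = np.arange(theta.size, dtype=float)        # arbitrary spline parameter
sx, sy = CubicSpline(s, x), CubicSpline(s, y)

def curvature(t):
    # parametric curvature formula; invariant to the parametrization
    dx, dy = sx(t, 1), sy(t, 1)
    ddx, ddy = sx(t, 2), sy(t, 2)
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2)**1.5

t = np.linspace(1.0, s[-1] - 1.0, 500)        # avoid the open endpoints
err = np.max(np.abs(np.abs(curvature(t)) - 1.0 / R))
print(err)  # small: spline curvature tracks the true 1/R
```

Evaluating this continuous curvature series along a real digitized centerline is what replaces the bend-averaged or piecewise-discrete curvature estimates criticized above.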
Azim, M Ekram; Kumarappah, Ananthavalli; Bhavsar, Satyendra P; Backus, Sean M; Arhonditsis, George
2011-03-15
The temporal trends of total mercury (THg) in four fish species in Lake Erie were evaluated based on 35 years of fish contaminant data. Our Bayesian statistical approach consists of three steps aiming to address different questions. First, we used the exponential and mixed-order decay models to assess the declining rates in four intensively sampled fish species, i.e., walleye (Stizostedion vitreum), yellow perch (Perca flavescens), smallmouth bass (Micropterus dolomieui), and white bass (Morone chrysops). Because the two models postulate monotonic decrease of the THg levels, we included first- and second-order random walk terms in our statistical formulations to accommodate nonmonotonic patterns in the data time series. Our analysis identified a recent increase in the THg concentrations, particularly after the mid-1990s. In the second step, we used double exponential models to quantify the relative magnitude of the THg trends depending on the type of data used (skinless-boneless fillet versus whole fish data) and the fish species examined. The observed THg concentrations were significantly higher in skinless boneless fillet than in whole fish portions, while the whole fish portions of walleye exhibited faster decline rates and slower rates of increase relative to the skinless boneless fillet data. Our analysis also shows lower decline rates and higher rates of increase in walleye relative to the other three fish species examined. The food web structural shifts induced by the invasive species (dreissenid mussels and round goby) may be associated with the recent THg trends in Lake Erie fish.
Individuality and universality in the growth-division laws of single E. coli cells
NASA Astrophysics Data System (ADS)
Kennard, Andrew S.; Osella, Matteo; Javer, Avelino; Grilli, Jacopo; Nghe, Philippe; Tans, Sander J.; Cicuta, Pietro; Cosentino Lagomarsino, Marco
2016-01-01
The mean size of exponentially dividing Escherichia coli cells in different nutrient conditions is known to depend on the mean growth rate only. However, the joint fluctuations relating cell size, doubling time, and individual growth rate are only starting to be characterized. Recent studies in bacteria reported a universal trend where the spread in both size and doubling times is a linear function of the population means of these variables. Here we combine experiments and theory and use scaling concepts to elucidate the constraints posed by the second observation on the division control mechanism and on the joint fluctuations of sizes and doubling times. We found that scaling relations based on the means collapse both size and doubling-time distributions across different conditions and explain how the shape of their joint fluctuations deviates from the means. Our data on these joint fluctuations highlight the importance of cell individuality: Single cells do not follow the dependence observed for the means between size and either growth rate or inverse doubling time. Our calculations show that these results emerge from a broad class of division control mechanisms requiring a certain scaling form of the "division hazard rate function," which defines the probability rate of dividing as a function of measurable parameters. This "model free" approach gives a rationale for the universal body-size distributions observed in microbial ecosystems across many microbial species, presumably dividing with multiple mechanisms. Additionally, our experiments show a crossover between fast and slow growth in the relation between individual-cell growth rate and division time, which can be understood in terms of different regimes of genome replication control.
NASA Astrophysics Data System (ADS)
Allen, Linda J. S.
2016-09-01
Dr. Chowell and colleagues emphasize the importance of considering a variety of modeling approaches to characterize the growth of an epidemic during the early stages [1]. A fit of data from the 2009 H1N1 influenza pandemic and the 2014-2015 Ebola outbreak to models indicates sub-exponential growth, in contrast to the classic, homogeneous-mixing SIR model with exponential growth. With incidence rate βSI/N and S approximately equal to the total population size N, the number of new infections in an SIR epidemic model grows exponentially as in the differential equation dI/dt ≈ (β − γ)I, with solution I(t) = I(0)e^((β−γ)t).
Neutralization of Staphylococcal Enterotoxin B by an Aptamer Antagonist
Wang, Kaiyu; Gan, Longjie; Jiang, Li; Zhang, Xianhui; Yang, Xiangyue; Chen, Min
2015-01-01
Staphylococcal enterotoxin B (SEB) is a major virulence factor for staphylococcal toxic shock syndrome (TSS). SEB activates a large subset of the T lymphocytic population, releasing proinflammatory cytokines. Blocking SEB-initiated toxicity may be an effective strategy for treating TSS. Using a process known as systematic evolution of ligands by exponential enrichment (SELEX), we identified an aptamer that can antagonize SEB with nanomolar binding affinity (Kd = 64 nM). The aptamer antagonist effectively inhibits SEB-mediated proliferation and cytokine secretion in human peripheral blood mononuclear cells. Moreover, a PEGylated aptamer antagonist significantly reduced mortality in a “double-hit” mouse model of SEB-induced TSS, established via sensitization with d-galactosamine followed by SEB challenge. Therefore, our novel aptamer antagonist may offer potential therapeutic efficacy against SEB-mediated TSS. PMID:25624325
NASA Astrophysics Data System (ADS)
Iadecola, Thomas; Hsieh, Timothy H.
2018-05-01
We show that time-reflection symmetry in periodically driven (Floquet) quantum systems enables an inherently nonequilibrium phenomenon structurally similar to quantum-mechanical supersymmetry. In particular, we find Floquet analogs of the Witten index that place lower bounds on the degeneracies of states with quasienergies 0 and π . Moreover, we show that in some cases time-reflection symmetry can also interchange fermions and bosons, leading to fermion-boson pairs with opposite quasienergy. We provide a simple class of disordered, interacting, and ergodic Floquet models with an exponentially large number of states at quasienergies 0 and π , which are robust as long as the time-reflection symmetry is preserved. Floquet supersymmetry manifests itself in the evolution of certain local observables as a period-doubling effect with dramatic finite-size scaling, providing a clear signature for experiments.
Intermittent Lagrangian velocities and accelerations in three-dimensional porous medium flow.
Holzner, M; Morales, V L; Willmann, M; Dentz, M
2015-07-01
Intermittency of Lagrangian velocity and acceleration is a key to understanding transport in complex systems ranging from fluid turbulence to flow in porous media. High-resolution optical particle tracking in a three-dimensional (3D) porous medium provides detailed 3D information on Lagrangian velocities and accelerations. We find sharp transitions close to pore throats, and low flow variability in the pore bodies, which gives rise to stretched exponential Lagrangian velocity and acceleration distributions characterized by a sharp peak at low velocity, superlinear evolution of particle dispersion, and double-peak behavior in the propagators. The velocity distribution is quantified in terms of pore geometry and flow connectivity, which forms the basis for a continuous-time random-walk model that sheds light on the observed Lagrangian flow and transport behaviors.
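The continuous-time random-walk picture invoked above can be sketched as a toy model: particles make fixed-length pore-to-pore transitions whose durations are drawn from a heavy-tailed distribution (slow pores dominate). The distribution, its exponent, and the observation times are illustrative assumptions, not the calibrated model of the study.

```python
import numpy as np

rng = np.random.default_rng(7)
n_particles, n_steps = 2000, 1000

# Heavy-tailed transit times (Pareto-type tail, infinite variance),
# mimicking long trapping in low-velocity pore regions; unit step length.
dt = 1.0 + rng.pareto(1.5, size=(n_particles, n_steps))
arrival = np.cumsum(dt, axis=1)     # clock reading after each transition

def positions(T):
    # number of completed transitions (= distance travelled) by time T
    return (arrival <= T).sum(axis=1)

early, late = positions(200.0), positions(800.0)
# spread of the particle plume grows with time, typically faster than
# for a classical (finite-variance) random walk
print(np.var(early), np.var(late))
```

The broad transit-time distribution is what produces the superlinear dispersion and non-Gaussian propagators reported in the abstract.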
A Simulation To Model Exponential Growth.
ERIC Educational Resources Information Center
Appelbaum, Elizabeth Berman
2000-01-01
Describes a simulation using dice-tossing students in a population cluster to model the growth of cancer cells. This growth is recorded in a scatterplot and compared to an exponential function graph. (KHR)
Myśliwa-Kurdziel, Beata; Solymosi, Katalin; Kruk, Jerzy; Böddi, Béla; Strzałka, Kazimierz
2007-03-01
The steady-state and time-resolved fluorescence characteristics of protochlorophyll (Pchl) dissolved in neat Triton X-100 and in Triton X-100 micelles were investigated, and the fluorescence lifetimes of different Pchl spectral forms were studied. By varying the concentration of Pchl, or by diluting the micellar solutions either with a buffer or with a micellar solution, Pchl complexes emitting at 631-634, 645-655 and 680-692 nm and above 700 nm were prepared, in ratios that varied from one preparation to another. The fluorescence decay of the 631-634 nm emitting (monomeric) form had a mono-exponential character with a 5.4-ns fluorescence lifetime. The long-wavelength Pchl complexes (aggregates) had two fluorescence lifetime values, within ranges of 1.4-3.9 ns and 0.15-0.84 ns, which showed high variability in different environments. Depending on the conditions, either mono- or double-exponential fluorescence decay was found for the fluorescence band at 680-685 nm. These data show that despite their very similar steady-state fluorescence properties, Pchl complexes can differ in fluorescence lifetimes, which may reflect different molecular structures, intrinsic geometries or different molecular interactions. This underlines the importance of complex spectroscopic analysis for a precise description of native and artificial chlorophyllous pigment forms.
NASA Astrophysics Data System (ADS)
Marston, Philip L.
2003-04-01
The coupling of sound to buried targets can be associated with acoustic evanescent waves when the sea bottom is smooth. To understand the excitation of guided waves on buried fluid cylinders and shells by acoustic evanescent waves and the associated target resonances, the two-dimensional partial wave series for the scattering is found for normal incidence in an unbounded medium. The shell formulation uses the simplifications of thin-shell dynamics. The expansion of the incident wave becomes a double summation with products of modified and ordinary Bessel functions [P. L. Marston, J. Acoust. Soc. Am. 111, 2378 (2002)]. Unlike the case of an ordinary incident wave, the counterpropagating partial waves of the same angular order have unequal magnitudes when the incident wave is evanescent. This is a consequence of the exponential dependence of the incident wave amplitude on depth. Some consequences of this imbalance of partial-wave amplitudes are given by modifying previous ray theory for the scattering [P. L. Marston and N. H. Sun, J. Acoust. Soc. Am. 97, 777-783 (1995)]. The exponential dependence of the scattering on the location of a scatterer was previously demonstrated in air [T. J. Matula and P. L. Marston, J. Acoust. Soc. Am. 93, 1192-1195 (1993)].
NASA Astrophysics Data System (ADS)
Jeong, Chan-Yong; Kim, Hee-Joong; Hong, Sae-Young; Song, Sang-Hun; Kwon, Hyuck-In
2017-08-01
In this study, we show that the two-stage unified stretched-exponential model can more exactly describe the time dependence of the threshold voltage shift (ΔV_TH) under long-term positive bias stress than the traditional stretched-exponential model in amorphous indium-gallium-zinc oxide (a-IGZO) thin-film transistors (TFTs). ΔV_TH is mainly dominated by electron trapping at short stress times, and the contribution of trap state generation becomes significant with an increase in the stress time. The two-stage unified stretched-exponential model can provide useful information not only for evaluating the long-term electrical stability and lifetime of the a-IGZO TFT but also for understanding the stress-induced degradation mechanism in a-IGZO TFTs.
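A sketch of the two model forms discussed above, assuming the common stretched-exponential shape ΔV_TH(t) = ΔV0·(1 − exp(−(t/τ)^β)) and a two-stage variant that sums a fast trapping term and a slow trap-generation term. All parameter values are illustrative, not extracted from the paper.

```python
import numpy as np

def stretched(t, dV0, tau, beta):
    # single stretched-exponential threshold-voltage shift
    return dV0 * (1.0 - np.exp(-(t / tau)**beta))

def two_stage(t, dV1, tau1, beta1, dV2, tau2, beta2):
    # stage 1: electron trapping (dominates at short stress times)
    # stage 2: trap-state generation (grows at long stress times)
    return stretched(t, dV1, tau1, beta1) + stretched(t, dV2, tau2, beta2)

t = np.logspace(0, 6, 200)                       # stress time, s
dv = two_stage(t, 1.0, 1e2, 0.5, 2.0, 1e5, 0.8)  # assumed parameters
print(dv[0], dv[-1])  # shift rises monotonically toward dV1 + dV2
```

The single stretched exponential saturates once trapping completes, while the second stage keeps the modeled shift rising at long stress times, which is the behavior the two-stage model is introduced to capture.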
NASA Astrophysics Data System (ADS)
Pasari, S.; Kundu, D.; Dikshit, O.
2012-12-01
Earthquake recurrence interval is one of the important ingredients in probabilistic seismic hazard assessment (PSHA) for any location. The exponential, gamma, Weibull and lognormal distributions are well-established probability models for this recurrence interval estimation. However, they have certain shortcomings, so it is worthwhile to search for alternative distributions. In this paper, we introduce a three-parameter (location, scale and shape) exponentiated exponential distribution and investigate the scope of this distribution as an alternative to the afore-mentioned distributions in earthquake recurrence studies. This distribution is a particular member of the exponentiated Weibull family. Despite its more complicated form, it is widely accepted in medical and biological applications. Furthermore, it shares many physical properties with the gamma and Weibull families. Unlike the gamma distribution, the hazard function of the generalized exponential distribution can be easily computed even if the shape parameter is not an integer. To assess the plausibility of this model, a complete and homogeneous earthquake catalogue of 20 events (M ≥ 7.0) spanning the period 1846 to 1995 from the North-East Himalayan region (20-32 deg N and 87-100 deg E) has been used. The model parameters are estimated using the maximum likelihood estimator (MLE) and the method of moments estimator (MOME). No geological or geophysical evidence has been considered in this calculation. The estimated conditional probability becomes quite high after about a decade for an elapsed time of 17 years (i.e., 2012). Moreover, this study shows that the generalized exponential distribution fits the above data more closely than the conventional models, and hence it is tentatively concluded that the generalized exponential distribution can be effectively considered in earthquake recurrence studies.
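One way to work with this distribution numerically: the exponentiated (generalized) exponential is the exponentiated Weibull with its Weibull shape fixed at 1, so scipy's `exponweib` can be used with `c` frozen. The synthetic "recurrence intervals" and parameter values below are illustrative, not the Himalayan catalogue.

```python
import numpy as np
from scipy import stats

# Exponentiated exponential: F(x) = (1 - exp(-(x - loc)/scale))**a,
# i.e. exponweib(a, c) with the Weibull shape c fixed at 1.
rng = np.random.default_rng(3)
a_true, scale_true = 2.5, 8.0        # assumed shape and scale (years)
data = stats.exponweib.rvs(a_true, 1.0, loc=0.0, scale=scale_true,
                           size=500, random_state=rng)

# MLE with c and loc frozen; fit returns (a, c, loc, scale)
a_hat, c_hat, loc_hat, scale_hat = stats.exponweib.fit(data, fc=1.0, floc=0.0)
print(a_hat, scale_hat)

# The hazard rate is directly computable, even for non-integer shape,
# unlike the gamma distribution (no incomplete-gamma ratio needed).
x = 10.0
hazard = (stats.exponweib.pdf(x, a_hat, 1.0, scale=scale_hat)
          / stats.exponweib.sf(x, a_hat, 1.0, scale=scale_hat))
print(hazard)
```

The easy hazard evaluation is the practical advantage the abstract cites over the gamma model for conditional-probability (time-dependent hazard) calculations.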
Theory for Transitions Between Exponential and Stationary Phases: Universal Laws for Lag Time
NASA Astrophysics Data System (ADS)
Himeoka, Yusuke; Kaneko, Kunihiko
2017-04-01
The quantitative characterization of bacterial growth has attracted substantial attention since Monod's pioneering study. Theoretical and experimental works have uncovered several laws for describing the exponential growth phase, in which the number of cells grows exponentially. However, microorganism growth also exhibits lag, stationary, and death phases under starvation conditions, in which cell growth is highly suppressed, for which quantitative laws or theories are markedly underdeveloped. In fact, the models commonly adopted for the exponential phase that consist of autocatalytic chemical components, including ribosomes, can only show exponential growth or decay in a population; thus, phases that halt growth are not realized. Here, we propose a simple, coarse-grained cell model that includes an extra class of macromolecular components in addition to the autocatalytic active components that facilitate cellular growth. These extra components form a complex with the active components to inhibit the catalytic process. Depending on the nutrient condition, the model exhibits typical transitions among the lag, exponential, stationary, and death phases. Furthermore, the lag time needed for growth recovery after starvation follows the square root of the starvation time and is inversely related to the maximal growth rate. This is in agreement with experimental observations, in which the length of time of cell starvation is memorized in the slow accumulation of molecules. Moreover, the distribution of lag times among cells is skewed with a long tail. If the starvation time is longer, an exponential tail appears, which is also consistent with experimental data. Our theory further predicts a strong dependence of lag time on the speed of substrate depletion, which can be tested experimentally. The present model and theoretical analysis provide universal growth laws beyond the exponential phase, offering insight into how cells halt growth without entering the death phase.
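The scaling law quoted above can be expressed in one line; the proportionality constant `c` is unspecified in this summary, so the values below are purely illustrative.

```python
import math

def lag_time(t_starvation, mu_max, c=1.0):
    """Lag time after starvation: proportional to the square root of the
    starvation time and inversely proportional to the maximal growth
    rate mu_max (c is an unspecified proportionality constant)."""
    return c * math.sqrt(t_starvation) / mu_max

# Quadrupling the starvation time doubles the lag.
lag1 = lag_time(4.0, mu_max=0.5)
lag2 = lag_time(16.0, mu_max=0.5)
```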
Self-charging of identical grains in the absence of an external field.
Yoshimatsu, R; Araújo, N A M; Wurm, G; Herrmann, H J; Shinbrot, T
2017-01-06
We investigate the electrostatic charging of an agitated bed of identical grains using simulations, mathematical modeling, and experiments. We simulate charging with a discrete-element model including electrical multipoles and find that infinitesimally small initial charges can grow exponentially rapidly. We propose a mathematical Turing model that defines conditions for exponential charging to occur and provides insights into the mechanisms involved. Finally, we confirm the predicted exponential growth in experiments using vibrated grains under microgravity, and we describe novel predicted spatiotemporal states that merit further study.
Something from nothing: self-charging of identical grains
NASA Astrophysics Data System (ADS)
Shinbrot, Troy; Yoshimatsu, Ryuta; Araújo, Nuno; Wurm, Gerhard; Herrmann, Hans
We investigate the electrostatic charging of an agitated bed of identical grains using simulations, mathematical modeling, and experiments. We simulate charging with a discrete-element model including electrical multipoles and find that infinitesimally small initial charges can grow exponentially rapidly. We propose a mathematical Turing model that defines conditions for exponential charging to occur and provides insights into the mechanisms involved. Finally, we confirm the predicted exponential growth in experiments using vibrated grains under microgravity, and we describe novel predicted spatiotemporal states that merit further study. I acknowledge support from NSF/DMR, award 1404792.
Bayesian inference based on dual generalized order statistics from the exponentiated Weibull model
NASA Astrophysics Data System (ADS)
Al Sobhi, Mashail M.
2015-02-01
Bayesian estimates of the two parameters and of the reliability function of the exponentiated Weibull model are obtained based on dual generalized order statistics (DGOS). Bayesian prediction bounds for future DGOS from the exponentiated Weibull model are also obtained. Symmetric and asymmetric loss functions are considered for the Bayesian computations, and Markov chain Monte Carlo (MCMC) methods are used for computing the Bayes estimates and prediction bounds. The results are specialized to lower record values, and comparisons are made between Bayesian and maximum likelihood estimators via Monte Carlo simulation.
State of charge modeling of lithium-ion batteries using dual exponential functions
NASA Astrophysics Data System (ADS)
Kuo, Ting-Jung; Lee, Kung-Yen; Huang, Chien-Kang; Chen, Jau-Horng; Chiu, Wei-Li; Huang, Chih-Fang; Wu, Shuen-De
2016-05-01
A mathematical model is developed by fitting the discharging curve of LiFePO4 batteries and is used to investigate the relationship between the state of charge and the closed-circuit voltage. The proposed model consists of dual exponential terms and a constant term, which closely fit the characteristics of the dual equivalent RC circuits representing a LiFePO4 battery. One exponential term represents the stable discharging behavior, the other represents the unstable discharging behavior, and the constant term represents the cut-off voltage.
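A dual-exponential-plus-constant voltage model of the kind described can be sketched as below; the coefficient values are placeholders for illustration, not the paper's fitted parameters.

```python
import math

def ccv_dual_exponential(soc, a1, b1, a2, b2, v0):
    """Closed-circuit voltage vs. state of charge (soc in [0, 1]).

    a1*exp(b1*soc): stable discharging term,
    a2*exp(b2*soc): unstable discharging term,
    v0: constant term tied to the cut-off voltage.
    (Illustrative parameterization; not the paper's coefficients.)
    """
    return a1 * math.exp(b1 * soc) + a2 * math.exp(b2 * soc) + v0

# At full discharge (soc = 0) the model collapses to a1 + a2 + v0.
v_empty = ccv_dual_exponential(0.0, a1=0.1, b1=1.2, a2=-0.4, b2=-25.0, v0=3.0)
```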
Self-charging of identical grains in the absence of an external field
Yoshimatsu, R.; Araújo, N. A. M.; Wurm, G.; Herrmann, H. J.; Shinbrot, T.
2017-01-01
We investigate the electrostatic charging of an agitated bed of identical grains using simulations, mathematical modeling, and experiments. We simulate charging with a discrete-element model including electrical multipoles and find that infinitesimally small initial charges can grow exponentially rapidly. We propose a mathematical Turing model that defines conditions for exponential charging to occur and provides insights into the mechanisms involved. Finally, we confirm the predicted exponential growth in experiments using vibrated grains under microgravity, and we describe novel predicted spatiotemporal states that merit further study. PMID:28059124
Virtual Observatory and Distributed Data Mining
NASA Astrophysics Data System (ADS)
Borne, Kirk D.
2012-03-01
New modes of discovery are enabled by the growth of data and computational resources (i.e., cyberinfrastructure) in the sciences. This cyberinfrastructure includes structured databases, virtual observatories (distributed data, as described in Section 20.2.1 of this chapter), high-performance computing (petascale machines), distributed computing (e.g., the Grid, the Cloud, and peer-to-peer networks), intelligent search and discovery tools, and innovative visualization environments. Data streams from experiments, sensors, and simulations are increasingly complex and growing in volume. This is true in most sciences, including astronomy, climate simulations, Earth observing systems, remote sensing data collections, and sensor networks. At the same time, we see an emerging confluence of new technologies and approaches to science, most clearly visible in the growing synergism of the four modes of scientific discovery: sensors-modeling-computing-data (Eastman et al. 2005). This has been driven by numerous developments, including the information explosion, development of large-array sensors, acceleration in high-performance computing (HPC) power, advances in algorithms, and efficient modeling techniques. Among these, the most extreme is the growth in new data. Specifically, the acquisition of data in all scientific disciplines is rapidly accelerating and causing a data glut (Bell et al. 2007). It has been estimated that data volumes double every year—for example, the NCSA (National Center for Supercomputing Applications) reported that their users cumulatively generated one petabyte of data over the first 19 years of NCSA operation, but they then generated their next one petabyte in the next year alone, and the data production has been growing by almost 100% each year after that (Butler 2008). The NCSA example is just one of many demonstrations of the exponential (annual data-doubling) growth in scientific data collections. 
In general, this putative data-doubling is an inevitable result of several compounding factors: the proliferation of data-generating devices, sensors, projects, and enterprises; the 18-month doubling of the digital capacity of these microprocessor-based sensors and devices (commonly referred to as "Moore’s law"); the move to digital for nearly all forms of information; the increase in human-generated data (both unstructured information on the web and structured data from experiments, models, and simulation); and the ever-expanding capability of higher density media to hold greater volumes of data (i.e., data production expands to fill the available storage space). These factors are consequently producing an exponential data growth rate, which will soon (if not already) become an insurmountable technical challenge even with the great advances in computation and algorithms. This technical challenge is compounded by the ever-increasing geographic dispersion of important data sources—the data collections are not stored uniformly at a single location, or with a single data model, or in uniform formats and modalities (e.g., images, databases, structured and unstructured files, and XML data sets)—the data are in fact large, distributed, heterogeneous, and complex. The greatest scientific research challenge with these massive distributed data collections is consequently extracting all of the rich information and knowledge content contained therein, thus requiring new approaches to scientific research. This emerging data-intensive and data-oriented approach to scientific research is sometimes called discovery informatics or X-informatics (where X can be any science, such as bio, geo, astro, chem, eco, or anything; Agresti 2003; Gray 2003; Borne 2010). This data-oriented approach to science is now recognized by some (e.g., Mahootian and Eastman 2009; Hey et al. 
2009) as the fourth paradigm of research, following (historically) experiment/observation, modeling/analysis, and computational science.
Gao, Bing; Ju, Xiaotang; Su, Fang; Gao, Fengbin; Cao, Qingsen; Oenema, Oene; Christie, Peter; Chen, Xinping; Zhang, Fusuo
2013-01-01
We monitored soil respiration (Rs), soil temperature (T) and volumetric water content (VWC%) over four years in one typical conventional and four alternative cropping systems to understand Rs in different cropping systems with their respective management practices and environmental conditions. The control was conventional double-cropping system (winter wheat and summer maize in one year - Con.W/M). Four alternative cropping systems were designed with optimum water and N management, i.e. optimized winter wheat and summer maize (Opt.W/M), three harvests every two years (first year, winter wheat and summer maize or soybean; second year, fallow then spring maize - W/M-M and W/S-M), and single spring maize per year (M). Our results show that Rs responded mainly to the seasonal variation in T but was also greatly affected by straw return, root growth and soil moisture changes under different cropping systems. The mean seasonal CO2 emissions in Con.W/M were 16.8 and 15.1 Mg CO2 ha−1 for summer maize and winter wheat, respectively, without straw return. They increased significantly by 26 and 35% in Opt.W/M, respectively, with straw return. Under the new alternative cropping systems with straw return, W/M-M showed similar Rs to Opt.W/M, but total CO2 emissions of W/S-M decreased sharply relative to Opt.W/M when soybean was planted to replace summer maize. Total CO2 emissions expressed as the complete rotation cycles of W/S-M, Con.W/M and M treatments were not significantly different. Seasonal CO2 emissions were significantly correlated with the sum of carbon inputs of straw return from the previous season and the aboveground biomass in the current season, which explained 60% of seasonal CO2 emissions. T and VWC% explained up to 65% of Rs using the exponential-power and double exponential models, and the impacts of tillage and straw return must therefore be considered for accurate modeling of Rs in this geographical region. PMID:24278340
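One common parameterization of the exponential-power response of soil respiration to temperature and moisture mentioned in the abstract is Rs = a·exp(b·T)·VWC^c; the coefficients below are placeholders, not the study's fitted values.

```python
import math

def rs_exponential_power(T, vwc, a, b, c):
    """Soil respiration from temperature T (deg C) and volumetric water
    content vwc (%), using an exponential-power form a*exp(b*T)*vwc**c.
    (Illustrative coefficients only.)"""
    return a * math.exp(b * T) * vwc ** c

def q10_from_b(b):
    """Q10 temperature sensitivity implied by the exponential term."""
    return math.exp(10.0 * b)

rs = rs_exponential_power(T=20.0, vwc=25.0, a=0.5, b=math.log(2.0) / 10.0, c=0.5)
```

With b = ln(2)/10 the exponential term encodes a Q10 of exactly 2, i.e. respiration doubles for a 10-degree warming.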
2013-01-01
Background An inverse relationship between experience and risk of injury has been observed in many occupations. Due to statistical challenges, however, it has been difficult to characterize the role of experience on the hazard of injury. In particular, because the time observed up to injury is equivalent to the amount of experience accumulated, the baseline hazard of injury becomes the main parameter of interest, excluding Cox proportional hazards models as applicable methods for consideration. Methods Using a data set of 81,301 hourly production workers of a global aluminum company at 207 US facilities, we compared competing parametric models for the baseline hazard to assess whether experience affected the hazard of injury at hire and after later job changes. Specific models considered included the exponential, Weibull, and two two-piece exponential models (one hypothesis-driven and one data-driven) to formally test the null hypothesis that experience does not impact the hazard of injury. Results We highlighted the advantages of our comparative approach and the interpretability of our selected model: a two-piece exponential model that allowed the baseline hazard of injury to change with experience. Our findings suggested a 30% increase in the hazard in the first year after job initiation and/or change. Conclusions Piecewise exponential models may be particularly useful in modeling risk of injury as a function of experience and have the additional benefit of interpretability over other similarly flexible models. PMID:23841648
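A two-piece exponential hazard like the selected model can be sketched as follows; the rates are illustrative, chosen only to encode a roughly 30% higher hazard in the first year of experience.

```python
import math

def two_piece_hazard(t, h_early, h_late, cut=1.0):
    """Hazard of injury as a function of experience t (years):
    h_early during the first `cut` years after job initiation/change,
    h_late afterwards."""
    return h_early if t < cut else h_late

def survival(t, h_early, h_late, cut=1.0):
    """Probability of remaining injury-free up to experience t,
    integrating the piecewise-constant hazard."""
    if t < cut:
        return math.exp(-h_early * t)
    return math.exp(-h_early * cut - h_late * (t - cut))

# Hypothetical rates: 0.13/yr in year one vs 0.10/yr thereafter (~30% higher).
s2 = survival(2.0, h_early=0.13, h_late=0.10)
```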
Kinetics of DNA tile dimerization.
Jiang, Shuoxing; Yan, Hao; Liu, Yan
2014-06-24
Investigating how individual molecular components interact with one another within DNA nanoarchitectures, both in terms of their spatial and temporal interactions, is fundamentally important for a better understanding of their physical behaviors. This will provide researchers with valuable insight for designing more complex higher-order structures that can be assembled more efficiently. In this report, we examined several spatial factors that affect the kinetics of bivalent, double-helical (DH) tile dimerization, including the orientation and number of sticky ends (SEs), the flexibility of the double helical domains, and the size of the tiles. The rate constants we obtained confirm our hypothesis that increased nucleation opportunities and well-aligned SEs accelerate tile-tile dimerization. Increased flexibility in the tiles causes slower dimerization rates, an effect that can be reversed by introducing restrictions to the tile flexibility. The higher dimerization rates of more rigid tiles results from the opposing effects of higher activation energies and higher pre-exponential factors from the Arrhenius equation, where the pre-exponential factor dominates. We believe that the results presented here will assist in improved implementation of DNA tile based algorithmic self-assembly, DNA based molecular robotics, and other specific nucleic acid systems, and will provide guidance to design and assembly processes to improve overall yield and efficiency.
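The interplay of activation energy and pre-exponential factor described above follows directly from the Arrhenius equation k = A·exp(-Ea/RT); the numbers below are hypothetical, chosen only to show how a larger pre-exponential factor can outweigh a higher activation energy.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_rate(A, Ea, T):
    """Arrhenius rate constant k = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea / (R * T))

# Hypothetical values: the "rigid tile" has a higher activation energy
# but a much larger pre-exponential factor, and still reacts faster.
T = 298.15
k_rigid = arrhenius_rate(A=1e9, Ea=60e3, T=T)
k_flexible = arrhenius_rate(A=1e6, Ea=50e3, T=T)
```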
A Simulation of the ECSS Help Desk with the Erlang a Model
2011-03-01
a popular distribution is the exponential distribution as shown in Figure 3. Figure 3: Exponential Distribution (Bourke, 2001) Exponential...System Sciences, Vol 8, 235B. Bourke, P. (2001, January). Miscellaneous Functions. Retrieved January 22, 2011, from http://local.wasp.uwa.edu.au
Method for nonlinear exponential regression analysis
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1972-01-01
Two computer programs, developed according to two general types of exponential models, for conducting nonlinear exponential regression analysis are described. A least-squares procedure is used in which the nonlinear problem is linearized by expanding in a Taylor series. The programs are written in FORTRAN 5 for the Univac 1108 computer.
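The Taylor-series linearization described above amounts to Gauss-Newton iteration. A minimal Python sketch for the one-component model y = a·exp(b·x) follows (the original programs were FORTRAN; names and starting values here are illustrative).

```python
import math

def fit_exponential_gauss_newton(xs, ys, a, b, iterations=50):
    """Fit y = a*exp(b*x) by repeatedly linearizing the model in a
    Taylor series about the current estimates (Gauss-Newton)."""
    for _ in range(iterations):
        s11 = s12 = s22 = g1 = g2 = 0.0
        for x, y in zip(xs, ys):
            e = math.exp(b * x)
            r = y - a * e           # residual at current estimates
            j1, j2 = e, a * x * e   # partial derivatives wrt a and b
            s11 += j1 * j1
            s12 += j1 * j2
            s22 += j2 * j2
            g1 += j1 * r
            g2 += j2 * r
        det = s11 * s22 - s12 * s12  # 2x2 normal equations J'J d = J'r
        if abs(det) < 1e-30:
            break
        a += (s22 * g1 - s12 * g2) / det
        b += (s11 * g2 - s12 * g1) / det
    return a, b

# Synthetic noise-free data generated from a = 2.0, b = 0.7.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * math.exp(0.7 * x) for x in xs]
a_fit, b_fit = fit_exponential_gauss_newton(xs, ys, a=1.8, b=0.6)
```

Plain Gauss-Newton can diverge from poor starting values; production fits usually add step damping (Levenberg-Marquardt).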
NASA Astrophysics Data System (ADS)
Iskandar, I.
2018-03-01
The exponential distribution is the most widely used distribution in reliability analysis. It is well suited to representing the lengths of life in many cases and has a simple statistical form; its characteristic is a constant hazard rate, and it is a special case of the Weibull distribution. In this paper we introduce the basic notions that constitute an exponential competing risks model in reliability analysis using a Bayesian approach and present the analytic methods. The cases are limited to models with independent causes of failure, and a non-informative prior distribution is used in our analysis. The paper describes the likelihood function, followed by the posterior function and the estimation of the point, interval, hazard function, and reliability. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
Central Limit Theorem for Exponentially Quasi-local Statistics of Spin Models on Cayley Graphs
NASA Astrophysics Data System (ADS)
Reddy, Tulasi Ram; Vadlamani, Sreekar; Yogeshwaran, D.
2018-04-01
Central limit theorems for linear statistics of lattice random fields (including spin models) are usually proven under suitable mixing conditions or quasi-associativity. Many interesting examples of spin models do not satisfy mixing conditions, and on the other hand, it does not seem easy to show central limit theorem for local statistics via quasi-associativity. In this work, we prove general central limit theorems for local statistics and exponentially quasi-local statistics of spin models on discrete Cayley graphs with polynomial growth. Further, we supplement these results by proving similar central limit theorems for random fields on discrete Cayley graphs taking values in a countable space, but under the stronger assumptions of α-mixing (for local statistics) and exponential α-mixing (for exponentially quasi-local statistics). All our central limit theorems assume a suitable variance lower bound like many others in the literature. We illustrate our general central limit theorem with specific examples of lattice spin models and statistics arising in computational topology, statistical physics and random networks. Examples of clustering spin models include quasi-associated spin models with fast decaying covariances like the off-critical Ising model, level sets of Gaussian random fields with fast decaying covariances like the massive Gaussian free field and determinantal point processes with fast decaying kernels. Examples of local statistics include intrinsic volumes, face counts, and component counts of random cubical complexes, while exponentially quasi-local statistics include nearest neighbour distances in spin models and Betti numbers of sub-critical random cubical complexes.
Compensation of strong thermal lensing in high-optical-power cavities.
Zhao, C; Degallaix, J; Ju, L; Fan, Y; Blair, D G; Slagmolen, B J J; Gray, M B; Lowry, C M Mow; McClelland, D E; Hosken, D J; Mudge, D; Brooks, A; Munch, J; Veitch, P J; Barton, M A; Billingsley, G
2006-06-16
In an experiment to simulate the conditions in high-optical-power advanced gravitational wave detectors, we show for the first time that the time evolution of strong thermal lenses follows the predicted infinite sum of exponentials (well approximated by a double exponential), and that such lenses can be compensated using an intracavity compensation plate heated on its cylindrical surface. We show that a high finesse of approximately 1400 can be achieved in cavities with internal compensation plates, and that mode matching can be maintained. The experiment achieves a wavefront distortion similar to that expected for the input test mass substrate in the Advanced Laser Interferometer Gravitational-Wave Observatory, and shows that thermal compensation schemes are viable. It is also shown that the measurements allow a direct measurement of substrate optical absorption in the test mass and the compensation plate.
NASA Astrophysics Data System (ADS)
Sutawanir
2015-12-01
Mortality tables play an important role in actuarial studies such as life annuities, premium determination, premium reserves, pension plan valuation, and pension funding. Well-known mortality tables include the CSO mortality table, the Indonesian Mortality Table, the Bowers mortality table, and the Japan mortality table. For actuarial applications, tables are constructed under different environments such as single decrement, double decrement, and multiple decrement. There are two approaches to mortality table construction: a mathematical approach and a statistical approach. Distribution models and estimation theory are the statistical concepts used in mortality table construction, and this article discusses the statistical approach. The distributional assumptions are the uniform distribution of deaths (UDD) and constant force (exponential). Moment estimation and maximum likelihood are used to estimate the mortality parameter. Moment estimation methods are easier to manipulate than maximum likelihood estimation (MLE), but they do not use the complete mortality data, whereas maximum likelihood exploits all available information. Some MLE equations are complicated and must be solved numerically. The article focuses on single decrement estimation using moment and maximum likelihood estimation; an extension to double decrement is also introduced. A simple dataset is used to illustrate the mortality estimation and the resulting mortality table.
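Under the constant-force (exponential) assumption, the maximum likelihood estimate of the force of mortality has a well-known closed form: deaths divided by central exposure. A minimal sketch with made-up numbers:

```python
import math

def constant_force_mle(deaths, central_exposure):
    """MLE of the force of mortality mu under the constant-force
    (exponential) assumption: observed deaths / central exposure."""
    return deaths / central_exposure

def q_from_mu(mu):
    """One-year death probability implied by a constant force mu:
    q = 1 - exp(-mu)."""
    return 1.0 - math.exp(-mu)

# Hypothetical cell: 12 deaths over 1000 person-years of exposure.
mu_hat = constant_force_mle(deaths=12, central_exposure=1000.0)
q_hat = q_from_mu(mu_hat)
```

Note q_hat is always slightly below mu_hat, since 1 - exp(-x) < x for x > 0.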
Numerical Modeling and Testing of an Inductively-Driven and High-Energy Pulsed Plasma Thrusters
NASA Technical Reports Server (NTRS)
Parma, Brian
2004-01-01
Pulsed Plasma Thrusters (PPTs) are advanced electric space propulsion devices characterized by simplicity and robustness. They suffer, however, from low thrust efficiencies. This summer, two approaches to improving the thrust efficiency of PPTs will be investigated through both numerical modeling and experimental testing. The first approach, an inductively-driven PPT, uses a double-ignition circuit to fire two PPTs in succession. This effectively changes the PPT's configuration from an LRC circuit to an LR circuit. The LR circuit is expected to provide better impedance matching, improving the efficiency of the energy transfer to the plasma. An added benefit of the LR circuit is an exponential decay of the current, whereas a traditional PPT's underdamped LRC circuit experiences the characteristic "ringing" of its current. The exponential decay may provide improved lifetime and sustained electromagnetic acceleration. The second approach, a high-energy PPT, is a traditional PPT with a variable-size capacitor bank. This PPT will be simulated and tested at energy levels between 100 and 450 joules in order to investigate the relationship between efficiency and energy level. The Multiblock Arbitrary Coordinate Hydromagnetic (MACH2) code, designed by the Center for Plasma Theory and Computation at the Air Force Research Laboratory, is used; it has been applied to a variety of plasma problems, including electric plasma thrusters. The goals for this summer include numerical predictions of performance for both the inductively-driven PPT and the high-energy PPT, experimental validation of the numerical models, and numerical optimization of the designs. These goals will be met through numerical and experimental investigation of the PPTs' current waveforms, mass loss (ablation), and impulse bit characteristics.
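The contrast between the LR circuit's exponential decay and the LRC circuit's ringing can be sketched from the textbook discharge solutions; the component values below are arbitrary illustration values, not the thruster's.

```python
import math

def lr_current(t, i0, R, L):
    """Current in an LR discharge: pure exponential decay, no ringing."""
    return i0 * math.exp(-R * t / L)

def lrc_current(t, v0, R, L, C):
    """Current in an underdamped series LRC discharge: a decaying
    sinusoid, i.e. the characteristic 'ringing' (assumes R^2 < 4L/C)."""
    alpha = R / (2.0 * L)
    wd = math.sqrt(1.0 / (L * C) - alpha * alpha)  # damped frequency
    return (v0 / (wd * L)) * math.exp(-alpha * t) * math.sin(wd * t)

# The LR current decays monotonically and never reverses sign;
# the underdamped LRC current periodically goes negative.
i_lr = lr_current(t=5e-6, i0=1.0e4, R=0.03, L=1e-7)
```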
Method and apparatus for adaptive force and position control of manipulators
NASA Technical Reports Server (NTRS)
Seraji, Homayoun (Inventor)
1989-01-01
The present invention discloses systematic methods and apparatus for the design of real-time controllers. Real-time control employs adaptive force/position control using feedforward and feedback controllers; the feedforward controller is the inverse of the linearized model of robot dynamics and contains only proportional-double-derivative terms. The feedback controller, of the proportional-integral-derivative type, ensures that manipulator joints follow reference trajectories and achieves robust tracking of step-plus-exponential trajectories, all in real time. The adaptive controller includes adaptive force and position control within a hybrid control architecture. For force control, the adaptive controller achieves tracking of desired force setpoints, and the adaptive position controller accomplishes tracking of desired position trajectories. Circuits in the adaptive feedback and feedforward controllers are varied by adaptation laws.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cabayan, H.S.; Zicker, J.D.
The amplitudes of currents due to lightning are considerably larger than NEMP-induced currents in both the time and frequency domains. The more important quantity for aperture illumination is the rate of rise of the current. The analysis performed for this in this memorandum is unsatisfactory, since the artificial double-exponential model was used. Still, the lightning rate of rise is only twice as high as that due to NEMP even when the absolute worst (presently known) lightning pulse is used. A much better way to make this comparison is to use actual LEMP data and the NEMP from an actual weapon. Furthermore, because of a lack of data, no electric field analysis was undertaken.
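The "artificial double exponential model" criticized above is the classical surge waveform i(t) = I0·(e^(-αt) - e^(-βt)); the parameters below are generic illustration values, not the memorandum's LEMP or NEMP waveforms.

```python
import math

def double_exponential_pulse(t, i0, alpha, beta):
    """Classical double-exponential surge current (beta > alpha > 0):
    i(t) = i0 * (exp(-alpha*t) - exp(-beta*t))."""
    return i0 * (math.exp(-alpha * t) - math.exp(-beta * t))

def initial_rate_of_rise(i0, alpha, beta):
    """di/dt at t = 0, the quantity compared for aperture illumination:
    i0 * (beta - alpha)."""
    return i0 * (beta - alpha)

def time_to_peak(alpha, beta):
    """Time of peak current: ln(beta/alpha) / (beta - alpha)."""
    return math.log(beta / alpha) / (beta - alpha)

# Generic illustrative parameters (microsecond-scale rise, tens of
# microseconds decay), not a standardized waveform.
i_peak = double_exponential_pulse(time_to_peak(1.4e4, 6e6), 2e4, 1.4e4, 6e6)
```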
Mathematical Modeling of Extinction of Inhomogeneous Populations
Karev, G.P.; Kareva, I.
2016-01-01
Mathematical models of population extinction have a variety of applications in areas such as ecology, paleontology and conservation biology. Here we propose and investigate two types of sub-exponential models of population extinction. Unlike the more traditional exponential models, the life duration of sub-exponential models is finite. In the first model, the population is assumed to be composed of clones that are independent from each other. In the second model, we assume that the size of the population as a whole decreases according to the sub-exponential equation. We then investigate the "unobserved heterogeneity", i.e. the underlying inhomogeneous population model, and calculate the distribution of frequencies of clones for both models. We show that the dynamics of frequencies in the first model is governed by the principle of minimum Tsallis information loss. In the second model, the notion of "internal population time" is proposed; with respect to the internal time, the dynamics of frequencies is governed by the principle of minimum Shannon information loss. The results of this analysis show that the principle of minimum information loss is the underlying law for the evolution of a broad class of models of population extinction. Finally, we propose a possible application of this modeling framework to mechanisms underlying time perception. PMID:27090117
Using Exponential Smoothing to Specify Intervention Models for Interrupted Time Series.
ERIC Educational Resources Information Center
Mandell, Marvin B.; Bretschneider, Stuart I.
1984-01-01
The authors demonstrate how exponential smoothing can play a role in the identification of the intervention component of an interrupted time-series design model that is analogous to the role that the sample autocorrelation and partial autocorrelation functions serve in the identification of the noise portion of such a model. (Author/BW)
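Simple exponential smoothing, the tool the authors propose for identifying the intervention component, is a one-line recursion. A minimal sketch; the series and the smoothing constant are illustrative assumptions:

```python
def exp_smooth(series, alpha):
    """Simple exponential smoothing: s_t = alpha*x_t + (1-alpha)*s_{t-1}."""
    smoothed = [series[0]]          # initialize with the first observation
    for x in series[1:]:
        smoothed.append(alpha * x + (1.0 - alpha) * smoothed[-1])
    return smoothed

# A level shift (a crude "intervention") at t = 5 shows up as a gradual,
# geometrically decaying approach to the new level in the smoothed series:
data = [10.0] * 5 + [20.0] * 5
s = exp_smooth(data, alpha=0.3)
```

The lagged, gradual response of `s` after the shift is the kind of signature that can suggest the form of the intervention component in an interrupted time-series model.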
Double plasma resonance instability as a source of solar zebra emission
NASA Astrophysics Data System (ADS)
Benáček, J.; Karlický, M.
2018-03-01
Context. The double plasma resonance (DPR) instability plays a basic role in the generation of solar radio zebras. In a plasma consisting of hot electrons with a loss-cone distribution and a much denser, colder background plasma, this instability generates upper-hybrid waves, which are then transformed into electromagnetic waves and observed as radio zebras. Aims: In the present paper we numerically study the double plasma resonance instability from the point of view of the zebra interpretation. Methods: We use a 3-dimensional electromagnetic particle-in-cell (3D PIC) relativistic model. We use this model in two versions: (a) a spatially extended "multi-mode" model and (b) a spatially limited "specific-mode" model. While the multi-mode model is used for detailed computations and verification of the results obtained by the specific-mode model, the specific-mode model is used for computations over a broad range of model parameters, which considerably saves computational time. For the analysis of the computational results, we developed software tools in Python. Results: First, using the multi-mode model, we study the details of the double plasma resonance instability. We show how the distribution function of hot electrons changes during this instability. We then show that there is very good agreement between the results obtained by the multi-mode and specific-mode models, which is caused by the dominance of the wave with the maximal growth rate. Therefore, for computations over a broad range of model parameters, we use the specific-mode model. We compute the maximal growth rates of the double plasma resonance instability as a function of the ratio between the upper-hybrid frequency ωUH and the electron-cyclotron frequency ωce. We vary the temperatures of both the hot and background plasma components and study their effects on the resulting growth rates. The results are compared with the analytical ones.
We find a very good agreement between numerical and analytical growth rates. We also compute saturation energies of the upper-hybrid waves in a very broad range of parameters. We find that the saturation energies of the upper-hybrid waves show maxima and minima at almost the same values of ωUH/ωce as the growth rates, but with a higher contrast between them than the growth rate maxima and minima. The contrast between saturation energy maxima and minima increases when the temperature of hot electrons increases. Furthermore, we find that the saturation energy of the upper-hybrid waves is proportional to the density of hot electrons. The maximum saturated energy can be up to one percent of the kinetic energy of hot electrons. Finally we find that the saturation energy maxima in the interval of ωUH/ωce = 3-18 decrease according to the exponential function. All these findings can be used in the interpretation of solar radio zebras.
USDA-ARS?s Scientific Manuscript database
A new mechanistic growth model was developed to describe microbial growth under isothermal conditions. The new mathematical model was derived from the basic observation of bacterial growth that may include lag, exponential, and stationary phases. With this model, the lag phase duration and exponen...
Makrinich, Maria; Gupta, Rupal; Polenova, Tatyana; Goldbourt, Amir
The ability of various pulse types, which are commonly applied for distance measurements, to saturate or invert quadrupolar spin polarization has been compared by observing their effect on magnetization recovery curves under magic-angle spinning. A selective central-transition inversion pulse yields a bi-exponential recovery for a diamagnetic sample with a spin-3/2 nucleus, consistent with the existence of two processes: fluctuations of the electric field gradients with identical single-quantum (W1) and double-quantum (W2) quadrupolar-driven relaxation rates, and spin exchange between the central transition of one spin and satellite transitions of a dipolar-coupled similar spin. Using a phase-modulated pulse, developed for distance measurements in quadrupolar spins (Nimerovsky et al., JMR 244, 2014, 107-113) and suggested for achieving the complete saturation of all quadrupolar spin energy levels, a mono-exponential relaxation model fits the data, compatible with elimination of the spin exchange processes. Other pulses, such as an adiabatic pulse lasting one-third of a rotor period and a two-rotor-period-long continuous-wave pulse, both used for distance measurements under special experimental conditions, yield good fits to bi-exponential functions with varying coefficients and time constants due to variations in initial conditions. Those values are a measure of the extent of saturation obtained from these pulses. An empirical fit of the recovery curves to a stretched exponential function can provide general recovery times. A stretching parameter very close to unity, as obtained for the phase-modulated pulse but not for the other cases, suggests that in this case recovery times and longitudinal relaxation times are similar. The results are experimentally demonstrated for compounds containing 11B (spin-3/2) and 51V (spin-7/2).
We propose that accurate spin-lattice relaxation rates can be measured by a short phase-modulated pulse (<1-2 ms), similarly to the "true T1" measured by saturation with an asynchronous pulse train (Yesinowski, JMR 252, 2015, 135-144). Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Ozawa, T.; Miyagi, Y.
2017-12-01
Shinmoe-dake, located in SW Japan, erupted in January 2011, and lava accumulated in the crater (e.g., Ozawa and Kozono, EPS, 2013). The last Vulcanian eruption occurred in September 2011, and no eruption has occurred since. Miyagi et al. (GRL, 2014) analyzed TerraSAR-X and Radarsat-2 SAR data acquired after the last eruption and found continuous inflation in the crater. The inflation decayed with time but had not terminated by May 2013. Since the time series of the inflation volume change rate fitted well to an exponential function with a constant term, we suggested that lava extrusion had continued over the long term, owing to deflation of a shallow magma source and to magma supply from a deeper source. To investigate the deformation after that period, we applied InSAR to Sentinel-1 and ALOS-2 SAR data. The inflation decayed further and had almost terminated by the end of 2016, meaning that this deformation continued for more than five years after the last eruption. We have found that the time series of the inflation volume change rate fits better to a double-exponential function than to a single-exponential function with a constant term. The exponential component with the short time constant had almost settled within one year of the last eruption. Although an InSAR result from TerraSAR-X data of November 2011 and May 2013 indicated deflation of a shallow source under the crater, such deformation has not been obtained from recent SAR data, suggesting that this component was due to deflation of a shallow magma source with excess pressure. In this study, we found that the long-term component may also have decayed exponentially; this factor may be deflation of a deep source or delayed vesiculation.
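The competing models in the abstract above can be written down directly. A minimal sketch, in which all amplitudes and time constants are illustrative assumptions rather than the fitted values, showing that a short-time-constant component settles within about a year:

```python
import math

def single_exp(t, a, tau, c):
    """Single exponential with constant term: v(t) = a*exp(-t/tau) + c."""
    return a * math.exp(-t / tau) + c

def double_exp(t, a1, tau1, a2, tau2):
    """Double exponential: v(t) = a1*exp(-t/tau1) + a2*exp(-t/tau2)."""
    return a1 * math.exp(-t / tau1) + a2 * math.exp(-t / tau2)

# Assumed illustrative constants (time in years): a fast "shallow-source" term
# and a slow "deep-source" term.
A1, TAU1 = 1.0, 0.2
A2, TAU2 = 0.5, 2.0

# Fraction of the fast component remaining after one year:
fast_left = math.exp(-1.0 / TAU1)   # exp(-5), well under 1%
```

With a time constant of a few months the fast component is effectively gone after a year, while the slow component keeps the total signal decaying for years, which is the qualitative behavior the double-exponential fit captures.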
Magin, Richard L.; Li, Weiguo; Velasco, M. Pilar; Trujillo, Juan; Reiter, David A.; Morgenstern, Ashley; Spencer, Richard G.
2011-01-01
We present a fractional-order extension of the Bloch equations to describe anomalous NMR relaxation phenomena (T1 and T2). The model has solutions in the form of Mittag-Leffler and stretched exponential functions that generalize conventional exponential relaxation. Such functions have been shown by others to be useful for describing dielectric and viscoelastic relaxation in complex, heterogeneous materials. Here, we apply these fractional-order T1 and T2 relaxation models to experiments performed at 9.4 and 11.7 Tesla on type I collagen gels, chondroitin sulfate mixtures, and to bovine nasal cartilage (BNC), a largely isotropic and homogeneous form of cartilage. The results show that the fractional-order analysis captures important features of NMR relaxation that are typically described by multi-exponential decay models. We find that the T2 relaxation of BNC can be described in a unique way by a single fractional-order parameter (α), in contrast to the lack of uniqueness of multi-exponential fits in the realistic setting of a finite signal-to-noise ratio. No anomalous behavior of T1 was observed in BNC. In the single-component gels, for T2 measurements, increasing the concentration of the largest components of cartilage matrix, collagen and chondroitin sulfate, results in a decrease in α, reflecting a more restricted aqueous environment. The quality of the curve fits obtained using Mittag-Leffler and stretched exponential functions are in some cases superior to those obtained using mono- and bi-exponential models. In both gels and BNC, α appears to account for microstructural complexity in the setting of an altered distribution of relaxation times. This work suggests the utility of fractional-order models to describe T2 NMR relaxation processes in biological tissues. PMID:21498095
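The stretched exponential used above generalizes mono-exponential T2 decay with a single extra parameter α. A minimal sketch; the T2 value and α are illustrative assumptions:

```python
import math

def stretched_exp(t, t2, alpha):
    """Stretched-exponential relaxation S(t) = exp(-(t/t2)**alpha).

    alpha = 1 recovers conventional mono-exponential decay; alpha < 1
    decays faster at early times and slower at late times.
    """
    return math.exp(-((t / t2) ** alpha))

T2 = 30.0                                   # ms, assumed
mono      = stretched_exp(15.0, T2, 1.0)    # conventional decay at t = T2/2
stretched = stretched_exp(15.0, T2, 0.7)    # alpha < 1 signal at the same time
```

Note that at t = T2 both curves pass through 1/e regardless of α, so α reshapes the decay without redefining the characteristic time.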
Teaching the Verhulst Model: A Teaching Experiment in Covariational Reasoning and Exponential Growth
ERIC Educational Resources Information Center
Castillo-Garsow, Carlos
2010-01-01
Both Thompson and the duo of Confrey and Smith describe how students might be taught to build "ways of thinking" about exponential behavior by coordinating the covariation of two changing quantities; however, these authors build exponential behavior from different meanings of covariation. Confrey and Smith advocate beginning with discrete additive…
Review of "Going Exponential: Growing the Charter School Sector's Best"
ERIC Educational Resources Information Center
Garcia, David
2011-01-01
This Progressive Policy Institute report argues that charter schools should be expanded rapidly and exponentially. Citing exponential growth organizations, such as Starbucks and Apple, as well as the rapid growth of molds, viruses and cancers, the report advocates for similar growth models for charter schools. However, there is no explanation of…
Determination of the direction to a source of antineutrinos via inverse beta decay in Double Chooz
NASA Astrophysics Data System (ADS)
Nikitenko, Ya.
2016-11-01
Determining the direction to a source of neutrinos (and antineutrinos) is an important problem for the physics of supernovae and of the Earth. The direction to a source of antineutrinos can be estimated through the reaction of inverse beta decay. We show that the reactor neutrino experiment Double Chooz has unique capabilities for studying the antineutrino signal from point-like sources. Contemporary experimental data on antineutrino directionality are reviewed. A rigorous mathematical approach for neutrino direction studies has been developed. Exact expressions for the precision of the simple mean estimator of the neutrino direction, for normal and exponential distributions, have been obtained for a finite sample and for the limiting case of many events.
NASA Technical Reports Server (NTRS)
Isaacson, D.; Isaacson, E. L.; Paes-Leme, P. J.; Marchesin, D.
1981-01-01
Several methods for computing many eigenvalues and eigenfunctions of a single anharmonic oscillator Schroedinger operator whose potential may have one or two minima are described. One of the methods requires the solution of an ill-conditioned generalized eigenvalue problem. This method has the virtue of using a bounded amount of work to achieve a given accuracy in both the single and double well regions. Rigorous bounds are given, and it is proved that the approximations converge faster than any inverse power of the size of the matrices needed to compute them. The results of computations for the g:phi(4):1 theory are presented. These results indicate that the methods actually converge exponentially fast.
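The single- and double-well eigenvalue problem above can be approached numerically in many ways. The following is a finite-difference sketch, not the basis-set method of the paper, and the double-well potential is an illustrative assumption:

```python
import numpy as np

def oscillator_eigenvalues(potential, n_grid=400, x_max=8.0, n_eig=4):
    """Lowest eigenvalues of H = -0.5 d^2/dx^2 + V(x) on a uniform grid.

    Uses a 3-point finite-difference stencil for the kinetic term; accuracy
    improves as O(h^2) with the grid spacing h.
    """
    x = np.linspace(-x_max, x_max, n_grid)
    h = x[1] - x[0]
    main = np.full(n_grid, 1.0 / h**2)          # -0.5 * (-2/h^2) on the diagonal
    off = np.full(n_grid - 1, -0.5 / h**2)      # -0.5 * (1/h^2) off-diagonal
    hmat = np.diag(main + potential(x)) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(hmat)[:n_eig]     # ascending eigenvalues

# Sanity check against the harmonic oscillator, whose exact levels are 0.5, 1.5, 2.5, ...
harmonic = oscillator_eigenvalues(lambda x: 0.5 * x**2)

# An assumed double-well potential produces a nearly degenerate tunneling pair:
double_well = oscillator_eigenvalues(lambda x: 0.25 * (x**2 - 4.0) ** 2)
```

The near-degeneracy of the lowest double-well pair, compared with the unit spacing of the harmonic levels, is the qualitative feature that makes the double-well regime numerically delicate, as the abstract notes.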
Shneidman, Vitaly A
2009-10-28
A typical nucleation-growth process is considered: a system is quenched into a supersaturated state with a small critical radius r*(-) and is allowed to nucleate during a finite time interval t_n, after which the supersaturation is abruptly reduced to a fixed value with a larger critical radius r*(+). The size distribution of nucleated particles f(r,t) further evolves due to their deterministic growth and decay for r larger or smaller than r*(+), respectively. A general analytic expression for f(r,t) is obtained, and it is shown that after a large growth time t this distribution approaches an asymptotic shape determined by two dimensionless parameters: lambda, related to t_n, and Lambda = r*(+)/r*(-). This shape is strongly asymmetric, with exponential and double-exponential cutoffs at small and large sizes, respectively, and with a broad, near-flat top in the case of a long pulse. Conversely, for a short pulse the distribution acquires a distinct maximum at r = r_max(t) and approaches a universal shape exp[zeta - e^zeta], with zeta proportional to r - r_max, independent of the pulse duration. General asymptotic predictions are examined in terms of the Zeldovich-Frenkel nucleation model, where the entire transient behavior can be described in terms of the Lambert W function. Modifications for the Turnbull-Fisher model are also considered, and the analytics are compared with exact numerics. Results are expected to have direct implementations in the analysis of two-step annealing crystallization experiments, although other applications might be anticipated due to the universality of the nucleation pulse technique.
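The universal short-pulse shape exp[zeta - e^zeta] quoted above is easy to examine numerically. A minimal sketch:

```python
import math

def universal_shape(zeta):
    """Universal asymptotic size-distribution shape f(zeta) = exp(zeta - e**zeta).

    Exponential rise for zeta << 0, double-exponential cutoff for zeta >> 0.
    """
    return math.exp(zeta - math.exp(zeta))

# The maximum sits at zeta = 0, where d/dzeta (zeta - e^zeta) = 1 - e^zeta = 0:
peak = universal_shape(0.0)   # equals exp(-1)
```

Evaluating the two tails confirms the strong asymmetry the abstract describes: the distribution falls off far more steeply above the peak (double-exponential cutoff) than below it (plain exponential).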
McGinness, Colleen A; Slater, C Stewart; Savelski, Mariano J
2008-12-01
Pervaporation technology can effectively separate a tetrahydrofuran (THF) solvent-water waste stream at an azeotropic concentration. The performance of a Sulzer 2210 polyvinyl alcohol (PVA) membrane and a Pervatech BV silica membrane was studied as the operating variables, feed temperature and permeate pressure, were varied. The silica membrane was found to exhibit a flux of almost double that of the PVA membrane, but both membranes had comparable separation ability in purifying the solvent-water mixture. At benchmark feed conditions of 96 wt% THF and 4 wt% water, 50 degrees C and 10 torr permeate pressure, the silica membrane flux was 0.276 kg/m(2)hr and selectivity was 365. For both membranes, flux was found to increase at an exponential rate as the feed temperature increased from 20 to 60 degrees C. The flux through the silica membrane increases at a 6% faster rate than through the PVA membrane. Flux decreased as permeate pressure was increased from 5 to 25 torr for both membranes. The amount of water in the permeate decreased exponentially as the permeate pressure was increased, but increased linearly with increasing temperature. Optimum conditions for flux and selectivity are low permeate pressure and high feed temperature. When a small amount of salt is added to the feed solution, an increase in flux is observed. Overall models for flux and permeate concentration were created from the experimental data. The models were used to predict scale-up performance in separating an azeotropic feed waste to produce dehydrated THF solvent for reuse and a permeate stream with a dilute THF concentration.
McKellar, Robin C
2008-01-15
Developing accurate mathematical models to describe the pre-exponential lag phase in food-borne pathogens presents a considerable challenge to food microbiologists. While the growth rate is influenced by current environmental conditions, the lag phase is affected in addition by the history of the inoculum. A deeper understanding of physiological changes taking place during the lag phase would improve accuracy of models, and in earlier studies a strain of Pseudomonas fluorescens containing the Tn7-luxCDABE gene cassette regulated by the rRNA promoter rrnB P2 was used to measure the influence of starvation, growth temperature and sub-lethal heating on promoter expression and subsequent growth. The present study expands the models developed earlier to include a model which describes the change from exponential to linear increase in promoter expression with time when the exponential phase of growth commences. A two-phase linear model with Poisson weighting was used to estimate the lag (LPDLin) and the rate (RLin) for this linear increase in bioluminescence. The Spearman rank correlation coefficient (r=0.830) between the LPDLin and the growth lag phase (LPDOD) was extremely significant (P
The impacts of precipitation amount simulation on hydrological modeling in Nordic watersheds
NASA Astrophysics Data System (ADS)
Li, Zhi; Brissette, Fancois; Chen, Jie
2013-04-01
Stochastic modeling of daily precipitation is very important for hydrological modeling, especially when no observed data are available. Precipitation is usually modeled by a two-component model: occurrence generation and amount simulation. For occurrence simulation, the most common method is the first-order two-state Markov chain, due to its simplicity and good performance. However, various probability distributions have been reported to simulate precipitation amount, and spatiotemporal differences exist in the applicability of different distribution models. Therefore, assessing the applicability of different distribution models is necessary in order to provide more accurate precipitation information. Six precipitation probability distributions (exponential, Gamma, Weibull, skewed normal, mixed exponential, and hybrid exponential/Pareto distributions) are directly and indirectly evaluated on their ability to reproduce the original observed time series of precipitation amount. Data from 24 weather stations and two watersheds (Chute-du-Diable and Yamaska watersheds) in the province of Quebec (Canada) are used for this assessment. Various indices or statistics, such as the mean, variance, frequency distribution and extreme values, are used to quantify the performance in simulating the precipitation and discharge. Performance in reproducing key statistics of the precipitation time series is well correlated to the number of parameters of the distribution function, and the three-parameter precipitation models outperform the other models, with the mixed exponential distribution being the best at simulating daily precipitation. The advantage of using more complex precipitation distributions is not as clear-cut when the simulated time series are used to drive a hydrological model. While the advantage of using functions with more parameters is not nearly as obvious, the mixed exponential distribution appears nonetheless as the best candidate for hydrological modeling.
The implications of choosing a distribution function with respect to hydrological modeling and climate change impact studies are also discussed.
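The best-performing three-parameter model above, the mixed exponential, is simple to sample from. A minimal sketch; the mixture weight and the two component means are illustrative assumptions, not fitted values:

```python
import random

def sample_mixed_exponential(p, beta1, beta2, rng):
    """Draw one amount from the mixture p*Exp(mean beta1) + (1-p)*Exp(mean beta2)."""
    beta = beta1 if rng.random() < p else beta2
    return rng.expovariate(1.0 / beta)   # expovariate takes the rate = 1/mean

rng = random.Random(42)
P, B1, B2 = 0.3, 1.0, 10.0               # assumed: small-event and large-event mean amounts
draws = [sample_mixed_exponential(P, B1, B2, rng) for _ in range(20000)]

theoretical_mean = P * B1 + (1 - P) * B2     # = 7.3 for these values
sample_mean = sum(draws) / len(draws)
```

The extra parameters let the mixture carry both a dense population of small events and a heavier tail of large ones, which is why it reproduces extremes better than a single exponential.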
NASA Astrophysics Data System (ADS)
Lengline, O.; Marsan, D.; Got, J.; Pinel, V.
2007-12-01
The evolution of the seismicity at three basaltic volcanoes (Kilauea, Mauna Loa and Piton de la Fournaise) is analysed during phases of magma accumulation. We show that the VT seismicity during these time periods is characterized by an exponential increase at long time scales (years). Such an exponential acceleration can be explained by a model of seismicity forced by the replenishment of a magmatic reservoir. The increase in stress in the edifice caused by this replenishment is modeled. This stress history leads to a cumulative number of damage events, i.e., VT earthquakes, following the same exponential increase as found for the seismicity. A long-term seismicity precursor is thus detected at basaltic volcanoes. Although this precursory signal is not able to predict the onset times of future eruptions (as no diverging point is present in the model), it may help mitigate volcanic hazards.
Multiserver Queueing Model subject to Single Exponential Vacation
NASA Astrophysics Data System (ADS)
Vijayashree, K. V.; Janani, B.
2018-04-01
A multi-server queueing model subject to a single exponential vacation is considered. Arrivals join the queue according to a Poisson process, and service takes place according to an exponential distribution. Whenever the system becomes empty, all the servers go on vacation and return after a fixed interval of time. The servers then start providing service if there are waiting customers; otherwise they wait for the busy period to begin. The vacation times are also assumed to be exponentially distributed. In this paper, the stationary and transient probabilities for the number of customers during the idle and functional states of the servers are obtained explicitly. Numerical illustrations are added to visualize the effect of various parameters.
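Ignoring the vacation mechanism, the underlying multi-server baseline is the standard M/M/c queue, whose stationary empty-system probability is in closed form. A minimal sketch of that baseline (the vacation extension studied in the paper modifies these probabilities, and the parameter values are illustrative assumptions):

```python
import math

def mmc_stationary_p0(lam, mu, c):
    """Probability of an empty M/M/c queue (no vacations).

    Offered load a = lam/mu, utilization rho = a/c, stable for rho < 1.
    """
    a = lam / mu
    rho = a / c
    assert rho < 1, "queue must be stable"
    tail = (a ** c) / (math.factorial(c) * (1.0 - rho))
    return 1.0 / (sum(a ** k / math.factorial(k) for k in range(c)) + tail)

# For c = 1 this reduces to the familiar M/M/1 result P0 = 1 - rho:
p0_single = mmc_stationary_p0(lam=1.0, mu=2.0, c=1)   # = 0.5
p0_multi  = mmc_stationary_p0(lam=3.0, mu=2.0, c=3)
```

The single-server reduction is a convenient sanity check before layering on the vacation dynamics.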
Vadeby, Anna; Forsman, Åsa
2017-06-01
This study investigated the effect of applying two aggregated models (the Power model and the Exponential model) to individual vehicle speeds instead of mean speeds. This is of particular interest when the measure introduced affects different parts of the speed distribution differently. The aim was to examine how the estimated overall risk was affected when assuming the models are valid on an individual vehicle level. Speed data from two applications of speed measurements were used in the study: an evaluation of movable speed cameras and a national evaluation of new speed limits in Sweden. The results showed that when applied on individual vehicle speed level compared with aggregated level, there was essentially no difference between these for the Power model in the case of injury accidents. However, for fatalities the difference was greater, especially for roads with new cameras where those driving fastest reduced their speed the most. For the case with new speed limits, the individual approach estimated a somewhat smaller effect, reflecting that changes in the 15th percentile (P15) were somewhat larger than changes in P85 in this case. For the Exponential model there was also a clear, although small, difference between applying the model to mean speed changes and individual vehicle speed changes when speed cameras were used. This applied both for injury accidents and fatalities. There were also larger effects for the Exponential model than for the Power model, especially for injury accidents. In conclusion, applying the Power or Exponential model to individual vehicle speeds is an alternative that provides reasonable results in relation to the original Power and Exponential models, but more research is needed to clarify the shape of the individual risk curve. It is not surprising that the impact on severe traffic crashes was larger in situations where those driving fastest reduced their speed the most. 
Further investigations on use of the Power and/or the Exponential model at individual vehicle level would require more data on the individual level from a range of international studies. Copyright © 2017 Elsevier Ltd. All rights reserved.
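The aggregate-versus-individual distinction studied above can be reproduced in a few lines. In this sketch, one plausible reading of "applying the Power model to individual vehicle speeds" is averaging a per-vehicle risk proxy v^e; the exponent of 4 and the speed data are invented illustrative assumptions in which the fastest drivers slow the most, not the paper's data or exact method:

```python
EXPONENT = 4   # assumed Power-model exponent for fatalities, illustrative only

before = [80.0, 90.0, 100.0, 110.0]   # km/h, invented
after  = [78.0, 86.0,  94.0, 100.0]   # fastest vehicles reduce speed the most

def mean(xs):
    return sum(xs) / len(xs)

# Aggregate application: raise the ratio of mean speeds to the exponent.
aggregate = (mean(after) / mean(before)) ** EXPONENT

# Individual application: average the per-vehicle risk proxy v**e before and after.
individual = mean([v ** EXPONENT for v in after]) / mean([v ** EXPONENT for v in before])
```

Because the fastest vehicles dominate the v^4 average, the individual-level calculation predicts a larger fatality reduction here than the aggregate one, mirroring the camera result reported in the abstract.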
NASA Technical Reports Server (NTRS)
Sen, Syamal K.; AliShaykhian, Gholam
2010-01-01
We present a simple multi-dimensional exhaustive search method to obtain, in a reasonable time, the optimal solution of a nonlinear programming problem. It is more relevant in the present-day non-mainframe computing scenario, where an estimated 95% of computing resources remain unutilized and computing speed touches petaflops. While processor speed is doubling every 18 months, bandwidth is doubling every 12 months, and hard disk space is doubling every 9 months. A randomized search algorithm or, equivalently, an evolutionary search method is often used instead of an exhaustive search algorithm. The reason is that a randomized approach is usually polynomial-time, i.e., fast, while an exhaustive search method is exponential-time, i.e., slow. We discuss the increasing importance of exhaustive search in optimization with the steady increase of computing power for solving many real-world problems of reasonable size. We also discuss the computational error and complexity of the search algorithm, focusing on the fact that no measuring device can usually measure a quantity with an accuracy greater than 0.005%. We stress the fact that the quality of solution of exhaustive search - a deterministic method - is better than that of randomized search. In the 21st-century computing environment, exhaustive search cannot be set aside as untouchable, and it is not always exponential. We also describe a possible application of these algorithms in improving the efficiency of solar cells - a real hot topic - in the current energy crisis. These algorithms could be excellent tools in the hands of experimentalists and could not only save a large amount of time needed for experiments but also quickly validate theory against experimental results.
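A multi-dimensional exhaustive (grid) search of the kind advocated above fits in a few lines. A minimal sketch on an invented two-variable objective; the function and grid are illustrative assumptions:

```python
def exhaustive_search(f, lo, hi, step):
    """Exhaustively evaluate f on a 2-D grid and return (best_x, best_y, best_value)."""
    best = (None, None, float("inf"))
    n = int(round((hi - lo) / step))
    for i in range(n + 1):
        x = lo + i * step
        for j in range(n + 1):
            y = lo + j * step
            v = f(x, y)
            if v < best[2]:
                best = (x, y, v)
    return best

# Invented smooth objective with its minimum at (1.2, -0.7):
f = lambda x, y: (x - 1.2) ** 2 + (y + 0.7) ** 2
bx, by, bv = exhaustive_search(f, -3.0, 3.0, 0.1)
```

The cost grows as (range/step)^d in d dimensions, which is exactly the exponential-time behavior the abstract weighs against the deterministic solution-quality guarantee.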
Predator prey oscillations in a simple cascade model of drift wave turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berionni, V.; Guercan, Oe. D.
2011-11-15
A reduced three-shell limit of a simple cascade model of drift wave turbulence, which emphasizes nonlocal interactions with a large-scale mode, is considered. It is shown both to describe the well-known predator-prey dynamics between drift waves and zonal flows and to reduce to the standard three-wave interaction equations. Here, this model is considered as a dynamical system whose characteristics are investigated. The analytical solutions for the purely nonlinear limit are given in terms of the Jacobi elliptic functions. An approximate analytical solution involving Jacobi elliptic functions and exponential growth is computed using scale separation for the case of unstable solutions that are observed when the energy injection rate is high. The fixed points of the system are determined, and the behavior around these fixed points is studied. The system is shown to display periodic solutions corresponding to limit cycle oscillations, apparently chaotic phase space orbits, as well as unstable solutions that grow slowly while oscillating rapidly. The period-doubling route to chaos is examined.
An infrastructure for accurate characterization of single-event transients in digital circuits.
Savulimedu Veeravalli, Varadan; Polzer, Thomas; Schmid, Ulrich; Steininger, Andreas; Hofbauer, Michael; Schweiger, Kurt; Dietrich, Horst; Schneider-Hornstein, Kerstin; Zimmermann, Horst; Voss, Kay-Obbe; Merk, Bruno; Hajek, Michael
2013-11-01
We present the architecture and a detailed pre-fabrication analysis of a digital measurement ASIC facilitating long-term irradiation experiments of basic asynchronous circuits, which also demonstrates the suitability of the general approach for obtaining accurate radiation failure models developed in our FATAL project. Our ASIC design combines radiation targets like Muller C-elements and elastic pipelines as well as standard combinational gates and flip-flops with an elaborate on-chip measurement infrastructure. Major architectural challenges result from the fact that the latter must operate reliably under the same radiation conditions the target circuits are exposed to, without wasting precious die area for a rad-hard design. A measurement architecture based on multiple non-rad-hard counters is used, which we show to be resilient against double faults, as well as many triple and even higher-multiplicity faults. The design evaluation is done by means of comprehensive fault injection experiments, which are based on detailed Spice models of the target circuits in conjunction with a standard double-exponential current injection model for single-event transients (SET). To be as accurate as possible, the parameters of this current model have been aligned with results obtained from 3D device simulation models, which have in turn been validated and calibrated using micro-beam radiation experiments at the GSI in Darmstadt, Germany. For the latter, target circuits instrumented with high-speed sense amplifiers have been used for analog SET recording. Together with a probabilistic analysis of the sustainable particle flow rates, based on a detailed area analysis and experimental cross-section data, we can conclude that the proposed architecture will indeed sustain significant target hit rates, without exceeding the resilience bound of the measurement infrastructure.
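The standard double-exponential SET injection model referenced above is normalized so that its time integral equals the collected charge. A minimal sketch; the charge and time constants are illustrative assumptions, not the FATAL project's calibrated values:

```python
import math

def set_current(t, q, tau_fall, tau_rise):
    """Double-exponential SET injection current whose time integral equals q."""
    return q / (tau_fall - tau_rise) * (math.exp(-t / tau_fall) - math.exp(-t / tau_rise))

Q = 150e-15                             # 150 fC collected charge, assumed
TAU_FALL, TAU_RISE = 200e-12, 50e-12    # seconds, assumed

# Trapezoidal check that the waveform integrates back to Q:
dt, t_end = 1e-12, 4e-9
ts = [i * dt for i in range(int(t_end / dt) + 1)]
charge = sum(dt * 0.5 * (set_current(a, Q, TAU_FALL, TAU_RISE)
                         + set_current(a + dt, Q, TAU_FALL, TAU_RISE))
             for a in ts[:-1])
```

This normalization is what lets fault-injection campaigns sweep collected charge directly while device-level simulation fixes the rise and fall time constants.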
Walter, N G; Strunk, G
1994-01-01
Strand displacement amplification is an isothermal DNA amplification reaction based on a restriction endonuclease nicking its recognition site and a polymerase extending the nick at its 3' end, displacing the downstream strand. The reaction resembles rolling-circle replication of single-stranded phages and small plasmids. The displaced sense strand serves as target for an antisense reaction and vice versa, resulting in exponential growth and the autocatalytic nature of this in vitro reaction as long as the template is the limiting agent. We describe the optimization of strand displacement amplification for in vitro evolution experiments under serial transfer conditions. The reaction was followed and controlled by use of the fluorescent dye thiazole orange binding to the amplified DNA. We were able to maintain exponential growth conditions with a doubling time of 3.0 min throughout 100 transfers or approximately 350 molecular generations by using an automatic handling device. Homology of in vitro amplification with rolling-circle replication was mirrored by the occurring evolutionary processes. Deletion events most likely caused by a slipped mispairing mechanism as postulated for in vivo replication took place. Under our conditions, the mutation rate was high and a molecular quasi-species formed with a mutant lacking internal hairpin formation ability and thus outgrowing all other species under dGTP/dCTP deficiency. PMID:8058737
Long-range electrostatic screening in ionic liquids
Gebbie, Matthew A.; Dobbs, Howard A.; Valtiner, Markus; Israelachvili, Jacob N.
2015-01-01
Electrolyte solutions with high concentrations of ions are prevalent in biological systems and energy storage technologies. Nevertheless, the high interaction free energy and long-range nature of electrostatic interactions makes the development of a general conceptual picture of concentrated electrolytes a significant challenge. In this work, we study ionic liquids, single-component liquids composed solely of ions, in an attempt to provide a novel perspective on electrostatic screening in very high concentration (nonideal) electrolytes. We use temperature-dependent surface force measurements to demonstrate that the long-range, exponentially decaying diffuse double-layer forces observed across ionic liquids exhibit a pronounced temperature dependence: Increasing the temperature decreases the measured exponential (Debye) decay length, implying an increase in the thermally driven effective free-ion concentration in the bulk ionic liquids. We use our quantitative results to propose a general model of long-range electrostatic screening in ionic liquids, where thermally activated charge fluctuations, either free ions or correlated domains (quasiparticles), take on the role of ions in traditional dilute electrolyte solutions. This picture represents a crucial step toward resolving several inconsistencies surrounding electrostatic screening and charge transport in ionic liquids that have impeded progress within the interdisciplinary ionic liquids community. More broadly, our work provides a previously unidentified way of envisioning highly concentrated electrolytes, with implications for diverse areas of inquiry, ranging from designing electrochemical devices to rationalizing electrostatic interactions in biological systems. PMID:26040001
Haslinger, Robert; Pipa, Gordon; Brown, Emery
2010-10-01
One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time-rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model's spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies on assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time-rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time-rescaling theorem that analytically corrects for the effects of finite resolution. This allows us to define a rescaled time that is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting generalized linear models to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false-positive rate of the KS test and greatly increasing the reliability of model evaluation based on the time-rescaling theorem.
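The continuous-time rescaling test described above can be sketched for the simplest case of a homogeneous Poisson model; this is a minimal illustration with a constant intensity, whereas real use integrates the fitted model's conditional intensity between spikes:

```python
import math
import random

def rescale_isis(spikes, rate):
    """Rescale ISIs by the (here constant) conditional intensity; under a correct
    model the rescaled intervals are Exp(1)."""
    return [rate * (b - a) for a, b in zip(spikes, spikes[1:])]

def ks_stat_exponential(taus):
    """One-sample KS statistic against the unit-exponential CDF 1 - exp(-tau)."""
    taus = sorted(taus)
    n = len(taus)
    d = 0.0
    for i, t in enumerate(taus):
        cdf = 1.0 - math.exp(-t)
        d = max(d, abs(cdf - i / n), abs(cdf - (i + 1) / n))
    return d

random.seed(0)
# Homogeneous Poisson spikes at 20 Hz: ISIs are Exp(20), so rescaled times are Exp(1).
t, spikes = 0.0, []
for _ in range(2000):
    t += random.expovariate(20.0)
    spikes.append(t)
d = ks_stat_exponential(rescale_isis(spikes, 20.0))
```

With a correct model the KS statistic stays within the usual confidence bounds (about 1.36/sqrt(n) at the 5% level); the discrete-time corrections proposed in the abstract address the bias that appears when time is binned.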
Automatic selection of arterial input function using tri-exponential models
NASA Astrophysics Data System (ADS)
Yao, Jianhua; Chen, Jeremy; Castro, Marcelo; Thomasson, David
2009-02-01
Dynamic Contrast Enhanced MRI (DCE-MRI) is one method for drug and tumor assessment. Selecting a consistent arterial input function (AIF) is necessary to calculate tissue and tumor pharmacokinetic parameters in DCE-MRI. This paper presents an automatic and robust method to select the AIF. The first stage is artery detection and segmentation, where knowledge about artery structure and the dynamic signal intensity temporal properties of DCE-MRI is employed. The second stage is AIF model fitting and selection. A tri-exponential model is fitted to every candidate AIF using the Levenberg-Marquardt method, and the best-fitted AIF is selected. Our method has been applied to DCE-MRIs of four different body parts: breast, brain, liver and prostate. The success rate in artery segmentation for 19 cases was 89.6% ± 15.9%. The pharmacokinetic parameters computed from the automatically selected AIFs are highly correlated with those from manually determined AIFs (R2=0.946, P(T<=t)=0.09). Our imaging-based tri-exponential AIF model demonstrated a significant improvement over a previously proposed bi-exponential model.
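The model-selection step can be illustrated with a toy residual comparison. The tri-exponential form and all parameter values below are assumptions for illustration; a real implementation would minimize the sum of squared residuals with Levenberg-Marquardt, as the abstract describes:

```python
import math

def aif_tri_exp(t, amps, rates):
    """Tri-exponential AIF: a sum of three decaying exponential terms."""
    return sum(A * math.exp(-r * t) for A, r in zip(amps, rates))

def sse(params, ts, ys):
    """Sum of squared residuals, the quantity a Levenberg-Marquardt fit minimizes."""
    amps, rates = params
    return sum((aif_tri_exp(t, amps, rates) - y) ** 2 for t, y in zip(ts, ys))

ts = [0.1 * k for k in range(100)]
true = ((5.0, 2.0, 0.5), (3.0, 0.8, 0.05))   # illustrative amplitudes and rates
ys = [aif_tri_exp(t, *true) for t in ts]

# The best-fitted candidate AIF is the one with the lowest residual error.
candidates = [((4.0, 2.0, 0.5), (3.0, 0.8, 0.05)), true]
best = min(candidates, key=lambda p: sse(p, ts, ys))
```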
Comparison of kinetic model for biogas production from corn cob
NASA Astrophysics Data System (ADS)
Shitophyta, L. M.; Maryudi
2018-04-01
Energy demand increases every day, while energy sources, especially fossil fuels, are increasingly depleted. One solution to this depletion is to provide renewable energies such as biogas. Biogas can be generated from corn cob and food waste. In this study, biogas production was carried out by solid-state anaerobic digestion. The steps of biogas production were the preparation of the feedstock, the solid-state anaerobic digestion, and the measurement of biogas volume. The study was conducted at total solids (TS) contents of 20%, 22%, and 24%. The aim of this research was to compare kinetic models of biogas production from corn cob with food waste as a co-digestate, using the linear equation, the exponential equation, and the first-kinetic model. The results showed that the exponential equation had a better correlation than the linear equation on the ascending part of the biogas production curve. On the contrary, the linear equation had a better correlation than the exponential equation on the descending part. The correlation values of the first-kinetic model were the smallest compared to the linear and exponential models.
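A minimal sketch of this kind of model comparison on the ascending part of the curve, using synthetic cumulative volumes generated from a saturating exponential (all parameter values are illustrative, not from the study):

```python
import math

def r_squared(ys, fits):
    """Coefficient of determination used to compare candidate kinetic models."""
    mean = sum(ys) / len(ys)
    ss_res = sum((y - f) ** 2 for y, f in zip(ys, fits))
    ss_tot = sum((y - mean) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

# Synthetic cumulative biogas volumes on the ascending part of the curve.
days = list(range(1, 11))
vol = [12.0 * (1.0 - math.exp(-0.25 * d)) for d in days]

# Exponential (first-order kinetic) model evaluated at the generating parameters.
exp_fit = [12.0 * (1.0 - math.exp(-0.25 * d)) for d in days]

# Straight-line least-squares fit (closed form).
n, sx, sy = len(days), sum(days), sum(vol)
sxx = sum(d * d for d in days)
sxy = sum(d * v for d, v in zip(days, vol))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n
lin_fit = [slope * d + intercept for d in days]

r2_exp, r2_lin = r_squared(vol, exp_fit), r_squared(vol, lin_fit)
```

On data generated by the exponential model, the exponential fit necessarily attains the higher R², mirroring the comparison reported for the ascending part of the curve.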
NASA Astrophysics Data System (ADS)
Ma, Xiao; Zheng, Wei-Fan; Jiang, Bao-Shan; Zhang, Ji-Ye
2016-10-01
With the development of traffic systems, some issues such as traffic jams become more and more serious. Efficient traffic flow theory is needed to guide the overall controlling, organizing and management of traffic systems. On the basis of the cellular automata model and the traffic flow model with look-ahead potential, a new cellular automata traffic flow model with negative exponential weighted look-ahead potential is presented in this paper. By introducing the negative exponential weighting coefficient into the look-ahead potential and endowing the potential of vehicles closer to the driver with a greater coefficient, the modeling process is more suitable for the driver’s random decision-making process which is based on the traffic environment that the driver is facing. The fundamental diagrams for different weighting parameters are obtained by using numerical simulations which show that the negative exponential weighting coefficient has an obvious effect on high density traffic flux. The complex high density non-linear traffic behavior is also reproduced by numerical simulations. Project supported by the National Natural Science Foundation of China (Grant Nos. 11572264, 11172247, 11402214, and 61373009).
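As a rough illustration of the idea, the sketch below couples a Nagel-Schreckenberg-style cellular automaton to a negative-exponentially weighted look-ahead potential, so that occupancy closer to the driver contributes more to the slowdown bias. The coupling form and every parameter value are assumptions, not the model of the paper:

```python
import math
import random

def lookahead_potential(road, i, lam=0.5, horizon=10):
    """Negative-exponentially weighted occupancy ahead of cell i on a ring road:
    vehicles closer to the driver get a larger weight exp(-lam*k)."""
    L = len(road)
    return sum(math.exp(-lam * k) for k in range(1, horizon + 1)
               if road[(i + k) % L] >= 0)

def step(road, vmax=5, p0=0.1):
    """One parallel NaSch-style update; the braking probability grows with the
    look-ahead potential (an assumed, illustrative coupling)."""
    L, new = len(road), [-1] * len(road)
    for i, v in enumerate(road):
        if v < 0:                 # empty cell
            continue
        gap = next(k for k in range(1, L + 1) if road[(i + k) % L] >= 0) - 1
        v = min(v + 1, vmax, gap)                     # accelerate, avoid collision
        p = min(1.0, p0 * (1.0 + lookahead_potential(road, i)))
        if v > 0 and random.random() < p:             # potential-biased slowdown
            v -= 1
        new[(i + v) % L] = v
    return new

random.seed(1)
road = [-1] * 100
for pos in range(0, 40, 2):       # 20 vehicles, initial density 0.2
    road[pos] = 0
for _ in range(200):
    road = step(road)
density = sum(v >= 0 for v in road) / len(road)
```

Vehicle number is conserved because each car moves at most its gap, so a fundamental diagram can be traced by rerunning at different initial densities.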
[Application of exponential smoothing method in prediction and warning of epidemic mumps].
Shi, Yun-ping; Ma, Jia-qi
2010-06-01
To analyze daily data on epidemic mumps in a province from 2004 to 2008 and to set up an exponential smoothing model for prediction. Epidemic mumps in 2008 were predicted and warned against by calculating a 7-day moving summation (removing the effect of weekends) of the daily reported mumps cases during 2005-2008 and applying exponential smoothing to the data from 2005 to 2007. The performance of Holt-Winters exponential smoothing was good: the warning sensitivity was 76.92%, the specificity was 83.33%, and the timely rate was 80%. It is practicable to use the exponential smoothing method to warn against epidemic mumps.
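A minimal sketch of the two steps described above, a 7-day moving summation followed by exponential smoothing. For brevity Holt's non-seasonal (double) variant is shown rather than the full seasonal Holt-Winters model, and the smoothing constants are illustrative, not the values fitted in the study:

```python
def moving_sum(xs, window=7):
    """7-day moving summation, damping the weekend reporting dip in daily counts."""
    return [sum(xs[i - window + 1:i + 1]) for i in range(window - 1, len(xs))]

def holt_forecast(xs, alpha=0.3, beta=0.1):
    """Holt's (double) exponential smoothing; returns one-step-ahead forecasts."""
    level, trend = xs[0], xs[1] - xs[0]
    forecasts = []
    for x in xs[1:]:
        forecasts.append(level + trend)                       # forecast before seeing x
        new_level = alpha * x + (1 - alpha) * (level + trend) # smooth the level
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    return forecasts
```

A warning threshold can then be placed on the gap between observed counts and the one-step-ahead forecasts; on a perfectly linear series the forecasts reproduce the observations exactly.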
Huang, Zhihao; Zhao, Junfei; Wang, Zimu; Meng, Fanying; Ding, Kunshan; Pan, Xiangqiang; Zhou, Nianchen; Li, Xiaopeng; Zhang, Zhengbiao; Zhu, Xiulin
2017-10-23
Orthogonal maleimide and thiol deprotections were combined with thiol-maleimide coupling to synthesize discrete oligomers/macromolecules on a gram scale with molecular weights up to 27.4 kDa (128mer, 7.9 g) using an iterative exponential growth strategy with a degree of polymerization (DP) of 2^n − 1. Using the same chemistry, a "readable" sequence-defined oligomer and a discrete cyclic topology were also created. Furthermore, uniform dendrons were fabricated using sequential growth (DP = 2^n − 1) or double exponential dendrimer growth approaches (DP = 2^(2^n) − 1) with significantly accelerated growth rates. A versatile, efficient, and metal-free method for construction of discrete oligomers with tailored structures and a high growth rate would greatly facilitate research into the structure-property relationships of sophisticated polymeric materials. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Ultrafast hole carrier relaxation dynamics in p-type CuO nanowires
2011-01-01
Ultrafast hole carrier relaxation dynamics in CuO nanowires have been investigated using transient absorption spectroscopy. Following femtosecond pulse excitation in a non-collinear pump-probe configuration, a combination of non-degenerate transmission and reflection measurements reveals initial ultrafast state-filling dynamics independent of the probing photon energy. This behavior is attributed to the occupation of states by photo-generated carriers in the intrinsic hole region of the p-type CuO nanowires located near the top of the valence band. Intensity measurements indicate an upper fluence threshold of 40 μJ/cm² where carrier relaxation is mainly governed by the hole dynamics. The fast relaxation of the photo-generated carriers was determined to follow a double exponential decay with time constants of 0.4 ps and 2.1 ps. Furthermore, time-correlated single photon counting measurements provide evidence of three exponential relaxation channels on the nanosecond timescale. PMID:22151927
Scalable synthesis of sequence-defined, unimolecular macromolecules by Flow-IEG
Leibfarth, Frank A.; Johnson, Jeremiah A.; Jamison, Timothy F.
2015-01-01
We report a semiautomated synthesis of sequence and architecturally defined, unimolecular macromolecules through a marriage of multistep flow synthesis and iterative exponential growth (Flow-IEG). The Flow-IEG system performs three reactions and an in-line purification in a total residence time of under 10 min, effectively doubling the molecular weight of an oligomeric species in an uninterrupted reaction sequence. Further iterations using the Flow-IEG system enable an exponential increase in molecular weight. Incorporating a variety of monomer structures and branching units provides control over polymer sequence and architecture. The synthesis of a uniform macromolecule with a molecular weight of 4,023 g/mol is demonstrated. The user-friendly nature, scalability, and modularity of Flow-IEG provide a general strategy for the automated synthesis of sequence-defined, unimolecular macromolecules. Flow-IEG is thus an enabling tool for theory validation, structure–property studies, and advanced applications in biotechnology and materials science. PMID:26269573
Policy Effects in Hyperbolic vs. Exponential Models of Consumption and Retirement
Gustman, Alan L.; Steinmeier, Thomas L.
2012-01-01
This paper constructs a structural retirement model with hyperbolic preferences and uses it to estimate the effect of several potential Social Security policy changes. Estimated effects of policies are compared using two models, one with hyperbolic preferences and one with standard exponential preferences. Sophisticated hyperbolic discounters may accumulate substantial amounts of wealth for retirement. We find it is frequently difficult to distinguish empirically between models with the two types of preferences on the basis of asset accumulation paths or consumption paths around the period of retirement. Simulations suggest that, despite the much higher initial time preference rate, individuals with hyperbolic preferences may actually value a real annuity more than individuals with exponential preferences who have accumulated roughly equal amounts of assets. This appears to be especially true for individuals with relatively high time preference rates or who have low assets for whatever reason. This affects the tradeoff between current benefits and future benefits on which many of the retirement incentives of the Social Security system rest. Simulations involving increasing the early entitlement age and increasing the delayed retirement credit do not show a great deal of difference whether exponential or hyperbolic preferences are used, but simulations for eliminating the earnings test show a non-trivially greater effect when exponential preferences are used. PMID:22711946
NASA Astrophysics Data System (ADS)
Ernazarov, K. K.
2017-12-01
We consider an (m + 2)-dimensional Einstein-Gauss-Bonnet (EGB) model with a cosmological Λ-term. We restrict the metrics to diagonal ones and find, for a certain Λ = Λ(m), a class of cosmological solutions with non-exponential time dependence of the two scale factors of dimensions m > 2 and 1. Any solution from this class describes an accelerated expansion of the m-dimensional subspace and tends asymptotically to an isotropic solution with exponential dependence of the scale factors.
A method for nonlinear exponential regression analysis
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1971-01-01
A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least-squares procedure, wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve-fitting procedure for determining the initial nominal estimates of the unknown exponential model parameters is included as an integral part of the technique. A correction matrix is derived and applied to the nominal estimates to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
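The scheme described above can be sketched for the single-decay model y = a·exp(−b·t): a log-linear least-squares fit supplies the nominal estimates, and Gauss-Newton corrections from the linearized normal equations then refine them. The data below are synthetic and noise-free, purely for illustration:

```python
import math

def exp_fit(ts, ys, iters=10):
    """Fit y = a*exp(-b*t): log-linear fit for nominal estimates, then
    Gauss-Newton corrections from the linearized normal equations."""
    # Initial nominal estimates from the linearized model ln(y) = ln(a) - b*t.
    n = len(ts)
    ls = [math.log(y) for y in ys]
    sx, sy = sum(ts), sum(ls)
    sxx = sum(t * t for t in ts)
    sxy = sum(t * l for t, l in zip(ts, ls))
    b = -(n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = math.exp((sy + b * sx) / n)
    for _ in range(iters):
        # Residuals and Jacobian of the nonlinear model at the current estimate.
        r = [y - a * math.exp(-b * t) for t, y in zip(ts, ys)]
        ja = [math.exp(-b * t) for t in ts]
        jb = [-a * t * math.exp(-b * t) for t in ts]
        # Solve the 2x2 normal equations (J^T J) delta = J^T r for the correction.
        aa, ab = sum(x * x for x in ja), sum(x * y for x, y in zip(ja, jb))
        bb = sum(x * x for x in jb)
        ra = sum(x * y for x, y in zip(ja, r))
        rb = sum(x * y for x, y in zip(jb, r))
        det = aa * bb - ab * ab
        a += (bb * ra - ab * rb) / det
        b += (aa * rb - ab * ra) / det
    return a, b

ts = [0.5 * k for k in range(10)]
ys = [3.0 * math.exp(-0.7 * t) for t in ts]
a, b = exp_fit(ts, ys)
```

In practice the iteration stops when the correction falls below a predetermined tolerance rather than after a fixed count.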
A decades-long fast-rise-exponential-decay flare in low-luminosity AGN NGC 7213
NASA Astrophysics Data System (ADS)
Yan, Zhen; Xie, Fu-Guo
2018-03-01
We analysed the four-decades-long X-ray light curve of the low-luminosity active galactic nucleus (LLAGN) NGC 7213 and discovered a fast-rise-exponential-decay (FRED) pattern, i.e. the X-ray luminosity increased by a factor of ≈4 within 200 d and then decreased exponentially with an e-folding time of ≈8116 d (≈22.2 yr). For a theoretical understanding of the observations, we examined three variability models proposed in the literature: the thermal-viscous disc instability model, the radiation pressure instability model, and the TDE model. We find that a delayed tidal disruption of a main-sequence star is most favourable; both the thermal-viscous disc instability model and the radiation pressure instability model fail to explain some key properties observed, and thus we argue that they are unlikely.
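A simplified piecewise sketch of the FRED pattern reported above: a linear rise over 200 d followed by an exponential decay with an e-folding time of 8116 d. The functional form and the normalized amplitude are illustrative, not the authors' fitting function:

```python
import math

def fred(t, t0, rise, peak, tau):
    """Fast-rise-exponential-decay light curve: linear rise over `rise` days to
    `peak`, then exponential decay with e-folding time `tau` (days)."""
    if t < t0:
        return 0.0
    if t < t0 + rise:
        return peak * (t - t0) / rise
    return peak * math.exp(-(t - t0 - rise) / tau)

# Factor-of-4 brightening in 200 d, then a ~22.2 yr e-folding decay.
lum = [fred(t, 0.0, 200.0, 4.0, 8116.0) for t in range(0, 20000, 100)]
```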
Bennett, Kevin M; Schmainda, Kathleen M; Bennett, Raoqiong Tong; Rowe, Daniel B; Lu, Hanbing; Hyde, James S
2003-10-01
Experience with diffusion-weighted imaging (DWI) shows that signal attenuation is consistent with a multicompartmental theory of water diffusion in the brain. The source of this so-called nonexponential behavior is a topic of debate, because the cerebral cortex contains considerable microscopic heterogeneity and is therefore difficult to model. To account for this heterogeneity and understand its implications for current models of diffusion, a stretched-exponential function was developed to describe diffusion-related signal decay as a continuous distribution of sources decaying at different rates, with no assumptions made about the number of participating sources. DWI experiments were performed using a spin-echo diffusion-weighted pulse sequence with b-values of 500-6500 s/mm² in six rats. Signal attenuation curves were fit to a stretched-exponential function, and 20% of the voxels were better fit to the stretched-exponential model than to a biexponential model, even though the latter model had one more adjustable parameter. Based on the calculated intravoxel heterogeneity measure, the cerebral cortex contains considerable heterogeneity in diffusion. The use of a distributed diffusion coefficient (DDC) is suggested to measure mean intravoxel diffusion rates in the presence of such heterogeneity. Copyright 2003 Wiley-Liss, Inc.
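The stretched-exponential signal model S(b) = S0·exp(−(b·DDC)^α) can be written down directly; α = 1 recovers monoexponential decay, while α < 1 signals intravoxel heterogeneity. The DDC and α values below are illustrative, not fitted values from the study:

```python
import math

def stretched_exp_signal(b, s0, ddc, alpha):
    """Stretched-exponential DWI model: S(b) = s0 * exp(-(b*ddc)**alpha)."""
    return s0 * math.exp(-((b * ddc) ** alpha))

bs = [500 + 500 * k for k in range(13)]  # b-values 500-6500 s/mm^2, as in the study
mono = [stretched_exp_signal(b, 1.0, 1e-3, 1.0) for b in bs]     # alpha = 1
hetero = [stretched_exp_signal(b, 1.0, 1e-3, 0.7) for b in bs]   # alpha < 1
```

Relative to the monoexponential curve, the stretched exponential decays faster at low b but retains a heavier tail at high b, which is why high-b acquisitions discriminate between the models.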
The generalized truncated exponential distribution as a model for earthquake magnitudes
NASA Astrophysics Data System (ADS)
Raschke, Mathias
2015-04-01
The random distribution of small, medium and large earthquake magnitudes follows an exponential distribution (ED) according to the Gutenberg-Richter relation. But the magnitude distribution is truncated in the range of very large magnitudes because the earthquake energy is finite, and the upper tail of the exponential distribution does not fit observations well. Hence the truncated exponential distribution (TED) is frequently applied for modelling magnitude distributions in seismic hazard and risk analysis. The TED has a weak point: when two TEDs with equal parameters, except for the upper bound magnitude, are mixed, the resulting distribution is not a TED. Conversely, it is also not possible to split the TED of a seismic region into TEDs of subregions with equal parameters, except for the upper bound magnitude. This weakness is a principal problem, as seismic regions are constructed scientific objects and not natural units. It also applies to alternative distribution models. The generalized truncated exponential distribution (GTED) presented here overcomes this weakness. The ED and the TED are special cases of the GTED. Different issues of statistical inference are also discussed, and an example with empirical data is presented.
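The TED and its mixing weak point can be sketched numerically. The decay parameter β and the magnitude bounds below are illustrative, and the comparison distribution shares β only to make the mismatch visible:

```python
import math

def ted_cdf(m, beta, m_min, m_max):
    """CDF of the truncated exponential distribution (TED) on [m_min, m_max],
    with beta the Gutenberg-Richter decay parameter."""
    if m <= m_min:
        return 0.0
    if m >= m_max:
        return 1.0
    return ((1.0 - math.exp(-beta * (m - m_min))) /
            (1.0 - math.exp(-beta * (m_max - m_min))))

beta, m_min = 1.0, 4.0   # illustrative values
# Mix two TEDs that differ only in the upper bound magnitude (7 vs 8) ...
mix = lambda m: 0.5 * ted_cdf(m, beta, m_min, 7.0) + 0.5 * ted_cdf(m, beta, m_min, 8.0)
# ... and compare against a single TED with the overall bounds [4, 8]:
gap = max(abs(mix(4.0 + 0.1 * k) - ted_cdf(4.0 + 0.1 * k, beta, m_min, 8.0))
          for k in range(41))
```

The nonzero gap shows the mixture is not the TED on the combined support, which is the weakness the GTED is designed to remove.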
CMB constraints on β-exponential inflationary models
NASA Astrophysics Data System (ADS)
Santos, M. A.; Benetti, M.; Alcaniz, J. S.; Brito, F. A.; Silva, R.
2018-03-01
We analyze a class of generalized inflationary models proposed in ref. [1], known as β-exponential inflation. We show that this kind of potential can arise in the context of brane cosmology, where the field describing the size of the extra dimension is interpreted as the inflaton. We discuss the observational viability of this class of models in light of the latest Cosmic Microwave Background (CMB) data from the Planck Collaboration through a Bayesian analysis, and impose tight constraints on the model parameters. We find that the CMB data alone weakly prefer the minimal standard model (ΛCDM) over β-exponential inflation. However, when current local measurements of the Hubble parameter, H0, are considered, the β-inflation model is moderately preferred over the ΛCDM cosmology, making the study of this class of inflationary models interesting in the context of the current H0 tension.
Firing patterns in the adaptive exponential integrate-and-fire model.
Naud, Richard; Marcille, Nicolas; Clopath, Claudia; Gerstner, Wulfram
2008-11-01
For simulations of large spiking neuron networks, an accurate, simple and versatile single-neuron modeling framework is required. Here we explore the versatility of a simple two-equation model: the adaptive exponential integrate-and-fire neuron. We show that this model generates multiple firing patterns depending on the choice of parameter values, and present a phase diagram describing the transition from one firing type to another. We give an analytical criterion to distinguish between continuous adaptation, initial bursting, regular bursting and two types of tonic spiking. Also, we report that the deterministic model is capable of producing irregular spiking when stimulated with constant current, indicating low-dimensional chaos. Lastly, the simple model is fitted to recordings of real cortical neurons under step-current stimulation. The results provide support for the suitability of simple models such as the adaptive exponential integrate-and-fire neuron for large network simulations.
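A minimal Euler integration of the two-equation adaptive exponential integrate-and-fire model; the parameter values below are commonly used textbook choices for a tonic-spiking regime, not the fitted values of the paper:

```python
import math

def adex_spikes(I_ext, T=0.5, dt=1e-4):
    """Count spikes of an adaptive exponential integrate-and-fire neuron driven by
    a constant current I_ext (A) over T seconds, via forward-Euler integration."""
    C, gL, EL, VT, DT = 281e-12, 30e-9, -70.6e-3, -50.4e-3, 2e-3
    tau_w, a, b = 144e-3, 4e-9, 80.5e-12
    V_reset, V_peak = EL, -40e-3      # numerical cutoff for the exponential upswing
    V, w, spikes = EL, 0.0, 0
    for _ in range(int(T / dt)):
        dV = (-gL * (V - EL) + gL * DT * math.exp((V - VT) / DT) - w + I_ext) / C
        dw = (a * (V - EL) - w) / tau_w
        V, w = V + dt * dV, w + dt * dw
        if V >= V_peak:               # spike: reset V, add spike-triggered adaptation b
            V, w, spikes = V_reset, w + b, spikes + 1
    return spikes
```

Sweeping a, b, tau_w and V_reset moves the model between the firing patterns the abstract classifies (tonic spiking, adaptation, bursting); here a suprathreshold step current simply produces repetitive firing.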
Development and growth of fruit bodies and crops of the button mushroom, Agaricus bisporus.
Straatsma, Gerben; Sonnenberg, Anton S M; van Griensven, Leo J L D
2013-10-01
We studied the appearance of fruit body primordia, the growth of individual fruit bodies and the development of the consecutive flushes of the crop. Relative growth, measured as cap expansion, was not constant. It started extremely rapidly and slowed down to an exponential rate with a diameter doubling time of 1.7 d until fruit bodies showed maturation by veil breaking. Initially, many outgrowing primordia were arrested, indicating nutritional competition. After reaching 10 mm diameter, no growth arrest occurred; all growing individuals, whether relatively large or small, showed an exponential increase of both cap diameter and biomass until veil breaking. Biomass doubled in 0.8 d. Exponential growth indicates the absence of competition. Apparently there exist differential nutritional requirements for early growth and for later, continuing growth. Flushing was studied applying different picking sizes. An ordinary flushing pattern occurred at an immature picking size of 8 mm diameter (picking mushrooms once a day with a diameter above 8 mm). The smallest picking size yielded the highest number of mushrooms picked, confirming the competition and arrested growth of outgrowing primordia: competition seems less if outgrowing primordia are removed early. The flush duration (i.e. between the first and last picking moments) was not affected by picking size. At small picking size, the subsequent flushes were not fully separated in time but overlapped. Within 2 d after picking the first individuals of the first flush, primordia for the second flush started outgrowth. Our work supports the view that the acquisition of nutrients by the mycelium is demand rather than supply driven. For the formation and early outgrowth of primordia, indications were found for an alternation of local and global control, at least in the casing layer. All these data combined, we postulate that flushing is the consequence of the depletion of some unknown specific nutrient required by outgrowing primordia.
Copyright © 2013 The British Mycological Society. Published by Elsevier Ltd. All rights reserved.
Inflationary Quasiparticle Creation and Thermalization Dynamics in Coupled Bose-Einstein Condensates
NASA Astrophysics Data System (ADS)
Posazhennikova, Anna; Trujillo-Martinez, Mauricio; Kroha, Johann
2016-06-01
A Bose gas in a double-well potential, exhibiting a true Bose-Einstein condensate (BEC) amplitude and initially performing Josephson oscillations, is a prototype of an isolated, nonequilibrium many-body system. We investigate the quasiparticle (QP) creation and thermalization dynamics of this system by solving the time-dependent Keldysh-Bogoliubov equations. We find avalanchelike QP creation due to a parametric resonance between BEC and QP oscillations, followed by slow, exponential relaxation to a thermal state at an elevated temperature, controlled by the initial excitation energy of the oscillating BEC above its ground state. The crossover between the two regimes occurs because of an effective decoupling of the QP and BEC oscillations. This dynamics is analogous to elementary particle creation in models of the early universe. The thermalization in our setup occurs because the BEC acts as a grand canonical reservoir for the quasiparticle system.
Exchange bias training relaxation in spin glass/ferromagnet bilayers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chi, Xiaodan; Du, An; Rui, Wenbin
2016-04-25
A canonical spin glass (SG) FeAu layer is fabricated to couple to a soft ferromagnet (FM) FeNi layer. Below the SG freezing temperature, exchange bias (EB) and training are observed. Training in SG/FM bilayers is insensitive to the cooling field and may suppress the EB or change the sign of the EB field from negative to positive at specific temperatures, deviating from the simple power law or the single exponential function derived from antiferromagnet-based systems. In view of the SG nature, we employ a double decay model to distinguish the contributions to training from the SG bulk and the SG/FM interface. Dynamical properties during training under different cooling fields and at different temperatures are discussed, and the nonzero shifting coefficient in the time index, a signature of slowing-down decay for SG-based systems, is interpreted by means of a modified Monte Carlo Metropolis algorithm.
Isotope effect on blob-statistics in gyrofluid simulations of scrape-off layer turbulence
NASA Astrophysics Data System (ADS)
Meyer, O. H. H.; Kendl, A.
2017-12-01
In this contribution we apply a recently established stochastic model for scrape-off layer fluctuations to long time series obtained from gyrofluid simulations of fusion edge plasma turbulence. Characteristic parameters are estimated for different fusion-relevant isotopic compositions (protium, deuterium, tritium and singly charged helium) by means of conditional averaging. It is shown that large-amplitude fluctuations associated with radially propagating filaments in the scrape-off layer feature double-exponential waveforms. We find increased pulse durations and longer waiting times between peaks for heavier ions, while the amplitudes are similar. The associated radial blob velocity is shown to be reduced for heavier ions. A parabolic relation between the skewness and kurtosis of the density fluctuations seems to be present. Improved particle confinement for heavier plasmas, in terms of a reduced mean density close to the outermost radial boundary, and the corresponding blob characteristics are presented.
Spin and Flux Evolution of the New Magnetar Swift J1822.3-1606
NASA Astrophysics Data System (ADS)
Ng, Stephen C.-Y.; Kaspi, Victoria; Cumming, Andrew; Livingstone, Margaret; Scholz, Paul; Archibald, Robert
2012-07-01
On 2011 July 14, a transient X-ray source, Swift J1822.3-1606, was first detected by Swift BAT via its burst activities. It was subsequently identified as a new magnetar upon the detection of a pulse period of 8.4 s. Using follow-up RXTE, Swift, and Chandra observations, we determined a spin-down rate of ~1.6e-13, giving a dipole magnetic field of ~3.8e13 G, the second lowest among known magnetars. The post-outburst flux evolution can be modeled by a double exponential decay with timescales of 11 and 58 days. We found an absorption column density similar to that of the open cluster M17 at 16' away, arguing for a comparable distance of ~1.6 kpc for Swift J1822.3-1606. If confirmed, this will be the nearest magnetar observed. We also discuss the possibility that the magnetar progenitor was born in M17.
Non-invasive imaging of skin cancer with fluorescence lifetime imaging using two photon tomography
NASA Astrophysics Data System (ADS)
Patalay, Rakesh; Talbot, Clifford; Alexandrov, Yuriy; Munro, Ian; Breunig, Hans Georg; König, Karsten; Warren, Sean; Neil, Mark A. A.; French, Paul M. W.; Chu, Anthony; Stamp, Gordon W.; Dunsby, Christopher
2011-07-01
Multispectral fluorescence lifetime imaging (FLIM) using two photon microscopy as a non-invasive technique for the diagnosis of skin lesions is described. Skin contains fluorophores including elastin, keratin, collagen, FAD and NADH. This endogenous contrast allows tissue to be imaged without the addition of exogenous agents and allows the in vivo state of cells and tissues to be studied. A modified DermaInspect® multiphoton tomography system was used to excite autofluorescence at 760 nm in vivo and on freshly excised ex vivo tissue. This instrument simultaneously acquires fluorescence lifetime images in four spectral channels between 360 and 655 nm using time-correlated single photon counting and can also provide hyperspectral images. The multispectral fluorescence lifetime images were spatially segmented and binned to determine lifetimes for each cell by fitting to a double exponential lifetime model. A comparative analysis between the cellular lifetimes from different diagnoses demonstrates significant diagnostic potential.
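A double-exponential lifetime fit of the kind mentioned above can be sketched with a standard least-squares routine. The lifetimes, amplitudes, and noise level below are synthetic stand-ins, not values from the instrument or the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    """Double-exponential fluorescence decay model I(t) = a1 e^{-t/tau1} + a2 e^{-t/tau2}."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

rng = np.random.default_rng(0)
t = np.linspace(0.05, 10.0, 200)        # ns
true = (0.6, 0.4, 0.4, 2.5)             # a1, tau1 (ns), a2, tau2 (ns) -- synthetic
noisy = biexp(t, *true) + rng.normal(0.0, 0.002, t.size)

popt, _ = curve_fit(biexp, t, noisy, p0=(0.5, 0.5, 0.5, 2.0), maxfev=5000)
a1, tau1, a2, tau2 = popt
# Amplitude-weighted mean lifetime, a common FLIM summary statistic.
tau_mean = (a1 * tau1 + a2 * tau2) / (a1 + a2)
```

In practice the per-cell photon histograms from TCSPC would replace the synthetic `noisy` trace, and deconvolution with the instrument response would precede the fit.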
Force microscopy of layering and friction in an ionic liquid
NASA Astrophysics Data System (ADS)
Hoth, Judith; Hausen, Florian; Müser, Martin H.; Bennewitz, Roland
2014-07-01
The mechanical properties of the ionic liquid 1-butyl-1-methylpyrrolidinium tris(pentafluoroethyl) trifluorophosphate ([Py1,4][FAP]) in confinement between a SiOx and a Au(1 1 1) surface are investigated by means of atomic force microscopy (AFM) under electrochemical control. Up to 12 layers of ion pairs can be detected through force measurements as the tip of the AFM approaches the surface. The particular shape of the force versus distance curve is explained by a model for the interaction between tip, gold surface and ionic liquid, which assumes an exponentially decaying oscillatory force originating from bulk liquid density correlations. Jumps in the tip-sample distance upon approach correspond to jumps of the compliant force sensor between branches of the oscillatory force curve. Frictional force between the laterally moving tip and the surface is detected only after partial penetration of the last double layer between tip and surface.
Posazhennikova, Anna; Trujillo-Martinez, Mauricio; Kroha, Johann
2016-06-03
A Bose gas in a double-well potential, exhibiting a true Bose-Einstein condensate (BEC) amplitude and initially performing Josephson oscillations, is a prototype of an isolated, nonequilibrium many-body system. We investigate the quasiparticle (QP) creation and thermalization dynamics of this system by solving the time-dependent Keldysh-Bogoliubov equations. We find avalanchelike QP creation due to a parametric resonance between BEC and QP oscillations, followed by slow, exponential relaxation to a thermal state at an elevated temperature, controlled by the initial excitation energy of the oscillating BEC above its ground state. The crossover between the two regimes occurs because of an effective decoupling of the QP and BEC oscillations. This dynamics is analogous to elementary particle creation in models of the early universe. The thermalization in our setup occurs because the BEC acts as a grand canonical reservoir for the quasiparticle system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brabec, Jiri; van Dam, Hubertus JJ; Pittner, Jiri
2012-03-28
The recently proposed Universal State-Selective (USS) corrections [K. Kowalski, J. Chem. Phys. 134, 194107 (2011)] to approximate Multi-Reference Coupled Cluster (MRCC) energies can be commonly applied to any type of MRCC theory based on the Jeziorski-Monkhorst [B. Jeziorski, H.J. Monkhorst, Phys. Rev. A 24, 1668 (1981)] exponential Ansatz. In this letter we report on the performance of a simple USS correction to the Brillouin-Wigner MRCC (BW-MRCC) formalism employing single and double excitations (BW-MRCCSD). It is shown that the resulting formalism (USS-BW-MRCCSD), which uses the manifold of single and double excitations to construct the correction, can be related to a posteriori corrections utilized in routine BW-MRCCSD calculations. In several benchmark calculations we compare the results of the USS-BW-MRCCSD method with results of the BW-MRCCSD approach employing a posteriori corrections and with results obtained with the Full Configuration Interaction (FCI) method.
Exact simulation of integrate-and-fire models with exponential currents.
Brette, Romain
2007-10-01
Neural networks can be simulated exactly using event-driven strategies, in which the algorithm advances directly from one spike to the next spike. This approach applies to neuron models for which we have (1) an explicit expression for the evolution of the state variables between spikes and (2) an explicit test on the state variables that predicts whether and when a spike will be emitted. In a previous work, we proposed a method that allows exact simulation of an integrate-and-fire model with exponential conductances, with the constraint of a single synaptic time constant. In this note, we propose a method, based on polynomial root finding, that applies to integrate-and-fire models with exponential currents, with possibly many different synaptic time constants. Models can include biexponential synaptic currents and spike-triggered adaptation currents.
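To illustrate the root-finding idea on the simplest non-trivial case, suppose the membrane obeys τ_m dV/dt = −V + I0·exp(−t/τ_s) with τ_m = 2τ_s. The substitution x = exp(−t/τ_m) then turns the threshold condition V(t) = θ into a quadratic in x, solvable with a polynomial root finder. This is a toy sketch with illustrative parameter values; the paper's general method handles many synaptic time constants, which lead to higher-degree polynomials.

```python
import numpy as np

TAU_M, TAU_S = 10.0, 5.0   # ms (illustrative; tau_m = 2 * tau_s)
I0, THETA, V0 = 3.0, 0.5, 0.0

# Closed-form solution between spikes:
#   V(t) = (V0 - A) * x + A * x**2,  x = exp(-t / TAU_M),
# with particular-solution amplitude A = I0 * tau_s / (tau_s - tau_m).
A = I0 * TAU_S / (TAU_S - TAU_M)

def V(t):
    x = np.exp(-t / TAU_M)
    return (V0 - A) * x + A * x**2

# Threshold condition A*x^2 + (V0 - A)*x - THETA = 0, solved exactly.
roots = np.roots([A, V0 - A, -THETA])
valid = [r.real for r in roots if abs(r.imag) < 1e-12 and 0.0 < r.real <= 1.0]
x_spike = max(valid)                 # largest x in (0, 1] = earliest crossing time
t_spike = -TAU_M * np.log(x_spike)   # exact spike time, no time-stepping error
```

An event-driven simulator would emit the spike at `t_spike`, reset the state, and repeat, never discretising time.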
Yuan, Jing; Yeung, David Ka Wai; Mok, Greta S P; Bhatia, Kunwar S; Wang, Yi-Xiang J; Ahuja, Anil T; King, Ann D
2014-01-01
To technically investigate the non-Gaussian diffusion of head and neck diffusion weighted imaging (DWI) at 3 Tesla and compare advanced non-Gaussian diffusion models, including diffusion kurtosis imaging (DKI), the stretched-exponential model (SEM), intravoxel incoherent motion (IVIM) and the statistical model, in patients with nasopharyngeal carcinoma (NPC). After ethics approval was granted, 16 patients with NPC were examined using DWI performed at 3T employing an extended b-value range from 0 to 1500 s/mm². DWI signals were fitted to the mono-exponential and non-Gaussian diffusion models on primary tumor, metastatic node, spinal cord and muscle. Non-Gaussian parameter maps were generated and compared to apparent diffusion coefficient (ADC) maps in NPC. Diffusion in NPC exhibited non-Gaussian behavior at the extended b-value range. Non-Gaussian models achieved significantly better fitting of the DWI signal than the mono-exponential model. Non-Gaussian diffusion coefficients were substantially different from the mono-exponential ADC both in magnitude and histogram distribution. Non-Gaussian diffusivity in head and neck tissues and NPC lesions could be assessed by using non-Gaussian diffusion models. Non-Gaussian DWI analysis may reveal additional tissue properties beyond ADC and holds potential to be used as a complementary tool for NPC characterization.
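Two of the signal models compared above are standard: the mono-exponential S(b) = S0·exp(−bD) and the kurtosis model S(b) = S0·exp(−bD + (bD)²K/6). The sketch below fits both to a synthetic kurtosis-generated signal over the same 0-1500 s/mm² b-value range; the tissue parameter values are illustrative, not patient-derived.

```python
import numpy as np
from scipy.optimize import curve_fit

b = np.linspace(0.0, 1500.0, 16)     # s/mm^2, extended b-value range
D_TRUE, K_TRUE = 1.0e-3, 0.8         # illustrative diffusivity and kurtosis

def mono_exp(b, S0, D):
    """Gaussian (mono-exponential) DWI signal model."""
    return S0 * np.exp(-b * D)

def kurtosis(b, S0, D, K):
    """Diffusion kurtosis signal model; K = 0 recovers the mono-exponential case."""
    return S0 * np.exp(-b * D + (b * D) ** 2 * K / 6.0)

signal = kurtosis(b, 1.0, D_TRUE, K_TRUE)   # noiseless synthetic tissue signal

p_mono, _ = curve_fit(mono_exp, b, signal, p0=(1.0, 1e-3))
p_kurt, _ = curve_fit(kurtosis, b, signal, p0=(1.0, 1e-3, 0.5))

rss_mono = np.sum((signal - mono_exp(b, *p_mono)) ** 2)
rss_kurt = np.sum((signal - kurtosis(b, *p_kurt)) ** 2)
```

On data with genuine kurtosis, the mono-exponential fit leaves a systematic residual at high b-values, mirroring the abstract's finding that non-Gaussian models fit significantly better.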
Non-equilibrium STLS approach to transport properties of single impurity Anderson model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rezai, Raheleh, E-mail: R_Rezai@sbu.ac.ir; Ebrahimi, Farshad, E-mail: Ebrahimi@sbu.ac.ir
In this work, using the non-equilibrium Keldysh formalism, we study the effects of the electron–electron interaction and the electron-spin correlation on the non-equilibrium Kondo effect and the transport properties of the symmetric single impurity Anderson model (SIAM) at zero temperature by generalizing the self-consistent method of Singwi, Tosi, Land, and Sjolander (STLS) for a single-band tight-binding model with Hubbard type interaction to out-of-equilibrium steady states. We first determine in a self-consistent manner the non-equilibrium spin correlation function, the effective Hubbard interaction, and the double-occupancy at the impurity site. Then, using the non-equilibrium STLS spin polarization function in the non-equilibrium formalism of the iterative perturbation theory (IPT) of Yosida and Yamada, and Horvatic and Zlatic, we compute the spectral density, the current–voltage characteristics and the differential conductance as functions of the applied bias and the strength of the on-site Hubbard interaction. We compare our spectral densities at zero bias with the results of the numerical renormalization group (NRG) and depict the effects of the electron–electron interaction and electron-spin correlation at the impurity site on the aforementioned properties by comparing our numerical results with the order-U^2 IPT. Finally, we show that the obtained numerical results on the differential conductance have a quadratic universal scaling behavior and the resulting Kondo temperature shows an exponential behavior. -- Highlights: •We introduce for the first time the non-equilibrium method of STLS for Hubbard type models. •We determine the transport properties of SIAM using the non-equilibrium STLS method. •We compare our results with order-U^2 IPT and NRG. •We show that non-equilibrium STLS, contrary to the GW and self-consistent RPA, produces the two Hubbard peaks in DOS.
•We show that the method keeps the universal scaling behavior and correct exponential behavior of the Kondo temperature.
Is a matrix exponential specification suitable for the modeling of spatial correlation structures?
Strauß, Magdalena E.; Mezzetti, Maura; Leorato, Samantha
2018-01-01
This paper investigates the adequacy of matrix exponential spatial specifications (MESS) as an alternative to the widely used spatial autoregressive models (SAR). To provide as complete a picture as possible, we extend the analysis to all the main spatial models governed by matrix exponentials, comparing them with their spatial autoregressive counterparts. We propose a new implementation of Bayesian parameter estimation for the MESS model with vague prior distributions, which is shown to be precise and computationally efficient. Our implementations also account for spatially lagged regressors. We further allow for location-specific heterogeneity, which we model by including spatial splines. We conclude by comparing the performances of the different model specifications in applications to a real data set and by running simulations. Both the applications and the simulations suggest that the spatial splines are a flexible and efficient way to account for spatial heterogeneities governed by unknown mechanisms. PMID:29492375
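The contrast between the two specifications can be sketched on a toy weight matrix: MESS applies the matrix exponential e^{αW} to the outcome, while SAR applies (I − ρW)^{−1}. A practical attraction of MESS, visible below, is that the Jacobian determinant is det e^{αW} = e^{α·tr(W)}, which equals 1 for a zero-diagonal W, so no log-determinant term needs to be evaluated during estimation. The weight matrix and parameter values here are illustrative.

```python
import numpy as np
from scipy.linalg import expm

# Row-standardised weight matrix for 4 locations on a ring (toy example;
# real applications use contiguity or distance-based W).
W = np.array([[0.0, 0.5, 0.0, 0.5],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.5, 0.0, 0.5, 0.0]])
alpha, rho = -0.4, 0.4
x = np.array([1.0, 2.0, 3.0, 4.0])

S_mess = expm(alpha * W)                      # MESS spatial operator e^{alpha W}
S_sar = np.linalg.inv(np.eye(4) - rho * W)    # SAR reduced-form operator

y_mess = S_mess @ x
y_sar = S_sar @ x

det_mess = np.linalg.det(S_mess)              # = exp(alpha * tr(W)) = 1 here
```

Both operators smooth the outcome over neighbours; the likelihood computations differ because the SAR determinant det(I − ρW) must be recomputed as ρ varies.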
Measurements and Simulations of Surface Dielectric Barrier Discharges Used as Plasma Actuators
NASA Technical Reports Server (NTRS)
Hoskinson, Alan R.
2012-01-01
This report is a Ph.D. dissertation performed under NRA cooperative agreement and submitted as part of the final report. Asymmetric surface dielectric barrier discharges (DBDs) have shown promise for use as aerodynamic actuators for active flow control. In this project we studied DBD actuators experimentally and numerically. Our DBDs used a symmetric triangular high voltage waveform to generate plasma in atmospheric pressure air. Time-averaged measurements indicated that the induced force of a single-barrier actuator design (one electrode insulated from the plasma) can be increased exponentially above the results of previous studies by decreasing both the length and thickness of the electrode exposed to the plasma. This increased force may allow these devices to control flow separation in a wider range of flow environments. Experiments using an intensified digital camera to examine the plasma on time scales of a few nanoseconds showed that, in addition to the previously observed filamentary and jet-like plasma structures, discharges with very thin exposed electrodes exhibited a weak but constant plasma immediately adjacent to those electrodes. In double-barrier actuators (both electrodes insulated), decreasing the diameter of the narrower electrode led to increasing forces, and recorded images showed the simultaneous existence of both filamentary and jet-like plasma structures. The development and application of a time-dependent, two-dimensional computational fluid plasma model has aided in understanding the detailed physics of surface DBDs at all time scales. For simulated single-barrier discharges, the model qualitatively reproduced the filamentary and jet-like micro-discharge structures. The model was somewhat successful in reproducing the observed characteristics of double-barrier actuators. For both actuator geometries, the model indicated that the majority of the forces induced on the neutral gas occur in between micro-discharges as the plasmas decay.
Bayesian exponential random graph modelling of interhospital patient referral networks.
Caimo, Alberto; Pallotti, Francesca; Lomi, Alessandro
2017-08-15
Using original data that we have collected on referral relations between 110 hospitals serving a large regional community, we show how recently derived Bayesian exponential random graph models may be adopted to illuminate core empirical issues in research on relational coordination among healthcare organisations. We show how a rigorous Bayesian computation approach supports a fully probabilistic analytical framework that alleviates well-known problems in the estimation of model parameters of exponential random graph models. We also show how the main structural features of interhospital patient referral networks that prior studies have described can be reproduced with accuracy by specifying the system of local dependencies that produce - but at the same time are induced by - decentralised collaborative arrangements between hospitals. Copyright © 2017 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Feng-Hua, Zhang; Gui-De, Zhou; Kun, Ma; Wen-Juan, Ma; Wen-Yuan, Cui; Bo, Zhang
2016-07-01
Previous studies have shown that, for the three main stages of the development and evolution of asymptotic giant branch (AGB) star s-process models, the neutron exposure distribution (DNE) in the nucleosynthesis region can always be considered as an exponential function, i.e., ρAGB(τ) = C/τ0 exp(-τ/τ0), within an effective range of the neutron exposure values. However, the specific expressions of the proportionality factor C and the mean neutron exposure τ0 in the exponential distribution function for different models have not been completely determined in the related literature. By dissecting the basic method used to obtain the exponential DNE, and systematically analyzing the solution procedures for the neutron exposure distribution functions in different stellar models, the general formulae, as well as their auxiliary equations, for calculating C and τ0 are derived. Given the discrete neutron exposure distribution Pk, the relationships of C and τ0 with the model parameters can be determined. The result of this study effectively solves the problem of analytically calculating the DNE in the current low-mass AGB star s-process nucleosynthesis model of 13C-pocket radiative burning.
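The exponential DNE above can be checked numerically: with ρ_AGB(τ) = (C/τ0)·exp(−τ/τ0), the distribution integrates to C and the mean exposure is τ0. The values of C and τ0 below are illustrative placeholders, not model-derived.

```python
import numpy as np

C, TAU0 = 0.05, 0.3        # illustrative proportionality factor and mean exposure

def rho(tau):
    """Exponential distribution of neutron exposures, rho(tau) = (C/tau0) exp(-tau/tau0)."""
    return (C / TAU0) * np.exp(-tau / TAU0)

# Manual trapezoid rule over a range wide enough (30 * tau0) to capture the tail.
tau = np.linspace(0.0, 30.0 * TAU0, 20001)
r = rho(tau)
dtau = np.diff(tau)
total = ((r[:-1] + r[1:]) / 2.0 * dtau).sum()                 # should equal C
m = tau * r
mean_tau = ((m[:-1] + m[1:]) / 2.0 * dtau).sum() / total      # should equal tau0
```

Any discrete exposure distribution Pk that the models supply can be compared against this analytic form in the same way.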
NASA Astrophysics Data System (ADS)
Zhao, Y. L.; Piao, C. Q.; Hei, T. K.
Previous studies from this laboratory have identified a number of causally linked genes, including the novel tumor suppressor Betaig-h3, that were differentially expressed in radiation-induced tumorigenic BEP2D cells. To extend these studies using a genomically more stable bronchial cell line, we show here that ectopic expression of the catalytic subunit of telomerase (hTERT) in primary human small airway epithelial (SAE) cells resulted in the generation of several clonal cell lines that have been continuously in culture for more than 250 population doublings and are considered immortal. Comparably treated control SAE cells infected with only the viral vector senesced after fewer than 10 population doublings. The immortalized clones demonstrated anchorage-dependent growth and are non-tumorigenic in nude mice. These cells show no alteration in the p53 gene but a decrease in p16 expression. Exponentially growing SAEh cells were exposed to graded doses of 1 GeV/nucleon 56Fe ions accelerated at the Brookhaven National Laboratory. Irradiated cells underwent gradual phenotypic alterations after extensive in vitro cultivation. Transformed cells developed through a series of successive steps before becoming anchorage-independent in semisolid medium. These findings indicate that hTERT-immortalized cells, being diploid and chromosomally stable, should be a useful model in assessing mechanisms of radiation carcinogenesis.
Regulation of Hemopoietic Stem Cell Turnover and Population Size in Neonatal Mice
1975-04-01
Following birth the hematopoietic stem cell population of the liver as measured by the in vivo spleen nodule assay (CFU) declines with a halving time...of about 48 hours. The stem cell population of the spleen grows exponentially with a doubling time of about 17 hours. In vitro incubation with high...single spleen colonies derived from neonatal liver and spleen CFU that both stem cell populations have a high self-renewal capacity. Thus, the decline in
Changing Mindsets to Transform Security: Leader Development for an Unpredictable and Complex World
2013-01-01
fields of physical science, the amount of information is doubling every one to two years, meaning that more than half of what a college student has...beyond a review of current events or it being at an "informational" level. Naval War College Professor Mackubin Owens stated in 2006, that, The new... information technology in education and training underpinned by a stable and experienced academic community that can support the exponential growth
Analytical solution for boundary heat fluxes from a radiating rectangular medium
NASA Technical Reports Server (NTRS)
Siegel, R.
1991-01-01
Reference is made to the work of Shah (1979) which demonstrated the possibility of partially integrating the radiative equations analytically to obtain an 'exact' solution. Shah's solution was given as a double integration of the modified Bessel function of order zero. Here, it is shown that the 'exact' solution for a rectangular region radiating to cold black walls can be conveniently derived, and expressed in simple form, by using an integral function, Sn, analogous to the exponential integral function appearing in plane-layer solutions.
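For the plane-layer case mentioned above, the exponential integral functions E_n satisfy the recurrence n·E_{n+1}(x) = e^{−x} − x·E_n(x); the paper's S_n function plays the analogous role for the rectangular geometry. A quick numerical check of the E_n recurrence (of E_n itself, not of S_n):

```python
import numpy as np
from scipy.special import expn

# E_n(x) = integral_1^inf exp(-x t) / t^n dt, the exponential integral family
# that appears in plane-layer radiative transfer solutions.
x = np.linspace(0.01, 5.0, 500)
E1, E2, E3 = expn(1, x), expn(2, x), expn(3, x)

# Recurrence n * E_{n+1}(x) = exp(-x) - x * E_n(x), here with n = 2.
lhs = 2.0 * E3
rhs = np.exp(-x) - x * E2
```

The same recurrence structure is what makes closed-form boundary-flux expressions in terms of such integral functions convenient to evaluate.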
Sanchez-Niño, Maria Dolores; Sanz, Ana B; Ramos, Adrian M; Fernandez-Fernandez, Beatriz; Ortiz, Alberto
2017-04-01
Exponential technologies double in power or processing speed every year, whereas their cost halves. Deception and disruption are two key stages in the development of exponential technologies. Deception occurs when, after initial introduction, technologies are dismissed as irrelevant, while they continue to progress, perhaps not as fast or with as many immediate practical applications as initially thought. Twenty years after the first publications, clinical proteomics is still not available in most hospitals and some clinicians have felt deceived by unfulfilled promises. However, there are indications that clinical proteomics may be entering the disruptive phase, where, once refined, technologies disrupt established industries or procedures. In this regard, recent manuscripts in CKJ illustrate how proteomics is entering the clinical realm, with applications ranging from the identification of amyloid proteins in the pathology lab to a new generation of urinary biomarkers for chronic kidney disease (CKD) assessment and outcome prediction. Indeed, one such panel of urinary peptidomics biomarkers, CKD273, recently received a Food and Drug Administration letter of support, the first ever in the CKD field. In addition, a must-read resource providing information on kidney disease-related proteomics and systems biology databases and how to access and use them in clinical decision-making was also recently published in CKJ.
NASA Astrophysics Data System (ADS)
Wilde, M. V.; Sergeeva, N. V.
2018-05-01
An explicit asymptotic model extracting the contribution of a surface wave to the dynamic response of a viscoelastic half-space is derived. Fractional exponential Rabotnov integral operators are used to describe the material properties. The model is derived by extracting the principal part of the poles corresponding to the surface waves after applying Laplace and Fourier transforms. The simplified equations for the originals are written by using power series expansions. A Padé approximation is constructed to unite the short-time and long-time models. The form of this approximation makes it possible to formulate the explicit model using a fractional exponential Rabotnov integral operator with parameters depending on the properties of the surface wave. The applicability of the derived models is studied by comparison with the exact solutions of a model problem. It is revealed that the model based on the Padé approximation is highly effective for all the possible time domains.
Shift-Invariant Image Reconstruction of Speckle-Degraded Images Using Bispectrum Estimation
1990-05-01
process with the requisite negative exponential pdf. I call this model the Negative Exponential Model (NENI). The NENI flowchart is seen in Figure 6. [Figure-caption residue: statistical histograms and phase; truth object speckled via the NENI; histogram of speckle.]
Hu, Jin; Wang, Jun
2015-06-01
In recent years, complex-valued recurrent neural networks have been developed and analysed in depth, given their good modelling performance for applications involving complex-valued elements. In implementing continuous-time dynamical systems for simulation or computational purposes, it is often necessary to utilize a discrete-time model which is an analogue of the continuous-time system. In this paper, we analyse a discrete-time complex-valued recurrent neural network model and obtain sufficient conditions for its global exponential periodicity and exponential stability. Simulation results for several numerical examples are presented to illustrate the theoretical results, and an application to associative memory is also given. Copyright © 2015 Elsevier Ltd. All rights reserved.
Cao, Boqiang; Zhang, Qimin; Ye, Ming
2016-11-29
We present a mean-square exponential stability analysis for impulsive stochastic genetic regulatory networks (GRNs) with time-varying delays and reaction-diffusion driven by fractional Brownian motion (fBm). By constructing a Lyapunov functional and using linear matrix inequality for stochastic analysis we derive sufficient conditions to guarantee the exponential stability of the stochastic model of impulsive GRNs in the mean-square sense. Meanwhile, the corresponding results are obtained for the GRNs with constant time delays and standard Brownian motion. Finally, an example is presented to illustrate our results of the mean-square exponential stability analysis.
2017-01-01
Cell size distribution is highly reproducible, whereas the size of individual cells often varies greatly within a tissue. This is obvious in a population of Arabidopsis thaliana leaf epidermal cells, which ranged from 1,000 to 10,000 μm2 in size. Endoreduplication is a specialized cell cycle in which nuclear genome size (ploidy) is doubled in the absence of cell division. Although epidermal cells require endoreduplication to enhance cellular expansion, the issue of whether this mechanism is sufficient for explaining cell size distribution remains unclear due to a lack of quantitative understanding linking the occurrence of endoreduplication with cell size diversity. Here, we addressed this question by quantitatively summarizing ploidy profile and cell size distribution using a simple theoretical framework. We first found that endoreduplication dynamics is a Poisson process through cellular maturation. This finding allowed us to construct a mathematical model to predict the time evolution of a ploidy profile with a single rate constant for endoreduplication occurrence in a given time. We reproduced experimentally measured ploidy profile in both wild-type leaf tissue and endoreduplication-related mutants with this analytical solution, further demonstrating the probabilistic property of endoreduplication. We next extended the mathematical model by incorporating the element that cell size is determined according to ploidy level to examine cell size distribution. This analysis revealed that cell size is exponentially enlarged 1.5 times every endoreduplication round. Because this theoretical simulation successfully recapitulated experimentally observed cell size distributions, we concluded that Poissonian endoreduplication dynamics and exponential size-boosting are the sources of the broad cell size distribution in epidermal tissue. 
More generally, this study contributes to a quantitative understanding whereby stochastic dynamics generate steady-state biological heterogeneity. PMID:28926847
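The two ingredients identified above, a Poisson number of endoreduplication rounds and a 1.5-fold size boost per round, are easy to simulate; the basal cell size and Poisson mean below are illustrative placeholders, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(42)

S0 = 1000.0    # basal cell size in square micrometres (illustrative)
BOOST = 1.5    # size multiplier per endoreduplication round (from the study)
LAM = 3.0      # mean number of rounds per cell (illustrative Poisson mean)

# Poissonian endoreduplication dynamics ...
rounds = rng.poisson(LAM, size=100_000)
# ... combined with exponential size boosting yields a broad, right-skewed
# size distribution from identical starting cells.
sizes = S0 * BOOST ** rounds

spread = sizes.max() / sizes.min()
mean_size = sizes.mean()     # analytically S0 * exp(LAM * (BOOST - 1))
```

The order-of-magnitude spread in `sizes` from a single deterministic rule plus one stochastic count mirrors the 1,000-10,000 μm² range reported for the epidermal cells.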
A multilevel approach to examining cephalopod growth using Octopus pallidus as a model.
Semmens, Jayson; Doubleday, Zoë; Hoyle, Kate; Pecl, Gretta
2011-08-15
Many aspects of octopus growth dynamics are poorly understood, particularly in relation to sub-adult or adult growth, muscle fibre dynamics and repro-somatic investment. The growth of 5-month-old Octopus pallidus cultured in the laboratory was investigated under three temperature regimes over a 12-week period: seasonally increasing temperatures (14-18°C); seasonally decreasing temperatures (18-14°C); and a constant temperature mid-way between seasonal peaks (16°C). Differences in somatic growth at the whole-animal level, muscle tissue structure and rate of gonad development were investigated. Continuous exponential growth was observed, both at a group and at an individual level, and there was no detectable effect of temperature on whole-animal growth rate. Juvenile growth rate (from 1 to 156 days) was also monitored prior to the controlled experiment; exponential growth was observed, but at a significantly faster rate than in the older experimental animals, suggesting that O. pallidus exhibits a double-exponential two-phase growth pattern. There was considerable variability in size-at-age even between individuals growing under identical thermal regimes. Animals exposed to seasonally decreasing temperatures exhibited a higher rate of gonad development compared with animals exposed to increasing temperatures; however, this did not coincide with a detectable decline in somatic growth rate or mantle condition. The ongoing production of new mitochondria-poor and mitochondria-rich muscle fibres (hyperplasia) was observed, indicated by a decreased or stable mean muscle fibre diameter concurrent with an increase in whole-body size. Animals from both seasonal temperature regimes demonstrated higher rates of new mitochondria-rich fibre generation relative to those from the constant temperature regime, but this difference was not reflected in a difference in growth rate at the whole-body level.
This is the first study to record ongoing hyperplasia in the muscle tissue of an octopus species, and provides further insight into the complex growth dynamics of octopus.
A Decreasing Failure Rate, Mixed Exponential Model Applied to Reliability.
1981-06-01
Trident missile systems have been observed. The mixed exponential distribution has been shown to fit the life data for the electronic equipment on...these systems. This paper discusses some of the estimation problems which occur with the decreasing failure rate mixed exponential distribution when...assumption of constant or increasing failure rate seemed to be incorrect. 2. However, the design of this electronic equipment indicated that
Confronting quasi-exponential inflation with WMAP seven
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pal, Barun Kumar; Pal, Supratik; Basu, B., E-mail: barunp1985@rediffmail.com, E-mail: pal@th.physik.uni-bonn.de, E-mail: banasri@isical.ac.in
2012-04-01
We confront quasi-exponential models of inflation with the WMAP seven-year dataset using the Hamilton-Jacobi formalism. With a phenomenological Hubble parameter representing quasi-exponential inflation, we develop the formalism and confront the analysis with WMAP seven using the publicly available code CAMB. The observable parameters are found to fare extremely well with WMAP seven. We also obtain a ratio of tensor to scalar amplitudes which may be detectable by PLANCK.
NASA Astrophysics Data System (ADS)
Hayat, Tanzila; Nadeem, S.
2018-03-01
This paper examines the three dimensional Eyring-Powell fluid flow over an exponentially stretching surface with heterogeneous-homogeneous chemical reactions. A new model of heat flux suggested by Cattaneo and Christov is employed to study the properties of the relaxation time. From the present analysis we observe that there is an inverse relationship between temperature and thermal relaxation time. The temperature in the Cattaneo-Christov heat flux model is lower than in the classical Fourier model. In this paper the three dimensional Cattaneo-Christov heat flux model over an exponentially stretching surface is calculated for the first time in the literature. For negative values of the temperature exponent, the temperature profile first intensifies to its maximum value and then gradually declines to zero, which shows the occurrence of the "Sparrow-Gregg hill" (SGH) phenomenon. Also, for higher values of the strength of reaction parameters, the concentration profile decreases.
Calcium Isotope Analysis with "Peak Cut" Method on Column Chemistry
NASA Astrophysics Data System (ADS)
Zhu, H.; Zhang, Z.; Liu, F.; Li, X.
2017-12-01
To eliminate isobaric interferences from elemental and molecular isobars (e.g., 40K+, 48Ti+, 88Sr2+, 24Mg16O+, 27Al16O+) on Ca isotopes during mass determination, samples should be purified through ion-exchange column chemistry before analysis. However, large Ca isotopic fractionation has been observed during column chemistry (Russell and Papanastassiou, 1978; Zhu et al., 2016). Therefore, full recovery during column chemistry is greatly needed; otherwise uncertainties would be caused by poor recovery (Zhu et al., 2016). Generally, matrix effects could be enhanced by full recovery, as other elements might overlap with the Ca cut during column chemistry. Matrix effects and full recovery are difficult to balance, and both need to be considered for high-precision analysis of stable Ca isotopes. Here, we investigate the influence of poor recovery on δ44/40Ca using TIMS with the double spike technique. The δ44/40Ca values of IAPSO seawater, ML3B-G and BHVO-2 in different Ca subcuts (e.g., 0-20, 20-40, 40-60, 60-80, 80-100%) with 20% Ca recovery on column chemistry display limited variation after correction by the 42Ca-43Ca double spike technique with the exponential law. Notably, the δ44/40Ca of each Ca subcut is quite consistent, within error, with the δ44/40Ca of the Ca cut with full recovery. Our results indicate that the 42Ca-43Ca double spike technique can simultaneously and properly correct the Ca isotopic fractionation that occurred during both column chemistry and thermal ionization mass spectrometry (TIMS) determination, because both fractionation processes follow the exponential law well. Therefore, we propose the "peak cut" method for Ca column chemistry on samples with complex matrix effects. Briefly, for samples with low Ca contents, we can add the double spike before column chemistry, and collect only the middle of the Ca eluate, discarding both sides of the Ca eluate that might overlap with other elements (e.g., K, Sr).
This method would eliminate matrix effects and improve efficiency for the column chemistry.
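The exponential mass-fractionation law underlying the correction states r_meas = r_true·(M_num/M_den)^β. The sketch below infers β from one isotope-ratio pair and uses it to correct another; it is a minimal illustration of the law, not a full iterative double-spike inversion, and the ratios and β are illustrative values.

```python
import numpy as np

# Approximate atomic masses of Ca isotopes (u).
M40, M42, M43, M44 = 39.9626, 41.9586, 42.9588, 43.9555

BETA_TRUE = 1.2            # fractionation exponent (illustrative)
R4440_TRUE = 0.02152       # assumed "true" 44Ca/40Ca (illustrative)
R4243_TRUE = 4.808         # assumed "true" 42Ca/43Ca, the known reference pair

# Exponential law: measured ratio = true ratio * (M_num / M_den) ** beta.
r4440_meas = R4440_TRUE * (M44 / M40) ** BETA_TRUE
r4243_meas = R4243_TRUE * (M42 / M43) ** BETA_TRUE

# Invert beta from the reference pair, then correct the 44/40 ratio with it.
beta_est = np.log(r4243_meas / R4243_TRUE) / np.log(M42 / M43)
r4440_corr = r4440_meas / (M44 / M40) ** beta_est
```

Because fractionation in both the column and the TIMS source follows this same law, a single β-style correction chain can compensate for both, which is what makes the "peak cut" approach workable.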
CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.
Shalizi, Cosma Rohilla; Rinaldo, Alessandro
2013-04-01
The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling, or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGMs' expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.
A stochastic evolutionary model generating a mixture of exponential distributions
NASA Astrophysics Data System (ADS)
Fenner, Trevor; Levene, Mark; Loizou, George
2016-02-01
Recent interest in human dynamics has stimulated the investigation of the stochastic processes that explain human behaviour in various contexts, such as mobile phone networks and social media. In this paper, we extend the stochastic urn-based model proposed in [T. Fenner, M. Levene, G. Loizou, J. Stat. Mech. 2015, P08015 (2015)] so that it can generate mixture models, in particular, a mixture of exponential distributions. The model is designed to capture the dynamics of survival analysis, traditionally employed in clinical trials, reliability analysis in engineering, and more recently in the analysis of large data sets recording human dynamics. The mixture modelling approach, which is relatively simple and well understood, is very effective in capturing heterogeneity in data. We provide empirical evidence for the validity of the model, using a data set of popular search engine queries collected over a period of 114 months. We show that the survival function of these queries is closely matched by the exponential mixture solution for our model.
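The exponential-mixture idea is easy to demonstrate on synthetic data. The sketch below fits a two-component exponential mixture by expectation-maximization; the data, component rates, and iteration count are illustrative assumptions, not the urn-model machinery or the query data set of the paper:

```python
import math
import random

# Sketch: fit the two-component exponential mixture
#   p(t) = w*lam1*exp(-lam1*t) + (1-w)*lam2*exp(-lam2*t)
# by expectation-maximization. Synthetic draws, not the paper's data.

random.seed(42)
data = ([random.expovariate(5.0) for _ in range(2000)] +
        [random.expovariate(0.5) for _ in range(2000)])

def em_exp_mixture(xs, w=0.5, lam1=1.0, lam2=2.0, iters=150):
    for _ in range(iters):
        # E-step: responsibility of component 1 for each observation
        resp = []
        for x in xs:
            p1 = w * lam1 * math.exp(-lam1 * x)
            p2 = (1 - w) * lam2 * math.exp(-lam2 * x)
            resp.append(p1 / (p1 + p2))
        # M-step: weighted maximum-likelihood updates
        n1 = sum(resp)
        w = n1 / len(xs)
        lam1 = n1 / sum(r * x for r, x in zip(resp, xs))
        lam2 = (len(xs) - n1) / sum((1 - r) * x for r, x in zip(resp, xs))
    return w, lam1, lam2

w, lam1, lam2 = em_exp_mixture(data)
rate_slow, rate_fast = sorted([lam1, lam2])
```

With well-separated rates (here 0.5 and 5.0), EM recovers both components and the mixing weight from the pooled sample, which is the sense in which a mixture "captures heterogeneity" in survival data.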
Verification of the exponential model of body temperature decrease after death in pigs.
Kaliszan, Michal; Hauser, Roman; Kaliszan, Roman; Wiczling, Paweł; Buczyński, Janusz; Penkowski, Michal
2005-09-01
The authors have conducted a systematic study in pigs to verify the models of post-mortem body temperature decrease currently employed in forensic medicine. Twenty-four hour automatic temperature recordings were performed in four body sites starting 1.25 h after pig killing in an industrial slaughterhouse under typical environmental conditions (19.5-22.5 degrees C). The animals had been randomly selected under a regular manufacturing process. The temperature decrease time plots drawn starting 75 min after death for the eyeball, the orbit soft tissues, the rectum and muscle tissue were found to fit the single-exponential thermodynamic model originally proposed by H. Rainy in 1868. In view of the actual intersubject variability, the addition of a second exponential term to the model was demonstrated to be statistically insignificant. Therefore, the two-exponential model for death time estimation frequently recommended in the forensic medicine literature, even if theoretically substantiated for individual test cases, provides no advantage as regards the reliability of estimation in an actual case. The claim that the precision of time of death estimation can be improved by reconstructing an individual cooling curve from two body temperature measurements taken 1 h apart, or from continuous measurement over a longer period (about 4 h), was also shown to be incorrect. It was demonstrated that the reported increase of precision of time of death estimation due to use of a multiexponential model, with individual exponential terms to account for the cooling rate of the specific body sites separately, is artifactual. The results of this study support the use of the eyeball and/or the orbit soft tissues as temperature measuring sites at times shortly after death. A single-exponential model applied to the eyeball cooling has been shown to provide a very precise estimation of the time of death up to approximately 13 h after death.
For the period thereafter, a better estimation of the time of death is obtained from temperature data collected from the muscles or the rectum.
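The single-exponential (Newtonian) cooling model favored by this study can be written down directly. In the sketch below the rate constant k and the initial temperature are illustrative placeholders, not values calibrated for the eyeball or any other body site:

```python
import math

# Single-exponential cooling: T(t) = T_amb + (T0 - T_amb)*exp(-k*t).
# k, T0 and T_amb below are illustrative placeholders only.

def body_temp(t, t0=37.0, t_amb=21.0, k=0.08):
    """Temperature (degrees C) after t hours post mortem."""
    return t_amb + (t0 - t_amb) * math.exp(-k * t)

def time_since_death(t_meas, t0=37.0, t_amb=21.0, k=0.08):
    """Invert the single-exponential model for the post-mortem interval (h)."""
    return -math.log((t_meas - t_amb) / (t0 - t_amb)) / k

# Round trip: simulate a measurement 10 h post mortem, then invert it.
t_measured = body_temp(10.0)
pmi_hours = time_since_death(t_measured)
```

The inversion shows why a single temperature reading suffices under this model once k is known, and why the paper's finding matters: fitting extra exponential terms or individual curves adds parameters without improving the estimate.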
NASA Astrophysics Data System (ADS)
Kuai, Zi-Xiang; Liu, Wan-Yu; Zhu, Yue-Min
2017-11-01
The aim of this work was to investigate the effect of multiple perfusion components on the pseudo-diffusion coefficient D * in the bi-exponential intravoxel incoherent motion (IVIM) model. Simulations were first performed to examine how the presence of multiple perfusion components influences D *. Real data from the livers (n = 31), spleens (n = 31) and kidneys (n = 31) of 31 volunteers were then acquired using DWI for the in vivo study, and the number of perfusion components in these tissues was determined, together with their perfusion fraction and D *, using an adaptive multi-exponential IVIM model. Finally, the bi-exponential model was applied to the real data and the mean, standard variance and coefficient of variation of D * as well as the fitting residual were calculated over the 31 volunteers for each of the three tissues and compared between them. The results of both the simulations and the in vivo study showed that, for the bi-exponential IVIM model, both the variance of D * and the fitting residual tended to increase when the number of perfusion components was increased or when the difference between perfusion components became large. In addition, it was found that the kidney presented the fewest perfusion components among the three tissues. The present study demonstrated that multi-component perfusion is a main factor that causes high variance of D * and the bi-exponential model should be used only when the tissues under investigation have few perfusion components, for example the kidney.
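The bi-exponential IVIM signal model can be sketched together with a common segmented fitting procedure (high-b log-linear fit for D, then a grid search for D*). The parameter values are typical orders of magnitude chosen for illustration, not patient data, and the segmented estimator is a standard simplification rather than the adaptive multi-exponential method of the paper:

```python
import math

# Bi-exponential IVIM: S(b)/S0 = f*exp(-b*D_star) + (1-f)*exp(-b*D).
# Parameters are illustrative orders of magnitude, not patient data.

def ivim(b, f, d_star, d):
    return f * math.exp(-b * d_star) + (1 - f) * math.exp(-b * d)

bvals = [0, 10, 20, 40, 80, 150, 300, 500, 800]
f_true, dstar_true, d_true = 0.15, 0.030, 0.0012   # mm^2/s scale
sig = [ivim(b, f_true, dstar_true, d_true) for b in bvals]

# 1) log-linear fit of the high-b tail (b >= 300), where the perfusion
#    term has decayed: slope gives -D, intercept gives log(1 - f).
hi = [(b, math.log(s)) for b, s in zip(bvals, sig) if b >= 300]
n = len(hi)
sx = sum(b for b, _ in hi); sy = sum(y for _, y in hi)
sxx = sum(b * b for b, _ in hi); sxy = sum(b * y for b, y in hi)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
d_est = -slope
f_est = 1 - math.exp((sy - slope * sx) / n)

# 2) grid search for D* with D and f held fixed
def sse(d_star):
    return sum((ivim(b, f_est, d_star, d_est) - s) ** 2
               for b, s in zip(bvals, sig))
dstar_est = min((k * 0.001 for k in range(1, 101)), key=sse)
```

On clean two-compartment data the segmented fit recovers all three parameters; the paper's point is that when the perfusion signal itself is multi-component, the recovered D* becomes unstable.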
Yuan, Jing; Yeung, David Ka Wai; Mok, Greta S. P.; Bhatia, Kunwar S.; Wang, Yi-Xiang J.; Ahuja, Anil T.; King, Ann D.
2014-01-01
Purpose To technically investigate the non-Gaussian diffusion of head and neck diffusion weighted imaging (DWI) at 3 Tesla and compare advanced non-Gaussian diffusion models, including diffusion kurtosis imaging (DKI), stretched-exponential model (SEM), intravoxel incoherent motion (IVIM) and statistical model in patients with nasopharyngeal carcinoma (NPC). Materials and Methods After ethics approval was granted, 16 patients with NPC were examined using DWI performed at 3T employing an extended b-value range from 0 to 1500 s/mm2. DWI signals were fitted to the mono-exponential and non-Gaussian diffusion models on primary tumor, metastatic node, spinal cord and muscle. Non-Gaussian parameter maps were generated and compared to apparent diffusion coefficient (ADC) maps in NPC. Results Diffusion in NPC exhibited non-Gaussian behavior at the extended b-value range. Non-Gaussian models achieved significantly better fitting of DWI signal than the mono-exponential model. Non-Gaussian diffusion coefficients were substantially different from mono-exponential ADC both in magnitude and histogram distribution. Conclusion Non-Gaussian diffusivity in head and neck tissues and NPC lesions could be assessed by using non-Gaussian diffusion models. Non-Gaussian DWI analysis may reveal additional tissue properties beyond ADC and holds potential to be used as a complementary tool for NPC characterization. PMID:24466318
Cocho, Germinal; Miramontes, Pedro; Mansilla, Ricardo; Li, Wentian
2014-12-01
We examine in detail the relationship between exponential correlation functions and Markov models in a bacterial genome. Despite the well known fact that Markov models generate sequences whose correlation function decays exponentially, simply constructed Markov models based on nearest-neighbor dimer (first-order), trimer (second-order), up to hexamer (fifth-order) statistics, treating the DNA sequence as homogeneous, all fail to predict the value of the exponential decay rate. Even reading-frame-specific Markov models (both first- and fifth-order) could not explain the fact that the exponential decay is very slow. Starting with the in-phase coding-DNA-sequence (CDS), we investigated correlation within a fixed-codon-position subsequence, and in artificially constructed sequences by packing CDSs with out-of-phase spacers, as well as altering CDS length distribution by imposing an upper limit. From these targeted analyses, we conclude that the correlation in the bacterial genomic sequence is mainly due to a mixing of heterogeneous statistics at different codon positions, and the decay of correlation is due to the possible out-of-phase between neighboring CDSs. There are also small contributions to the correlation from bases at the same codon position, as well as by non-coding sequences. These show that the seemingly simple exponential correlation functions in the bacterial genome hide a complexity in correlation structure which is not suitable for modeling by a Markov chain over a homogeneous sequence. Other results include the use of the (absolute value) second largest eigenvalue to represent the 16 correlation functions and the prediction of a 10-11 base periodicity from the hexamer frequencies. Copyright © 2014 Elsevier Ltd. All rights reserved.
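The baseline fact the paper starts from, that a homogeneous first-order Markov chain has an autocorrelation decaying as the k-th power of the transition matrix's second eigenvalue, can be checked with a toy two-state chain (synthetic, not a genomic sequence):

```python
import random

# For a two-state Markov chain with switch probabilities p (0->1) and
# q (1->0), the transition matrix [[1-p, p], [q, 1-q]] has second
# eigenvalue lambda_2 = 1 - p - q, and the autocorrelation of the
# stationary chain decays as lambda_2**k. Synthetic chain, not DNA.

p, q = 0.10, 0.25
lam2 = 1 - p - q

random.seed(1)
n = 200_000
x, seq = 0, []
for _ in range(n):
    if x == 0:
        x = 1 if random.random() < p else 0
    else:
        x = 0 if random.random() < q else 1
    seq.append(x)

mean = sum(seq) / n
var = sum((s - mean) ** 2 for s in seq) / n

def autocorr(k):
    c = sum((seq[i] - mean) * (seq[i + k] - mean) for i in range(n - k))
    return c / ((n - k) * var)

# The ratio of successive autocorrelations estimates lambda_2.
rate = autocorr(2) / autocorr(1)
```

The paper's point is the converse: observing a single exponential decay rate does not mean any homogeneous Markov model of this kind generated the sequence, since a mixture of heterogeneous codon-position statistics can mimic it.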
NASA Astrophysics Data System (ADS)
Glinka, Kevin; Matthies, Michael; Theiling, Marius; Hideg, Kalman; Steinhoff, Heinz-Jürgen
2016-04-01
Sulfonamide antibiotics used in livestock farming are distributed to farmland by application of slurry as fertilizer. Previous work suggests rapid covalent binding of the aniline moiety to humic acids found in soil. In the current work, the kinetics of this binding were measured by X-band EPR spectroscopy, incubating Leonardite humic acid (LHA) with a paramagnetic aniline spin label (anilino-NO (2,5,5-Trimethyl-2-(3-aminophenyl)pyrrolidin-1-oxyl)). Binding was detected by a pronounced broadening of the spectral lines after incubation of LHA with anilino-NO. The time evolution of the amplitude of this feature was used for determining the reaction kinetics. Single- and double-exponential models were fitted to the data obtained, modelling one or two first-order reactions. Reaction rates of 0.16 min^-1 and 0.012 min^-1 were found, respectively. Addition of laccase peroxidase did not change the kinetics but significantly enhanced the reacting fraction of anilino-NO. This EPR-based method provides a technically simple and effective way of following rapid binding processes of a xenobiotic substance to humic acids.
Modeling the growth processes of polyelectrolyte multilayers using a quartz crystal resonator.
Salomäki, Mikko; Kankare, Jouko
2007-07-26
The layer-by-layer buildup of chitosan/hyaluronan (CH/HA) and poly(l-lysine)/hyaluronan (PLL/HA) multilayers was followed on a quartz crystal resonator (QCR) in different ionic strengths and at different temperatures. These polyelectrolytes were chosen to demonstrate the method whereby useful information is retrieved from acoustically thick polymer layers during their buildup. Surface acoustic impedance recorded in these measurements gives a single or double spiral when plotted in the complex plane. The shape of this spiral depends on the viscoelasticity of the layer material and the regularity of the growth process. The polymer layer is assumed to consist of one or two zones. A mathematical model was devised to represent the separation of the layer into two zones with different viscoelastic properties. Viscoelastic quantities of the layer material and the mode and parameters of the growth process were acquired by fitting a spiral to the experimental data. In all cases the growth process was mainly exponential as a function of deposition cycles, the growth exponent being between 0.250 and 0.275.
Ihlen, Espen A. F.; van Schooten, Kimberley S.; Bruijn, Sjoerd M.; Pijnappels, Mirjam; van Dieën, Jaap H.
2017-01-01
Over the last decades, various measures have been introduced to assess stability during walking. All of these measures assume that gait stability may be equated with exponential stability, where dynamic stability is quantified by a Floquet multiplier or Lyapunov exponent. These specific constructs of dynamic stability assume that the gait dynamics are time independent and without phase transitions. In this case the temporal change in distance, d(t), between neighboring trajectories in state space is assumed to be an exponential function of time. However, results from walking models and empirical studies show that the assumptions of exponential stability break down in the vicinity of phase transitions that are present in each step cycle. Here we apply a general non-exponential construct of gait stability, called fractional stability, which can define dynamic stability in the presence of phase transitions. Fractional stability employs the fractional indices, α and β, of the differential operator, which allow modeling of singularities in d(t) that cannot be captured by exponential stability. Fractional stability provided an improved fit of d(t) compared to exponential stability when applied to trunk accelerations during daily-life walking in community-dwelling older adults. Moreover, using multivariate empirical mode decomposition surrogates, we found that the singularities in d(t), which were well modeled by fractional stability, are created by phase-dependent modulation of gait. The new construct of fractional stability may represent a physiologically more valid concept of stability in the vicinity of phase transitions and may thus pave the way for a more unified concept of gait stability. PMID:28900400
Stavn, R H
1988-01-15
The role of the Lambert-Beer law in ocean optics is critically examined. The Lambert-Beer law and the three-parameter model of the submarine light field are used to construct an optical energy budget for any hydrosol. It is further applied to the analytical exponential decay coefficient of the light field and used to estimate the optical properties and effects of the dissolved/suspended component in upper ocean layers. The concepts of the empirical exponential decay coefficient (diffuse attenuation coefficient) of the light field and a constant exponential decay coefficient for molecular water are analyzed quantitatively. A constant exponential decay coefficient for water is rejected. The analytical exponential decay coefficient is used to analyze optical gradients in ocean waters.
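The Lambert-Beer picture of an exponentially decaying light field with an additive attenuation budget can be sketched as follows. The coefficient values are illustrative placeholders, not measured optical properties, and, per the abstract, treating the water term as a universal constant is exactly what the paper questions:

```python
import math

# Lambert-Beer sketch: downwelling irradiance E(z) = E0 * exp(-K * z),
# with the exponential decay coefficient K split additively between
# molecular water and the dissolved/suspended component. Coefficient
# values are illustrative placeholders (per metre), not measurements.

K_water, K_dissolved = 0.04, 0.11
K_total = K_water + K_dissolved

def irradiance(z, e0=1.0, k=K_total):
    """Relative downwelling irradiance at depth z (metres)."""
    return e0 * math.exp(-k * z)

# Depth of the 1% light level implied by this model:
z_1pct = math.log(100) / K_total
```

The additive split is what lets an optical energy budget attribute attenuation to the dissolved/suspended component once the (depth-dependent, in the paper's analysis) water contribution is accounted for.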
A review of the matrix-exponential formalism in radiative transfer
NASA Astrophysics Data System (ADS)
Efremenko, Dmitry S.; Molina García, Víctor; Gimeno García, Sebastián; Doicu, Adrian
2017-07-01
This paper outlines the matrix exponential description of radiative transfer. The eigendecomposition method which serves as a basis for computing the matrix exponential and for representing the solution in a discrete ordinate setting is considered. The mathematical equivalence of the discrete ordinate method, the matrix operator method, and the matrix Riccati equations method is proved rigorously by means of the matrix exponential formalism. For optically thin layers, approximate solution methods relying on the Padé and Taylor series approximations to the matrix exponential, as well as on the matrix Riccati equations, are presented. For optically thick layers, the asymptotic theory with higher-order corrections is derived, and parameterizations of the asymptotic functions and constants for a water-cloud model with a Gamma size distribution are obtained.
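The Taylor-series route to the matrix exponential mentioned for optically thin layers can be sketched with scaling and squaring. The 2x2 test matrix is chosen so that exp(A) has a known closed form (a rotation); this is a generic illustration, not the discrete ordinate formulation itself:

```python
import math

# Truncated-Taylor matrix exponential with scaling and squaring:
# exp(A) = exp(A / 2**s) ** (2**s), with the inner factor summed as
# I + B + B^2/2! + ... . Pure-Python matrices for self-containment.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(A, s):
    return [[a * s for a in row] for row in A]

def expm_taylor(A, terms=12, squarings=6):
    n = len(A)
    # scale A down so the Taylor series converges in few terms
    B = mat_scale(A, 1.0 / (1 << squarings))
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # I
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = mat_scale(mat_mul(term, B), 1.0 / k)   # B**k / k!
        result = mat_add(result, term)
    for _ in range(squarings):   # undo the scaling by repeated squaring
        result = mat_mul(result, result)
    return result

A = [[0.0, 1.0], [-1.0, 0.0]]    # exp(A) = [[cos 1, sin 1], [-sin 1, cos 1]]
E = expm_taylor(A)
```

Production codes typically prefer the Padé variant of scaling and squaring for better error control at the same cost; the structure (scale, approximate, square back up) is the same.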
NASA Astrophysics Data System (ADS)
Molina Garcia, Victor; Sasi, Sruthy; Efremenko, Dmitry; Doicu, Adrian; Loyola, Diego
2017-04-01
In this work, the requirements for the retrieval of cloud properties in the back-scattering region are described, and their application to the measurements taken by the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) is shown. Various radiative transfer models and their linearizations are implemented, and their advantages and issues are analyzed. As radiative transfer calculations in the back-scattering region are computationally time-consuming, several acceleration techniques are also studied. The radiative transfer models analyzed include the exact Discrete Ordinate method with Matrix Exponential (DOME), the Matrix Operator method with Matrix Exponential (MOME), and the approximate asymptotic and equivalent Lambertian cloud models. To reduce the computational cost of the line-by-line (LBL) calculations, the k-distribution method, the Principal Component Analysis (PCA) and a combination of the k-distribution method plus PCA are used. The linearized radiative transfer models for retrieval of cloud properties include the Linearized Discrete Ordinate method with Matrix Exponential (LDOME), the Linearized Matrix Operator method with Matrix Exponential (LMOME) and the Forward-Adjoint Discrete Ordinate method with Matrix Exponential (FADOME). These models were applied to the EPIC oxygen-A band absorption channel at 764 nm. It is shown that the approximate asymptotic and equivalent Lambertian cloud models give inaccurate results, so an offline processor for the retrieval of cloud properties in the back-scattering region requires the use of exact models such as DOME and MOME, which behave similarly. The combination of the k-distribution method plus PCA presents similar accuracy to the LBL calculations, but it is up to 360 times faster, and the relative errors for the computed radiances are less than 1.5% compared to the results when the exact phase function is used. 
Finally, the linearized models studied show similar behavior, with relative errors less than 1% for the radiance derivatives, but FADOME is 2 times faster than LDOME and 2.5 times faster than LMOME.
An approach to multivariable control of manipulators
NASA Technical Reports Server (NTRS)
Seraji, H.
1987-01-01
The paper presents simple schemes for multivariable control of multiple-joint robot manipulators in joint and Cartesian coordinates. The joint control scheme consists of two independent multivariable feedforward and feedback controllers. The feedforward controller is the minimal inverse of the linearized model of robot dynamics and contains only proportional-double-derivative (PD2) terms - implying feedforward from the desired position, velocity and acceleration. This controller ensures that the manipulator joint angles track any reference trajectories. The feedback controller is of proportional-integral-derivative (PID) type and is designed to achieve pole placement. This controller reduces any initial tracking error to zero as desired and also ensures that robust steady-state tracking of step-plus-exponential trajectories is achieved by the joint angles. Simple and explicit expressions of computation of the feedforward and feedback gains are obtained based on the linearized model of robot dynamics. This leads to computationally efficient schemes for either on-line gain computation or off-line gain scheduling to account for variations in the linearized robot model due to changes in the operating point. The joint control scheme is extended to direct control of the end-effector motion in Cartesian space. Simulation results are given for illustration.
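The feedback half of such a scheme can be illustrated with a discrete PID loop driving a unit-inertia joint (a double integrator). The gains below are hand-tuned illustrative values, not the pole-placement gains derived in the paper, and the plant omits the nonlinear robot dynamics:

```python
# Discrete PID feedback on a unit-inertia joint (double integrator),
# tracking a step reference. Gains are illustrative, hand-tuned values
# (Routh check for s^3 + kd*s^2 + kp*s + ki: kd*kp > ki holds here),
# not the pole-placement gains of the paper.

def simulate(kp=100.0, ki=120.0, kd=20.0, ref=1.0, dt=0.001, steps=8000):
    pos, vel, integ = 0.0, 0.0, 0.0
    prev_err = ref            # avoids a derivative kick on the first step
    for _ in range(steps):
        err = ref - pos
        integ += err * dt
        deriv = (err - prev_err) / dt
        torque = kp * err + ki * integ + kd * deriv
        prev_err = err
        # unit-inertia joint: acceleration equals applied torque
        vel += torque * dt
        pos += vel * dt
    return pos

final_pos = simulate()
```

In the paper's full scheme this PID loop only cleans up initial error and disturbances; a PD2 feedforward term from the desired position, velocity and acceleration does the bulk of the tracking.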
Bayesian Analysis for Exponential Random Graph Models Using the Adaptive Exchange Sampler.
Jin, Ick Hoon; Yuan, Ying; Liang, Faming
2013-10-01
Exponential random graph models have been widely used in social network analysis. However, these models are extremely difficult to handle from a statistical viewpoint, because of the intractable normalizing constant and model degeneracy. In this paper, we consider a fully Bayesian analysis for exponential random graph models using the adaptive exchange sampler, which solves the intractable normalizing constant and model degeneracy issues encountered in Markov chain Monte Carlo (MCMC) simulations. The adaptive exchange sampler can be viewed as an MCMC extension of the exchange algorithm, and it generates auxiliary networks via an importance sampling procedure from an auxiliary Markov chain running in parallel. The convergence of this algorithm is established under mild conditions. The adaptive exchange sampler is illustrated using a few social networks, including the Florentine business network, molecule synthetic network, and dolphins network. The results indicate that the adaptive exchange algorithm can produce more accurate estimates than approximate exchange algorithms, while maintaining the same computational efficiency.
Fracture analysis of a central crack in a long cylindrical superconductor with exponential model
NASA Astrophysics Data System (ADS)
Zhao, Yu Feng; Xu, Chi
2018-05-01
The fracture behavior of a long cylindrical superconductor is investigated by modeling a central crack that is induced by electromagnetic force. Based on the exponential model, the stress intensity factors (SIFs) as functions of the dimensionless parameter p and the crack length a/R for the zero-field cooling (ZFC) and field-cooling (FC) processes are numerically simulated using the finite element method (FEM) and assuming a persistent current flow. As the applied field Ba decreases, the dependence of the SIFs on p and a/R in the ZFC process is exactly opposite to that observed in the FC process. Numerical results indicate that the exponential model exhibits different characteristics for the trend of the SIFs from the results obtained using the Bean and Kim models. This implies that the crack length and the trapped field have significant effects on the fracture behavior of bulk superconductors. The obtained results are useful for understanding the critical-state model of high-temperature superconductors in crack problems.
Taming the runaway problem of inflationary landscapes
NASA Astrophysics Data System (ADS)
Hall, Lawrence J.; Watari, Taizan; Yanagida, T. T.
2006-05-01
A wide variety of vacua, and their cosmological realization, may provide an explanation for the apparently anthropic choices of some parameters of particle physics and cosmology. If the probability on various parameters is weighted by volume, a flat potential for slow-roll inflation is also naturally understood, since the flatter the potential the larger the volume of the subuniverse. However, such inflationary landscapes have a serious problem, predicting an environment that makes it exponentially hard for observers to exist and giving an exponentially small probability for a moderate universe like ours. A general solution to this problem is proposed, and is illustrated in the context of inflaton decay and leptogenesis, leading to an upper bound on the reheating temperature in our subuniverse. In a particular scenario of chaotic inflation and nonthermal leptogenesis, predictions can be made for the size of CP violating phases, the rate of neutrinoless double beta decay and, in the case of theories with gauge-mediated weak-scale supersymmetry, for the fundamental scale of supersymmetry breaking.
An improved cyan fluorescent protein variant useful for FRET.
Rizzo, Mark A; Springer, Gerald H; Granada, Butch; Piston, David W
2004-04-01
Many genetically encoded biosensors use Förster resonance energy transfer (FRET) between fluorescent proteins to report biochemical phenomena in living cells. Most commonly, the enhanced cyan fluorescent protein (ECFP) is used as the donor fluorophore, coupled with one of several yellow fluorescent protein (YFP) variants as the acceptor. ECFP is used despite several spectroscopic disadvantages, namely a low quantum yield, a low extinction coefficient and a fluorescence lifetime that is best fit by a double exponential. To improve the characteristics of ECFP for FRET measurements, we used a site-directed mutagenesis approach to overcome these disadvantages. The resulting variant, which we named Cerulean (ECFP/S72A/Y145A/H148D), has a greatly improved quantum yield, a higher extinction coefficient and a fluorescence lifetime that is best fit by a single exponential. Cerulean is 2.5-fold brighter than ECFP and replacement of ECFP with Cerulean substantially improves the signal-to-noise ratio of a FRET-based sensor for glucokinase activation.
φ meson production in Au + Au and p + p collisions at √s_NN = 200 GeV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, J.; Adler, C.; Aggarwal, M.M.
2004-06-01
We report the STAR measurement of φ meson production in Au + Au and p + p collisions at √s_NN = 200 GeV. Using the event mixing technique, the φ spectra and yields are obtained at midrapidity for five centrality bins in Au+Au collisions and for non-singly-diffractive p+p collisions. It is found that the φ transverse momentum distributions from Au+Au collisions are better fitted with a single-exponential while the p+p spectrum is better described by a double-exponential distribution. The measured nuclear modification factors indicate that φ production in central Au+Au collisions is suppressed relative to peripheral collisions when scaled by the number of binary collisions. This behavior versus centrality, together with the constant φ/K^- ratio versus beam species, centrality, and collision energy, rules out kaon coalescence as the dominant mechanism for φ production.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rout, Dipak; Vijaya, R.; Centre for Lasers and Photonics, Indian Institute of Technology Kanpur, Kanpur 208016
Well-ordered opaline photonic crystals are grown by the inward growing self-assembly method from Rhodamine B dye-doped polystyrene colloids. Subsequent to self-assembly, the crystals are infiltrated with gold nanoparticles of 40 nm diameter. Measurements of the stopband features and photoluminescence intensity from these crystals are supplemented by fluorescence decay time analysis. The fluorescence decay times from the dye-doped photonic crystals before and after the infiltration are dramatically different from each other. A lowered fluorescence decay time was observed for the gold-infiltrated crystal along with an enhanced emission intensity. The double-exponential decay of the fluorescence from the dye-doped crystal is converted into single-exponential decay upon the infiltration of gold nanoparticles due to the resonant radiative process resulting from the overlap of the surface plasmon resonance with the emission spectrum. The influence of the localized surface plasmon of the gold nanoparticles on the increase in emission intensity and decrease in decay time of the emitters is established.
NASA Astrophysics Data System (ADS)
Krugon, Seelam; Nagaraju, Dega
2017-05-01
This work describes a two-echelon supply chain inventory system in which the manufacturer offers a credit period to the retailer under exponential price-dependent demand, with demand expressed as an exponential function of the retailer's unit selling price. A mathematical model is formulated to determine the optimal cycle time, retailer replenishment quantity, number of shipments, and total relevant cost of the supply chain. The main objective of the paper is to incorporate trade credit from the manufacturer to the retailer under exponential price-dependent demand, where the retailer would like to delay payments to the manufacturer. In the first stage, cost expressions for the retailer and the manufacturer are formulated as functions of ordering cost, carrying cost, and transportation cost; in the second stage, these expressions are combined. A MATLAB program is written to compute the optimal cycle time, retailer replenishment quantity, number of shipments, and total relevant cost of the supply chain, from which managerial insights can be drawn. The findings show that the total cost of the supply chain decreases as the credit period increases under exponential price-dependent demand. To analyse the influence of the model parameters, a parametric analysis is also carried out with the help of a numerical example.
Marques, Ricardo; Rodriguez-Caballero, A; Oehmen, Adrian; Pijuan, Maite
2016-08-01
Clark-type nitrous oxide (N2O) sensors are routinely used to measure dissolved N2O concentrations in wastewater treatment plants (WWTPs), but have never before been applied to assess gas-phase N2O emissions in full-scale WWTPs. In this study, a full-scale N2O gas sensor was tested and validated for online gas measurements, and assessed with respect to its linearity, temperature dependence, signal saturation and drift prior to full-scale application. The sensor was linear at the concentrations tested (0-422.3, 0-50 and 0-10 ppmv N2O) and had a linear response up to 2750 ppmv N2O. An exponential correlation between temperature and sensor signal was described and predicted using a double exponential equation, while drift did not have a significant influence on the signal. The N2O gas sensor was used for online N2O monitoring in a full-scale sequencing batch reactor (SBR) treating domestic wastewater and the results were compared with those obtained by a commercial online gas analyser. Emissions were successfully described by the sensor, which was even more accurate than the commercial analyser at N2O concentrations above 500 ppmv. Data from this gas N2O sensor were also used to validate two models that predict N2O emissions from dissolved N2O measurements, one based on the oxygen transfer rate and the other based on the superficial velocity of the gas bubble. Using the first model, predictions for N2O emissions agreed to within 98.7% with those measured by the gas sensor, while 87.0% agreement was obtained with the second model. This is the first study showing a reliable estimation of gas emissions based on dissolved N2O online data in a full-scale wastewater treatment facility. Copyright © 2016 Elsevier Ltd. All rights reserved.
2010-06-01
GMKPF represents a better and more flexible alternative to the Gaussian Maximum Likelihood (GML) and Exponential Maximum Likelihood (EML) estimators for clock offset estimation in non-Gaussian or non-exponential settings, giving more accurate results relative to GML and EML when the network delays are modeled in terms of a single non-Gaussian/non-exponential distribution or as a ...
NASA Astrophysics Data System (ADS)
Ivashchuk, V. D.; Ernazarov, K. K.
2017-01-01
A (n + 1)-dimensional gravitational model with cosmological constant and Gauss-Bonnet term is studied. The ansatz with diagonal cosmological metrics is adopted and solutions with exponential dependence of the scale factors, a_i ~ exp(v_i t), i = 1, ..., n, are considered. The stability analysis of the solutions with non-static volume factor is presented. We show that the solutions with v_1 = v_2 = v_3 = H > 0 and small enough variation of the effective gravitational constant G are stable if a certain restriction on (v_i) is obeyed. New examples of stable exponential solutions with zero variation of G in dimensions D = 1 + m + 2 with m > 2 are presented.
An exactly solvable, spatial model of mutation accumulation in cancer
NASA Astrophysics Data System (ADS)
Paterson, Chay; Nowak, Martin A.; Waclaw, Bartlomiej
2016-12-01
One of the hallmarks of cancer is the accumulation of driver mutations which increase the net reproductive rate of cancer cells and allow them to spread. This process has been studied in mathematical models of well mixed populations, and in computer simulations of three-dimensional spatial models. But the computational complexity of these more realistic, spatial models makes it difficult to simulate realistically large and clinically detectable solid tumours. Here we describe an exactly solvable mathematical model of a tumour featuring replication, mutation and local migration of cancer cells. The model predicts a quasi-exponential growth of large tumours, even if different fragments of the tumour grow sub-exponentially due to nutrient and space limitations. The model reproduces clinically observed tumour growth times using biologically plausible rates for cell birth, death, and migration rates. We also show that the expected number of accumulated driver mutations increases exponentially in time if the average fitness gain per driver is constant, and that it reaches a plateau if the gains decrease over time. We discuss the realism of the underlying assumptions and possible extensions of the model.
Two solar proton fluence models based on ground level enhancement observations
NASA Astrophysics Data System (ADS)
Raukunen, Osku; Vainio, Rami; Tylka, Allan J.; Dietrich, William F.; Jiggens, Piers; Heynderickx, Daniel; Dierckxsens, Mark; Crosby, Norma; Ganse, Urs; Siipola, Robert
2018-01-01
Solar energetic particles (SEPs) constitute an important component of the radiation environment in interplanetary space. Accurate modeling of SEP events is crucial for the mitigation of radiation hazards in spacecraft design. In this study we present two new statistical models of high energy solar proton fluences based on ground level enhancement (GLE) observations during solar cycles 19-24. As the basis of our modeling, we utilize four-parameter double power law (Band function) fits to integral GLE fluence spectra in rigidity. In the first model, the integral and differential fluences for protons with energies between 10 MeV and 1 GeV are calculated using the fits, and the distributions of the fluences at certain energies are modeled with an exponentially cut-off power law function. In the second model, we use a more advanced methodology: by investigating the distributions and relationships of the spectral fit parameters we find that they can be modeled as two independent and two dependent variables. Therefore, instead of modeling the fluences separately at different energies, we can model the shape of the fluence spectrum. We present examples of modeling results and show that the two methodologies agree well except for a short mission duration (1 year) at a low confidence level. We also show that there is a reasonable agreement between our models and three well-known solar proton models (JPL, ESP and SEPEM), despite the differences in both the modeling methodologies and the data used to construct the models.
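In its general form, the Band function named above is a double power law whose low-rigidity branch rolls over exponentially and joins the high-rigidity branch continuously at a break rigidity. A minimal sketch of that functional shape (the parameter values below are illustrative, not the fitted GLE values):

```python
import math

def band(R, J0, gamma1, gamma2, R0):
    """Band function: double power law in rigidity R with an exponential
    rollover; the branches join continuously (in value and slope) at the
    break rigidity Rb = (gamma2 - gamma1) * R0."""
    Rb = (gamma2 - gamma1) * R0
    if R <= Rb:
        return J0 * R ** (-gamma1) * math.exp(-R / R0)
    # constant chosen so the two branches match at Rb
    C = Rb ** (gamma2 - gamma1) * math.exp(-(gamma2 - gamma1))
    return J0 * C * R ** (-gamma2)

# Illustrative spectrum: softer power law above the break
spec = [band(R, 1.0, 1.5, 4.0, 0.3) for R in (0.1, 0.5, 1.0, 2.0)]
```

Because the break rigidity is derived from the spectral indices and rollover scale, the fit has exactly the four free parameters (J0, gamma1, gamma2, R0) mentioned in the abstract.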
Exponential series approaches for nonparametric graphical models
NASA Astrophysics Data System (ADS)
Janofsky, Eric
Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. 
We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.
Palombo, Marco; Gabrielli, Andrea; De Santis, Silvia; Capuani, Silvia
2012-03-01
In this paper, we investigate the image contrast that characterizes anomalous and non-Gaussian diffusion images obtained using the stretched exponential model. This model is based on the introduction of the γ stretching parameter, which quantifies deviation from the mono-exponential decay of the diffusion signal as a function of the b-value. To date, the biophysical substrate underpinning the contrast observed in γ maps, in other words, the biophysical interpretation of the γ parameter (or the fractional order derivative in space, the β parameter) is still not fully understood, although it has already been applied to investigate both animal models and the human brain. Due to the ability of γ maps to reflect additional microstructural information which cannot be obtained using procedures based on Gaussian diffusion, some authors propose this parameter as a measure of diffusion heterogeneity or water compartmentalization in biological tissues. Based on our recent work we suggest here that the coupling between internal and diffusion gradients provides pseudo-superdiffusion effects which are quantified by the stretched-exponential parameter γ. This means that the image contrast of Mγ maps reflects local magnetic susceptibility differences (Δχ_m), thus highlighting better than T2* contrast the interface between compartments characterized by Δχ_m. Thanks to this characteristic, Mγ imaging may represent an interesting tool to develop contrast-enhanced MRI for molecular imaging. The spectroscopic and imaging experiments (performed in controlled micro-bead dispersions) that are reported here strongly suggest internal gradients, and as a consequence Δχ_m, to be an important factor in fully understanding the source of contrast in anomalous diffusion methods that are based on a stretched exponential model analysis of diffusion data obtained at varying gradient strengths g. Copyright © 2012 Elsevier Inc. All rights reserved.
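The stretched exponential signal model referred to above has the standard form S(b) = S0 exp(-(bD)^γ), where γ = 1 recovers the mono-exponential (Gaussian) decay. A minimal sketch with illustrative parameter values (not taken from the experiments):

```python
import math

def stretched_exp_signal(b, S0, D, gamma):
    """Stretched-exponential DWI signal: S(b) = S0 * exp(-(b*D)**gamma).
    gamma = 1 is the mono-exponential (Gaussian) decay; gamma < 1
    stretches the tail, reflecting diffusion heterogeneity."""
    return S0 * math.exp(-((b * D) ** gamma))

bvals = [0, 200, 500, 1000, 2000]   # s/mm^2, illustrative
mono = [stretched_exp_signal(b, 1.0, 1e-3, 1.0) for b in bvals]
stretched = [stretched_exp_signal(b, 1.0, 1e-3, 0.7) for b in bvals]
```

At high b-values the stretched decay (γ < 1) sits above the mono-exponential one, which is the deviation the γ maps quantify.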
Growth and mortality of larval Myctophum affine (Myctophidae, Teleostei).
Namiki, C; Katsuragawa, M; Zani-Teixeira, M L
2015-04-01
The growth and mortality rates of Myctophum affine larvae were analysed based on samples collected during the austral summer and winter of 2002 from south-eastern Brazilian waters. The larvae ranged in size from 2.75 to 14.00 mm standard length (L_S). Daily increment counts from 82 sagittal otoliths showed that the age of M. affine ranged from 2 to 28 days. Three models were applied to estimate the growth rate: linear regression, exponential model and Laird-Gompertz model. The exponential model best fitted the data, and L_0 values from the exponential and Laird-Gompertz models were close to the smallest larva reported in the literature (c. 2.5 mm L_S). The average growth rate (0.33 mm day^-1) was intermediate among lanternfishes. The mortality rate (12%) during the larval period was below average compared with other marine fish species but similar to some epipelagic fishes that occur in the area. © 2015 The Fisheries Society of the British Isles.
1/f oscillations in a model of moth populations oriented by diffusive pheromones
NASA Astrophysics Data System (ADS)
Barbosa, L. A.; Martins, M. L.; Lima, E. R.
2005-01-01
An individual-based model for the population dynamics of Spodoptera frugiperda in a homogeneous environment is proposed. The model involves moths feeding on plants, mating through an anemotaxis search (i.e., oriented by odor dispersed in a current of air), and dying due to resource competition or at a maximum age. As observed in the laboratory, the females release pheromones at exponentially distributed time intervals, and it is assumed that the ranges of the male flights follow a power-law distribution. Computer simulations of the model reveal the central role of the anemotaxis search for the persistence of the moth population. Such stationary populations are exponentially distributed in age, exhibit random temporal fluctuations with a 1/f spectrum, and self-organize in disordered spatial patterns with long-range correlations. In addition, the model results demonstrate that pest control through pheromone mass trapping is effective only if the amounts of pheromone released by the traps decay much more slowly than the exponential distribution for calling females.
Performance of time-series methods in forecasting the demand for red blood cell transfusion.
Pereira, Arturo
2004-05-01
Planning future blood collection efforts must be based on adequate forecasts of transfusion demand. In this study, univariate time-series methods were investigated for their performance in forecasting the monthly demand for RBCs at one tertiary-care, university hospital. Three time-series methods were investigated: autoregressive integrated moving average (ARIMA), the Holt-Winters family of exponential smoothing models, and one neural-network-based method. The time series consisted of the monthly demand for RBCs from January 1988 to December 2002 and was divided into two segments: the older one was used to fit or train the models, and the younger to test for the accuracy of predictions. Performance was compared across forecasting methods by calculating goodness-of-fit statistics, the percentage of months in which forecast-based supply would have met the RBC demand (coverage rate), and the outdate rate. The RBC transfusion series was best fitted by a seasonal ARIMA(0,1,1)(0,1,1)_12 model. Over 1-year time horizons, forecasts generated by ARIMA or exponential smoothing lay within the +/- 10 percent interval of the real RBC demand in 79 percent of months (62% in the case of neural networks). The coverage rate for the three methods was 89, 91, and 86 percent, respectively. Over 2-year time horizons, exponential smoothing largely outperformed the other methods. Predictions by exponential smoothing lay within the +/- 10 percent interval of real values in 75 percent of the 24 forecasted months, and the coverage rate was 87 percent. Over 1-year time horizons, predictions of RBC demand generated by ARIMA or exponential smoothing are accurate enough to be of help in the planning of blood collection efforts. For longer time horizons, exponential smoothing outperforms the other forecasting methods.
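The simplest member of the Holt-Winters family used above is simple exponential smoothing (no trend or seasonal terms). A minimal sketch, with invented monthly demand figures for illustration:

```python
def ses_forecast(series, alpha, horizon):
    """Simple exponential smoothing: the level is updated as
    level = alpha*y + (1 - alpha)*level for each observation y,
    and the h-step-ahead forecast is flat at the final level."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return [level] * horizon

demand = [480, 495, 510, 500, 520, 515, 530]   # illustrative monthly RBC units
forecast = ses_forecast(demand, 0.3, 3)
```

The seasonal ARIMA and full Holt-Winters models in the study add trend and 12-month seasonal components on top of this basic recursion.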
The Use of Modeling Approach for Teaching Exponential Functions
NASA Astrophysics Data System (ADS)
Nunes, L. F.; Prates, D. B.; da Silva, J. M.
2017-12-01
This work presents a discussion related to the teaching and learning of mathematical content concerning exponential functions in a group of freshman students enrolled in the first semester of the Science and Technology Bachelor's program (STB) of the Federal University of Jequitinhonha and Mucuri Valleys (UFVJM). As a contextualization tool strongly advocated in the literature, the modelling approach was used as an educational tool to contextualize the teaching-learning process of exponential functions for these students. To this end, some simple models were elaborated with the GeoGebra software, and Didactic Engineering was used as the research methodology to provide a qualitative evaluation of the investigation and its results. As a consequence of this detailed research, some interesting details about the teaching and learning process were observed, discussed and described.
Some properties of the Catalan-Qi function related to the Catalan numbers.
Qi, Feng; Mahmoud, Mansour; Shi, Xiao-Ting; Liu, Fang-Fang
2016-01-01
In the paper, the authors find some properties of the Catalan numbers, the Catalan function, and the Catalan-Qi function which is a generalization of the Catalan numbers. Concretely speaking, the authors present a new expression, asymptotic expansions, integral representations, logarithmic convexity, complete monotonicity, minimality, logarithmically complete monotonicity, a generating function, and inequalities of the Catalan numbers, the Catalan function, and the Catalan-Qi function. As by-products, an exponential expansion and a double inequality for the ratio of two gamma functions are derived.
SU-E-T-259: Particle Swarm Optimization in Radial Dose Function Fitting for a Novel Iodine-125 Seed
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, X; Duan, J; Popple, R
2014-06-01
Purpose: To determine the coefficients of bi- and tri-exponential functions for the best fit of radial dose functions of the new iodine brachytherapy source: Iodine-125 Seed AgX-100. Methods: The particle swarm optimization (PSO) method was used to search for the coefficients of the bi- and tri-exponential functions that yield the best fit to data published for a few selected radial distances from the source. The coefficients were encoded into particles, and these particles move through the search space by following their local and global best-known positions. In each generation, particles were evaluated through their fitness function and their positions were changed through their velocities. This procedure was repeated until the convergence criterion was met or the maximum generation was reached. All best particles were found in less than 1,500 generations. Results: For the I-125 seed AgX-100 considered as a point source, the maximum deviation from the published data is less than 2.9% for the bi-exponential fitting function and 0.2% for the tri-exponential fitting function. For its line source, the maximum deviation is less than 1.1% for the bi-exponential fitting function and 0.08% for the tri-exponential fitting function. Conclusion: PSO is a powerful method for searching coefficients of bi-exponential and tri-exponential fitting functions. The bi- and tri-exponential models of Iodine-125 seed AgX-100 point and line sources obtained with PSO optimization provide accurate analytical forms of the radial dose function. The tri-exponential fitting function is more accurate than the bi-exponential function.
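The PSO scheme described above (particles tracking personal and global bests) can be sketched minimally as follows; the target curve here is a synthetic bi-exponential, not the published AgX-100 data, and all parameter values are illustrative:

```python
import math, random

random.seed(1)

# Synthetic radial-dose-style samples from a known bi-exponential
# g(r) = A1*exp(-k1*r) + A2*exp(-k2*r); illustrative values only.
A1, K1, A2, K2 = 0.7, 0.5, 0.3, 2.0
radii = [0.5 + 0.25 * i for i in range(20)]
data = [A1 * math.exp(-K1 * r) + A2 * math.exp(-K2 * r) for r in radii]

def sse(p):
    """Sum of squared errors of a bi-exponential with parameters p."""
    return sum((p[0] * math.exp(-p[1] * r) + p[2] * math.exp(-p[3] * r) - d) ** 2
               for r, d in zip(radii, data))

def pso(f, dim, n=30, iters=300, lo=0.0, hi=3.0, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm: velocities blend inertia, attraction to the
    personal best, and attraction to the global best; positions are
    clamped to [lo, hi]."""
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P, pbest = [x[:] for x in X], [f(x) for x in X]
    gi = pbest.index(min(pbest))
    G, gbest = P[gi][:], pbest[gi]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (P[i][d] - X[i][d])
                           + c2 * random.random() * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, G = fx, X[i][:]
    return G, gbest

params, err = pso(sse, 4)
```

A tri-exponential fit works identically with dim = 6 and a third term in the model.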
Discrete sudden perturbation theory for inelastic scattering. I. Quantum and semiclassical treatment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cross, R.J.
1985-12-01
A double perturbation theory is constructed to treat rotationally and vibrationally inelastic scattering. It uses both the elastic scattering from the spherically averaged potential and the infinite-order sudden (IOS) approximation as the unperturbed solutions. First, a standard perturbation expansion is done to express the radial wave functions in terms of the elastic wave functions. The resulting coupled equations are transformed to the discrete-variable representation where the IOS equations are diagonal. Then, the IOS solutions are removed from the equations which are solved by an exponential perturbation approximation. The results for Ar+N/sub 2/ are very much more accurate than the IOS and somewhat more accurate than a straight first-order exponential perturbation theory. The theory is then converted into a semiclassical, time-dependent form by using the WKB approximation. The result is an integral of the potential times a slowly oscillating factor over the classical trajectory. A method of interpolating the result is given so that the calculation is done at the average velocity for a given transition. With this procedure, the semiclassical version of the theory is more accurate than the quantum version and very much faster. Calculations on Ar+N/sub 2/ show the theory to be much more accurate than the infinite-order sudden (IOS) approximation and the exponential time-dependent perturbation theory.
Temporal and spatial binning of TCSPC data to improve signal-to-noise ratio and imaging speed
NASA Astrophysics Data System (ADS)
Walsh, Alex J.; Beier, Hope T.
2016-03-01
Time-correlated single photon counting (TCSPC) is the most robust method for fluorescence lifetime imaging using laser scanning microscopes. However, TCSPC is inherently slow, making it ill-suited to capturing rapid events: at most a single photon is recorded per laser pulse, which imposes long acquisition times, and the emission rate must be kept low to avoid biasing measurements towards short lifetimes. Furthermore, thousands of photons per pixel are required for traditional instrument response deconvolution and fluorescence lifetime exponential decay estimation. Instrument response deconvolution and fluorescence exponential decay estimation can be performed in several ways, including iterative least squares minimization and Laguerre deconvolution. This paper compares the limitations and accuracy of these fluorescence decay analysis techniques in estimating double exponential decays across many data characteristics, including various lifetime values, lifetime component weights, signal-to-noise ratios, and numbers of photons detected. Furthermore, techniques to improve data fitting, including binning data temporally and spatially, are evaluated as methods to improve decay fits and reduce image acquisition time. Simulation results demonstrate that binning temporally to 36 or 42 time bins improves the accuracy of fits for low photon count data. Such a technique reduces the required number of photons for accurate component estimation if lifetime values are known, such as for commercial fluorescent dyes and FRET experiments, and improves imaging speed 10-fold.
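The temporal binning step evaluated above amounts to summing adjacent time channels of the decay histogram. A minimal sketch (channel counts are invented for illustration):

```python
def rebin_decay(counts, nbins):
    """Temporally rebin a TCSPC decay histogram to nbins by summing
    adjacent time channels; the total photon count is preserved."""
    if len(counts) % nbins:
        raise ValueError("channel count must be divisible by nbins")
    step = len(counts) // nbins
    return [sum(counts[i:i + step]) for i in range(0, len(counts), step)]

# 252 time channels -> 36 coarse bins (252 = 36 * 7), illustrative counts
decay = [1000 - 3 * i for i in range(252)]
coarse = rebin_decay(decay, 36)
```

Spatial binning works the same way across neighbouring pixels instead of time channels; both trade temporal/spatial resolution for photon statistics per bin.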
Development and evaluation of a biomedical search engine using a predicate-based vector space model.
Kwak, Myungjae; Leroy, Gondy; Martinez, Jesse D; Harwell, Jeffrey
2013-10-01
Although biomedical information available in articles and patents is increasing exponentially, we continue to rely on the same information retrieval methods and use very few keywords to search millions of documents. We are developing a fundamentally different approach for finding much more precise and complete information with a single query, using predicates instead of keywords for both query and document representation. Predicates are triples that are more complex data structures than keywords and contain more structured information. To make optimal use of them, we developed a new predicate-based vector space model and query-document similarity function with adjusted tf-idf and boost function. Using a test bed of 107,367 PubMed abstracts, we evaluated the first essential function: retrieving information. Cancer researchers provided 20 realistic queries, for which the top 15 abstracts were retrieved using a predicate-based (new) and keyword-based (baseline) approach. Each abstract was evaluated, double-blind, by cancer researchers on a 0-5 point scale to calculate precision (0 versus higher) and relevance (0-5 score). Precision was significantly higher (p<.001) for the predicate-based (80%) than for the keyword-based (71%) approach. Relevance was almost doubled with the predicate-based approach: 2.1 versus 1.6 without rank order adjustment (p<.001) and 1.34 versus 0.98 with rank order adjustment (p<.001) for the predicate- versus keyword-based approach, respectively. Predicates can support more precise searching than keywords, laying the foundation for rich and sophisticated information search. Copyright © 2013 Elsevier Inc. All rights reserved.
Event-driven simulations of nonlinear integrate-and-fire neurons.
Tonnelier, Arnaud; Belmabrouk, Hana; Martinez, Dominique
2007-12-01
Event-driven strategies have been used to simulate spiking neural networks exactly. Previous work is limited to linear integrate-and-fire neurons. In this note, we extend event-driven schemes to a class of nonlinear integrate-and-fire models. Results are presented for the quadratic integrate-and-fire model with instantaneous or exponential synaptic currents. Extensions to conductance-based currents and exponential integrate-and-fire neurons are discussed.
A non-Gaussian option pricing model based on Kaniadakis exponential deformation
NASA Astrophysics Data System (ADS)
Moretto, Enrico; Pasquali, Sara; Trivellato, Barbara
2017-09-01
A way to make financial models effective is by letting them represent the so-called "fat tails", i.e., extreme changes in stock prices that are regarded as almost impossible by the standard Gaussian distribution. In this article, the Kaniadakis deformation of the usual exponential function is used to define a random noise source in the dynamics of price processes capable of capturing such real market phenomena.
Dustfall Effect on Hyperspectral Inversion of Chlorophyll Content - a Laboratory Experiment
NASA Astrophysics Data System (ADS)
Chen, Yuteng; Ma, Baodong; Li, Xuexin; Zhang, Song; Wu, Lixin
2018-04-01
Dust pollution is serious in many areas of China. It is of great significance to estimate the chlorophyll content of vegetation accurately by hyperspectral remote sensing for assessing vegetation growth status and monitoring the ecological environment in dusty areas. By using selected vegetation indices, including the Medium Resolution Imaging Spectrometer Terrestrial Chlorophyll Index (MTCI), Double Difference Index (DD), and Red Edge Position Index (REP), chlorophyll inversion models were built to study the accuracy of hyperspectral inversion of chlorophyll content based on a laboratory experiment. The results show that: (1) The REP exponential model has the most stable accuracy for inversion of chlorophyll content in a dusty environment. When the dustfall amount is less than 80 g/m2, the inversion accuracy based on REP is stable with variation of the dustfall amount; when the dustfall amount is greater than 80 g/m2, the inversion accuracy fluctuates slightly. (2) The inversion accuracy of DD is the worst among the three models. (3) The MTCI logarithm model has high inversion accuracy when the dustfall amount is less than 80 g/m2; when the dustfall amount is greater than 80 g/m2, the inversion accuracy decreases regularly and the inversion accuracy of modified MTCI (mMTCI) increases significantly. The results provide an experimental basis and theoretical reference for hyperspectral remote sensing inversion of chlorophyll content.
A study of temperature-related non-linearity at the metal-silicon interface
NASA Astrophysics Data System (ADS)
Gammon, P. M.; Donchev, E.; Pérez-Tomás, A.; Shah, V. A.; Pang, J. S.; Petrov, P. K.; Jennings, M. R.; Fisher, C. A.; Mawby, P. A.; Leadley, D. R.; McN. Alford, N.
2012-12-01
In this paper, we investigate the temperature dependencies of metal-semiconductor interfaces in an effort to better reproduce the current-voltage-temperature (IVT) characteristics of any Schottky diode, regardless of homogeneity. Four silicon Schottky diodes were fabricated for this work, each displaying different degrees of inhomogeneity; a relatively homogeneous NiV/Si diode, a Ti/Si and Cr/Si diode with double bumps at only the lowest temperatures, and a Nb/Si diode displaying extensive non-linearity. The 77-300 K IVT responses are modelled using a semi-automated implementation of Tung's electron transport model, and each of the diodes are well reproduced. However, in achieving this, it is revealed that each of the three key fitting parameters within the model display a significant temperature dependency. In analysing these dependencies, we reveal how a rise in thermal energy "activates" exponentially more interfacial patches, the activation rate being dependent on the carrier concentration at the patch saddle point (the patch's maximum barrier height), which in turn is linked to the relative homogeneity of each diode. Finally, in a review of Tung's model, problems in the divergence of the current paths at low temperature are explained to be inherent due to the simplification of an interface that will contain competing defects and inhomogeneities.
NASA Astrophysics Data System (ADS)
Fox, J. B.; Thayer, D. W.; Phillips, J. G.
The effect of low-dose γ-irradiation on the thiamin content of ground pork was studied in the range of 0-14 kGy at 2°C and at radiation doses from 0.5 to 7 kGy at temperatures of -20, -10, 0, 10 and 20°C. The detailed study at 2°C showed that loss of thiamin was exponential down to 0 kGy. An exponential expression was derived for the effect of radiation dose and temperature of irradiation on thiamin loss, and compared with a previously derived general linear expression. Both models were accurate depictions of the data, but the exponential expression showed a significant decrease in the rate of loss between 0 and -10°C. This is the range over which water in meat freezes, the decrease being due to the immobilization of reactive radiolytic products of water in ice crystals.
Estimation of renal allograft half-life: fact or fiction?
Azancot, M Antonieta; Cantarell, Carme; Perelló, Manel; Torres, Irina B; Serón, Daniel; Moreso, Francesc; Arias, Manuel; Campistol, Josep M; Curto, Jordi; Hernandez, Domingo; Morales, José M; Sanchez-Fructuoso, Ana; Abraira, Victor
2011-09-01
Renal allograft half-life time (t½) is the most straightforward representation of long-term graft survival. Since some statistical models overestimate this parameter, we compare different approaches to evaluate t½. Patients with a 1-year functioning graft transplanted in Spain during 1990, 1994, 1998 and 2002 were included. Exponential, Weibull, gamma, lognormal and log-logistic models censoring the last year of follow-up were evaluated. The goodness of fit of these models was evaluated according to the Cox-Snell residuals and the Akaike's information criterion (AIC) was employed to compare these models. We included 4842 patients. Real t½ in 1990 was 14.2 years. Median t½ (95% confidence interval) in 1990 and 2002 was 15.8 (14.2-17.5) versus 52.6 (35.6-69.5) according to the exponential model (P < 0.001). No differences between 1990 and 2002 were observed when t½ was estimated with the other models. In 1990 and 2002, t½ was 14.0 (13.1-15.0) versus 18.0 (13.7-22.4) according to Weibull, 15.5 (13.9-17.1) versus 19.1 (15.6-22.6) according to gamma, 14.4 (13.3-15.6) versus 18.3 (14.2-22.3) according to the log-logistic and 15.2 (13.8-16.6) versus 18.8 (15.3-22.3) according to the lognormal models. The AIC confirmed that the exponential model had the lowest goodness of fit, while the other models yielded a similar result. The exponential model overestimates t½, especially in cohorts of patients with a short follow-up, while any of the other studied models allow a better estimation even in cohorts with short follow-up.
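The exponential model's overestimation arises from its constant-hazard assumption; the median survival formulas for the exponential and Weibull cases make the contrast concrete. A minimal sketch with illustrative parameter values (not the fitted values from the study):

```python
import math

def exp_halflife(lam):
    """Median survival under constant hazard lam: t_half = ln2 / lam."""
    return math.log(2) / lam

def weibull_halflife(lam, k):
    """Median of a Weibull survival model S(t) = exp(-(lam*t)**k):
    setting S = 1/2 gives t_half = (ln2)**(1/k) / lam."""
    return math.log(2) ** (1.0 / k) / lam

# Illustrative: for the same scale lam, a shape k > 1 (hazard rising
# with time, as after transplantation) gives a median between
# ln2/lam and 1/lam; k = 1 recovers the exponential case.
lam = 0.05   # per year, illustrative
t_exp = exp_halflife(lam)
t_wei = weibull_halflife(lam, 1.5)
```

Fitting both models to the same censored cohort and comparing AIC, as the study does, quantifies how much the constant-hazard simplification distorts t½.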
Predictive implications of Gompertz's law
NASA Astrophysics Data System (ADS)
Richmond, Peter; Roehner, Bertrand M.
2016-04-01
Gompertz's law tells us that for humans above the age of 35 the death rate increases exponentially with a doubling time of about 10 years. Here, we show that the same law continues to hold up to age 106. At that age the death rate is about 50%. Beyond 106 there is so far no convincing statistical evidence available because the number of survivors is too small even in large nations. However, assuming that Gompertz's law continues to hold beyond 106, we conclude that the mortality rate becomes equal to 1 at age 120 (meaning that there are 1000 deaths in a population of one thousand). In other words, the upper bound of human life is near 120. The existence of this fixed point has interesting implications. It allows us to predict the form of the relationship between death rates at age 35 and the doubling time of Gompertz's law. In order to test this prediction, we first carry out a transversal analysis for a sample of countries comprising both industrialized and developing nations. As further confirmation, we also develop a longitudinal analysis using historical data over a time period of almost two centuries. Another prediction arising from this fixed-point model is that, above a given population threshold, the lifespan of the oldest persons is independent of the size of their national community. This prediction is also supported by empirical evidence.
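The Gompertz hazard described above can be written as mu(age) = mu35 * 2^((age - 35)/doubling), and inverting it gives the age at which any target rate is reached. A minimal sketch with an illustrative (not fitted) baseline rate:

```python
import math

def gompertz_rate(age, mu35, doubling=10.0):
    """Gompertz hazard: the death rate doubles every `doubling` years
    above age 35, i.e. mu(age) = mu35 * 2**((age - 35) / doubling)."""
    return mu35 * 2.0 ** ((age - 35.0) / doubling)

def age_at_rate(target, mu35, doubling=10.0):
    """Age at which the Gompertz hazard reaches `target` (inverse of
    gompertz_rate in age)."""
    return 35.0 + doubling * math.log2(target / mu35)

# Illustrative baseline: a 0.1% death rate at age 35
mu35 = 0.001
age_half = age_at_rate(0.5, mu35)   # hazard reaches 50%
age_one = age_at_rate(1.0, mu35)    # hazard reaches 100%, one doubling later
```

The fixed point in the paper is exactly this structure: once the age where mu = 1 is pinned down, mu35 and the doubling time cannot vary independently.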
Effect of ethanol variation on the internal environment of sol-gel bulk and thin films with aging.
Gupta, R; Mozumdar, S; Chaudhury, N K
2005-10-15
Sol-gel derived bulk and thin films were prepared from different compositions at low pH (approximately 2.0) containing varying concentrations of ethanol from 15 to 60% at a constant water (H2O)/tetraethyl orthosilicate (TEOS) ratio (R=4). Fluorescence microscopic and spectroscopic measurements on the fluorescent probe Hoechst 33258 (H258) entrapped in these compositions were carried out at different days of storage to monitor the effects of ethanol concentration on the internal environment of the sol-gel materials. Fluorescence microscopic observations on sol-gel thin films prepared by the dip coating technique depicted uniform and cracked surfaces at withdrawal speeds of 1 cm/min (high speed) and 0.1 cm/min (low speed), respectively, which did not change during aging. Fluorescence spectral measurements showed an emission maximum of H258 at approximately 535 nm in fresh sols at all concentrations of ethanol, which depicted a slight blue shift to 512 nm during aging in bulk. No such spectral shift was observed in sol-gel thin films coated at high speed, whereas thin films coated at low speed clearly showed an additional band at approximately 404 nm at 45 and 60% ethanol concentration after about one month of storage. Analysis of the fluorescence lifetime data indicated single exponential decay (1.6-1.8 ns) in fresh sol; from the third day onwards, invariably double exponential decay with a short (τ1) and a long (τ2) component was observed in sol-gel bulk, with a dominant τ1 at approximately 1.2 ns at all concentrations of ethanol. A double exponential decay consisting of a short component (τ1) at approximately 0.2 ns and a long component (τ2) at approximately 3.5 ns was observed at all ethanol concentrations in both fresh and aged sol-gel thin films. Further, distribution analysis of the lifetimes of H258 showed two mean lifetimes with increased width in aged bulk and thin films.
These results are likely to have strong implications in designing the internal environment for applications in biosensors.
Modeling the Role of Dislocation Substructure During Class M and Exponential Creep. Revised
NASA Technical Reports Server (NTRS)
Raj, S. V.; Iskovitz, Ilana Seiden; Freed, A. D.
1995-01-01
The different substructures that form in the power-law and exponential creep regimes for single phase crystalline materials under various conditions of stress, temperature and strain are reviewed. The microstructure is correlated both qualitatively and quantitatively with power-law and exponential creep as well as with steady state and non-steady state deformation behavior. These observations suggest that creep is influenced by a complex interaction between several elements of the microstructure, such as dislocations, cells and subgrains. The stability of the creep substructure is examined in both of these creep regimes during stress and temperature change experiments. These observations are rationalized on the basis of a phenomenological model, where normal primary creep is interpreted as a series of constant structure exponential creep rate-stress relationships. The implications of this viewpoint on the magnitude of the stress exponent and steady state behavior are discussed. A theory is developed to predict the macroscopic creep behavior of a single phase material using quantitative microstructural data. In this technique the thermally activated deformation mechanisms proposed by dislocation physics are interlinked with a previously developed multiphase, three-dimensional dislocation substructure creep model. This procedure leads to several coupled differential equations interrelating macroscopic creep plasticity with microstructural evolution.
Kartalis, Nikolaos; Manikis, Georgios C; Loizou, Louiza; Albiin, Nils; Zöllner, Frank G; Del Chiaro, Marco; Marias, Kostas; Papanikolaou, Nikolaos
2016-01-01
To compare two Gaussian diffusion-weighted MRI (DWI) models, mono-exponential and bi-exponential, with the non-Gaussian kurtosis model in patients with pancreatic ductal adenocarcinoma. After written informed consent, 15 consecutive patients with pancreatic ductal adenocarcinoma underwent free-breathing DWI (1.5T, b-values: 0, 50, 150, 200, 300, 600 and 1000 s/mm^2). Mean values of the DWI-derived metrics ADC, D, D*, f, K and D_K were calculated from multiple regions of interest in all tumours and non-tumorous parenchyma and compared. The area under the curve was determined for all metrics. Mean ADC and D_K showed significant differences between tumours and non-tumorous parenchyma (both P < 0.001). The area under the curve for ADC, D, D*, f, K, and D_K was 0.77, 0.52, 0.53, 0.62, 0.42, and 0.84, respectively. ADC and D_K could differentiate tumours from non-tumorous parenchyma, with the latter showing a higher diagnostic accuracy. Correction for kurtosis effects has the potential to increase the diagnostic accuracy of DWI in patients with pancreatic ductal adenocarcinoma.
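The kurtosis model compared above is conventionally written as S(b) = S0 exp(-b*D + b^2*D^2*K/6), reducing to the mono-exponential decay when K = 0. A minimal sketch with illustrative parameter values (not the values fitted in the study):

```python
import math

def kurtosis_signal(b, S0, D, K):
    """Diffusional kurtosis DWI signal model:
    S(b) = S0 * exp(-b*D + (1/6) * b**2 * D**2 * K).
    K = 0 recovers the mono-exponential (Gaussian) decay."""
    return S0 * math.exp(-b * D + (b ** 2) * (D ** 2) * K / 6.0)

bvals = [0, 50, 150, 200, 300, 600, 1000]   # s/mm^2, as in the protocol
gauss = [kurtosis_signal(b, 1.0, 1.5e-3, 0.0) for b in bvals]
kurt = [kurtosis_signal(b, 1.0, 1.5e-3, 0.8) for b in bvals]
```

A positive K lifts the signal above the Gaussian decay at high b-values, which is the non-Gaussian behaviour the D_K metric corrects for.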
NASA Astrophysics Data System (ADS)
Cao, Jinde; Wang, Yanyan
2010-05-01
In this paper, the bi-periodicity issue is discussed for Cohen-Grossberg-type (CG-type) bidirectional associative memory (BAM) neural networks (NNs) with time-varying delays and standard activation functions. It is shown that the model considered in this paper has two periodic orbits located in saturation regions and they are locally exponentially stable. Meanwhile, some conditions are derived to ensure that, in any designated region, the model has a locally exponentially stable or globally exponentially attractive periodic orbit located in it. As a special case of bi-periodicity, some results are also presented for the system with constant external inputs. Finally, four examples are given to illustrate the effectiveness of the obtained results.
NASA Astrophysics Data System (ADS)
Song, Qiankun; Cao, Jinde
2007-05-01
A bidirectional associative memory neural network model with distributed delays is considered. By constructing a new Lyapunov functional and employing homeomorphism theory, M-matrix theory and an elementary inequality (with a ≥ 0, b_k ≥ 0, q_k > 0 and r > 1), a sufficient condition is obtained to ensure the existence, uniqueness and global exponential stability of the equilibrium point for the model. Moreover, the exponential converging velocity index is estimated, which depends on the delay kernel functions and the system parameters. The results generalize and improve earlier publications, and remove the usual assumption that the activation functions are bounded. Two numerical examples are given to show the effectiveness of the obtained results.
NASA Technical Reports Server (NTRS)
Botez, D.; Connolly, J. C.; Gilbert, D. B.; Ettenberg, M.
1981-01-01
The temperature dependence of threshold currents in constricted double-heterojunction diode lasers with strong lateral mode confinement is found to be significantly milder than for other types of lasers. The threshold-current relative variations with ambient temperature are typically two to three times less than for other devices of CW-operation capability. Over the interval 10-70 C the threshold currents fit the empirical exponential law exp[(T2-T1)/T0] with T0 values in the 240-375 C range in pulsed operation, and in the 200-310 C range in CW operation. The external differential quantum efficiency and the mode far-field pattern near threshold are virtually invariant with temperature. The possible causes of high-T0 behavior are analyzed, and a new phenomenon - temperature-dependent current focusing - is presented to explain the results.
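As a quick illustration of the empirical law, T0 can be extracted from two threshold-current measurements; the currents below are hypothetical, chosen only to land in the reported CW range:

```python
import math

# Sketch of extracting the characteristic temperature T0 from the empirical
# law I_th(T2) = I_th(T1) * exp((T2 - T1) / T0). The threshold currents here
# are hypothetical example values, not measurements from the paper.
def t0_from_thresholds(T1, I1, T2, I2):
    return (T2 - T1) / math.log(I2 / I1)

# e.g. threshold current rising from 40 mA at 10 C to 50 mA at 70 C:
T0 = t0_from_thresholds(10.0, 40.0, 70.0, 50.0)
print(round(T0))  # -> 269; a mild temperature dependence means a large T0
```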
NASA Astrophysics Data System (ADS)
Hsiao, Feng-Hsiag
2017-10-01
In order to obtain double encryption via elliptic curve cryptography (ECC) and chaotic synchronisation, this study presents a design methodology for neural-network (NN)-based secure communications in multiple time-delay chaotic systems. ECC is an asymmetric encryption scheme whose strength is based on the difficulty of solving the elliptic curve discrete logarithm problem, a much harder problem than factoring integers; as a result, fewer bits suffice to provide the same level of security. To enhance the strength of the cryptosystem, we conduct double encryption that combines chaotic synchronisation with ECC. According to the improved genetic algorithm, a fuzzy controller is synthesised to realise exponential synchronisation and to achieve optimal H∞ performance by minimising the disturbance attenuation level. Finally, a numerical example with simulations is given to demonstrate the effectiveness of the proposed approach.
Yu, Yi-Lin; Yang, Yun-Ju; Lin, Chin; Hsieh, Chih-Chuan; Li, Chiao-Zhu; Feng, Shao-Wei; Tang, Chi-Tun; Chung, Tzu-Tsao; Ma, Hsin-I; Chen, Yuan-Hao; Ju, Da-Tong; Hueng, Dueng-Yuan
2017-01-01
Tumor control rates of pituitary adenomas (PAs) receiving adjuvant CyberKnife stereotactic radiosurgery (CK SRS) are high. However, there is currently no uniform way to estimate the time course of the disease. The aim of this study was to analyze the volumetric responses of PAs after CK SRS and investigate the application of an exponential decay model in calculating an accurate time course and estimation of the eventual outcome. A retrospective review of 34 patients with PAs who received adjuvant CK SRS between 2006 and 2013 was performed. Tumor volume was calculated using the planimetric method. The percent change in tumor volume and tumor volume rate of change were compared at median 4-, 10-, 20-, and 36-month intervals. Tumor responses were classified as: progression for >15% volume increase, regression for >15% decrease, and stabilization for change within ±15% of the baseline volume at the time of last follow-up. For each patient, the volumetric change versus time was fitted with an exponential model. The overall tumor control rate was 94.1% in the 36-month (range 18-87 months) follow-up period (mean volume change of -43.3%). Volume regression (mean decrease of -50.5%) was demonstrated in 27 (79%) patients, tumor stabilization (mean change of -3.7%) in 5 (15%) patients, and tumor progression (mean increase of 28.1%) in 2 (6%) patients (P = 0.001). Tumors that eventually regressed or stabilized had a temporary volume increase of 1.07% and 41.5% at 4 months after CK SRS, respectively (P = 0.017). The tumor volume estimated using the exponential fitting equation demonstrated high positive correlation with the actual volume calculated by magnetic resonance imaging (MRI) as tested by the Pearson correlation coefficient (0.9). Transient progression of PAs post-CK SRS was seen in 62.5% of the patients, and it was not predictive of eventual volume regression or progression.
A three-point exponential model is of potential predictive value according to relative distribution. An exponential decay model can be used to calculate the time course of tumors that are ultimately controlled.
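A minimal sketch of such an exponential fit, using log-linear least squares on hypothetical follow-up volumes (not the authors' code or data):

```python
import math

# Fit V(t) = V0 * exp(-k t) by linear regression on ln V. The times and
# volumes below are assumed example values, not study measurements.
times = [4, 10, 20, 36]          # months after CK SRS
vols  = [3.1, 2.4, 1.8, 1.6]     # cm^3

def fit_decay(times, vols):
    y = [math.log(v) for v in vols]
    n = len(times)
    tbar, ybar = sum(times) / n, sum(y) / n
    k = -(sum((t - tbar) * (yi - ybar) for t, yi in zip(times, y))
          / sum((t - tbar) ** 2 for t in times))
    v0 = math.exp(ybar + k * tbar)   # back-extrapolated baseline volume
    return v0, k

v0, k = fit_decay(times, vols)
half_life = math.log(2) / k          # months for the volume to halve
print(k > 0 and v0 > vols[-1])
```

With the fitted V0 and k in hand, the model volume at any follow-up time can be compared against the MRI-derived volume, as done in the study.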
Fourier Transforms of Pulses Containing Exponential Leading and Trailing Profiles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warshaw, S I
2001-07-15
In this monograph we discuss a class of pulse shapes that have exponential rise and fall profiles, and evaluate their Fourier transforms. Such pulses can be used as models for time-varying processes that produce an initial exponential rise and end with the exponential decay of a specified physical quantity. Unipolar examples of such processes include the voltage record of an increasingly rapid charge followed by a damped discharge of a capacitor bank, and the amplitude of an electromagnetic pulse produced by a nuclear explosion. Bipolar examples include acoustic N waves propagating for long distances in the atmosphere that have resulted from explosions in the air, and sonic booms generated by supersonic aircraft. These bipolar pulses have leading and trailing edges that appear to be exponential in character. To the author's knowledge the Fourier transforms of such pulses are not generally well-known or tabulated in Fourier transform compendia, and it is the purpose of this monograph to derive and present these transforms. These Fourier transforms are related to a definite integral of a ratio of exponential functions, whose evaluation we carry out in considerable detail. From this result we derive the Fourier transforms of other related functions. In all Figures showing plots of calculated curves, the actual numbers used for the function parameter values and dependent variables are arbitrary and non-dimensional, and are not identified with any particular physical phenomenon or model.
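For the simplest unipolar case the transform has a compact closed form; the sketch below (arbitrary, non-dimensional parameters, in the spirit of the monograph's figures) checks it against a brute-force evaluation of the integral:

```python
import math, cmath

# Assumed pulse form: f(t) = exp(a t) for t < 0 and exp(-b t) for t >= 0,
# with a, b > 0. Its Fourier transform F(w) = integral of f(t) e^{-i w t} dt
# splits into two elementary pieces:
#   F(w) = 1/(a - i w) + 1/(b + i w)
a, b, w = 2.0, 3.0, 1.5

closed = 1 / (a - 1j * w) + 1 / (b + 1j * w)

# Brute-force Riemann sum over a window wide enough that the tails vanish:
dt, T = 1e-4, 10.0
numeric = 0j
for i in range(int(2 * T / dt)):
    t = -T + i * dt
    f = math.exp(a * t) if t < 0 else math.exp(-b * t)
    numeric += f * cmath.exp(-1j * w * t) * dt

print(abs(closed - numeric) < 1e-3)
```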
Bell, C; Paterson, D H; Kowalchuk, J M; Padilla, J; Cunningham, D A
2001-09-01
We compared estimates for the phase 2 time constant (tau) of oxygen uptake (VO2) during moderate- and heavy-intensity exercise, and the slow component of VO2 during heavy-intensity exercise using previously published exponential models. Estimates for tau and the slow component were different (P < 0.05) among models. For moderate-intensity exercise, a two-component exponential model, or a mono-exponential model fitted from 20 s to 3 min, were best. For heavy-intensity exercise, a three-component model fitted throughout the entire 6 min bout of exercise, or a two-component model fitted from 20 s, were best. When the time delays for the two- and three-component models were equal the best statistical fit was obtained; however, this model produced an inappropriately low ΔVO2/ΔWR (WR, work rate) for the projected phase 2 steady state, and the estimate of phase 2 tau was shortened compared with other models. The slow component was quantified as the difference between VO2 at end-exercise (6 min) and at 3 min (ΔVO2(6-3 min); 259 ml·min⁻¹), and also using the phase 3 amplitude terms (truncated to end-exercise) from exponential fits (409-833 ml·min⁻¹). Onset of the slow component was identified by the phase 3 time delay parameter as being of delayed onset approximately 2 min (vs. the arbitrary 3 min). Using this delay, ΔVO2(6-2 min) was approximately 400 ml·min⁻¹. Use of valid, consistent methods to estimate tau and the slow component in exercise is needed to advance physiological understanding.
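For reference, the mono-exponential phase 2 model with a time delay that underlies such fits can be written out directly (all parameter values below are assumed, for illustration only):

```python
import math

# Mono-exponential phase 2 model with time delay TD (assumed values):
#   VO2(t) = baseline + A * (1 - exp(-(t - TD)/tau))   for t >= TD
baseline, A, tau, TD = 800.0, 1200.0, 30.0, 20.0   # ml/min, ml/min, s, s

def vo2(t):
    if t < TD:
        return baseline
    return baseline + A * (1 - math.exp(-(t - TD) / tau))

# After about 4 time constants past TD the response is ~98% complete, which
# is one reason a fit from 20 s to 3 min can recover tau in moderate exercise:
print(round(vo2(TD + 4 * tau) - baseline))  # -> 1178 (of A = 1200)
```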
PSA doubling time of prostate carcinoma managed with watchful observation alone.
Choo, R; DeBoer, G; Klotz, L; Danjoux, C; Morton, G C; Rakovitch, E; Fleshner, N; Bunting, P; Kapusta, L; Hruby, G
2001-07-01
To study prostate-specific antigen (PSA) doubling time of untreated, favorable grade, prostate carcinoma. A prospective single-arm cohort study has been in progress to assess the feasibility of a watchful observation protocol with selective delayed intervention using clinical, histologic, or PSA progression as treatment indication in untreated, localized, favorable grade prostate adenocarcinoma (T1b-T2b N0 M0, Gleason Score ≤ 7, and PSA ≤ 15 ng/mL). Patients are conservatively managed with watchful observation alone, as long as they do not meet the arbitrarily defined disease progression criteria. Patients are followed regularly and undergo blood tests including PSA at each visit. PSA doubling time (Td) is estimated from a linear regression of ln(PSA) on time, assuming a simple exponential growth model. As of March 2000, 134 patients have been on the study for a minimum of 12 months (median, 24; range, 12-52) and have a median frequency of PSA measurement of 7 times (range, 3-15). Median age is 70 years. Median PSA at enrollment is 6.3 (range, 0.5-14.6). The distribution of Td is as follows: <2 years, 19 patients; 2-5 years, 46; 5-10 years, 25; 10-20 years, 11; 20-50 years, 6; >50 years, 27. The median Td is 5.1 years. In 44 patients (33%), Td is greater than 10 years. There was no correlation between Td and patient age, clinical T stage, Gleason score, or initial PSA level. Td of untreated prostate cancer varies widely. In our cohort, 33% have Td > 10 years. Td may be a useful tool to guide treatment intervention for patients managed conservatively with watchful observation alone.
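The Td computation described here, a linear regression of ln(PSA) on time under the simple exponential growth model, can be sketched as follows with hypothetical measurements:

```python
import math

# Under PSA(t) = PSA0 * exp(s t), the slope s of the regression of ln(PSA)
# on time gives the doubling time Td = ln(2) / s. The values below are
# hypothetical, not patient data.
years = [0.0, 0.5, 1.0, 1.5, 2.0]
psa   = [6.3, 6.8, 7.5, 8.1, 8.9]   # ng/mL

def doubling_time(years, psa):
    y = [math.log(p) for p in psa]
    n = len(years)
    tbar, ybar = sum(years) / n, sum(y) / n
    slope = (sum((t - tbar) * (yi - ybar) for t, yi in zip(years, y))
             / sum((t - tbar) ** 2 for t in years))
    return math.log(2) / slope

td = doubling_time(years, psa)
print(2.0 < td < 10.0)   # a mid-range Td in this synthetic example
```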
Limpens, Ronald W. A. L.; van der Schaar, Hilde M.; Kumar, Darshan; Koster, Abraham J.; Snijder, Eric J.; van Kuppeveld, Frank J. M.; Bárcena, Montserrat
2011-01-01
All positive-strand RNA viruses induce membrane structures in their host cells which are thought to serve as suitable microenvironments for viral RNA synthesis. The structures induced by enteroviruses, which are members of the family Picornaviridae, have so far been described as either single- or double-membrane vesicles (DMVs). Aside from the number of delimiting membranes, their exact architecture has also remained elusive due to the limitations of conventional electron microscopy. In this study, we used electron tomography (ET) to solve the three-dimensional (3-D) ultrastructure of these compartments. At different time points postinfection, coxsackievirus B3-infected cells were high-pressure frozen and freeze-substituted for ET analysis. The tomograms showed that during the exponential phase of viral RNA synthesis, closed smooth single-membrane tubules constituted the predominant virus-induced membrane structure, with a minor proportion of DMVs that were either closed or connected to the cytosol in a vase-like configuration. As infection progressed, the DMV number steadily increased, while the tubular single-membrane structures gradually disappeared. Late in infection, complex multilamellar structures, previously unreported, became apparent in the cytoplasm. Serial tomography disclosed that their basic unit is a DMV, which is enwrapped by one or multiple cisternae. ET also revealed striking intermediate structures that strongly support the conversion of single-membrane tubules into double-membrane and multilamellar structures by a process of membrane apposition, enwrapping, and fusion. Collectively, our work unravels the sequential appearance of distinct enterovirus-induced replication structures, elucidates their detailed 3-D architecture, and provides the basis for a model for their transformation during the course of infection. PMID:21972238
Onset of normal and inverse homoclinic bifurcation in a double plasma system near a plasma fireball
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitra, Vramori; Sarma, Bornali; Sarma, Arun
Plasma fireballs are generated due to a localized discharge and appear as a luminous glow with a sharp boundary, which suggests the presence of a localized electric field such as an electrical sheath or double layer structure. The present work reports the observation of normal and inverse homoclinic bifurcation phenomena in plasma oscillations that are excited in the presence of a fireball in a double plasma device. The controlling parameters for these observations are the ratio of target to source chamber densities (n_T/n_S) and the applied electrode voltage. Homoclinic bifurcation is noticed in the plasma potential fluctuations as the system evolves from narrow to long time period oscillations and vice versa with the change of control parameter. The dynamical transition in the plasma fireball is demonstrated by spectral analysis, recurrence quantification analysis (RQA), and statistical measures, viz., skewness and kurtosis. The increasing trend of normalized variance reflects that enhancing n_T/n_S induces irregularity in the plasma dynamics. The exponential growth of the time period is strongly indicative of homoclinic bifurcation in the system. The gradual decrease of skewness and increase of kurtosis with the increase of n_T/n_S also reflect growing complexity in the system. The visual change of the recurrence plot and gradual enhancement of the RQA variables DET, L_max, and ENT reflect the bifurcation behavior in the dynamics. The combination of RQA and spectral analysis is clear evidence that homoclinic bifurcation occurs due to the presence of the plasma fireball with different density ratios. However, inverse bifurcation takes place due to the change of fireball voltage. Some of the features observed in the experiment are consistent with a model that describes the dynamics of ionization instabilities.
Area law microstate entropy from criticality and spherical symmetry
NASA Astrophysics Data System (ADS)
Dvali, Gia
2018-05-01
It is often assumed that the area law of microstate entropy and the holography are intrinsic properties exclusively of gravitational systems, such as black holes. We construct a nongravitational model that exhibits an entropy that scales as the area of a sphere of one dimension less. It is represented by a nonrelativistic bosonic field living on a d-dimensional sphere of radius R and experiencing an angular-momentum-dependent attractive interaction. We show that the system possesses a quantum critical point with emergent gapless modes. Their number is equal to the area of a (d-1)-dimensional sphere of the same radius R. These gapless modes create an exponentially large number of degenerate microstates with the corresponding microstate entropy given by the area of the same (d-1)-dimensional sphere. Thanks to a double-scaling limit, the counting of the entropy and of the number of the gapless modes is made exact. The phenomenon takes place for an arbitrary number of dimensions and can be viewed as a version of holography.
Johnson, Ian R.; Thornley, John H. M.; Frantz, Jonathan M.; Bugbee, Bruce
2010-01-01
Background and Aims: The distribution of photosynthetic enzymes, or nitrogen, through the canopy affects canopy photosynthesis, as well as plant quality and nitrogen demand. Most canopy photosynthesis models assume an exponential distribution of nitrogen, or protein, through the canopy, although this is rarely consistent with experimental observation. Previous optimization schemes to derive the nitrogen distribution through the canopy generally focus on the distribution of a fixed amount of total nitrogen, which fails to account for the variation in both the actual quantity of nitrogen in response to environmental conditions and the interaction of photosynthesis and respiration at similar levels of complexity. Model: A model of canopy photosynthesis is presented for C3 and C4 canopies that considers a balanced approach between photosynthesis and respiration as well as plant carbon partitioning. Protein distribution is related to irradiance in the canopy by a flexible equation for which the exponential distribution is a special case. The model is designed to be simple to parameterize for crop, pasture and ecosystem studies. The amount and distribution of protein that maximizes canopy net photosynthesis is calculated. Key Results: The optimum protein distribution is not exponential, but is quite linear near the top of the canopy, which is consistent with experimental observations. The overall concentration within the canopy is dependent on environmental conditions, including the distribution of direct and diffuse components of irradiance. Conclusions: The widely used exponential distribution of nitrogen or protein through the canopy is generally inappropriate. The model derives the optimum distribution with characteristics that are consistent with observation, so overcoming limitations of using the exponential distribution.
Although canopies may not always operate at an optimum, optimization analysis provides valuable insight into plant acclimation to environmental conditions. Protein distribution has implications for the prediction of carbon assimilation, plant quality and nitrogen demand. PMID:20861273
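For contrast, the conventional exponential profile that the paper argues against ties nitrogen (or protein) to Beer's-law irradiance through the canopy; a minimal sketch with hypothetical coefficients:

```python
import math

# Conventional (and, per the paper, generally inappropriate) exponential
# profile: irradiance follows Beer's law, I(L) = I0*exp(-k*L), and nitrogen
# is assumed to decline in proportion, N(L) = N0*exp(-kn*L). All coefficient
# values are hypothetical illustrations.
I0, k = 1000.0, 0.5      # top-of-canopy irradiance and light extinction coeff.
N0, kn = 2.0, 0.4        # top-of-canopy nitrogen and its decline coefficient

def irradiance(L):       # L = cumulative leaf area index, measured from the top
    return I0 * math.exp(-k * L)

def nitrogen(L):
    return N0 * math.exp(-kn * L)

# The exponential profile falls steeply right from the top of the canopy,
# whereas the optimum derived in the paper stays nearly linear there:
drop_near_top = (nitrogen(0.0) - nitrogen(0.5)) / N0
print(drop_near_top > 0.15)
```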
Rowley, Mark I.; Coolen, Anthonius C. C.; Vojnovic, Borivoj; Barber, Paul R.
2016-01-01
We present novel Bayesian methods for the analysis of exponential decay data that exploit the evidence carried by every detected decay event and enable robust extension to advanced processing. Our algorithms are presented in the context of fluorescence lifetime imaging microscopy (FLIM), and particular attention has been paid to model the time-domain system (based on time-correlated single photon counting) with unprecedented accuracy. We present estimates of decay parameters for mono- and bi-exponential systems, offering up to a factor of two improvement in accuracy compared to previous popular techniques. Results of the analysis of synthetic and experimental data are presented, and areas where the superior precision of our techniques can be exploited in Förster Resonance Energy Transfer (FRET) experiments are described. Furthermore, we demonstrate two advanced processing methods: decay model selection to choose between differing models such as mono- and bi-exponential, and the simultaneous estimation of instrument and decay parameters. PMID:27355322
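A toy illustration (not the authors' Bayesian machinery) of why every detected photon carries evidence: for an untruncated mono-exponential decay, the maximum-likelihood lifetime estimate is simply the mean photon arrival time:

```python
import random
import statistics

# Simulate photon arrival times from a mono-exponential decay with an
# assumed fluorescence lifetime, then recover the lifetime as the sample
# mean (the MLE when there is no measurement window truncation or IRF).
random.seed(42)
tau_true = 2.5                       # ns, assumed lifetime
arrivals = [random.expovariate(1.0 / tau_true) for _ in range(20000)]

tau_hat = statistics.fmean(arrivals)
print(abs(tau_hat - tau_true) < 0.15)
```

Real TCSPC data add a finite window and an instrument response, which is exactly where likelihood-based treatments such as the one in this paper outperform simple estimators.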
Exponential inflation with F (R ) gravity
NASA Astrophysics Data System (ADS)
Oikonomou, V. K.
2018-03-01
In this paper, we shall consider an exponential inflationary model in the context of vacuum F(R) gravity. By using well-known reconstruction techniques, we shall investigate which F(R) gravity can realize the exponential inflation scenario at leading order in terms of the scalar curvature, and we shall calculate the slow-roll indices and the corresponding observational indices, in the context of slow-roll inflation. We also provide some general formulas for the slow-roll and the corresponding observational indices in terms of the e-foldings number. In addition, for the calculation of the slow-roll and of the observational indices, we shall consider quite general formulas, for which the assumption that all the slow-roll indices are much smaller than unity need not hold true. Finally, we investigate the phenomenological viability of the model by comparing it with the latest Planck and BICEP2/Keck-Array observational data. As we demonstrate, the model is compatible with the current observational data for a wide range of the free parameters of the model.
Alternative definition of excitation amplitudes in multi-reference state-specific coupled cluster
NASA Astrophysics Data System (ADS)
Garniron, Yann; Giner, Emmanuel; Malrieu, Jean-Paul; Scemama, Anthony
2017-04-01
A central difficulty of state-specific Multi-Reference Coupled Cluster (MR-CC) in the multi-exponential Jeziorski-Monkhorst formalism concerns the definition of the amplitudes of the single and double excitation operators appearing in the exponential wave operators. If the reference space is a complete active space (CAS), the number of these amplitudes is larger than the number of singly and doubly excited determinants on which one may project the eigenequation, and one must impose additional conditions. The present work first defines a state-specific reference-independent operator T̃^m which, acting on the CAS component of the wave function |Ψ_0^m⟩, maximizes the overlap between (1 + T̃^m)|Ψ_0^m⟩ and the eigenvector of the CAS-SD (Singles and Doubles) Configuration Interaction (CI) matrix |Ψ_CAS-SD^m⟩. This operator may be used to generate approximate coefficients of the triples and quadruples, and a dressing of the CAS-SD CI matrix, according to the intermediate Hamiltonian formalism. The process may be iterated to convergence. As a refinement towards a strict coupled cluster formalism, one may exploit the reference-independent amplitudes provided by (1 + T̃^m)|Ψ_0^m⟩ to define a reference-dependent operator T̂^m by fitting the eigenvector of the (dressed) CAS-SD CI matrix. The two variants, which are internally uncontracted, give rather similar results. The new MR-CC version has been tested on the ground state potential energy curves of 6 molecules (up to triple-bond breaking) and two excited states. The non-parallelism error with respect to the full-CI curves is of the order of 1 mE_h.
Complex Dynamic Development of Poliovirus Membranous Replication Complexes
Nair, Vinod; Hansen, Bryan T.; Hoyt, Forrest H.; Fischer, Elizabeth R.; Ehrenfeld, Ellie
2012-01-01
Replication of all positive-strand RNA viruses is intimately associated with membranes. Here we utilize electron tomography and other methods to investigate the remodeling of membranes in poliovirus-infected cells. We found that the viral replication structures previously described as “vesicles” are in fact convoluted, branching chambers with complex and dynamic morphology. They are likely to originate from cis-Golgi membranes and are represented during the early stages of infection by single-walled connecting and branching tubular compartments. These early viral organelles gradually transform into double-membrane structures by extension of membranous walls and/or collapsing of the luminal cavity of the single-membrane structures. As the double-membrane regions develop, they enclose cytoplasmic material. At this stage, a continuous membranous structure may have double- and single-walled membrane morphology at adjacent cross-sections. In the late stages of the replication cycle, the structures are represented mostly by double-membrane vesicles. Viral replication proteins, double-stranded RNA species, and actively replicating RNA are associated with both double- and single-membrane structures. However, the exponential phase of viral RNA synthesis occurs when single-membrane formations are predominant in the cell. It has been shown previously that replication complexes of some other positive-strand RNA viruses form on membrane invaginations, which result from negative membrane curvature. Our data show that the remodeling of cellular membranes in poliovirus-infected cells produces structures with positive curvature of membranes. Thus, it is likely that there is a fundamental divergence in the requirements for the supporting cellular membrane-shaping machinery among different groups of positive-strand RNA viruses. PMID:22072780
NASA Astrophysics Data System (ADS)
Zhang, Fode; Shi, Yimin; Wang, Ruibing
2017-02-01
In the information geometry suggested by Amari (1985) and Amari et al. (1987), a parametric statistical model can be regarded as a differentiable manifold with the parameter space as a coordinate system. Noting that the q-exponential distribution plays an important role in Tsallis statistics (see Tsallis, 2009), this paper investigates the geometry of the q-exponential distribution with dependent competing risks and accelerated life testing (ALT). A copula function based on the q-exponential function, which can be considered as a generalized Gumbel copula, is discussed to illustrate the structure of the dependent random variables. Employing two iterative algorithms, simulation results are given to compare the performance of the estimations and the levels of association under different hybrid progressive censoring schemes (HPCSs).
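For readers unfamiliar with Tsallis statistics, the q-exponential function underlying the distribution and the copula is defined as follows (standard definition; the q → 1 limit recovers the ordinary exponential):

```python
import math

# Standard q-exponential from Tsallis statistics:
#   exp_q(x) = [1 + (1 - q) x]_+ ** (1 / (1 - q))   for q != 1,
# with the subscript + meaning the value is cut off at zero, and
# exp_q(x) -> exp(x) as q -> 1.
def q_exp(x, q):
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

# As q approaches 1, the q-exponential approaches exp(x):
print(abs(q_exp(1.0, 1.0001) - math.exp(1.0)) < 1e-3)
```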
Hypersurface Homogeneous Cosmological Model in Modified Theory of Gravitation
NASA Astrophysics Data System (ADS)
Katore, S. D.; Hatkar, S. P.; Baxi, R. J.
2016-12-01
We study a hypersurface homogeneous space-time in the framework of the f(R, T) theory of gravitation in the presence of a perfect fluid. Exact solutions of the field equations are obtained for exponential and power law volumetric expansions. We also solve the field equations by assuming a proportionality relation between the shear scalar (σ) and the expansion scalar (θ). It is observed that in the exponential model, the universe approaches isotropy at large time (late universe). The investigated model is notably accelerating and expanding. The physical and geometrical properties of the investigated model are also discussed.
Performance and state-space analyses of systems using Petri nets
NASA Technical Reports Server (NTRS)
Watson, James Francis, III
1992-01-01
The goal of any modeling methodology is to develop a mathematical description of a system that is accurate in its representation and also permits analysis of structural and/or performance properties. Inherently, trade-offs exist between the level of detail in the model and the ease with which analysis can be performed. Petri nets (PN's), a highly graphical modeling methodology for Discrete Event Dynamic Systems, permit representation of shared resources, finite capacities, conflict, synchronization, concurrency, and timing between state changes. By restricting the state transition time delays to the family of exponential density functions, Markov chain analysis of performance problems is possible. One major drawback of PN's is the tendency for the state-space to grow rapidly (exponential complexity) compared to increases in the PN constructs. It is the state space, or the Markov chain obtained from it, that is needed in the solution of many problems. The theory of state-space size estimation for PN's is introduced. The problem of state-space size estimation is defined, its complexities are examined, and estimation algorithms are developed. Both top-down and bottom-up approaches are pursued, and the advantages and disadvantages of each are described. Additionally, the author's research in non-exponential transition modeling for PN's is discussed. An algorithm for approximating non-exponential transitions is developed. Since only basic PN constructs are used in the approximation, theory already developed for PN's remains applicable. Comparison to results from entropy theory shows the transition performance is close to the theoretic optimum. Inclusion of non-exponential transition approximations improves performance results at the expense of increased state-space size. The state-space size estimation theory provides insight and algorithms for evaluating this trade-off.
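The exponential-delay restriction that enables Markov chain analysis can be illustrated on the smallest possible example, a hypothetical two-state machine cycle (rates assumed):

```python
# With exponential firing delays, a PN's reachability graph becomes a
# continuous-time Markov chain. Tiny hypothetical example: a machine net
# cycling between "working" (state 0) and "broken" (state 1), with failure
# rate lam and repair rate mu.
lam, mu = 0.5, 2.0

# Generator matrix of the resulting CTMC (rows sum to zero):
Q = [[-lam, lam],
     [mu, -mu]]

# The stationary distribution pi solves pi Q = 0 with sum(pi) = 1; for a
# two-state chain this reduces to the balance equation pi[0]*lam = pi[1]*mu.
pi = [mu / (lam + mu), lam / (lam + mu)]

balance = abs(pi[0] * Q[0][1] - pi[1] * Q[1][0])
print(pi[0], balance < 1e-12)
```

Performance measures (e.g. long-run fraction of time working) then read off directly from pi; the state-space explosion discussed above is what makes building Q intractable for large nets.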
Chen, Bo-Ching; Lai, Hung-Yu; Juang, Kai-Wei
2012-06-01
To better understand the ability of switchgrass (Panicum virgatum L.), a perennial grass often relegated to marginal agricultural areas with minimal inputs, to remove cadmium, chromium, and zinc by phytoextraction from contaminated sites, the relationship between plant metal content and biomass yield is expressed in different models to predict the amount of metals switchgrass can extract. These models are reliable in assessing the use of switchgrass for phytoremediation of heavy-metal-contaminated sites. In the present study, linear and exponential decay models are more suitable for presenting the relationship between plant cadmium and dry weight. The maximum extractions of cadmium using switchgrass, as predicted by the linear and exponential decay models, approached 40 and 34 μg pot⁻¹, respectively. The log normal model was superior in predicting the relationship between plant chromium and dry weight. The predicted maximum extraction of chromium by switchgrass was about 56 μg pot⁻¹. In addition, the exponential decay and log normal models were better than the linear model in predicting the relationship between plant zinc and dry weight. The maximum extractions of zinc by switchgrass, as predicted by the exponential decay and log normal models, were about 358 and 254 μg pot⁻¹, respectively. To meet the maximum removal of Cd, Cr, and Zn, one can adopt the optimal timing of harvest as plant Cd, Cr, and Zn approach 450 and 526 mg kg⁻¹, 266 mg kg⁻¹, and 3022 and 5000 mg kg⁻¹, respectively. Due to the well-known agronomic characteristics of cultivation and the high biomass production of switchgrass, it is practicable to use switchgrass for the phytoextraction of heavy metals in situ. Copyright © 2012 Elsevier Inc. All rights reserved.
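The exponential decay relationship can be sketched as follows; the parameters are hypothetical, scaled only to echo the reported optimal Cd harvest concentration of 450 mg kg⁻¹:

```python
import math

# Assumed exponential-decay relationship between tissue metal concentration C
# and dry weight: DW(C) = a * exp(-b * C). The metal removed per pot is
# M(C) = C * DW(C), which is maximised at C* = 1/b (set dM/dC = 0).
a = 5.0            # g pot^-1, assumed unstressed dry weight
b = 1.0 / 450.0    # (mg kg^-1)^-1, chosen so the optimum lands at 450

def removal(c_mg_per_kg):
    return c_mg_per_kg * a * math.exp(-b * c_mg_per_kg)

c_opt = 1.0 / b    # optimal harvest concentration under this sketch
print(removal(c_opt) >= removal(300.0) and removal(c_opt) >= removal(600.0))
```

This is the logic behind timing the harvest: past C*, further metal accumulation is outweighed by the loss of biomass.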
Single-arm phase II trial design under parametric cure models.
Wu, Jianrong
2015-01-01
The current practice of designing single-arm phase II survival trials is limited under the exponential model. Trial design under the exponential model may not be appropriate when a portion of patients are cured. There is no literature available for designing single-arm phase II trials under the parametric cure model. In this paper, a test statistic is proposed, and a sample size formula is derived for designing single-arm phase II trials under a class of parametric cure models. Extensive simulations showed that the proposed test and sample size formula perform very well under different scenarios. Copyright © 2015 John Wiley & Sons, Ltd.
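A standard parametric cure model of the kind referred to (a mixture cure model with exponential survival for the uncured fraction; the paper's exact model class may differ) behaves as follows:

```python
import math

# Mixture cure model: a fraction pi_cure of patients is cured; the rest have
# exponential survival. S(t) = pi_cure + (1 - pi_cure) * exp(-lam * t).
# Parameter values are assumed for illustration.
pi_cure, lam = 0.3, 0.4     # cure rate and hazard (per year)

def surv(t):
    return pi_cure + (1 - pi_cure) * math.exp(-lam * t)

# Unlike the plain exponential model, S(t) plateaus at the cure rate instead
# of decaying to zero, which is why exponential-model designs mislead when a
# portion of patients are cured:
print(abs(surv(50.0) - pi_cure) < 1e-6)
```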
Manikis, Georgios C.; Marias, Kostas; Lambregts, Doenja M. J.; Nikiforaki, Katerina; van Heeswijk, Miriam M.; Bakers, Frans C. H.; Beets-Tan, Regina G. H.; Papanikolaou, Nikolaos
2017-01-01
Purpose: The purpose of this study was to compare the performance of four diffusion models, including mono- and bi-exponential, both Gaussian and non-Gaussian, in diffusion weighted imaging of rectal cancer. Material and methods: Nineteen patients with rectal adenocarcinoma underwent MRI examination of the rectum before chemoradiation therapy, including a 7 b-value diffusion sequence (0, 25, 50, 100, 500, 1000 and 2000 s/mm²) at a 1.5T scanner. Four different diffusion models, including mono- and bi-exponential Gaussian (MG and BG) and non-Gaussian (MNG and BNG), were applied on whole tumor volumes of interest. Two different statistical criteria were recruited to assess their fitting performance: the adjusted R² and the Root Mean Square Error (RMSE). To decide which model better characterizes rectal cancer, model selection relied on the Akaike Information Criterion (AIC) and the F-ratio. Results: All candidate models achieved a good fitting performance, with the two most complex models, the BG and the BNG, exhibiting the best fitting performance. However, both criteria for model selection indicated that the MG model performed better than any other model. In particular, using AIC weights and the F-ratio, the pixel-based analysis demonstrated that tumor areas were better described by the simplest MG model in an average area of 53% and 33%, respectively. Non-Gaussian behavior was illustrated in an average area of 37% according to the F-ratio, and 7% using AIC weights. However, the distributions of the pixels best fitted by each of the four models suggest that MG failed to perform better than any other model in all patients and the overall tumor area. Conclusion: No single diffusion model evaluated herein could accurately describe rectal tumours.
These findings probably can be explained on the basis of increased tumour heterogeneity, where areas with high vascularity could be fitted better with bi-exponential models, and areas with necrosis would mostly follow mono-exponential behavior. PMID:28863161
Muñoz-Cuevas, Marina; Fernández, Pablo S; George, Susan; Pin, Carmen
2010-05-01
The dynamic model for the growth of a bacterial population described by Baranyi and Roberts (J. Baranyi and T. A. Roberts, Int. J. Food Microbiol. 23:277-294, 1994) was applied to model the lag period and exponential growth of Listeria monocytogenes under conditions of fluctuating temperature and water activity (a(w)) values. To model the duration of the lag phase, the dependence of the parameter h(0), which quantifies the amount of work done during the lag period, on the previous and current environmental conditions was determined experimentally. This parameter depended not only on the magnitude of the change between the previous and current environmental conditions but also on the current growth conditions. In an exponentially growing population, any change in the environment requiring a certain amount of work to adapt to the new conditions initiated a lag period that lasted until that work was finished. Observations for several scenarios in which exponential growth was halted by a sudden change in the temperature and/or a(w) were in good agreement with predictions. When a population already in a lag period was subjected to environmental fluctuations, the system was reset with a new lag phase. The work to be done during the new lag phase was estimated to be the workload due to the environmental change plus the unfinished workload from the uncompleted previous lag phase.
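The role of h0 can be made concrete with a sketch of the Baranyi-Roberts model for the lag and exponential phases only (no stationary-phase limit). The parameter values below are illustrative, not taken from the paper.

```python
import math

def baranyi_log_count(t, y0, mu_max, h0):
    """Baranyi-Roberts model, exponential phase only (no stationary limit).

    h0 quantifies the amount of work to be done during the lag period;
    the resulting lag duration is lambda = h0 / mu_max.  A(t) is the
    gradual-adaptation (adjustment) term of the model.
    """
    A = t + (1.0 / mu_max) * math.log(
        math.exp(-mu_max * t) + math.exp(-h0) - math.exp(-mu_max * t - h0))
    return y0 + mu_max * A

mu_max, h0 = 0.8, 2.0      # illustrative growth rate (1/h) and workload
lag = h0 / mu_max          # lag duration: 2.5 h
# Long after the lag, growth is linear in log counts with slope mu_max,
# offset by the lag: y(t) ~ y0 + mu_max * (t - lag).
print(baranyi_log_count(0.0, 5.0, mu_max, h0))  # 5.0: no growth at t = 0
```

An environmental shift that resets the lag corresponds, in this picture, to restarting the adjustment term with a new h0 equal to the remaining plus newly incurred workload.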
Zheng, Lai; Ismail, Karim
2017-05-01
Traffic conflict indicators measure the temporal and spatial proximity of conflict-involved road users. These indicators can reflect the severity of traffic conflicts to a reliable extent. Instead of using the indicator value directly as a severity index, many link functions have been developed to map a conflict indicator to a severity index. However, little information is available about the choice of a particular link function. To guard against link misspecification or subjectivity, a generalized exponential link function was developed. The severity index generated by this link was introduced into a parametric safety continuum model which objectively models the centre and tail regions. An empirical method, together with a full Bayesian estimation method, was adopted to estimate model parameters. The safety implication of the return level was calculated based on the model parameters. The proposed approach was applied to conflict and crash data collected from 21 segments of three freeways located in Guangdong province, China. A Pearson correlation test between return levels and observed crashes showed that a θ value of 1.2 was the best choice of the generalized parameter for the current data set. This provides statistical support for using the generalized exponential link function. With the determined generalized exponential link function, the visualization of the parametric safety continuum was found to be a gyroscope-shaped hierarchy. Copyright © 2017 Elsevier Ltd. All rights reserved.
Scalar field and time varying cosmological constant in f(R,T) gravity for Bianchi type-I universe
NASA Astrophysics Data System (ADS)
Singh, G. P.; Bishi, Binaya K.; Sahoo, P. K.
2016-04-01
In this article, we have analysed the behaviour of the scalar field and the cosmological constant in the $f(R,T)$ theory of gravity. Here, we have considered the simplest form of $f(R,T)$, i.e. $f(R,T)=R+2f(T)$, where $R$ is the Ricci scalar and $T$ is the trace of the energy momentum tensor, and explored the spatially homogeneous and anisotropic Locally Rotationally Symmetric (LRS) Bianchi type-I cosmological model. It is assumed that the Universe is filled with two non-interacting matter sources, namely a scalar field (normal or phantom) with a scalar potential and a matter contribution due to the $f(R,T)$ action. We have discussed two cosmological models according to the power law and the exponential law of volume expansion, along with constant and exponential scalar potentials as sub-models. The power law models are compatible with both normal (quintessence) and phantom scalar fields, whereas the exponential volume expansion models are compatible only with a normal (quintessence) scalar field. The values of the cosmological constant in our models are in agreement with observational results. Finally, we have discussed some physical and kinematical properties of both models.
Modeling the reversible, diffusive sink effect in response to transient contaminant sources.
Zhao, D; Little, J C; Hodgson, A T
2002-09-01
A physically based diffusion model is used to evaluate the sink effect of diffusion-controlled indoor materials and to predict the transient contaminant concentration in indoor air in response to several time-varying contaminant sources. For simplicity, it is assumed that the predominant indoor material is a homogeneous slab, initially free of contaminant, and that the air within the room is well mixed. The model enables transient volatile organic compound (VOC) concentrations to be predicted based on the material/air partition coefficient (K) and the material-phase diffusion coefficient (D) of the sink. Model predictions are made for three scenarios, each mimicking a realistic situation in a building. Styrene, phenol, and naphthalene are used as representative VOCs. A styrene butadiene rubber (SBR) backed carpet, vinyl flooring (VF), and a polyurethane foam (PUF) carpet cushion are considered as typical indoor sinks. In scenarios involving a sinusoidal VOC input and a double-exponential decaying input, the model predicts that the sink has a modest impact for SBR/styrene, but the effect increases for VF/phenol and PUF/naphthalene. In contrast, for an episodic chemical spill, SBR is predicted to reduce the peak styrene concentration considerably. A parametric study reveals that for systems involving a large equilibrium constant (K), the kinetic constant (D) will govern the shape of the resulting gas-phase concentration profile. On the other hand, for systems with a relaxed mass transfer resistance, K will dominate the profile.
Saito, Hiroshi; Katahira, Kentaro; Okanoya, Kazuo; Okada, Masato
2014-01-01
The decision-making behaviors of humans and animals adapt and then satisfy an "operant matching law" in certain types of tasks. This was first pointed out by Herrnstein in his foraging experiments on pigeons. The matching law has been one landmark for elucidating the underlying processes of decision making and its learning in the brain. An interesting question is whether decisions are made deterministically or probabilistically. Conventional learning models of the matching law are based on the latter idea; they assume that subjects learn the choice probabilities of the respective alternatives and decide stochastically according to those probabilities. However, it is unknown whether the matching law can be accounted for by a deterministic strategy. To answer this question, we propose several deterministic Bayesian decision-making models that hold certain incorrect beliefs about the environment. We claim that a simple model produces behavior satisfying the matching law in static settings of a foraging task but not in dynamic settings. We found that a model holding the belief that the environment is volatile works well in the dynamic foraging task and exhibits undermatching, a slight deviation from the matching law observed in many experiments. This model also demonstrates the double-exponential reward-history dependency of a choice and a heavier-tailed run-length distribution, as has recently been reported in experiments on monkeys.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chvetsov, A; Sandison, G; Schwartz, J
Purpose: Combination of serial tumor imaging with radiobiological modeling can provide more accurate information on the nature of treatment response and what underlies resistance. The purpose of this article is to improve the algorithms related to imaging-based radiobiological modeling of tumor response. Methods: Serial imaging of tumor response to radiation therapy represents a sum of tumor cell sensitivity, tumor growth rates, and the rate of cell loss, which are not separated explicitly. Accurate treatment response assessment would require separation of these radiobiological determinants of treatment response because they define tumor control probability. We show that the problem of reconstruction of radiobiological parameters from serial imaging data can be considered an inverse ill-posed problem described by a Fredholm integral equation of the first kind, because it is governed by a sum of several exponential processes. Therefore, the parameter reconstruction can be solved using regularization methods. Results: To study the reconstruction problem, we used a set of serial CT imaging data for head and neck cancer and a two-level cell population model of tumor response which separates the entire tumor cell population into two subpopulations of viable and lethally damaged cells. The reconstruction was done using a least-squares objective function and a simulated annealing algorithm. Using in vitro data for radiobiological parameters as reference data, we showed that the reconstructed values of cell surviving fractions and potential doubling time exhibit non-physical fluctuations if no stabilization algorithms are applied. Variational regularization allowed us to obtain statistical distributions for cell surviving fractions and cell number doubling times comparable to in vitro data.
Conclusion: Our results indicate that using variational regularization can increase the number of free parameters in the model and open the way to the development of more advanced algorithms which take into account tumor heterogeneity, for example, related to hypoxia.
NASA Astrophysics Data System (ADS)
Andrianov, A. A.; Cannata, F.; Kamenshchik, A. Yu.
2012-11-01
We show that the simple extension of the method of obtaining the general exact solution for the cosmological model with the exponential scalar-field potential fails in the case when dust is present, and we discuss the reasons for this puzzling phenomenon.
Looking for Connections between Linear and Exponential Functions
ERIC Educational Resources Information Center
Lo, Jane-Jane; Kratky, James L.
2012-01-01
Students frequently have difficulty determining whether a given real-life situation is best modeled as a linear relationship or as an exponential relationship. One root of such difficulty is the lack of deep understanding of the very concept of "rate of change." The authors will provide a lesson that allows students to reveal their misconceptions…
A Parametric Model for Barred Equilibrium Beach Profiles
2014-05-10
to shallow water. Bodge (1992) and Komar and McDougal (1994) suggested an exponential form as a preferred solution that exhibited finite slope at the... applications. J. Coast. Res. 7, 53–84. Komar, P.D., McDougal, W.G., 1994. The analysis of beach profiles and nearshore processes using the exponential beach
Linear prediction and single-channel recording.
Carter, A A; Oswald, R E
1995-08-01
The measurement of individual single-channel events arising from the gating of ion channels provides a detailed data set from which the kinetic mechanism of a channel can be deduced. In many cases, the pattern of dwells in the open and closed states is very complex, and the kinetic mechanism and parameters are not easily determined. Assuming a Markov model for channel kinetics, the probability density function for open and closed time dwells should consist of a sum of decaying exponentials. One method of approaching the kinetic analysis of such a system is to determine the number of exponentials and the corresponding parameters which comprise the open and closed dwell time distributions. These can then be compared to the relaxations predicted from the kinetic model to determine, where possible, the kinetic constants. We report here the use of a linear technique, linear prediction/singular value decomposition, to determine the number of exponentials and the exponential parameters. Using simulated distributions and comparing with standard maximum-likelihood analysis, the singular value decomposition techniques provide advantages in some situations and are a useful adjunct to other single-channel analysis techniques.
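The core idea behind linear prediction/singular value decomposition, counting the significant singular values of a data matrix built from samples of a multi-exponential decay, can be sketched on a noise-free toy signal. This is not the paper's single-channel pipeline, only an illustration of how the SVD reveals the number of exponential components.

```python
import numpy as np

# Dwell-time density sampled on a grid: a sum of two decaying exponentials.
# The rank of a Hankel matrix built from such samples equals the number of
# exponential components, so counting significant singular values estimates
# that number.
n = np.arange(40)
y = 3.0 * 0.9**n + 1.0 * 0.5**n

L = 10
H = np.array([y[i:i + L] for i in range(len(y) - L + 1)])  # Hankel matrix
s = np.linalg.svd(H, compute_uv=False)                     # singular values
n_exp = int(np.sum(s > 1e-8 * s[0]))                       # significant ones
print(n_exp)  # 2 components detected
```

With noisy data the cutoff becomes a judgment call, which is where the comparison with maximum-likelihood fitting in the paper becomes relevant.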
Local perturbations perturb—exponentially-locally
NASA Astrophysics Data System (ADS)
De Roeck, W.; Schütz, M.
2015-06-01
We elaborate on the principle that for gapped quantum spin systems with local interaction, "local perturbations [in the Hamiltonian] perturb locally [the groundstate]." This principle was established by Bachmann et al. [Commun. Math. Phys. 309, 835-871 (2012)], relying on the "spectral flow technique" or "quasi-adiabatic continuation" [M. B. Hastings, Phys. Rev. B 69, 104431 (2004)] to obtain locality estimates with sub-exponential decay in the distance to the spatial support of the perturbation. We use ideas of Hamza et al. [J. Math. Phys. 50, 095213 (2009)] to similarly obtain a transformation between gapped eigenvectors and their perturbations that is local with exponential decay. This allows us to improve locality bounds on the effect of perturbations on the low-lying states in certain gapped models with a unique "bulk ground state" or "topological quantum order." We also give some estimates of the exponential decay of correlations in models with impurities, where some relevant correlations decay faster than one would naively infer from the global gap of the system, as one also expects in disordered systems with a localized groundstate.
Quantifying Uncertainties in N2O Emission Due to N Fertilizer Application in Cultivated Areas
Philibert, Aurore; Loyce, Chantal; Makowski, David
2012-01-01
Nitrous oxide (N2O) is a greenhouse gas with a global warming potential approximately 298 times greater than that of CO2. In 2006, the Intergovernmental Panel on Climate Change (IPCC) estimated N2O emission due to synthetic and organic nitrogen (N) fertilization at 1% of applied N. We investigated the uncertainty on this estimated value, by fitting 13 different models to a published dataset including 985 N2O measurements. These models were characterized by (i) the presence or absence of the explanatory variable “applied N”, (ii) the function relating N2O emission to applied N (exponential or linear function), (iii) fixed or random background (i.e. in the absence of N application) N2O emission and (iv) fixed or random applied N effect. We calculated ranges of uncertainty on N2O emissions from a subset of these models, and compared them with the uncertainty ranges currently used in the IPCC-Tier 1 method. The exponential models outperformed the linear models, and models including one or two random effects outperformed those including fixed effects only. The use of an exponential function rather than a linear function has an important practical consequence: the emission factor is not constant and increases as a function of applied N. Emission factors estimated using the exponential function were lower than 1% when the amount of N applied was below 160 kg N ha−1. Our uncertainty analysis shows that the uncertainty range currently used by the IPCC-Tier 1 method could be reduced. PMID:23226430
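The practical consequence of an exponential emission model, an emission factor that grows with the amount of applied N, can be sketched with hypothetical coefficients `a` and `b` (illustrative values, not the fitted coefficients from the paper).

```python
import math

# Hypothetical exponential emission model E(N) = exp(a + b*N), where E is
# N2O emission and N is applied nitrogen (kg N/ha).  a and b are illustrative.
a, b = -0.5, 0.0065

def emission_factor(N):
    """Fraction of applied N emitted as N2O: (E(N) - E(0)) / N."""
    return (math.exp(a + b * N) - math.exp(a)) / N

# Under the exponential model the emission factor increases with the N rate,
# unlike the constant 1% factor of the IPCC Tier 1 method.
print(emission_factor(50) < emission_factor(160) < emission_factor(300))  # True
```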
The size distribution of Pacific Seamounts
NASA Astrophysics Data System (ADS)
Smith, Deborah K.; Jordan, Thomas H.
1987-11-01
An analysis of wide-beam, Sea Beam and map-count data in the eastern and southern Pacific confirms the hypothesis that the average number of "ordinary" seamounts with summit heights h ≥ H can be approximated by the exponential frequency-size distribution: v(H) = v0 e^(−βH). The exponential model, characterized by the single scale parameter β^−1, is found to be superior to a power-law (self-similar) model. The exponential model provides a good first-order description of the summit-height distribution over a very broad spectrum of seamount sizes, from small cones (h < 300 m) to tall composite volcanoes (h > 3500 m). The distribution parameters obtained from 157,000 km of wide-beam profiles in the eastern and southern Pacific Ocean are v0 = (5.4 ± 0.65) × 10^−9 m^−2 and β = (3.5 ± 0.21) × 10^−3 m^−1, yielding an average of 5400 ± 650 seamounts per million square kilometers, of which 170 ± 17 are greater than one kilometer in height. The exponential distribution provides a reference for investigating the populations of not-so-ordinary seamounts, such as those on hotspot swells and near fracture zones, and seamounts in other ocean basins. If we assume that volcano height is determined by a hydraulic head proportional to the source depth of the magma column, then our observations imply an approximately exponential distribution of source depths. For reasonable values of magma and crustal densities, a volcano with the characteristic height β^−1 = 285 m has an apparent source depth on the order of the crustal thickness.
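The quoted seamount densities follow directly from the exponential frequency-size distribution and the fitted parameters, which is easy to check numerically:

```python
import math

# Exponential summit-height distribution: v(H) = v0 * exp(-beta * H) gives the
# areal density of seamounts with summit height >= H.  Parameter values are
# the fitted values quoted in the abstract.
v0 = 5.4e-9      # seamounts per m^2
beta = 3.5e-3    # per m of summit height

million_km2 = 1e12                                        # m^2
total = v0 * million_km2                                  # all seamounts
taller_than_1km = v0 * math.exp(-beta * 1000.0) * million_km2

print(round(total))            # 5400 per million km^2
print(round(taller_than_1km))  # 163, consistent with the quoted 170 +/- 17
```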
Constant growth rate can be supported by decreasing energy flux and increasing aerobic glycolysis.
Slavov, Nikolai; Budnik, Bogdan A; Schwab, David; Airoldi, Edoardo M; van Oudenaarden, Alexander
2014-05-08
Fermenting glucose in the presence of enough oxygen to support respiration, known as aerobic glycolysis, is believed to maximize growth rate. We observed increasing aerobic glycolysis during exponential growth, suggesting additional physiological roles for aerobic glycolysis. We investigated such roles in yeast batch cultures by quantifying O2 consumption, CO2 production, amino acids, mRNAs, proteins, posttranslational modifications, and stress sensitivity in the course of nine doublings at constant rate. During this course, the cells support a constant biomass-production rate with decreasing rates of respiration and ATP production but also decrease their stress resistance. As the respiration rate decreases, so do the levels of enzymes catalyzing rate-determining reactions of the tricarboxylic-acid cycle (providing NADH for respiration) and of mitochondrial folate-mediated NADPH production (required for oxidative defense). The findings demonstrate that exponential growth can represent not a single metabolic/physiological state but a continuum of changing states and that aerobic glycolysis can reduce the energy demands associated with respiratory metabolism and stress survival. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Finite-time containment control of perturbed multi-agent systems based on sliding-mode control
NASA Astrophysics Data System (ADS)
Yu, Di; Ji, Xiang Yang
2018-01-01
Aiming at a faster convergence rate, this paper investigates the finite-time containment control problem for second-order multi-agent systems with norm-bounded non-linear perturbation. When the topology among the followers is strongly connected, a nonsingular fast terminal sliding-mode error is defined, a corresponding discontinuous control protocol is designed, and the appropriate value range of the control parameter is obtained by applying finite-time stability analysis, so that the followers converge to and move along the desired trajectories within the convex hull formed by the leaders in finite time. Furthermore, on the basis of the sliding-mode error defined, corresponding distributed continuous control protocols are investigated with a fast exponential reaching law and a double exponential reaching law, so as to make the followers move to small neighbourhoods of their desired locations and stay within the dynamic convex hull formed by the leaders in finite time, achieving practical finite-time containment control. Meanwhile, we develop the faster control scheme by comparing the convergence rates of these two reaching laws. Simulation examples are given to verify the correctness of the theoretical results.
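For orientation, here is a minimal simulation of the classical exponential reaching law, s_dot = -eps*sign(s) - k*s, of which the fast and double exponential reaching laws studied in the paper are refinements; the gains and step size below are illustrative, and the paper's specific laws are not reproduced.

```python
def sgn(x):
    """Sign function: -1, 0, or 1."""
    return (x > 0) - (x < 0)

def simulate_reaching(s0, eps=0.1, k=2.0, dt=1e-3, steps=5000):
    """Euler simulation of the exponential reaching law
    s_dot = -eps*sgn(s) - k*s, driving the sliding variable toward s = 0."""
    s = s0
    for _ in range(steps):
        s = s + dt * (-eps * sgn(s) - k * s)
    return s

# The linear term gives exponential decay far from the surface; the sign term
# forces finite-time arrival in a small boundary layer around s = 0.
print(abs(simulate_reaching(5.0)) < 0.01)  # True
```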
Picosecond absorption anisotropy of polymethine and squarylium dyes in liquid and polymeric media
NASA Astrophysics Data System (ADS)
Przhonska, Olga V.; Hagan, David J.; Novikov, Evgueni; Lepkowicz, Richard; Van Stryland, Eric W.; Bondar, Mikhail V.; Slominsky, Yuriy L.; Kachkovski, Alexei D.
2001-11-01
Time-resolved excitation-probe polarization measurements are performed for polymethine and squarylium dyes in ethanol and an elastopolymer of polyurethane acrylate (PUA). These molecules exhibit strong excited-state absorption in the visible, which results in reverse saturable absorption (RSA). In pump-probe experiments, we observe a strong angular dependence of the RSA decay kinetics upon variation of the angle between pump and probe polarizations. The difference in absorption anisotropy kinetics in ethanol and PUA is detected and analyzed. Anisotropy decay curves in ethanol follow a single exponential decay leading to complete depolarization of the excited state. We also observe complete depolarization in PUA, in which case the anisotropy decay follows a double exponential behavior. Possible rotations in the PUA polymeric matrix are connected with the existence of local microcavities of free volume. We believe that the fast decay component is connected with the rotation of molecular fragments and the slower decay component is connected with the rotation of entire molecules in local microcavities, which is possible because of the elasticity of the polymeric material.
Jane, Nancy Yesudhas; Nehemiah, Khanna Harichandran; Arputharaj, Kannan
2016-01-01
Clinical time-series data acquired from electronic health records (EHR) are subject to temporal complexities such as irregular observations, missing values and time-constrained attributes that make the knowledge discovery process challenging. This paper presents a temporal rough set induced neuro-fuzzy (TRiNF) mining framework that handles these complexities and builds an effective clinical decision-making system. TRiNF provides two functionalities, namely temporal data acquisition (TDA) and temporal classification. In TDA, a time-series forecasting model is constructed by adopting an improved double exponential smoothing method. The forecasting model is used in missing value imputation and temporal pattern extraction. The relevant attributes are selected using a temporal pattern based rough set approach. In temporal classification, a classification model is built with the selected attributes using a temporal pattern induced neuro-fuzzy classifier. For experimentation, this work uses two clinical time-series datasets of hepatitis and thrombosis patients. The experimental results show that with the proposed TRiNF framework there is a significant reduction in the error rate, achieving an average classification accuracy of 92.59% for the hepatitis dataset and 91.69% for the thrombosis dataset. The obtained classification results demonstrate the efficiency of the proposed framework in terms of its improved classification accuracy.
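The classical double exponential (Holt) smoothing recursion underlying the TDA forecasting step can be sketched as follows. This is the plain textbook method; the paper's "improved" variant presumably modifies these standard update rules, and the smoothing constants `alpha` and `beta` below are illustrative.

```python
def double_exponential_smoothing(series, alpha=0.5, beta=0.3):
    """Holt's (double exponential) smoothing; returns the one-step-ahead
    forecast, e.g. to impute the next missing observation.

    level tracks the local mean, trend tracks the local slope.
    """
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        last_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
    return level + trend

# On a perfectly linear series the forecast is exact:
data = [2 * t + 1 for t in range(10)]          # 1, 3, 5, ..., 19
print(double_exponential_smoothing(data))      # 21.0
```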
Universality in the distance between two teams in a football tournament
NASA Astrophysics Data System (ADS)
da Silva, Roberto; Dahmen, Silvio R.
2014-03-01
Is football (soccer) a universal sport? Beyond the question of geographical distribution, where the answer is most certainly yes, when looked at from a mathematical viewpoint the scoring process during a match can be thought of, in a first approximation, as being modeled by a Poisson distribution. Recently, it was shown that the scoring of real tournaments can be reproduced by means of an agent-based model (da Silva et al. (2013) [24]) based on two simple hypotheses: (i) the ability of a team to win a match is given by the rate of a Poisson distribution that governs its scoring during a match; and (ii) such ability evolves over time according to the results of previous matches. In this article we are interested in the question of whether the time series represented by the scores of teams have universal properties. For this purpose we define the distance between two teams as the square root of the sum of squares of the score differences between the teams over all rounds in a double-round-robin system, and we study how this distance evolves over time. Our results suggest a universal distance distribution across tournaments of different major leagues, which is better characterized by an exponentially modified Gaussian (EMG). This result is corroborated by our agent-based model.
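An exponentially modified Gaussian is simply the sum of independent normal and exponential variates, which is easy to verify by simulation; the parameters below are illustrative, not fitted to tournament data.

```python
import numpy as np

# EMG variate = Normal(mu, sigma) + Exponential(rate lam).  The exponential
# component adds the right skew that a pure Gaussian lacks.
rng = np.random.default_rng(0)
mu, sigma, lam = 10.0, 2.0, 0.25

samples = rng.normal(mu, sigma, 100_000) + rng.exponential(1.0 / lam, 100_000)

# The EMG mean is mu + 1/lam, and its skewness is strictly positive.
print(abs(samples.mean() - (mu + 1.0 / lam)) < 0.1)             # True
skew = ((samples - samples.mean())**3).mean() / samples.std()**3
print(skew > 0)                                                 # True
```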
McGee, Monnie; Chen, Zhongxue
2006-01-01
There are many methods of correcting microarray data for non-biological sources of error. Authors routinely supply software or code so that interested analysts can implement their methods. Even with a thorough reading of associated references, it is not always clear how requisite parts of the method are calculated in the software packages. However, it is important to have an understanding of such details, as this understanding is necessary for proper use of the output, or for implementing extensions to the model. In this paper, the calculation of parameter estimates used in Robust Multichip Average (RMA), a popular preprocessing algorithm for Affymetrix GeneChip brand microarrays, is elucidated. The background correction method for RMA assumes that the perfect match (PM) intensities observed result from a convolution of the true signal, assumed to be exponentially distributed, and a background noise component, assumed to have a normal distribution. A conditional expectation is calculated to estimate signal. Estimates of the mean and variance of the normal distribution and the rate parameter of the exponential distribution are needed to calculate this expectation. Simulation studies show that the current estimates are flawed; therefore, new ones are suggested. We examine the performance of preprocessing under the exponential-normal convolution model using several different methods to estimate the parameters.
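The conditional expectation under the exponential-normal convolution model has a closed form in terms of the standard-normal pdf and cdf. The sketch below implements that textbook formula; it is not RMA's actual implementation, whose internal parameter estimates are what the paper examines, and the numeric arguments are illustrative.

```python
import math

def normexp_signal(x, mu, sigma, alpha):
    """E[S | O = x] under the convolution model O = S + B, with signal
    S ~ Exponential(mean alpha) and background B ~ Normal(mu, sigma^2):

        a = x - mu - sigma^2 / alpha
        E[S | O = x] = a + sigma * phi(a/sigma) / Phi(a/sigma)
    """
    a = x - mu - sigma**2 / alpha
    z = a / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # N(0,1) pdf
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # N(0,1) cdf
    return a + sigma * phi / Phi

# The corrected signal stays positive even when the observed intensity is
# below the background mean -- the motivation for this model over plain
# background subtraction.
print(normexp_signal(80.0, mu=100.0, sigma=50.0, alpha=500.0) > 0)  # True
print(normexp_signal(10000.0, 100.0, 50.0, 500.0))  # ~ 10000 - 100 - 5
```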
Exponential gain of randomness certified by quantum contextuality
NASA Astrophysics Data System (ADS)
Um, Mark; Zhang, Junhua; Wang, Ye; Wang, Pengfei; Kim, Kihwan
2017-04-01
We demonstrate a protocol for exponential gain of randomness certified by quantum contextuality in a trapped-ion system. Genuine randomness can be produced by quantum principles and certified by quantum inequalities. Recently, randomness expansion protocols based on Bell-test inequalities and the Kochen-Specker (KS) theorem have been demonstrated. These schemes have been developed theoretically to exponentially expand randomness and to amplify randomness from a weak initial random seed. Here, we report experimental evidence of such exponential expansion of randomness. In the experiment, we use three states of a 138Ba+ ion, comprising a ground state and two quadrupole states. In the 138Ba+ ion system, we do not have a detection loophole, and we apply a method to rule out certain hidden-variable models that obey a kind of extended noncontextuality.
Doubly differential cross sections for galactic heavy-ion fragmentation
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Norbury, John W.; Khandelwal, Govind S.; Townsend, Lawrence W.
1987-01-01
An abrasion-ablation T-matrix formulation is applied to the calculation of double-differential cross sections in projectile fragmentation of 2.1 GeV/nucleon O-16 on Be-9 and 86 MeV/nucleon C-12 on C-12 and Ag-108. An exponential parameterization of the ablation T-matrix is used, and the total width of the intermediate states is taken as a parameter. Values of the total width fitted to experimental results are used to predict the lifetime of the ablation stage and indicate a decay time on the order of 10^−19 s.
NASA Astrophysics Data System (ADS)
Liang, L. L.; Arcus, V. L.; Heskel, M.; O'Sullivan, O. S.; Weerasinghe, L. K.; Creek, D.; Egerton, J. J. G.; Tjoelker, M. G.; Atkin, O. K.; Schipper, L. A.
2017-12-01
Temperature is a crucial factor in determining the rates of ecosystem processes such as leaf respiration (R) - the flux of plant respired carbon dioxide (CO2) from leaves to the atmosphere. Generally, respiration rate increases exponentially with temperature as modelled by the Arrhenius equation, but a recent study (Heskel et al., 2016) showed a universally convergent temperature response of R using an empirical exponential/polynomial model whereby the exponent in the Arrhenius model is replaced by a quadratic function of temperature. The exponential/polynomial model has been used elsewhere to describe shoot respiration and plant respiration. What are the principles that underlie these empirical observations? Here, we demonstrate that macromolecular rate theory (MMRT), based on transition state theory for chemical kinetics, is equivalent to the exponential/polynomial model. We re-analyse the data from Heskel et al. 2016 using MMRT to show this equivalence and thus, provide an explanation based on thermodynamics, for the convergent temperature response of R. Using statistical tools, we also show the equivalent explanatory power of MMRT when compared to the exponential/polynomial model and the superiority of both of these models over the Arrhenius function. Three meaningful parameters emerge from MMRT analysis: the temperature at which the rate of respiration is maximum (the so called optimum temperature, Topt), the temperature at which the respiration rate is most sensitive to changes in temperature (the inflection temperature, Tinf) and the overall curvature of the log(rate) versus temperature plot (the so called change in heat capacity for the system, ). The latter term originates from the change in heat capacity between an enzyme-substrate complex and an enzyme transition state complex in enzyme-catalysed metabolic reactions. From MMRT, we find the average Topt and Tinf of R are 67.0±1.2 °C and 41.4±0.7 °C across global sites. 
The average curvature (ΔCp‡, which is negative on average) is -1.2±0.1 kJ mol-1 K-1. MMRT extends classic transition state theory to enzyme-catalysed reactions and scales up to more complex processes, including micro-organism growth rates and ecosystem processes.
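The MMRT temperature response described above can be sketched numerically: ln k follows transition state theory with a temperature-dependent activation enthalpy and entropy controlled by a non-zero ΔCp‡. The parameter values (ΔH‡, ΔS‡, ΔCp‡) below are illustrative assumptions, not the fitted values from the paper.

```python
import math

R = 8.314             # gas constant, J mol^-1 K^-1
kB_over_h = 2.0836e10  # Boltzmann/Planck constant ratio, K^-1 s^-1 (approx.)

def ln_rate(T, dH, dS, dCp, T0=298.15):
    """MMRT expression for ln k(T): TST with heat-capacity-corrected dH, dS."""
    dH_T = dH + dCp * (T - T0)
    dS_T = dS + dCp * math.log(T / T0)
    return math.log(kB_over_h * T) - dH_T / (R * T) + dS_T / R

def t_opt(dH, dS, dCp, lo=273.0, hi=373.0, n=20000):
    """Grid search for the temperature of maximum rate (Topt)."""
    best_T, best = lo, -1e30
    for i in range(n + 1):
        T = lo + (hi - lo) * i / n
        v = ln_rate(T, dH, dS, dCp)
        if v > best:
            best, best_T = v, T
    return best_T

# Illustrative parameters: a negative dCp produces the curvature and a finite Topt.
Topt = t_opt(dH=60e3, dS=-50.0, dCp=-1.2e3)
```

With a negative ΔCp‡ the log-rate curve is concave, so a maximum (Topt) and an inflection (Tinf) exist, which is the thermodynamic content of the convergent response.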
Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv
2012-12-11
Illumina BeadArray technology includes non-specific negative control features that allow a precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio, which leads to an important loss of information by generating negative values, a background correction method has been developed that models the observed intensities as the sum of an exponentially distributed signal and normally distributed noise. Nevertheless, Wang and Ye (2012) present a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggest that a gamma distribution would better model the signal density. Hence, the normal-exponential model may not be appropriate for Illumina data, and background corrections derived from it may lead to erroneous estimates. We propose a more flexible model based on a gamma-distributed signal and normally distributed background noise, and develop the associated background correction, implemented in the R package NormalGamma. Our model proves to be markedly more accurate for Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a more correct fit of the observed intensities. On the other hand, the comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarities, reinforcing the validity of the normal-gamma model. The performance of the background corrections based on the normal-gamma and normal-exponential models is compared on two dilution data sets, through testing procedures that represent various experimental designs. Surprisingly, we observe that implementing a more accurate parametrisation in the model-based background correction does not increase the sensitivity.
These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement in terms of bias, but at the cost of a loss in precision. This paper addresses the lack of fit of the usual normal-exponential model by proposing a more flexible parametrisation of the signal distribution, together with the associated background correction. The new model proves to be considerably more accurate for Illumina microarrays, but the improvement in modeling does not lead to higher sensitivity in differential analysis. Nevertheless, this realistic modeling opens the way for future investigations, in particular into the characteristics of pre-processing strategies.
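The normal-gamma convolution model can be illustrated with a small simulation. The parameter values and the simple moment-based recovery below are assumptions for illustration, not the paper's maximum-likelihood correction; the point is that naive background subtraction goes negative while the convolution model keeps the signal non-negative by construction.

```python
import random

# Observed intensity X = S + B with S ~ Gamma(shape k, scale theta) (signal)
# and B ~ Normal(mu, sigma) (background noise).  All values are illustrative.
random.seed(0)
k, theta = 2.0, 50.0       # gamma signal: mean k*theta = 100
mu, sigma = 100.0, 15.0    # normal background
n = 200_000

X = [random.gammavariate(k, theta) + random.gauss(mu, sigma) for _ in range(n)]

# Naive background subtraction (as in BeadStudio) can produce negative values:
n_negative = sum(1 for x in X if x - mu < 0)

# Method-of-moments recovery of the mean signal under the model: E[S] = E[X] - mu
mean_X = sum(X) / n
signal_mean_est = mean_X - mu
```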
Zhao, Kaihong
2018-12-01
In this paper, we study the n-species impulsive Gilpin-Ayala competition model with discrete and distributed time delays. The existence of a positive periodic solution is proved by employing the fixed point theorem on cones. By constructing an appropriate Lyapunov functional, we also obtain the global exponential stability of the positive periodic solution of this system. As an application, an interesting example is provided to illustrate the validity of our main results.
A mechanical model of bacteriophage DNA ejection
NASA Astrophysics Data System (ADS)
Arun, Rahul; Ghosal, Sandip
2017-08-01
Single molecule experiments on bacteriophages show an exponential scaling for the dependence of mobility on the length of DNA within the capsid. It has been suggested that this could be due to the "capstan mechanism": the exponential amplification of friction forces that results when a rope is wound around a cylinder, as in a ship's capstan. Here we describe a desktop experiment that illustrates the effect. Though our model phage is a million times larger, it exhibits the same scaling observed in single molecule experiments.
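The capstan scaling mentioned above follows the Euler-Eytelwein relation: the force needed to hold a load falls off exponentially with wrap angle. The friction coefficient and wrap angles below are illustrative assumptions.

```python
import math

# Capstan (Euler-Eytelwein) relation: T_hold = T_load * exp(-mu * theta),
# where theta is the wrap angle in radians and mu the friction coefficient.
def holding_force(T_load, mu, theta):
    """Force at the free end needed to hold T_load after theta radians of wrap."""
    return T_load * math.exp(-mu * theta)

mu = 0.3                                            # assumed friction coefficient
one_turn = holding_force(100.0, mu, 2 * math.pi)    # one full wrap
three_turns = holding_force(100.0, mu, 6 * math.pi) # three full wraps
```

Each additional turn multiplies the holding force by the same factor, which is the exponential amplification invoked for DNA friction inside the capsid.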
A new approach to the extraction of single exponential diode model parameters
NASA Astrophysics Data System (ADS)
Ortiz-Conde, Adelmo; García-Sánchez, Francisco J.
2018-06-01
A new integration method is presented for extracting the parameters of a single-exponential diode model with series resistance from measured forward I-V characteristics. The extraction is performed using auxiliary functions, based on integration of the data, that make it possible to isolate the effects of each model parameter. A differentiation method is also presented for data with a low level of experimental noise. Measured and simulated data are used to verify the applicability of both proposed methods. Physical insight into the validity of the model is also obtained from the proposed graphical determinations of the parameters.
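For reference, the single-exponential diode model with series resistance that the extraction targets is the implicit equation I = Is(exp((V − I·Rs)/(n·Vt)) − 1). The sketch below solves it by Newton iteration with illustrative parameter values; it is a forward model for generating I-V data, not the paper's integration-based extraction procedure.

```python
import math

# Single-exponential diode with series resistance, solved for I at a given V.
# Is (saturation current), n (ideality factor), Rs (series resistance) are
# illustrative assumptions; Vt is the thermal voltage at room temperature.
def diode_current(V, Is, n, Rs, Vt=0.02585, tol=1e-15, max_iter=100):
    I = 0.0
    for _ in range(max_iter):
        e = math.exp((V - I * Rs) / (n * Vt))
        f = Is * (e - 1.0) - I            # residual of the implicit equation
        fp = -Is * Rs / (n * Vt) * e - 1.0  # df/dI
        step = f / fp
        I -= step
        if abs(step) < tol:
            break
    return I

Is, n, Rs, Vt = 1e-12, 1.5, 10.0, 0.02585
I = diode_current(0.7, Is, n, Rs, Vt)
residual = Is * (math.exp((0.7 - I * Rs) / (n * Vt)) - 1.0) - I
```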
ERIC Educational Resources Information Center
Casstevens, Thomas W.; And Others
This document consists of five units, all of which apply mathematics to American politics. The first three involve applications of calculus; the last two deal with applications of algebra. The first module is geared to teach a student how to: 1) compute estimates of the value of the parameters in negative exponential models; and draw…
NASA Technical Reports Server (NTRS)
Giver, Lawrence P.; Benner, D. C.; Tomasko, M. G.; Fink, U.; Kerola, D.
1990-01-01
Transmission measurements made on near-infrared laboratory methane spectra have previously been fit using a Malkmus band model. The laboratory spectra were obtained in three groups at temperatures averaging 112, 188, and 295 K; band model fitting was done separately for each temperature group. These band model parameters cannot be used directly in scattering atmosphere model computations, so an exponential sum model is being developed that includes pressure and temperature fitting parameters. The goal is to obtain model parameters by least-squares fits at 10/cm intervals from 3800 to 9100/cm. These results will be useful in the interpretation of current planetary spectra and also of NIMS spectra of Jupiter anticipated from the Galileo mission.
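An exponential-sum representation approximates transmittance as T(u) ≈ Σᵢ wᵢ·exp(−kᵢ·u); once the absorption coefficients kᵢ are fixed on a grid, the weights follow from a linear least-squares fit. The sketch below recovers known weights from synthetic data; all numbers are assumptions, not the laboratory spectra.

```python
import numpy as np

# Synthetic transmittance built from a two-term exponential sum.
u = np.linspace(0.0, 5.0, 50)                  # absorber amounts
T_true = 0.6 * np.exp(-0.5 * u) + 0.4 * np.exp(-5.0 * u)

# Fix the k-terms on a grid and solve for the weights by least squares.
k_grid = np.array([0.5, 5.0])
A = np.exp(-np.outer(u, k_grid))               # design matrix: exp(-k_j * u_i)
w, *_ = np.linalg.lstsq(A, T_true, rcond=None)

T_fit = A @ w
max_err = np.max(np.abs(T_fit - T_true))
```

In practice the weights would also be constrained non-negative and summing to one, and made functions of pressure and temperature as the abstract describes.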
Saucedo-Reyes, Daniela; Carrillo-Salazar, José A; Román-Padilla, Lizbeth; Saucedo-Veloz, Crescenciano; Reyes-Santamaría, María I; Ramírez-Gilly, Mariana; Tecante, Alberto
2018-03-01
High hydrostatic pressure inactivation kinetics of Escherichia coli ATCC 25922 and Salmonella enterica subsp. enterica serovar Typhimurium ATCC 14028 (S. Typhimurium) in a low-acid mamey pulp were obtained at four pressure levels (300, 350, 400, and 450 MPa), different exposure times (0-8 min), and a temperature of 25 ± 2 °C. Survival curves showed deviations from linearity in the form of a tail (upward concavity). The primary models tested were the Weibull model, the modified Gompertz equation, and the biphasic model. The Weibull model gave the best goodness of fit (adjusted R2 > 0.956, root mean square error < 0.290) and the lowest Akaike information criterion value. Exponential-logistic and exponential decay models, and a Bigelow-type and an empirical model for the b'(P) and n(P) parameters, respectively, were tested as alternative secondary models. The process validation considered two- and one-step nonlinear regressions for predicting the survival fraction; both regression types provided an adequate goodness of fit, and the one-step nonlinear regression clearly reduced fitting errors. The best candidate model according to Akaike information theory, with better accuracy and more reliable predictions, was the Weibull model integrated with the exponential-logistic and exponential decay secondary models as functions of time and pressure (two-step procedure) or incorporated as one equation (one-step procedure). Both mathematical expressions were used to determine the td parameter, taking d = 5 (t5) as the criterion of a 5-log10 reduction (5D); the desired reductions in both microorganisms are attainable at 400 MPa for 5.487 ± 0.488 or 5.950 ± 0.329 min, respectively, for the one- or two-step nonlinear procedure.
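The Weibull primary model and the td (here t5) criterion used above have a simple closed form: log10(N/N0) = −b·tⁿ, with tailing (upward concavity) corresponding to n < 1. The b and n values below are illustrative, not the fitted values for these organisms.

```python
import math

# Weibull primary model for microbial inactivation.
def log10_survival(t, b, n):
    """log10(N/N0) at exposure time t."""
    return -b * t ** n

def t_d(d, b, n):
    """Time to reach d decimal reductions: solve b * t**n = d."""
    return (d / b) ** (1.0 / n)

b, n = 2.0, 0.6   # illustrative: n < 1 gives the tailed, upward-concave curve
t5 = t_d(5.0, b, n)
```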
NASA Astrophysics Data System (ADS)
Song, Juan; Tao, Wenjun; Song, Hui; Gong, Min; Ma, Guohong; Dai, Ye; Zhao, Quanzhong; Qiu, Jianrong
2016-04-01
In this paper, a time-delay-adjustable double-pulse train with 800-nm wavelength, 200-fs pulse duration, and a repetition rate of 1 kHz, produced by a collinear two-beam optical system similar to a Mach-Zehnder interferometer, was employed for irradiation of 6H-SiC crystal. The dependence of the induced structures on the time delay of the double-pulse train for the parallel-polarization configuration was studied. The results show that as the time delay of the collinear parallel-polarization dual-pulse train increased, the induced near-subwavelength ripples (NSWRs) turned from an irregular rippled pattern to a regularly periodic pattern, and their grooves deepened considerably. The characteristic timescale for this transition is about 6.24 ps. Besides, the areas of NSWRs were found to decay exponentially for time delays from 0 to 1.24 ps and then slowly increase for time delays from 1.24 to 14.24 ps. Analysis shows that the multiphoton ionization effect, the grating-assisted surface plasmon coupling effect, and the timely intervention of the second pulse at a certain physical stage experienced by 6H-SiC excited by the first pulse may contribute to the transition of morphology details.
Generating functions for weighted Hurwitz numbers
NASA Astrophysics Data System (ADS)
Guay-Paquet, Mathieu; Harnad, J.
2017-08-01
Double Hurwitz numbers enumerating weighted n-sheeted branched coverings of the Riemann sphere or, equivalently, weighted paths in the Cayley graph of Sn generated by transpositions are determined by an associated weight generating function. A uniquely determined 1-parameter family of 2D Toda τ -functions of hypergeometric type is shown to consist of generating functions for such weighted Hurwitz numbers. Four classical cases are detailed, in which the weighting is uniform: Okounkov's double Hurwitz numbers for which the ramification is simple at all but two specified branch points; the case of Belyi curves, with three branch points, two with specified profiles; the general case, with a specified number of branch points, two with fixed profiles, the rest constrained only by the genus; and the signed enumeration case, with sign determined by the parity of the number of branch points. Using the exponentiated quantum dilogarithm function as a weight generator, three new types of weighted enumerations are introduced. These determine quantum Hurwitz numbers depending on a deformation parameter q. By suitable interpretation of q, the statistical mechanics of quantum weighted branched covers may be related to that of Bosonic gases. The standard double Hurwitz numbers are recovered in the classical limit.
Exponential Stellar Disks in Low Surface Brightness Galaxies: A Critical Test of Viscous Evolution
NASA Astrophysics Data System (ADS)
Bell, Eric F.
2002-12-01
Viscous redistribution of mass in Milky Way-type galactic disks is an appealing way of generating an exponential stellar profile over many scale lengths, almost independent of initial conditions, requiring only that the viscous timescale and star formation timescale are approximately equal. However, galaxies with solid-body rotation curves cannot undergo viscous evolution. Low surface brightness (LSB) galaxies have exponential surface brightness profiles, yet have slowly rising, nearly solid-body rotation curves. Because of this, viscous evolution may be inefficient in LSB galaxies: the exponential profiles, instead, would give important insight into initial conditions for galaxy disk formation. Using star formation laws from the literature and tuning the efficiency of viscous processes to reproduce an exponential stellar profile in Milky Way-type galaxies, I test the role of viscous evolution in LSB galaxies. Under the conservative and not unreasonable condition that LSB galaxies are gravitationally unstable for at least a part of their lives, I find that it is impossible to rule out a significant role for viscous evolution. This type of model still offers an attractive way of producing exponential disks, even in LSB galaxies with slowly rising rotation curves.
Variables and potential models for the bleaching of luminescence signals in fluvial environments
Gray, Harrison J.; Mahan, Shannon
2015-01-01
Luminescence dating of fluvial sediments rests on the assumption that sufficient sunlight is available to remove a previously obtained signal, in a process termed bleaching. However, luminescence signals obtained from sediment in the active channels of rivers often contain residual signals. This paper explores and attempts to build theoretical models for the bleaching of luminescence signals in fluvial settings. We present two models: one for sediment transported in an episodic manner, such as flood-driven washes in arid environments, and one for sediment transported in a continuous manner, such as in large continental-scale rivers. The episodic flow model assumes that the majority of sediment is bleached while exposed to sunlight at the near surface between flood events, and predicts a power-law decay in luminescence signal with downstream transport distance. The continuous flow model is developed by combining the Beer–Lambert law for the attenuation of light through a water column with a general-order kinetics equation to produce an equation with the form of a double negative exponential. The inflection point of this equation is compared with the sediment concentration from a Rouse profile to derive a non-dimensional number capable of assessing the likely extent of bleaching for a given set of luminescence and fluvial parameters. Although these models are theoretically based and not yet necessarily applicable to real-world fluvial systems, we introduce these ideas to stimulate discussion and encourage the development of comprehensive bleaching models with predictive power.
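The "double negative exponential" shape of the continuous flow model can be sketched by nesting Beer-Lambert attenuation with depth inside an exponential signal decay with light dose. This simplified sketch uses first-order decay rather than the paper's general-order kinetics, and all parameter values are illustrative assumptions.

```python
import math

# Residual luminescence of a grain at water depth z after transport time t:
#   L(z, t) = L0 * exp(-sigma * I0 * exp(-alpha * z) * t)
# where I0*exp(-alpha*z) is the Beer-Lambert light intensity at depth z and
# sigma is a bleaching cross-section.  All values below are illustrative.
def residual_signal(z, t, L0=1.0, sigma=0.02, I0=1000.0, alpha=1.5):
    dose_rate = I0 * math.exp(-alpha * z)   # attenuated light intensity
    return L0 * math.exp(-sigma * dose_rate * t)

surface = residual_signal(z=0.0, t=1.0)   # near-surface grain: fully bleached
deep = residual_signal(z=5.0, t=1.0)      # deep grain: barely bleached
```

The inner exponential makes bleaching extremely depth-sensitive, which is why the inflection of this curve against a Rouse concentration profile indicates the bleached fraction of the sediment load.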
Exponential Speedup of Quantum Annealing by Inhomogeneous Driving of the Transverse Field
NASA Astrophysics Data System (ADS)
Susa, Yuki; Yamashiro, Yu; Yamamoto, Masayuki; Nishimori, Hidetoshi
2018-02-01
We show, for quantum annealing, that a certain type of inhomogeneous driving of the transverse field erases first-order quantum phase transitions in the p-body interacting mean-field-type model with and without longitudinal random field. Since a first-order phase transition poses a serious difficulty for quantum annealing (adiabatic quantum computing) due to the exponentially small energy gap, the removal of first-order transitions means an exponential speedup of the annealing process. The present method may serve as a simple protocol for the performance enhancement of quantum annealing, complementary to non-stoquastic Hamiltonians.
Observational constraints on tachyonic chameleon dark energy model
NASA Astrophysics Data System (ADS)
Banijamali, A.; Bellucci, S.; Fazlpour, B.; Solbi, M.
2018-03-01
It has been recently shown that the tachyonic chameleon model of dark energy, in which a tachyon scalar field is non-minimally coupled to matter, admits a stable scaling attractor solution that could give rise to the late-time accelerated expansion of the universe and hence alleviate the coincidence problem. In the present work, we use data from Type Ia supernovae (SNe Ia) and baryon acoustic oscillations to place constraints on the model parameters. In our analysis we consider general exponential and non-exponential forms for the non-minimal coupling function and tachyonic potential, and show that the scenario is compatible with observations.
Cosmological models with a hybrid scale factor in an extended gravity theory
NASA Astrophysics Data System (ADS)
Mishra, B.; Tripathy, S. K.; Tarai, Sankarsan
2018-03-01
A general formalism to investigate Bianchi type VIh universes is developed in an extended theory of gravity. A minimally coupled geometry and matter field is considered, with a rescaled function f(R,T) substituted in place of the Ricci scalar R in the geometrical action. Dynamical aspects of the models are discussed using a hybrid scale factor (HSF) that behaves as a power law in the initial epoch and as an exponential at late epochs. The power-law and exponential behaviors appear as two extreme cases of the present model.
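A hybrid scale factor of the form a(t) = t^α·exp(βt) realizes exactly this interpolation: the Hubble rate H = α/t + β is dominated by the power-law term α/t at early times and approaches the constant β (de Sitter-like expansion) at late times. The α and β values below are illustrative assumptions.

```python
import math

alpha, beta = 0.5, 0.1   # illustrative HSF parameters

def a(t):
    """Hybrid scale factor: power law times exponential."""
    return t ** alpha * math.exp(beta * t)

def H(t):
    """Hubble rate a'/a = alpha/t + beta."""
    return alpha / t + beta

early, late = H(0.01), H(100.0)   # power-law-dominated vs exponential-dominated
```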
Yu, Yi-Lin; Yang, Yun-Ju; Lin, Chin; Hsieh, Chih-Chuan; Li, Chiao-Zhu; Feng, Shao-Wei; Tang, Chi-Tun; Chung, Tzu-Tsao; Ma, Hsin-I; Chen, Yuan-Hao; Ju, Da-Tong; Hueng, Dueng-Yuan
2017-01-01
Tumor control rates of pituitary adenomas (PAs) receiving adjuvant CyberKnife stereotactic radiosurgery (CK SRS) are high. However, there is currently no uniform way to estimate the time course of the disease. The aim of this study was to analyze the volumetric responses of PAs after CK SRS and investigate the application of an exponential decay model in calculating an accurate time course and estimating the eventual outcome. A retrospective review of 34 patients with PAs who received adjuvant CK SRS between 2006 and 2013 was performed. Tumor volume was calculated using the planimetric method. The percent change in tumor volume and the tumor volume rate of change were compared at median 4-, 10-, 20-, and 36-month intervals. Tumor responses were classified as progression for a >15% volume increase, regression for a >15% decrease, and stabilization for changes within ±15% of the baseline volume at the time of last follow-up. For each patient, the volumetric change versus time was fitted with an exponential model. The overall tumor control rate was 94.1% over the 36-month (range 18-87 months) follow-up period (mean volume change of -43.3%). Volume regression (mean decrease of -50.5%) was demonstrated in 27 (79%) patients, tumor stabilization (mean change of -3.7%) in 5 (15%) patients, and tumor progression (mean increase of 28.1%) in 2 (6%) patients (P = 0.001). Tumors that eventually regressed or stabilized had a temporary volume increase of 1.07% and 41.5% at 4 months after CK SRS, respectively (P = 0.017). The tumor volume estimated using the exponential fitting equation demonstrated a high positive correlation with the actual volume calculated by magnetic resonance imaging (MRI), as tested by the Pearson correlation coefficient (0.9). Transient progression of PAs post-CK SRS was seen in 62.5% of the patients receiving CK SRS, and it was not predictive of eventual volume regression or progression.
A three-point exponential model is of potential predictive value according to relative distribution. An exponential decay model can be used to calculate the time course of tumors that are ultimately controlled. PMID:28121913
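A decaying-to-plateau exponential time course V(t) = Vf + (V0 − Vf)·exp(−kt) can be pinned down by three equally spaced measurements, since their successive differences share the common ratio exp(−k·Δt). The closed-form three-point fit below is a sketch of how such a model could be implemented, with illustrative volumes and rate constant; it is not the authors' fitting procedure.

```python
import math

def fit_three_point(V0, V1, V2, dt):
    """Recover decay constant k and plateau Vf from volumes at t=0, dt, 2*dt."""
    r = (V2 - V1) / (V1 - V0)        # common ratio = exp(-k * dt)
    k = -math.log(r) / dt
    Vf = (V1 - r * V0) / (1 - r)     # plateau volume
    return k, Vf

# Synthetic check with assumed values: V0 = 10 mL, plateau 4 mL, k = 0.05/month.
true_k, true_Vf, V0, dt = 0.05, 4.0, 10.0, 6.0
V1 = true_Vf + (V0 - true_Vf) * math.exp(-true_k * dt)
V2 = true_Vf + (V0 - true_Vf) * math.exp(-2 * true_k * dt)
k_est, Vf_est = fit_three_point(V0, V1, V2, dt)
```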
AN EMPIRICAL FORMULA FOR THE DISTRIBUTION FUNCTION OF A THIN EXPONENTIAL DISC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Sanjib; Bland-Hawthorn, Joss
2013-08-20
An empirical formula for a Shu distribution function that reproduces a thin disc with exponential surface density to good accuracy is presented. The formula has two free parameters that specify the functional form of the velocity dispersion. Conventionally, this requires the use of an iterative algorithm to produce the correct solution, which is computationally taxing for applications like Markov Chain Monte Carlo model fitting. The formula has been shown to work for flat, rising, and falling rotation curves. Application of this methodology to one of the Dehnen distribution functions is also shown. Finally, an extension of this formula to reproduce velocity dispersion profiles that are an exponential function of radius is also presented. Our empirical formula should greatly aid the efficient comparison of disc models with large stellar surveys or N-body simulations.
Locality of the Thomas-Fermi-von Weizsäcker Equations
NASA Astrophysics Data System (ADS)
Nazar, F. Q.; Ortner, C.
2017-06-01
We establish a pointwise stability estimate for the Thomas-Fermi-von Weizsäcker (TFW) model, which demonstrates that a local perturbation of a nuclear arrangement results also in a local response in the electron density and electrostatic potential. The proof adapts the arguments for existence and uniqueness of solutions to the TFW equations in the thermodynamic limit by Catto et al. (The mathematical theory of thermodynamic limits: Thomas-Fermi type models. Oxford mathematical monographs. The Clarendon Press, Oxford University Press, New York, 1998). To demonstrate the utility of this combined locality and stability result we derive several consequences, including an exponential convergence rate for the thermodynamic limit, partition of total energy into exponentially localised site energies (and consequently, exponential locality of forces), and generalised and strengthened results on the charge neutrality of local defects.
A statistical approach for generating synthetic tip stress data from limited CPT soundings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Basalams, M.K.
CPT tip stress data obtained from a Uranium mill tailings impoundment are treated as time series. A statistical class of models that was developed to model time series is explored to investigate its applicability in modeling the tip stress series. These models were developed by Box and Jenkins (1970) and are known as Autoregressive Moving Average (ARMA) models. This research demonstrates how to apply the ARMA models to tip stress series. Generation of synthetic tip stress series that preserve the main statistical characteristics of the measured series is also investigated. Multiple regression analysis is used to model the regional variation of the ARMA model parameters as well as the regional variation of the mean and the standard deviation of the measured tip stress series. The reliability of the generated series is investigated from a geotechnical point of view as well as from a statistical point of view. Estimation of the total settlement using the measured and the generated series subjected to the same loading condition is performed. The variation of friction angle with depth of the impoundment materials is also investigated. This research shows that these series can be modeled by the Box and Jenkins ARMA models. A third-degree Autoregressive model AR(3) is selected to represent these series. A theoretical double exponential density function is fitted to the AR(3) model residuals. Synthetic tip stress series are generated at nearby locations. The generated series are shown to be reliable in estimating the total settlement and the friction angle variation with depth for this particular site.
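Generating a synthetic series from an AR(3) model with double-exponential (Laplace) residuals can be sketched directly; the Laplace draw uses the fact that the difference of two iid exponentials is Laplace-distributed. The coefficients and residual scale below are illustrative assumptions (chosen inside the stationarity region), not the values fitted to the CPT data.

```python
import random

random.seed(1)
phi = (0.5, 0.2, 0.1)   # assumed AR(3) coefficients
b = 1.0                 # assumed Laplace (double exponential) scale

def laplace_noise(scale):
    # Difference of two iid exponentials ~ Laplace(0, scale).
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def generate_ar3(n, phi, scale, burn=500):
    """Generate n points of x_t = phi1*x_{t-1} + phi2*x_{t-2} + phi3*x_{t-3} + e_t."""
    x = [0.0, 0.0, 0.0]
    for _ in range(n + burn):
        x.append(phi[0] * x[-1] + phi[1] * x[-2] + phi[2] * x[-3]
                 + laplace_noise(scale))
    return x[3 + burn:]   # discard seed values and burn-in

series = generate_ar3(50_000, phi, b)
n_s = len(series)
mean = sum(series) / n_s
num = sum((series[i] - mean) * (series[i + 1] - mean) for i in range(n_s - 1))
den = sum((x - mean) ** 2 for x in series)
rho1 = num / den   # sample lag-1 autocorrelation
```

For these coefficients the Yule-Walker equations give a theoretical lag-1 autocorrelation of about 0.70, which the synthetic series should reproduce.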
A demographic study of the exponential distribution applied to uneven-aged forests
Jeffrey H. Gove
2016-01-01
A demographic approach based on a size-structured version of the McKendrick-Von Foerster equation is used to demonstrate a theoretical link between the population size distribution and the underlying vital rates (recruitment, mortality and diameter growth) for the population of individuals whose diameter distribution is negative exponential. This model supports the...
Exponential Potential versus Dark Matter
1993-10-15
A two-parameter exponential potential explains the anomalous kinematics of galaxies and galaxy clusters without need for the myriad ad hoc dark matter models currently in vogue. It also explains much about the scales and structures of galaxies and galaxy clusters while being quite negligible on the scale of the solar system. Keywords: Galaxy, Dark matter, Galaxy cluster, Gravitation, Quantum gravity.
Rigby, Robert A; Stasinopoulos, D Mikis
2004-10-15
The Box-Cox power exponential (BCPE) distribution, developed in this paper, provides a model for a dependent variable Y exhibiting both skewness and kurtosis (leptokurtosis or platykurtosis). The distribution is defined by a power transformation Y^ν having a shifted and scaled (truncated) standard power exponential distribution with parameter τ. The distribution has four parameters and is denoted BCPE(μ, σ, ν, τ). The parameters μ, σ, ν and τ may be interpreted as relating to location (median), scale (approximate coefficient of variation), skewness (transformation to symmetry) and kurtosis (power exponential parameter), respectively. Smooth centile curves are obtained by modelling each of the four parameters of the distribution as a smooth non-parametric function of an explanatory variable. A Fisher scoring algorithm is used to fit the non-parametric model by maximizing a penalized likelihood. The first and expected second and cross derivatives of the likelihood with respect to μ, σ, ν and τ, required for the algorithm, are provided. The centiles of the BCPE distribution are easy to calculate, so it is highly suited to centile estimation. This application of the BCPE distribution to smooth centile estimation generalizes the LMS method of centile estimation to data exhibiting kurtosis (as well as skewness) different from that of a normal distribution, and is named here the LMSP method of centile estimation. The LMSP method is applied to modelling the body mass index of Dutch males against age. 2004 John Wiley & Sons, Ltd.
Stepwise kinetic equilibrium models of quantitative polymerase chain reaction.
Cobbs, Gary
2012-08-16
Numerous models for interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a selected part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes that annealing of complementary target strands and annealing of target and primers are both reversible reactions that reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single- and double-stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of the kinetic models are the same for solutions that are identical except for possibly having different initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous non-linear curve fitting, with the same rate constant values applying to all curves and each curve having a unique value for the initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give better fit to observed qPCR data than other kinetic models in the literature.
They also give better estimates of initial target concentration. Model 1 was found to be slightly more robust than model 2 giving better estimates of initial target concentration when estimation of parameters was done for qPCR curves with very different initial target concentration. Both models may be used to estimate the initial absolute concentration of target sequence when a standard curve is not available. It is argued that the kinetic approach to modeling and interpreting quantitative PCR data has the potential to give more precise estimates of the true initial target concentrations than other methods currently used for analysis of qPCR data. The two models presented here give a unified model of the qPCR process in that they explain the shape of the qPCR curve for a wide variety of initial target concentrations.
Human population and atmospheric carbon dioxide growth dynamics: Diagnostics for the future
NASA Astrophysics Data System (ADS)
Hüsler, A. D.; Sornette, D.
2014-10-01
We analyze the growth rates of human population and of atmospheric carbon dioxide by comparing the relative merits of two benchmark models, the exponential law and the finite-time-singular (FTS) power law. The latter results from positive feedbacks, either direct or mediated by other dynamical variables, as shown in our presentation of a simple endogenous macroeconomic dynamical growth model describing the growth dynamics of coupled processes involving human population (labor in economic terms), capital, and technology (proxied by CO2 emissions). Human population in the context of our energy-intensive economies constitutes arguably the most important underlying driving variable of the content of carbon dioxide in the atmosphere. Using some of the best databases available, we perform empirical analyses confirming that the human population on Earth grew super-exponentially until the mid-1960s, followed by decelerated sub-exponential growth, with a tendency to plateau at merely exponential growth in the last decade with an average growth rate of 1.0% per year. In contrast, we find that the content of carbon dioxide in the atmosphere continued to accelerate super-exponentially until 1990, with a transition to progressive deceleration since then, with an average growth rate of approximately 2% per year in the last decade. To go back to CO2 atmosphere contents equal to or smaller than the level of 1990, as has been the broadly advertised goal of international treaties since 1990, requires herculean changes: from a dynamical point of view, the approximately exponential growth must turn not only to negative acceleration but also to negative velocity to reverse the trend.
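The distinction between the two benchmark growth laws reduces to their instantaneous relative growth rates: exponential growth N = N0·exp(rt) has constant rate r, while the FTS power law N = A·(tc − t)^(−m) has rate m/(tc − t), which accelerates and diverges as t approaches the critical time tc. The tc and m values below are illustrative, not fitted values.

```python
# Instantaneous relative growth rate of the FTS power law N(t) = A*(tc - t)**(-m):
#   d(ln N)/dt = m / (tc - t)
def fts_rate(m, tc, t):
    return m / (tc - t)

tc, m = 2050.0, 1.0            # assumed critical time and exponent
r2000 = fts_rate(m, tc, 2000.0)  # 2% per year
r2040 = fts_rate(m, tc, 2040.0)  # 10% per year: super-exponential acceleration
```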
The impact of accelerating faster than exponential population growth on genetic variation.
Reppell, Mark; Boehnke, Michael; Zöllner, Sebastian
2014-03-01
Current human sequencing projects observe an abundance of extremely rare genetic variation, suggesting recent acceleration of population growth. To better understand the impact of such accelerating growth on the quantity and nature of genetic variation, we present a new class of models capable of incorporating faster than exponential growth in a coalescent framework. Our work shows that such accelerated growth affects only the population size in the recent past and thus large samples are required to detect the models' effects on patterns of variation. When we compare models with fixed initial growth rate, models with accelerating growth achieve very large current population sizes and large samples from these populations contain more variation than samples from populations with constant growth. This increase is driven almost entirely by an increase in singleton variation. Moreover, linkage disequilibrium decays faster in populations with accelerating growth. When we instead condition on current population size, models with accelerating growth result in less overall variation and slower linkage disequilibrium decay compared to models with exponential growth. We also find that pairwise linkage disequilibrium of very rare variants contains information about growth rates in the recent past. Finally, we demonstrate that models of accelerating growth may substantially change estimates of present-day effective population sizes and growth times.
Sorption isotherm characteristics of aonla flakes.
Alam, Md Shafiq; Singh, Amarjit
2011-06-01
The equilibrium moisture content was determined for un-osmosed and osmosed (salt-osmosed and sugar-osmosed) aonla flakes using the static method at temperatures of 25, 40, 50, 60 and 70 °C over a range of relative humidities from 20 to 90%. The sorption capacity of aonla decreased with an increase in temperature at constant water activity. The sorption isotherms exhibited hysteresis, in which the equilibrium moisture content at a given equilibrium relative humidity was higher for the desorption curve than for adsorption. The hysteresis effect was more pronounced for un-osmosed and salt-osmosed samples than for sugar-osmosed samples. Five models, namely the modified Chung-Pfost, modified Halsey, modified Henderson, modified Exponential and Guggenheim-Anderson-de Boer (GAB), were evaluated to determine the best fit for the experimental data. For both adsorption and desorption processes of aonla fruit, the equilibrium moisture content of un-osmosed and osmosed samples can be predicted well by the GAB model as well as the modified Exponential model. Moreover, the modified Exponential model was found to be the best for describing the sorption behaviour of un-osmosed and salt-osmosed samples, while the GAB model was best for sugar-osmosed aonla samples.
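The GAB isotherm referenced above has a standard closed form relating equilibrium moisture content M to water activity aw via a monolayer moisture content M0 and two energy constants C and K. The parameter values below are illustrative assumptions, not fits to the aonla data.

```python
# Guggenheim-Anderson-de Boer (GAB) sorption isotherm:
#   M(aw) = M0 * C * K * aw / ((1 - K*aw) * (1 - K*aw + C*K*aw))
def gab(aw, M0, C, K):
    kaw = K * aw
    return M0 * C * kaw / ((1 - kaw) * (1 - kaw + C * kaw))

# Illustrative parameters: M0 in g water / 100 g dry solids.
M_low = gab(0.2, M0=6.0, C=10.0, K=0.9)   # low water activity
M_high = gab(0.9, M0=6.0, C=10.0, K=0.9)  # high water activity
```

The characteristic sigmoid shape (slow rise at low aw, sharp rise near aw = 1) comes from the (1 − K·aw) factor in the denominator.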
NASA Astrophysics Data System (ADS)
Carrel, M.; Morales, V. L.; Dentz, M.; Derlon, N.; Morgenroth, E.; Holzner, M.
2018-03-01
Biofilms are ubiquitous bacterial communities that grow in various porous media including soils, trickling, and sand filters. In these environments, they play a central role in services ranging from degradation of pollutants to water purification. Biofilms dynamically change the pore structure of the medium through selective clogging of pores, a process known as bioclogging. This affects how solutes are transported and spread through the porous matrix, but the temporal changes to transport behavior during bioclogging are not well understood. To address this uncertainty, we experimentally study the hydrodynamic changes of a transparent 3-D porous medium as it experiences progressive bioclogging. Statistical analyses of the system's hydrodynamics at four time points of bioclogging (0, 24, 36, and 48 h in the exponential growth phase) reveal exponential increases in both average and variance of the flow velocity, as well as its correlation length. Measurements for spreading, as mean-squared displacements, are found to be non-Fickian and more intensely superdiffusive with progressive bioclogging, indicating the formation of preferential flow pathways and stagnation zones. A gamma distribution describes well the Lagrangian velocity distributions and provides parameters that quantify changes to the flow, which evolves from a parallel pore arrangement under unclogged conditions, toward a more serial arrangement with increasing clogging. Exponentially evolving hydrodynamic metrics agree with an exponential bacterial growth phase and are used to parameterize a correlated continuous time random walk model with a stochastic velocity relaxation. The model accurately reproduces transport observations and can be used to resolve transport behavior at intermediate time points within the exponential growth phase considered.
NASA Astrophysics Data System (ADS)
De Geyter, Gert; Baes, Maarten; Camps, Peter; Fritz, Jacopo; De Looze, Ilse; Hughes, Thomas M.; Viaene, Sébastien; Gentile, Gianfranco
2014-06-01
We investigate the amount and spatial distribution of interstellar dust in edge-on spiral galaxies, using detailed radiative transfer modelling of a homogeneous sample of 12 galaxies selected from the Calar Alto Legacy Integral Field Area survey. Our automated fitting routine, FITSKIRT, was first validated against artificial data. This is done by simultaneously reproducing the Sloan Digital Sky Survey g-, r-, i- and z-band observations of a toy model in order to combine the information present in the different bands. We show that this combined, oligochromatic fitting has clear advantages over standard monochromatic fitting especially regarding constraints on the dust properties. We model all galaxies in our sample using a three-component model, consisting of a double-exponential disc to describe the stellar and dust discs and using a Sérsic profile to describe the central bulge. The full model contains 19 free parameters, and we are able to constrain all these parameters to a satisfactory level of accuracy without human intervention or strong boundary conditions. Apart from two galaxies, the entire sample can be accurately reproduced by our model. We find that the dust disc is about 75 per cent more extended but only half as high as the stellar disc. The average face-on optical depth in the V band is 0.76 and the spread of 0.60 within our sample is quite substantial, which indicates that some spiral galaxies are relatively opaque even when seen face-on.
The interaction of hydroxyurea and ionizing radiation in human cervical carcinoma cells.
Kuo, M L; Kunugi, K A; Lindstrom, M J; Kinsella, T J
1997-01-01
The results from prior in vitro and in vivo studies and recent phase 3 clinical trials suggest a significant potential role for hydroxyurea (HU) as a clinical radiosensitizer for cervix cancer. However, a detailed study of possible cellular mechanisms of radiosensitization in human cervix cancer cells as a consequence of dose and timing of HU and ionizing radiation (IR) has not been performed. This in vitro study analyses the interactions of HU and IR in a human cervical carcinoma cell line, Caski. Exponentially growing Caski cells were continuously exposed to clinically achievable but minimally cytotoxic concentrations of HU (0.3-3.0 mM) for various time intervals (6, 12, 18, 24, and 30 hours) up to one population doubling time either prior to or immediately following IR (2-8 Gy). The radiation survival data were analyzed using our modification of the linear-quadratic model to test for an interaction (greater than additive). The effects of HU alone, IR alone, and the combination on cell cycle progression and on apoptotic cell death in exponentially growing Caski cells were measured. We report a significant HU-IR interaction (radiosensitization) based on the sequence of HU exposure (post- > pre-IR) and with increasing concentrations of HU (0.3-3.0 mM), but no effect on radiosensitization with the duration of exposure to HU for up to one cell population doubling (6-30 hours). HU concentration has a significant effect on both alpha and beta linear-quadratic values in the post-IR sequences. Exposures of exponentially growing Caski cells to 1 mM and 3 mM HU alone result in a complete block in early S phase throughout the 30-hour exposure, while 0.3 mM HU causes a transient early S-phase block over the initial 12 to 18 hours of exposure. HU alone has no effect on cell cycle progression in G1 or G2/M populations but results in a large apoptotic population (31% following 1 mM HU x 30 hours), which appears to be the principal mechanism of drug cytotoxicity in these cells. 
IR alone (4 or 6 Gy) results in a significant G2 delay for 6 to 18 hours following IR but no G1 delay and a small apoptotic population at 30 hours post-IR (5.4% vs 2.1% in non-IR controls). The use of HU (0.3 or 1.0 mM) following IR (4 or 6 Gy) results in a significantly larger G2 delay compared with IR alone, but with only an additive effect on the apoptotic population. These in vitro data demonstrate that radiosensitization of Caski cells is more significant with post-IR exposures to clinically achievable concentrations of HU. This HU-IR interaction is associated with an increased G2 delay, suggesting a reduction in IR damage repair. However, this interaction appears to be independent of the cytotoxicity (principally by apoptosis) from HU alone.
On the Nature and History of Blue Amorphous Galaxies
NASA Astrophysics Data System (ADS)
Marlowe, Amanda True
1998-07-01
Dwarf galaxies play an important role in our understanding of galaxy formation and evolution. We have embarked on a systematic study of 12 blue amorphous galaxies (BAGs) whose properties suggest that they are dwarf galaxies in a starburst or post-burst state. It seems likely that BAGs are related to other 'starburst' dwarfs such as blue compact dwarfs (BCDs) and HII galaxies. The BAGs in our sample, however, are considerably closer than BCDs and HII galaxies in other samples, and therefore easier to study. These galaxies may offer important insights into dwarf galaxy evolution. In an effort to clarify the role of BAGs in evolutionary scenarios for dwarf galaxies, we present and analyze Hα and UBVI data for our sample. BAGs, like BCDs and HII galaxies, have surface brightness profiles that are exponential in the outer regions but have a predominantly blue central excess, suggesting a young burst in an older, redder galaxy. Seven of the galaxies have the bubble or filamentary Hα morphology and double-peaked emission lines that are the signature of superbubbles or superwind activity. These galaxies are typically the ones with the strongest central excesses. The starbursting regions are young events compared to the older underlying galaxy, which follows an exponential surface brightness law. Not all of the galaxies develop superwinds: the appearance of superwinds is most sensitive to the concentration and rate of star formation in the starbursting core. The underlying exponential galaxies are very similar to those found in BCDs and HII galaxies, though the 'burst' colors are slightly redder than those found in HII galaxies. BAGs are structurally similar to BCDs and HII galaxies. How BAGs fit into the dwarf galaxy evolutionary debate is less clear. 
While some compact dIs have properties similar to those of the underlying exponential galaxy in our sample, issues such as mass loss from superwinds, the impact of the starbursting core on the underlying galaxy, and fading complicate the search for BAG progenitor and evolved or faded BAG galaxy classes.
Yang, Shiju; Li, Chuandong; Huang, Tingwen
2016-03-01
The problem of exponential stabilization and synchronization for a fuzzy model of memristive neural networks (MNNs) is investigated by using periodically intermittent control in this paper. Based on the knowledge of memristors and recurrent neural networks, the model of MNNs is formulated. Some novel and useful stabilization criteria and synchronization conditions are then derived by using Lyapunov functional and differential inequality techniques. It is worth noting that the methods used in this paper are also applicable to fuzzy models of complex networks and general neural networks. Numerical simulations are also provided to verify the effectiveness of the theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.
Analysis of Dibenzothiophene Desulfurization in a Recombinant Pseudomonas putida Strain▿
Calzada, Javier; Zamarro, María T.; Alcón, Almudena; Santos, Victoria E.; Díaz, Eduardo; García, José L.; Garcia-Ochoa, Felix
2009-01-01
Biodesulfurization was monitored in a recombinant Pseudomonas putida CECT5279 strain. DszB desulfinase activity reached a sharp maximum at the early exponential phase, but it rapidly decreased at later growth phases. A model two-step resting-cell process combining sequentially P. putida cells from the late and early exponential growth phases was designed to significantly increase biodesulfurization. PMID:19047400
Erik A. Lilleskov
2017-01-01
Fungal respiration contributes substantially to ecosystem respiration, yet its field temperature response is poorly characterized. I hypothesized that at diurnal time scales, temperature-respiration relationships would be better described by unimodal than exponential models, and at longer time scales both Q10 and mass-specific respiration at 10 °...
NASA Astrophysics Data System (ADS)
Wen, Zhang; Zhan, Hongbin; Wang, Quanrong; Liang, Xing; Ma, Teng; Chen, Chen
2017-05-01
Actual field pumping tests often involve variable pumping rates, which cannot be handled by the classical constant-rate or constant-head test models and often require a convolution process to interpret the test data. In this study, we proposed a semi-analytical model considering an exponentially decreasing pumping rate that starts at a certain (higher) rate and eventually stabilizes at a certain (lower) rate, for cases with or without wellbore storage. A striking new feature of the pumping test with an exponentially decayed rate is that the drawdowns will decrease over a certain period of time during the intermediate pumping stage, which has never been seen in constant-rate or constant-head pumping tests. It was found that the drawdown-time curve associated with an exponentially decayed pumping rate function was bounded by two asymptotic curves of the constant-rate tests with rates equal to the starting and stabilizing rates, respectively. The wellbore storage must be considered for a pumping test without an observation well (single-well test). Based on such characteristics of the time-drawdown curve, we developed a new method to estimate the aquifer parameters using a genetic algorithm.
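The convolution interpretation mentioned above can be sketched by discretizing an exponentially decaying rate Q(t) = Q_s + (Q_0 − Q_s)·exp(−λt) into small steps and superposing constant-rate Theis responses. A minimal sketch; the parameter values and the series truncation are illustrative assumptions, not values from the paper:

```python
import math

def well_function(u, terms=30):
    """Theis well function W(u) via its small-u series expansion."""
    gamma = 0.5772156649015329  # Euler-Mascheroni constant
    s = -gamma - math.log(u)
    sign = 1.0
    for n in range(1, terms + 1):
        s += sign * u ** n / (n * math.factorial(n))
        sign = -sign
    return s

T, S, r = 100.0, 1e-4, 10.0      # transmissivity, storativity, radial distance
Q0, Qs, lam = 500.0, 100.0, 2.0  # starting rate, stabilized rate, decay constant

def rate(t):
    """Exponentially decaying pumping rate, Q0 at t=0 stabilizing to Qs."""
    return Qs + (Q0 - Qs) * math.exp(-lam * t)

def drawdown(t, steps=200):
    """Superpose Theis responses to step changes approximating Q(t)."""
    dt = t / steps
    s, q_prev = 0.0, 0.0
    for i in range(steps):
        ti = i * dt
        dq = rate(ti) - q_prev   # step change in pumping rate at time ti
        q_prev = rate(ti)
        u = r * r * S / (4.0 * T * (t - ti))
        s += dq / (4.0 * math.pi * T) * well_function(u)
    return s

s_early, s_late = drawdown(0.5), drawdown(5.0)
```

Consistent with the bounding behavior the abstract reports, the computed drawdown lies between the constant-rate Theis curves evaluated at the starting and stabilized rates.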
Porto, Markus; Roman, H Eduardo
2002-04-01
We consider autoregressive conditional heteroskedasticity (ARCH) processes in which the variance σ²(y) depends linearly on the absolute value of the random variable y, as σ²(y) = a + b|y|. While for the standard model, where σ²(y) = a + b y², the corresponding probability distribution function (PDF) P(y) decays as a power law for |y| → ∞, in the linear case it decays exponentially, as P(y) ≈ exp(−α|y|), with α = 2/b. We extend these results to the more general case σ²(y) = a + b|y|^q, with 0 < q < 2. We find stretched exponential decay for 1 < q < 2 and stretched Gaussian behavior for 0 < q < 1. As an application, we consider the case q = 1 as our starting scheme for modeling the PDF of daily (logarithmic) variations in the Dow Jones stock market index. When the history of the ARCH process is taken into account, the resulting PDF becomes a stretched exponential even for q = 1, with a stretching exponent β = 2/3, in much better agreement with the empirical data.
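The linear-variance ARCH process described above is simple to simulate. A minimal sketch (the values of a and b are illustrative, not taken from the paper):

```python
import random

random.seed(0)

# Linear-variance ARCH recursion: sigma^2(y_t) = a + b*|y_{t-1}|,
# then y_t = sigma * epsilon_t with epsilon_t ~ N(0, 1).
a, b = 0.1, 0.5
y = 0.0
series = []
for _ in range(20000):
    sigma = (a + b * abs(y)) ** 0.5
    y = sigma * random.gauss(0.0, 1.0)
    series.append(y)

# The abstract predicts exponential tails P(y) ~ exp(-alpha*|y|) with
# alpha = 2/b (here alpha = 4), instead of the power-law tails of the
# standard model sigma^2(y) = a + b*y^2.
mean_abs = sum(abs(v) for v in series) / len(series)
```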
Statistical modeling of storm-level Kp occurrences
Remick, K.J.; Love, J.J.
2006-01-01
We consider the statistical modeling of the occurrence in time of large Kp magnetic storms as a Poisson process, testing whether or not relatively rare, large Kp events can be considered to arise from a stochastic, sequential, and memoryless process. For a Poisson process, the wait times between successive events are statistically distributed with an exponential density function. Fitting an exponential function to the durations between successive large Kp events forms the basis of our analysis. Defining these wait times by calculating the differences between times when Kp exceeds a certain value, such as Kp ≥ 5, we find the wait-time distribution is not exponential. Because large storms often have several periods with large Kp values, their occurrence in time is not memoryless; short-duration wait times are not independent of each other and are often clumped together in time. If we remove same-storm large Kp occurrences, the resulting wait times are very nearly exponentially distributed and the storm arrival process can be characterized as Poisson. Fittings are performed on wait-time data for Kp ≥ 5, 6, 7, and 8. The mean wait times between storms exceeding these Kp thresholds are 7.12, 16.55, 42.22, and 121.40 days, respectively.
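The wait-time analysis described above can be sketched on synthetic data: simulate Poisson storm arrivals using the fitted mean wait for the lowest threshold (7.12 days), difference the event times, and recover the exponential rate by maximum likelihood (for an exponential density, the MLE rate is simply the reciprocal of the mean wait). The storm data themselves are not reproduced here; the arrivals below are simulated.

```python
import random

random.seed(42)

# Simulate storm arrivals as a Poisson process with mean wait 7.12 days
# (the fitted mean for the Kp >= 5 threshold reported above).
mean_wait = 7.12
arrivals, t = [], 0.0
for _ in range(5000):
    t += random.expovariate(1.0 / mean_wait)  # exponential inter-arrival times
    arrivals.append(t)

# Wait times are differences of successive event times; the exponential
# MLE for the rate is 1 / (sample mean of the waits).
waits = [b - a for a, b in zip(arrivals, arrivals[1:])]
rate_hat = 1.0 / (sum(waits) / len(waits))  # close to 1/7.12 for large samples
```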
NASA Technical Reports Server (NTRS)
Lindner, Bernhard Lee; Ackerman, Thomas P.; Pollack, James B.
1990-01-01
CO2 comprises 95% of the composition of the Martian atmosphere. However, the Martian atmosphere also has a high aerosol content. Dust particle sizes vary from less than 0.2 μm to greater than 3.0 μm. CO2 is an active absorber and emitter at near-IR and IR wavelengths; the near-IR absorption bands of CO2 provide significant heating of the atmosphere, and the 15 micron band provides rapid cooling. Including both CO2 and aerosol radiative transfer simultaneously in a model is difficult. Aerosol radiative transfer requires a multiple scattering code, while CO2 radiative transfer must deal with complex wavelength structure. As an alternative to the pure-atmosphere treatment in most models, which causes inaccuracies, a treatment was developed called the exponential sum or k-distribution approximation. The chief advantage of the exponential sum approach is that the integration over k space of f(k) can be computed more quickly than the integration of k_ν over frequency. The exponential sum approach is superior to the photon path distribution and emissivity techniques for dusty conditions. This study was the first application of the exponential sum approach to Martian conditions.
Aston, Elizabeth; Channon, Alastair; Day, Charles; Knight, Christopher G.
2013-01-01
Understanding the effect of population size on the key parameters of evolution is particularly important for populations nearing extinction. There are evolutionary pressures to evolve sequences that are both fit and robust. At high mutation rates, individuals with greater mutational robustness can outcompete those with higher fitness. This is survival-of-the-flattest, and has been observed in digital organisms, theoretically, in simulated RNA evolution, and in RNA viruses. We introduce an algorithmic method capable of determining the relationship between population size, the critical mutation rate at which individuals with greater robustness to mutation are favoured over individuals with greater fitness, and the error threshold. Verification for this method is provided against analytical models for the error threshold. We show that the critical mutation rate for increasing haploid population sizes can be approximated by an exponential function, with much lower mutation rates tolerated by small populations. This is in contrast to previous studies which identified that critical mutation rate was independent of population size. The algorithm is extended to diploid populations in a system modelled on the biological process of meiosis. The results confirm that the relationship remains exponential, but show that both the critical mutation rate and error threshold are lower for diploids, rather than higher as might have been expected. Analyzing the transition from critical mutation rate to error threshold provides an improved definition of critical mutation rate. Natural populations with their numbers in decline can be expected to lose genetic material in line with the exponential model, accelerating and potentially irreversibly advancing their decline, and this could potentially affect extinction, recovery and population management strategy. 
The effect of population size is particularly strong in small populations with 100 individuals or less; the exponential model has significant potential in aiding population management to prevent local (and global) extinction events. PMID:24386200
Moxnes, John F; de Blasio, Birgitte Freiesleben; Leegaard, Truls Michael; Moen, Aina E Fossum
2013-01-01
Accurate estimates of the incidence and prevalence of methicillin-resistant Staphylococcus aureus (MRSA) infections are needed to inform public health policies. In Norway, where both MRSA infection and carriage are notifiable conditions, the reported incidence of MRSA is slowly increasing. However, the proportion of MRSA in relation to all S. aureus isolates is unknown, making it difficult to determine if the rising incidence is real or an artifact of an increasing number of tests performed. To characterize recent trends in MRSA infections and obtain a more complete understanding of the MRSA level in Norway. All reported cases of MRSA and methicillin-sensitive S. aureus (MSSA) from Oslo County (1997-2010) and Health Region East (2002-2008), representing approximately 11% and 36% of the Norwegian population, respectively, were analyzed using a stochastic time series analysis to characterize trends. In Oslo County, the proportion of methicillin-resistant cases increased from 0.73% to 3.78% during the study period and was well modeled by an exponential growth with a doubling constant of 5.7 years (95% CI 4.5-7.4 years). In Health Region East, the proportion of MRSA cases increased from 0.4% to 2.1% from 2002 to 2008, with a best-fitting linear increase of 0.26% (95% CI 0.21-0.30%) per year. In both cases, the choice of a linear or exponential model for the time trend produced only marginally different model fits. We found no significant changes due to revised national MRSA guidelines published in June 2009. Significant variations in the increasing time trend were observed in the five hospitals within the region. The yearly reported incidence of MSSA was relatively stable in both study areas although we found seasonal patterns with peaks in August. The level of MRSA is increasing in Norway, and the proportion of methicillin resistance among all S. aureus isolates is higher than the reported proportion of MRSA in invasive infections.
Kennedy, Kristen M.; Rodrigue, Karen M.; Lindenberger, Ulman; Raz, Naftali
2010-01-01
The effects of advanced age and cognitive resources on the course of skill acquisition are unclear, and discrepancies among studies may reflect limitations of data analytic approaches. We applied a multilevel negative exponential model to skill acquisition data from 80 trials (four 20-trial blocks) of a pursuit rotor task administered to healthy adults (19–80 years old). The analyses conducted at the single-trial level indicated that the negative exponential function described performance well. Learning parameters correlated with measures of task-relevant cognitive resources on all blocks except the last and with age on all blocks after the second. Thus, age differences in motor skill acquisition may evolve in 2 phases: In the first, age differences are collinear with individual differences in task-relevant cognitive resources; in the second, age differences orthogonal to these resources emerge. PMID:20047985
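The negative exponential skill-acquisition function used above can be illustrated with a toy fit. The functional form p(t) = asymptote − gain·exp(−rate·t) matches the model class named in the abstract, but the parameter values and the simple grid-search fit below are assumptions for demonstration (the paper estimates a multilevel model instead):

```python
import math

# Negative exponential learning curve: performance rises from
# (asymptote - gain) toward the asymptote at a given learning rate.
asymptote, gain, rate = 10.0, 6.0, 0.08
trials = list(range(1, 81))                      # 80 pursuit-rotor trials
perf = [asymptote - gain * math.exp(-rate * t) for t in trials]

def sse(c):
    """Sum of squared errors for a candidate learning rate c."""
    return sum((p - (asymptote - gain * math.exp(-c * t))) ** 2
               for p, t in zip(perf, trials))

# Recover the learning rate by grid search over candidate rates.
best_rate = min((c / 1000.0 for c in range(1, 301)), key=sse)
```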
Dramatic changes in muscle contractile and structural properties after 2 botulinum toxin injections.
Minamoto, Viviane B; Suzuki, Kentaro P; Bremner, Shannon N; Lieber, Richard L; Ward, Samuel R
2015-10-01
Botulinum toxin is frequently administered serially to maintain therapeutic muscle paralysis, but the effect of repeated doses on muscle function are largely unknown. This study characterized the muscle response to 2 onabotulinum toxin (BoNT) injections separated by 3 months. Animal subjects received a single toxin injection (n = 8), 2 BoNT injections separated by 3 months (n = 14), or 1 BoNT and 1 saline injection separated by 3 months (n = 8). The functional effect of 2 serial injections was exponentially greater than the effect of a single injection. While both groups treated with a single BoNT injection had decreased torque in the injected leg by approximately 50% relative to contralateral legs, the double BoNT injected group had decreased torque by over 95% relative to the preinjection level. Both single and double BoNT injections produced clear signs of fiber-type grouping. These experiments demonstrate a disproportionately greater effect of repeated BoNT injections. © 2015 Wiley Periodicals, Inc.
Using phenomenological models for forecasting the 2015 Ebola challenge.
Pell, Bruce; Kuang, Yang; Viboud, Cecile; Chowell, Gerardo
2018-03-01
The rising number of novel pathogens threatening the human population has motivated the application of mathematical modeling for forecasting the trajectory and size of epidemics. We summarize the real-time forecasting results of the logistic equation during the 2015 Ebola challenge focused on predicting synthetic data derived from a detailed individual-based model of Ebola transmission dynamics and control. We also carry out a post-challenge comparison of two simple phenomenological models. In particular, we systematically compare the logistic growth model and a recently introduced generalized Richards model (GRM) that captures a range of early epidemic growth profiles ranging from sub-exponential to exponential growth. Specifically, we assess the performance of each model for estimating the reproduction number, generate short-term forecasts of the epidemic trajectory, and predict the final epidemic size. During the challenge the logistic equation consistently underestimated the final epidemic size, peak timing and the number of cases at peak timing with an average mean absolute percentage error (MAPE) of 0.49, 0.36 and 0.40, respectively. Post-challenge, the GRM which has the flexibility to reproduce a range of epidemic growth profiles ranging from early sub-exponential to exponential growth dynamics outperformed the logistic growth model in ascertaining the final epidemic size as more incidence data was made available, while the logistic model underestimated the final epidemic even with an increasing amount of data of the evolving epidemic. Incidence forecasts provided by the generalized Richards model performed better across all scenarios and time points than the logistic growth model with mean RMS decreasing from 78.00 (logistic) to 60.80 (GRM). 
Both models provided reasonable predictions of the effective reproduction number, but the GRM slightly outperformed the logistic growth model with a MAPE of 0.08 compared to 0.10, averaged across all scenarios and time points. Our findings further support the consideration of transmission models that incorporate flexible early epidemic growth profiles in the forecasting toolkit. Such models are particularly useful for quickly evaluating a developing infectious disease outbreak using only case incidence time series of the early phase of an infectious disease outbreak. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
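The generalized Richards model (GRM) compared above has the form dC/dt = r·C^p·(1 − (C/K)^a), where the deceleration-of-growth parameter p < 1 produces the sub-exponential early growth the abstract highlights (p = 1 recovers Richards/logistic-type exponential onset). A minimal Euler-integration sketch with illustrative parameter values, not the challenge's fitted values:

```python
# GRM parameters: growth rate r, early-growth exponent p (< 1 here,
# i.e. sub-exponential onset), shape a, final size K. All illustrative.
r, p, a, K = 0.2, 0.8, 1.0, 10000.0

C, dt = 5.0, 0.1        # initial cumulative cases, Euler time step
traj = [C]
for _ in range(2000):   # integrate over 200 time units
    C += dt * r * C ** p * (1.0 - (C / K) ** a)
    traj.append(C)

final_size = traj[-1]   # approaches K as the epidemic saturates
```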
NASA Astrophysics Data System (ADS)
Jamaluddin, Fadhilah; Rahim, Rahela Abdul
2015-12-01
Markov chains have been used since 1913 to study the flow of data across consecutive years and for forecasting. A key requirement in Markov chain modeling is obtaining an accurate transition probability matrix (TPM). However, obtaining a suitable TPM is difficult, especially in long-term modeling, due to the unavailability of data. This paper aims to enhance the classical Markov chain approach by introducing an exponential smoothing technique for developing an appropriate TPM.
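One way to realize the idea described above is to estimate a TPM from each period's observed state sequence and blend successive estimates with exponential smoothing, P_t = λ·F_t + (1 − λ)·P_{t−1}. The blending rule and the toy state sequences below are assumptions for illustration; the paper's exact formulation may differ:

```python
def tpm_from_sequence(seq, n_states):
    """Row-normalized transition counts from an observed state sequence."""
    counts = [[0.0] * n_states for _ in range(n_states)]
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1.0
    for row in counts:
        total = sum(row)
        if total > 0:
            for j in range(len(row)):
                row[j] /= total
    return counts

def smooth_tpm(prev, current, lam=0.3):
    """Exponentially smooth two TPM estimates, elementwise."""
    return [[lam * c + (1.0 - lam) * p for p, c in zip(pr, cr)]
            for pr, cr in zip(prev, current)]

# Hypothetical yearly state sequences (3 states).
year1 = [0, 0, 1, 2, 1, 0, 1, 1, 2, 2, 0]
year2 = [1, 2, 2, 0, 0, 1, 2, 1, 0, 0, 1]
P = tpm_from_sequence(year1, 3)
P = smooth_tpm(P, tpm_from_sequence(year2, 3))
```

Because each row of both inputs is a probability vector, the smoothed rows remain valid probability distributions.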
Size-dependent standard deviation for growth rates: Empirical results and theoretical modeling
NASA Astrophysics Data System (ADS)
Podobnik, Boris; Horvatic, Davor; Pammolli, Fabio; Wang, Fengzhong; Stanley, H. Eugene; Grosse, I.
2008-05-01
We study annual logarithmic growth rates R of various economic variables such as exports, imports, and foreign debt. For each of these variables we find that the distributions of R can be approximated by double exponential (Laplace) distributions in the central parts and power-law distributions in the tails. For each of these variables we further find a power-law dependence of the standard deviation σ(R) on the average size of the economic variable with a scaling exponent surprisingly close to that found for the gross domestic product (GDP) [Phys. Rev. Lett. 81, 3275 (1998)]. By analyzing annual logarithmic growth rates R of wages of 161 different occupations, we find a power-law dependence of the standard deviation σ(R) on the average value of the wages with a scaling exponent β≈0.14 close to those found for the growth of exports, imports, debt, and the growth of the GDP. In contrast to these findings, we observe for payroll data collected from 50 states of the USA that the standard deviation σ(R) of the annual logarithmic growth rate R increases monotonically with the average value of payroll. However, also in this case we observe a power-law dependence of σ(R) on the average payroll with a scaling exponent β≈-0.08 . Based on these observations we propose a stochastic process for multiple cross-correlated variables where for each variable (i) the distribution of logarithmic growth rates decays exponentially in the central part, (ii) the distribution of the logarithmic growth rate decays algebraically in the far tails, and (iii) the standard deviation of the logarithmic growth rate depends algebraically on the average size of the stochastic variable.
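The power-law dependence σ(R) = c·S^(−β) reported above can be recovered from data by a straight-line fit in log-log space, since log σ = log c − β·log S. A small sketch on synthetic data generated exactly from the power law, using the wage-growth exponent β ≈ 0.14 quoted above:

```python
import math

# Synthetic (size, standard deviation) pairs following sigma = c * S^(-beta).
beta_true, c = 0.14, 2.0
sizes = [10.0 ** k for k in range(2, 10)]       # average sizes S
sigmas = [c * s ** (-beta_true) for s in sizes]

# Ordinary least squares on (log S, log sigma): the slope equals -beta.
xs = [math.log(s) for s in sizes]
ys = [math.log(sg) for sg in sigmas]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
beta_hat = -slope
```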
Britt, Keith A; Galvin, Jeffrey; Gammell, Patrick; Nti-Gyabaah, Joseph; Boras, George; Kolwyck, David; Ramirez, José G; Presente, Esther; Naugle, Gregory
2014-01-01
Simethicone emulsion is used to regulate foaming in cell culture operations in biopharmaceutical processes. It is also a potential source of endotoxin contamination. The inactivation of endotoxins in dilute simethicone emulsions was assessed as a function of time at different steam temperatures using a Limulus amebocyte lysate kinetic chromogenic technique. Endotoxin inactivation from steam-heat treatment was fit to a four-parameter double exponential decay model, which indicated that endotoxin inactivation was biphasic, consisting of fast and slow regimes. In the fast regime, temperature-related effects were dominant. Transitioning into the slow regime, the observed temperature dependence diminished, and concentration-related effects became increasingly significant. The change in the Gibbs free energy moving through the transition state indicated that a large energy barrier must be overcome for endotoxin inactivation to occur. The corresponding Arrhenius pre-exponential factor was >10¹² s⁻¹, suggesting that endotoxins in aqueous solution exist as aggregates. The disorder associated with the endotoxin inactivation reaction pathway was assessed via the change in entropy moving through the transition state. This quantity was positive, indicating that endotoxin inactivation may result from hydrolysis of individual endotoxin molecules, which perturbs the conformation of endotoxin aggregates, thereby modulating the biological activity observed. Steam-heat treatment decreased endotoxin levels by a 1-2 log reduction value (LRV), which may be practically relevant depending on incoming raw material endotoxin levels. Antifoam efficiency and cell culture performance were negligibly impacted following steam-heat treatment. The results from this study show that steam-heat treatment is a viable endotoxin control strategy that can be implemented to support large-scale biopharmaceutical manufacturing. © 2014 American Institute of Chemical Engineers.
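As an illustration of the model class (not the fitted parameters, which the abstract does not give), a four-parameter double exponential decay with invented fast/slow rate constants yields a log reduction value (LRV) in the reported 1-2 range:

```python
import math

# Sketch of a four-parameter double-exponential decay describing biphasic
# endotoxin inactivation. All parameter values here are invented for
# illustration, not taken from the paper.
def endotoxin(t, a_fast, k_fast, a_slow, k_slow):
    """Endotoxin activity at time t (min): fast + slow first-order regimes."""
    return a_fast * math.exp(-k_fast * t) + a_slow * math.exp(-k_slow * t)

E0 = endotoxin(0.0, 0.9, 0.5, 0.1, 0.01)    # initial activity (normalized)
E60 = endotoxin(60.0, 0.9, 0.5, 0.1, 0.01)  # after 60 min of steam heat

# Log reduction value: LRV = log10(initial / remaining).
lrv = math.log10(E0 / E60)
print(round(lrv, 2))  # → 1.26
```

The fast term dominates early (temperature-driven regime); the slow term controls the tail, which is why the overall reduction saturates at a modest 1-2 LRV.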
NASA Technical Reports Server (NTRS)
Cogley, A. C.; Borucki, W. J.
1976-01-01
When incorporating formulations of instantaneous solar heating or photolytic rates as functions of altitude and sun angle into long range forecasting models, it may be desirable to replace the time integrals by daily average rates that are simple functions of latitude and season. This replacement is accomplished by approximating the integral over the solar day by a pure exponential. This gives a daily average rate as a multiplication factor times the instantaneous rate evaluated at an appropriate sun angle. The accuracy of the exponential approximation is investigated by a sample calculation using an instantaneous ozone heating formulation available in the literature.
Count distribution for mixture of two exponentials as renewal process duration with applications
NASA Astrophysics Data System (ADS)
Low, Yeh Ching; Ong, Seng Huat
2016-06-01
A count distribution is presented by considering a renewal process where the distribution of the duration is a finite mixture of exponential distributions. This distribution is able to model overdispersion, a feature often found in observed count data. The computation of the probabilities and of the renewal function (expected number of renewals) is examined. Parameter estimation by the method of maximum likelihood is considered, with applications of the count distribution to real frequency count data exhibiting overdispersion. It is shown that the mixture of exponentials count distribution fits overdispersed data better than the Poisson process and serves as an alternative to the gamma count distribution.
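A quick simulation sketch of the idea, with illustrative parameters: when the renewal duration is a two-component (hyperexponential) mixture, counts over a fixed window have variance exceeding the mean, unlike a Poisson process:

```python
import random

random.seed(42)

# Sketch: renewal process whose inter-event duration is a two-component
# mixture of exponentials. Rates and mixing weight are illustrative.
def mixture_duration(p, lam1, lam2):
    """Draw one duration from p*Exp(lam1) + (1-p)*Exp(lam2)."""
    lam = lam1 if random.random() < p else lam2
    return random.expovariate(lam)

def count_in_window(T, p, lam1, lam2):
    """Number of renewals completed in [0, T]."""
    t, n = 0.0, 0
    while True:
        t += mixture_duration(p, lam1, lam2)
        if t > T:
            return n
        n += 1

counts = [count_in_window(10.0, 0.5, 5.0, 0.5) for _ in range(5000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
print(var > mean)  # overdispersion: True
```

For this mixture the squared coefficient of variation of the duration is about 2.3, so the variance-to-mean ratio of the counts settles well above the Poisson value of 1.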
Khan, Junaid Ahmad; Mustafa, M.; Hayat, T.; Sheikholeslami, M.; Alsaedi, A.
2015-01-01
This work deals with the three-dimensional flow of a nanofluid over a bi-directional exponentially stretching sheet. The effects of Brownian motion and thermophoretic diffusion of nanoparticles are considered in the mathematical model. The temperature and nanoparticle volume fraction at the sheet are also distributed exponentially. Local similarity solutions are obtained by an implicit finite difference scheme known as the Keller-box method. The results are compared with existing studies in some limiting cases and found to be in good agreement. The results reveal the existence of interesting Sparrow-Gregg-type hills in the temperature distribution for some ranges of parameter values. PMID:25785857
Déjardin, P
2013-08-30
The flow conditions in normal mode asymmetric flow field-flow fractionation are determined to approach the high retention limit with the requirement d≪l≪w, where d is the particle diameter, l the characteristic length of the sample exponential distribution and w the channel height. The optimal entrance velocity is determined from the solute characteristics, the channel geometry (exponential to rectangular) and the membrane properties, according to a model providing the velocity fields all over the cell length. In addition, a method is proposed for in situ determination of the channel height. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kamimura, Atsushi; Kaneko, Kunihiko
2018-03-01
Explanation of exponential growth in self-reproduction is an important step toward elucidation of the origins of life because optimization of the growth potential across rounds of selection is necessary for Darwinian evolution. To produce another copy with approximately the same composition, the exponential growth rates for all components have to be equal. How such balanced growth is achieved, however, is not a trivial question, because this kind of growth requires orchestrated replication of the components in stochastic and nonlinear catalytic reactions. By considering a mutually catalyzing reaction in two- and three-dimensional lattices, as represented by a cellular automaton model, we show that self-reproduction with exponential growth is possible only when the replication and degradation of one molecular species is much slower than those of the others, i.e., when there is a minority molecule. Here, the synergetic effect of molecular discreteness and crowding is necessary to produce the exponential growth. Otherwise, the growth curves show superexponential growth because of nonlinearity of the catalytic reactions or subexponential growth due to replication inhibition by overcrowding of molecules. Our study emphasizes that the minority molecular species in a catalytic reaction network is necessary for exponential growth at the primitive stage of life.
Prediction of Unsteady Aerodynamic Coefficients at High Angles of Attack
NASA Technical Reports Server (NTRS)
Pamadi, Bandu N.; Murphy, Patrick C.; Klein, Vladislav; Brandon, Jay M.
2001-01-01
The nonlinear indicial response method is used to model the unsteady aerodynamic coefficients in the low-speed longitudinal oscillatory wind tunnel test data of the 0.1-scale model of the F-16XL aircraft. Exponential functions are used to approximate the deficiency function in the indicial response. Using one set of oscillatory wind tunnel data and a parameter identification method, the unknown parameters in the exponential functions are estimated. A genetic algorithm is used as the least-squares minimization algorithm. The assumed model structures and parameter estimates are validated by comparing the predictions with other sets of available oscillatory wind tunnel test data.
NASA Astrophysics Data System (ADS)
Grobbelaar-Van Dalsen, Marié
2015-08-01
This article is a continuation of our earlier work in Grobbelaar-Van Dalsen (Z Angew Math Phys 63:1047-1065, 2012) on the polynomial stabilization of a linear model for the magnetoelastic interactions in a two-dimensional electrically conducting Mindlin-Timoshenko plate. We introduce nonlinear damping that is effective only in a small portion of the interior of the plate. It turns out that the model is uniformly exponentially stable when the function representing the locally distributed damping behaves linearly near the origin. However, the use of Mindlin-Timoshenko plate theory in the model enforces a restriction on the region occupied by the plate.
NASA Technical Reports Server (NTRS)
1971-01-01
A study of techniques for the prediction of crime in the City of Los Angeles was conducted. Alternative approaches to crime prediction (causal, quasicausal, associative, extrapolative, and pattern-recognition models) are discussed, as is the environment within which predictions were desired for the immediate application. The decision was made to use time series (extrapolative) models to produce the desired predictions. The characteristics of the data and the procedure used to choose equations for the extrapolations are discussed. The usefulness of different functional forms (constant, quadratic, and exponential forms) and of different parameter estimation techniques (multiple regression and multiple exponential smoothing) are compared, and the quality of the resultant predictions is assessed.
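One of the extrapolative techniques mentioned, exponential smoothing, can be sketched in a few lines; the series and smoothing constant below are illustrative, not the Los Angeles crime data:

```python
# Sketch of single exponential smoothing, one of the extrapolative
# (time-series) techniques compared in the study. The series and
# alpha are invented for illustration.
def exponential_smoothing(series, alpha):
    """Return the smoothed level after each observation."""
    level = series[0]
    levels = [level]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
        levels.append(level)
    return levels

series = [100, 104, 101, 110, 108, 115]
levels = exponential_smoothing(series, alpha=0.5)
forecast = levels[-1]          # flat one-step-ahead forecast
print(round(forecast, 2))      # → 110.94
```

Multiple exponential smoothing, as used in the study, extends this recursion with trend terms so that quadratic and exponential functional forms can also be tracked.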
Estimating piecewise exponential frailty model with changing prior for baseline hazard function
NASA Astrophysics Data System (ADS)
Thamrin, Sri Astuti; Lawi, Armin
2016-02-01
Piecewise exponential models provide a very flexible framework for modelling univariate survival data. They can be used to estimate the effects of different covariates on survival. Although in a strict sense it is a parametric model, a piecewise exponential hazard can approximate any shape of a parametric baseline hazard. In the parametric baseline hazard, the hazard function for each individual may depend on a set of risk factors or explanatory variables. However, such variables usually do not explain all of the variation in survival, even when they are known and measurable, so the remaining variation becomes interesting to consider. This unknown and unobservable risk factor of the hazard function is often termed the individual's heterogeneity or frailty. This paper analyses the effects of unobserved population heterogeneity in patients' survival times. The issue of model choice through variable selection is also considered. A sensitivity analysis is conducted to assess the influence of the prior for each parameter. We used the Markov chain Monte Carlo method to compute the Bayesian estimator on kidney infection data. The results obtained show that sex and frailty are substantially associated with survival in this study and that the models are quite sensitive to the choice of two different priors.
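The piecewise exponential hazard itself is easy to sketch: with a constant hazard λ_j on each interval, the survival function is the exponential of the negative cumulative hazard. The cut points and hazard values below are illustrative:

```python
import math

# Sketch of a piecewise exponential hazard: constant hazard lam_j on each
# interval [t_{j-1}, t_j); survival is exp(-cumulative hazard).
# Cut points (years) and hazards are illustrative.
cuts = [0.0, 1.0, 3.0, 6.0]            # interval boundaries
hazards = [0.10, 0.05, 0.02, 0.01]     # lambda_j per interval (last open-ended)

def survival(t):
    H = 0.0                            # cumulative hazard up to time t
    for j, lam in enumerate(hazards):
        lo = cuts[j]
        hi = cuts[j + 1] if j + 1 < len(cuts) else float("inf")
        H += lam * max(0.0, min(t, hi) - lo)
    return math.exp(-H)

print(round(survival(4.0), 4))         # → 0.8025
```

A frailty model multiplies each individual's hazard by an unobserved random factor, so individual survival becomes survival(t) raised to that factor.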
The Impact of Accelerating Faster than Exponential Population Growth on Genetic Variation
Reppell, Mark; Boehnke, Michael; Zöllner, Sebastian
2014-01-01
Current human sequencing projects observe an abundance of extremely rare genetic variation, suggesting recent acceleration of population growth. To better understand the impact of such accelerating growth on the quantity and nature of genetic variation, we present a new class of models capable of incorporating faster than exponential growth in a coalescent framework. Our work shows that such accelerated growth affects only the population size in the recent past and thus large samples are required to detect the models’ effects on patterns of variation. When we compare models with fixed initial growth rate, models with accelerating growth achieve very large current population sizes and large samples from these populations contain more variation than samples from populations with constant growth. This increase is driven almost entirely by an increase in singleton variation. Moreover, linkage disequilibrium decays faster in populations with accelerating growth. When we instead condition on current population size, models with accelerating growth result in less overall variation and slower linkage disequilibrium decay compared to models with exponential growth. We also find that pairwise linkage disequilibrium of very rare variants contains information about growth rates in the recent past. Finally, we demonstrate that models of accelerating growth may substantially change estimates of present-day effective population sizes and growth times. PMID:24381333
NASA Astrophysics Data System (ADS)
Yao, Weiping; Yang, Chaohui; Jing, Jiliang
2018-05-01
From the viewpoint of holography, we study the behaviors of the entanglement entropy in the insulator/superconductor transition with exponential nonlinear electrodynamics (ENE). We find that the entanglement entropy is a good probe of the properties of the holographic phase transition. In both the half space and the belt space, the non-monotonic behavior of the entanglement entropy in the superconducting phase versus the chemical potential is general in this model. Furthermore, the behavior of the entanglement entropy for the strip geometry shows that the confinement/deconfinement phase transition appears in both the insulator and superconductor phases. The critical width of the confinement/deconfinement phase transition depends on the chemical potential and the exponential coupling term. More interestingly, the behaviors of the entanglement entropy in the corresponding insulator phases are independent of the exponential coupling factor but depend on the width of the subsystem A.
Simplified formula for mean cycle-slip time of phase-locked loops with steady-state phase error.
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1972-01-01
Previous work shows that the mean time from lock to a slipped cycle of a phase-locked loop is given by a certain double integral. Accurate numerical evaluation of this formula for the second-order loop is extremely vexing because the difference between exponentially large quantities is involved. The presented article demonstrates a method in which a much-reduced precision program can be used to obtain the mean first-cycle slip time for a loop of arbitrary degree tracking at a specified SNR and steady-state phase error. It also presents a simple approximate formula that is asymptotically tight at higher loop SNR.
Mazaheri, Davood; Shojaosadati, Seyed Abbas; Zamir, Seyed Morteza; Mousavi, Seyyed Mohammad
2018-04-21
In this work, mathematical modeling of ethanol production in solid-state fermentation (SSF) was carried out based on the variation in the dry weight of the solid medium. This method was previously used for mathematical modeling of enzyme production; however, the model had to be modified to predict the production of a volatile compound like ethanol. The experimental results of bioethanol production from a mixture of carob pods and wheat bran by Zymomonas mobilis in SSF were used for model validation. Exponential and logistic kinetic models were used to model the growth of the microorganism. In both cases, the model predictions matched well with the experimental results during the exponential growth phase, indicating the suitability of the solid-medium weight-variation method for modeling the formation of a volatile product in solid-state fermentation. In addition, the logistic model gave better predictions.
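The two growth kinetics compared in the abstract can be sketched as follows; the parameter values are illustrative, not those fitted to the carob-pod/wheat-bran experiments:

```python
import math

# Sketch of the two growth kinetics compared in the paper: unrestricted
# exponential growth and logistic growth saturating at Xmax.
# All parameter values are invented for illustration.
def exponential_growth(X0, mu, t):
    return X0 * math.exp(mu * t)

def logistic_growth(X0, Xmax, mu, t):
    return Xmax / (1 + (Xmax / X0 - 1) * math.exp(-mu * t))

X0, Xmax, mu = 0.1, 10.0, 0.4          # g biomass, g max biomass, 1/h
# Early on the two models nearly agree; later the logistic curve saturates.
print(round(exponential_growth(X0, mu, 2), 3),
      round(logistic_growth(X0, Xmax, mu, 2), 3),
      round(logistic_growth(X0, Xmax, mu, 40), 3))
```

The saturation term is what lets the logistic model track the late fermentation, which is consistent with it giving the better predictions.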
Feasibility of protein turnover studies in prototroph Saccharomyces cerevisiae strains.
Martin-Perez, Miguel; Villén, Judit
2015-04-07
Quantitative proteomics studies of yeast that use metabolic labeling with amino acids rely on auxotrophic mutations of one or more genes on the amino acid biosynthesis pathways. These mutations affect yeast metabolism and preclude the study of some biological processes. Overcoming this limitation, it has recently been shown that proteins in a prototrophic yeast strain can also be metabolically labeled with heavy amino acids. However, the temporal profiles of label incorporation under the different phases of the prototroph's growth have not been examined. Labeling trajectories are important in the study of protein turnover and dynamics, in which label incorporation into proteins is monitored across many time points. Here we monitored protein labeling trajectories for 48 h after a pulse with heavy lysine in a prototrophic yeast strain and compared them with those of a lysine-auxotrophic yeast. Labeling was successful in the prototrophic yeast during the exponential growth phase but not in the stationary phase. Furthermore, we were able to determine the half-lives of more than 1700 proteins during the exponential growth phase with high accuracy and reproducibility. We found a median half-life of 2 h in both strains, which corresponds with the cellular doubling time. Nucleolar and ribosomal proteins showed short half-lives, whereas mitochondrial proteins and other energy production enzymes presented longer half-lives. Except for some proteins involved in lysine biosynthesis, we observed a high correlation in protein half-lives between the prototrophic and auxotrophic strains. Overall, our results demonstrate the feasibility of using prototrophs for proteomic turnover studies and provide a reliable data set of protein half-lives in exponentially growing yeast.
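The half-life arithmetic underlying such pulse-labeling studies is the standard first-order turnover relation; the sketch below simply encodes it, with a rate chosen to reproduce the reported ~2 h median half-life (an assumption for illustration):

```python
import math

# Sketch of first-order turnover as used in pulse-labeling studies:
# the heavy-labeled fraction of a protein t hours after the pulse is
# 1 - exp(-k*t), and the half-life is ln(2)/k.
def labeled_fraction(k, t):
    return 1.0 - math.exp(-k * t)

def half_life(k):
    return math.log(2) / k

k = math.log(2) / 2.0                  # rate giving the reported ~2 h median
print(round(half_life(k), 1), round(labeled_fraction(k, 2.0), 2))  # → 2.0 0.5
```

Fitting this curve to label incorporation measured at many time points yields k, and hence the per-protein half-lives reported in the study.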
Exponential integration algorithms applied to viscoplasticity
NASA Technical Reports Server (NTRS)
Freed, Alan D.; Walker, Kevin P.
1991-01-01
Four linear exponential integration algorithms (two implicit, one explicit, and one predictor/corrector) are applied to a viscoplastic model to assess their capabilities. Viscoplasticity comprises a system of coupled, nonlinear, stiff, first-order ordinary differential equations which are a challenge to integrate by any means. Two of the algorithms (the predictor/corrector and one of the implicits) give outstanding results, even for very large time steps.
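The advantage of exponential integrators on stiff problems can be seen already on the scalar test equation y' = -λy (a toy stand-in for the viscoplastic system, not the paper's algorithms): the exponential update is exact for any step size, while explicit Euler blows up once λh > 2:

```python
import math

# Sketch of a first-order exponential integrator on the stiff test problem
# y' = -lam*y: the update y_{n+1} = exp(-lam*h)*y_n is exact for any step
# size, whereas explicit Euler diverges once lam*h > 2.
lam, h, steps = 1000.0, 0.01, 10       # stiff case: lam*h = 10
y_exp, y_euler = 1.0, 1.0
for _ in range(steps):
    y_exp *= math.exp(-lam * h)        # exponential (exact) update
    y_euler *= (1.0 - lam * h)         # explicit Euler update

print(abs(y_exp) < 1.0, abs(y_euler) > 1.0)  # → True True
```

This is why the exponential schemes in the paper remain accurate "even for very large time steps" on the stiff viscoplastic equations.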
NASA Astrophysics Data System (ADS)
Chowdhary, Girish; Mühlegg, Maximilian; Johnson, Eric
2014-08-01
In model reference adaptive control (MRAC) the modelling uncertainty is often assumed to be parameterised with time-invariant unknown ideal parameters. The convergence of parameters of the adaptive element to these ideal parameters is beneficial, as it guarantees exponential stability, and makes an online learned model of the system available. Most MRAC methods, however, require persistent excitation (PE) of the states to guarantee that the adaptive parameters converge to the ideal values. Enforcing PE may be resource intensive and often infeasible in practice. This paper presents theoretical analysis and illustrative examples of an adaptive control method that leverages the increasing ability to record and process data online by using specifically selected and online recorded data concurrently with instantaneous data for adaptation. It is shown that when the system uncertainty can be modelled as a combination of known nonlinear bases, simultaneous exponential tracking and parameter error convergence can be guaranteed if the system states are exciting over finite intervals such that rich data can be recorded online; PE is not required. Furthermore, the rate of convergence is directly proportional to the minimum singular value of the matrix containing online recorded data. Consequently, an online algorithm to record and forget data is presented and its effects on the resulting switched closed-loop dynamics are analysed. It is also shown that when radial basis function neural networks (NNs) are used as adaptive elements, the method guarantees exponential convergence of the NN parameters to a compact neighbourhood of their ideal values without requiring PE. Flight test results on a fixed-wing unmanned aerial vehicle demonstrate the effectiveness of the method.
Strategies to control a common carp population by pulsed commercial harvest
Colvin, Michael E.; Pierce, Clay; Stewart, Timothy W.; Grummer, Scott E.
2012-01-01
Commercial fisheries are commonly used to manage nuisance fishes in freshwater systems, but such efforts are often unsuccessful. Strategies for successfully controlling a nuisance population of common carp Cyprinus carpio by pulsed commercial harvest were evaluated with a combination of (1) field sampling, (2) population estimation and CPUE indexing, and (3) simulation using an exponential semidiscrete biomass dynamics model (SDBDM). The range of annual fishing mortalities (F) that resulted in successful control (F = 0.244–0.265) was narrow. Common carp biomass dynamics were sensitive to unintentional underharvest due to high rates of surplus production and a biomass doubling time of 2.7 years. Simulations indicated that biomanipulation never achieved successful control unless supplemental fishing mortality was imposed. Harvest of a majority of annual production was required to achieve successful control, as indicated by the ecotrophic coefficient (EC). Readily available biomass data and tools such as SDBDMs and ECs can be used in an adaptive management framework to successfully control common carp and other nuisance fishes by pulsed commercial fishing.
Twofold symmetries of the pure gravity action
Cheung, Clifford; Remmen, Grant N.
2017-01-25
Here, we recast the action of pure gravity into a form that is invariant under a twofold Lorentz symmetry. To derive this representation, we construct a general parameterization of all theories equivalent to the Einstein-Hilbert action up to a local field redefinition and gauge fixing. We then exploit this freedom to eliminate all interactions except those exhibiting two sets of independently contracted Lorentz indices. The resulting action is local, remarkably simple, and naturally expressed in a field basis analogous to the exponential parameterization of the nonlinear sigma model. The space of twofold Lorentz invariant field redefinitions then generates an infinite class of equivalent representations. By construction, all off-shell Feynman diagrams are twofold Lorentz invariant while all on-shell tree amplitudes are automatically twofold gauge invariant. We extend our results to curved spacetime and calculate the analogue of the Einstein equations. Finally, while these twofold invariances are hidden in the canonical approach of graviton perturbation theory, they are naturally expected given the double copy relations for scattering amplitudes in gauge theory and gravity.
Rate of de novo mutations and the importance of father's age to disease risk.
Kong, Augustine; Frigge, Michael L; Masson, Gisli; Besenbacher, Soren; Sulem, Patrick; Magnusson, Gisli; Gudjonsson, Sigurjon A; Sigurdsson, Asgeir; Jonasdottir, Aslaug; Jonasdottir, Adalbjorg; Wong, Wendy S W; Sigurdsson, Gunnar; Walters, G Bragi; Steinberg, Stacy; Helgason, Hannes; Thorleifsson, Gudmar; Gudbjartsson, Daniel F; Helgason, Agnar; Magnusson, Olafur Th; Thorsteinsdottir, Unnur; Stefansson, Kari
2012-08-23
Mutations generate sequence diversity and provide a substrate for selection. The rate of de novo mutations is therefore of major importance to evolution. Here we conduct a study of genome-wide mutation rates by sequencing the entire genomes of 78 Icelandic parent-offspring trios at high coverage. We show that in our samples, with an average father's age of 29.7, the average de novo mutation rate is 1.20 × 10⁻⁸ per nucleotide per generation. Most notably, the diversity in mutation rate of single nucleotide polymorphisms is dominated by the age of the father at conception of the child. The effect is an increase of about two mutations per year. An exponential model estimates paternal mutations doubling every 16.5 years. After accounting for random Poisson variation, father's age is estimated to explain nearly all of the remaining variation in the de novo mutation counts. These observations shed light on the importance of the father's age on the risk of diseases such as schizophrenia and autism.
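The doubling relation reported in the abstract is straightforward to encode; the reference mutation count below is an assumed placeholder, and only the 16.5-year doubling time comes from the text:

```python
# Sketch of the exponential paternal-age model reported in the abstract:
# paternal de novo mutations double every 16.5 years of father's age.
# The reference count (ref_count) is an assumed placeholder, not a value
# taken from the paper.
DOUBLING_YEARS = 16.5

def paternal_mutations(age, ref_age=29.7, ref_count=55.0):
    """Expected paternal de novo mutations at a given father's age."""
    return ref_count * 2.0 ** ((age - ref_age) / DOUBLING_YEARS)

# Doubling check: fathers 16.5 years older transmit twice as many mutations.
ratio = paternal_mutations(46.2) / paternal_mutations(29.7)
print(round(ratio, 6))  # → 2.0
```

Near the average paternal age the slope of this curve is on the order of the reported ~2 extra mutations per year of father's age.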
Patalay, Rakesh; Talbot, Clifford; Alexandrov, Yuriy; Munro, Ian; Neil, Mark A. A.; König, Karsten; French, Paul M. W.; Chu, Anthony; Stamp, Gordon W.; Dunsby, Chris
2011-01-01
We explore the diagnostic potential of imaging endogenous fluorophores using two-photon microscopy and fluorescence lifetime imaging (FLIM) in human skin with two spectral detection channels. Freshly excised benign dysplastic nevi (DN) and malignant nodular basal cell carcinomas (nBCCs) were excited at 760 nm. The resulting fluorescence signal was binned manually on a cell-by-cell basis. This improved the reliability of fitting using a double exponential decay model and allowed the fluorescence signatures from different cell populations within the tissue to be identified and studied. We also performed a direct comparison between different diagnostic groups. A statistically significant difference between the median mean fluorescence lifetimes of 2.79 ns versus 2.52 ns (blue channel, 300-500 nm) and 2.08 ns versus 1.33 ns (green channel, 500-640 nm) was found between nBCCs and DN, respectively, using the Mann-Whitney U test (p < 0.01). Further differences in the distribution of fluorescence lifetime parameters and inter-patient variability are also discussed. PMID:22162820
Gutiérrez-Juárez, G; Vargas-Luna, M; Córdova, T; Varela, J B; Bernal-Alvarado, J J; Sosa, M
2002-08-01
A photoacoustic (PA) technique is used for studying the absorption of topically applied substances in human skin. The proposed method utilizes a double-chamber PA cell. The absorption determination was obtained through measurement of the thermal effusivity of the binary substance-skin system. The theoretical model assumes that the effective thermal effusivity of the binary system corresponds to that of a two-phase system. Experimental applications of the method employed different substances of topical application on different parts of the body of a volunteer. The method is demonstrated to be an easily used, non-invasive technique for dermatology research. The relative concentrations as a function of time of substances such as ketoconazole and sunscreen were determined by fitting a sigmoidal function to the data, while an exponential function gave the best fit for the data sets for nitrofurazone, vaseline and VapoRub. The time constants associated with the rates of absorption were found to vary in the range between 10 and 58 min, depending on the substance and the part of the body.
Otevrel, Marek; Klepárník, Karel
2002-10-01
The partial differential equation describing unsteady velocity profile of electroosmotic flow (EOF) in a cylindrical capillary filled with a nonconstant viscosity electrolyte was derived. Analytical solution, based on the general Navier-Stokes equation, was found for constant viscosity electrolytes using the separation of variables (Fourier method). For the case of a nonconstant viscosity electrolyte, the steady-state velocity profile was calculated assuming that the viscosity decreases exponentially in the direction from the wall to the capillary center. Since the respective equations with nonconstant viscosity term are not solvable in general, the method of continuous binding conditions was used to solve this problem. In this method, an arbitrary viscosity profile can be modeled. The theoretical conclusions show that the relaxation times at which an EOF approaches the steady state are too short to have an impact on a separation process in any real systems. A viscous layer at the wall affects EOF significantly, if it is thicker than the Debye length of the electric double layer. The presented description of the EOF dynamics is applicable to any microfluidic systems.
Nanoindentation creep behavior of human enamel.
He, Li-Hong; Swain, Michael V
2009-11-01
In this study, the indentation creep behavior of human enamel was investigated with a nanoindentation system and a Berkovich indenter at a force of 250 mN with a one-step loading and unloading method. A constant hold period of 900 s was incorporated into each test at the maximum load as well as at the 5 mN minimum load during unloading. The indentation creep at the maximum load and the creep recovery at the minimum load were described with a double exponential function and compared with other classic viscoelastic models (Debye/Maxwell and Kohlrausch-Williams-Watts). The indentation creep rate sensitivity, m, of human enamel was measured for the first time, with a value of approximately 0.012. Enamel displayed both viscoelastic and viscoplastic behavior similar to that of bone. These results indicate that, under the entrapment of particulates between teeth in functional loading and sliding wear conditions, enamel may deform inelastically but recover upon release of the load. This behavior may be important in explaining the excellent wear resistance, anti-fatigue, and crack-resistance abilities of natural tooth structure. (c) 2008 Wiley Periodicals, Inc.
Effects of laser radiation field on energies of hydrogen atom in plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bahar, M. K., E-mail: mussiv58@gmail.com
2015-09-15
In this study, for the first time, the Schrödinger equation with the more general exponential cosine screened Coulomb (MGECSC) potential is solved numerically in the presence of a laser radiation field within the Ehlotzky approximation using the asymptotic iteration method. The MGECSC potential includes four different potential forms, depending on the choice of parameter sets in the potential. When the laser field is applied, the total interaction potential of a hydrogen atom embedded in a plasma converts to a double-well-type potential. The plasma screening effects under the influence of the laser field, as well as the confinement effects of the laser field on the hydrogen atom in Debye and quantum plasmas, are investigated by solving the Schrödinger equation with the laser-dressed MGECSC potential. Because applying a monochromatic laser field to a hydrogen atom embedded in a Debye or quantum plasma shifts the profile of the total interaction potential, the confinement effects of the laser field change the localizations of the energy states of hydrogen in plasmas modeled by the MGECSC potential.
NASA Astrophysics Data System (ADS)
Jaisawal, Gaurava K.; Naik, Sachindra; Chenevez, Jérôme
2018-03-01
We present the results obtained from timing and spectral studies of the newly discovered accreting X-ray binary pulsar Swift J0243.6+6124 using a Nuclear Spectroscopic Telescope Array (NuSTAR) observation in 2017 October at a flux level of ˜280 mCrab. Pulsations at 9.85423(5) s were detected in the X-ray light curves of the pulsar. Pulse profiles of the pulsar were found to be strongly energy dependent. A broad profile at lower energies was found to evolve into a double-peaked profile at energies ≥30 keV. The 3-79 keV continuum spectrum of the pulsar was well described with negative and positive exponential cutoff or high-energy cutoff power-law models, modified with a hot blackbody at ˜3 keV. An iron emission line was also detected at 6.4 keV in the source spectrum. We did not find any signature of a cyclotron absorption line in our study. Results obtained from phase-resolved and time-resolved spectroscopy are discussed in the paper.
Star formation suppression and bar ages in nearby barred galaxies
NASA Astrophysics Data System (ADS)
James, P. A.; Percival, S. M.
2018-03-01
We present new spectroscopic data for 21 barred spiral galaxies, which we use to explore the effect of bars on disc star formation, and to place constraints on the characteristic lifetimes of bar episodes. The analysis centres on regions of heavily suppressed star formation activity, which we term `star formation deserts'. Long-slit optical spectroscopy is used to determine H β absorption strengths in these desert regions, and comparisons with theoretical stellar population models are used to determine the time since the last significant star formation activity, and hence the ages of the bars. We find typical ages of ˜1 Gyr, but with a broad range, much larger than would be expected from measurement errors alone, extending from ˜0.25 to >4 Gyr. Low-level residual star formation, or mixing of stars from outside the `desert' regions, could result in a doubling of these age estimates. The relatively young ages of the underlying populations coupled with the strong limits on the current star formation rule out a gradual exponential decline in activity, and hence support our assumption of an abrupt truncation event.
Adaptive exponential integrate-and-fire model as an effective description of neuronal activity.
Brette, Romain; Gerstner, Wulfram
2005-11-01
We introduce a two-dimensional integrate-and-fire model that combines an exponential spike mechanism with an adaptation equation, based on recent theoretical findings. We describe a systematic method to estimate its parameters with simple electrophysiological protocols (current-clamp injection of pulses and ramps) and apply it to a detailed conductance-based model of a regular spiking neuron. Our simple model predicts correctly the timing of 96% of the spikes (+/-2 ms) of the detailed model in response to injection of noisy synaptic conductances. The model is especially reliable in high-conductance states, typical of cortical activity in vivo, in which intrinsic conductances were found to have a reduced role in shaping spike trains. These results are promising because this simple model has enough expressive power to reproduce qualitatively several electrophysiological classes described in vitro.
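The AdEx dynamics described above (exponential spike mechanism plus an adaptation variable) can be sketched with forward-Euler integration. The parameter values below are illustrative regular-spiking values, not those fitted by the authors:

```python
import math

def simulate_adex(I=500e-12, dt=1e-4, t_max=0.2):
    """Forward-Euler integration of the adaptive exponential
    integrate-and-fire (AdEx) model. Parameter values are illustrative
    regular-spiking values, not those fitted by the authors."""
    C, gL, EL = 200e-12, 10e-9, -70e-3   # capacitance, leak conductance, rest
    VT, dT = -50e-3, 2e-3                # rheobase threshold, slope factor
    a, tau_w, b = 2e-9, 0.1, 50e-12      # subthreshold adaptation, time constant, spike jump
    V_reset, V_spike = -58e-3, 0.0       # reset potential, numerical spike cutoff
    V, w, t, spikes = EL, 0.0, 0.0, []
    while t < t_max:
        dV = (-gL*(V - EL) + gL*dT*math.exp((V - VT)/dT) - w + I) / C
        dw = (a*(V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_spike:                 # spike: reset voltage, bump adaptation
            spikes.append(t)
            V = V_reset
            w += b
        t += dt
    return spikes

spikes = simulate_adex()  # spike times (s) for a 500 pA step current
```

The exponential term produces the sharp spike upswing once V passes VT, and the adaptation jump b lengthens successive interspike intervals.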
Weblog patterns and human dynamics with decreasing interest
NASA Astrophysics Data System (ADS)
Guo, J.-L.; Fan, C.; Guo, Z.-H.
2011-06-01
To describe the phenomenon that people's interest in an activity is high at the beginning and gradually decreases until reaching a balance, we propose a model of interest attenuation that reflects the fact that people's interest becomes more stable after a long time. We give a rigorous analysis of this model using non-homogeneous Poisson processes. Our analysis indicates that the interarrival-time distribution is a mixed distribution with exponential and power-law features, i.e., a power law with an exponential cutoff. We then collect blogs from ScienceNet.cn and carry out an empirical study of the interarrival time distribution. The empirical results agree well with the theoretical analysis, obeying a power law with an exponential cutoff, that is, a special kind of Gamma distribution. These empirical results verify the model, providing evidence for a new class of phenomena in human dynamics. We conclude that, besides power-law distributions, other distributions occur in human dynamics. These findings demonstrate the variety of human behavioral dynamics.
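The "power law with an exponential cutoff" identified above is exactly a Gamma-type density, proportional to tau**(alpha-1) * exp(-tau/beta). A minimal sampling sketch (parameter values are illustrative, not those fitted to the blog data):

```python
import random

def sample_interevent(alpha=0.5, beta=10.0, n=10000, seed=1):
    """Draw interevent times from a Gamma distribution, whose density
    behaves as tau**(alpha-1) * exp(-tau/beta): a power law with an
    exponential cutoff. Parameters are illustrative, not those fitted
    to the ScienceNet.cn blog data."""
    rng = random.Random(seed)
    return [rng.gammavariate(alpha, beta) for _ in range(n)]

taus = sample_interevent()
mean = sum(taus) / len(taus)  # theoretical mean is alpha*beta = 5.0
```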
Exponentiated power Lindley distribution.
Ashour, Samir K; Eltehiwy, Mahmoud A
2015-11-01
A new generalization of the Lindley distribution was recently proposed by Ghitany et al. [1], called the power Lindley distribution. Another generalization was introduced by Nadarajah et al. [2], named the generalized Lindley distribution. This paper proposes a further generalization of the Lindley distribution that subsumes both. We refer to this new generalization as the exponentiated power Lindley distribution. The new distribution is important because it contains as special sub-models several widely known distributions in addition to the above two, such as the Lindley distribution, among many others. It also provides more flexibility for analyzing complex real data sets. We study some statistical properties of the new distribution and discuss maximum likelihood estimation of its parameters. Least squares estimation is used to evaluate the parameters. Three algorithms are proposed for generating random data from the proposed distribution. An application of the model to a real data set shows that the exponentiated power Lindley distribution can be used quite effectively in analyzing real lifetime data.
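Assuming the exponentiated power Lindley CDF is the power Lindley CDF of Ghitany et al. raised to a power theta, the distribution can be sketched as follows; the numerical CDF inversion is one generic way to generate variates, not necessarily one of the paper's three algorithms:

```python
import math

def epl_cdf(x, alpha, beta, theta):
    """CDF of the exponentiated power Lindley distribution, assumed here
    to be the power Lindley CDF raised to the power theta:
    F(x) = [1 - (1 + beta*x**alpha/(beta+1)) * exp(-beta*x**alpha)]**theta."""
    if x <= 0:
        return 0.0
    y = beta * x**alpha
    base = max(0.0, 1.0 - (1.0 + y/(beta + 1.0)) * math.exp(-y))
    return base**theta

def epl_sample(u, alpha, beta, theta, lo=0.0, hi=1e6):
    """Generate a variate from uniform u in (0,1) by bisecting the CDF."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if epl_cdf(mid, alpha, beta, theta) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

median = epl_sample(0.5, 2.0, 1.0, 1.5)  # median for alpha=2, beta=1, theta=1.5
```

Setting theta = 1 and alpha = 1 recovers the ordinary Lindley CDF, illustrating the "special sub-models" property.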
Voter model with non-Poissonian interevent intervals
NASA Astrophysics Data System (ADS)
Takaguchi, Taro; Masuda, Naoki
2011-09-01
Recent analysis of social communications among humans has revealed that the interval between interactions for a pair of individuals and for an individual often follows a long-tail distribution. We investigate the effect of such a non-Poissonian nature of human behavior on dynamics of opinion formation. We use a variant of the voter model and numerically compare the time to consensus of all the voters with different distributions of interevent intervals and different networks. Compared with the exponential distribution of interevent intervals (i.e., the standard voter model), the power-law distribution of interevent intervals slows down consensus on the ring. This is because of the memory effect; in the power-law case, the expected time until the next update event on a link is large if the link has not had an update event for a long time. On the complete graph, the consensus time in the power-law case is close to that in the exponential case. Regular graphs bridge these two results such that the slowing down of the consensus in the power-law case as compared to the exponential case is less pronounced as the degree increases.
a Fast Segmentation Algorithm for C-V Model Based on Exponential Image Sequence Generation
NASA Astrophysics Data System (ADS)
Hu, J.; Lu, L.; Xu, J.; Zhang, J.
2017-09-01
For island coastline segmentation, this paper proposes a fast segmentation algorithm for the C-V model based on exponential image sequence generation. An exponential multi-scale C-V model with level set inheritance and boundary inheritance is developed. The main research contributions are as follows: 1) the problems of "holes" and "gaps" in the extracted coastline are solved through small-scale shrinkage, low-pass filtering, and area sorting of regions; 2) the initial values of the SDF (signed distance function) and the level set are given by Otsu segmentation, based on the difference in SAR reflectance between land and sea, which places them close to the coastline; 3) the computational complexity of the transition between different scales is reduced through SDF and level set inheritance. Experimental results show that the method accelerates the formation of the initial level set and shortens coastline extraction time while removing non-coastline bodies and improving the identification precision of the main coastline, automating the process of coastline segmentation.
Kinetic and Stochastic Models of 1D yeast ``prions"
NASA Astrophysics Data System (ADS)
Kunes, Kay
2005-03-01
Mammalian prion proteins (PrP) are of public health interest because of mad cow and chronic wasting diseases. Yeasts have proteins, which can undergo similar reconformation and aggregation processes to PrP; yeast ``prions" are simpler to experimentally study and model. Recent in vitro studies of the SUP35 protein (1), showed long aggregates and pure exponential growth of the misfolded form. To explain this data, we have extended a previous model of aggregation kinetics along with our own stochastic approach (2). Both models assume reconformation only upon aggregation, and include aggregate fissioning and an initial nucleation barrier. We find for sufficiently small nucleation rates or seeding by small dimer concentrations that we can achieve the requisite exponential growth and long aggregates.
Pendulum Mass Affects the Measurement of Articular Friction Coefficient
Akelman, Matthew R.; Teeple, Erin; Machan, Jason T.; Crisco, Joseph J.; Jay, Gregory D.; Fleming, Braden C.
2012-01-01
Friction measurements of articular cartilage are important to determine the relative tribologic contributions made by synovial fluid or cartilage, and to assess the efficacy of therapies for preventing the development of post-traumatic osteoarthritis. Stanton’s equation is the most frequently used formula for estimating the whole joint friction coefficient (μ) of an articular pendulum, and assumes pendulum energy loss through a mass-independent mechanism. This study examines if articular pendulum energy loss is indeed mass independent, and compares Stanton’s model to an alternative model, which incorporates viscous damping, for calculating μ. Ten loads (25-100% body weight) were applied in a random order to an articular pendulum using the knees of adult male Hartley guinea pigs (n = 4) as the fulcrum. Motion of the decaying pendulum was recorded and μ was estimated using two models: Stanton’s equation, and an exponential decay function incorporating a viscous damping coefficient. μ estimates decreased as mass increased for both models. Exponential decay model fit error values were 82% less than the Stanton model. These results indicate that μ decreases with increasing mass, and that an exponential decay model provides a better fit for articular pendulum data at all mass values. In conclusion, inter-study comparisons of articular pendulum μ values should not be made without recognizing the loads used, as μ values are mass dependent. PMID:23122223
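The exponential decay alternative to Stanton's equation can be illustrated with a log-linear least-squares fit of the pendulum amplitude envelope; this is a sketch of the general idea, not the authors' exact fitting procedure:

```python
import math

def fit_exponential_decay(times, amplitudes):
    """Log-linear least-squares fit of A(t) = A0 * exp(-lam * t), the
    viscous-damping alternative to Stanton's linear decay. A sketch of
    the general idea, not the authors' exact fitting procedure."""
    n = len(times)
    ys = [math.log(a) for a in amplitudes]
    xbar, ybar = sum(times)/n, sum(ys)/n
    sxy = sum((x - xbar)*(y - ybar) for x, y in zip(times, ys))
    sxx = sum((x - xbar)**2 for x in times)
    slope = sxy / sxx
    return math.exp(ybar - slope*xbar), -slope  # (A0, lam)

# noiseless synthetic pendulum amplitudes with lam = 0.8 s^-1
ts = [0.5*i for i in range(20)]
amps = [2.0 * math.exp(-0.8*t) for t in ts]
A0, lam = fit_exponential_decay(ts, amps)
```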
The multiple complex exponential model and its application to EEG analysis
NASA Astrophysics Data System (ADS)
Chen, Dao-Mu; Petzold, J.
The paper presents a novel approach to the analysis of the EEG signal, which is based on a multiple complex exponential (MCE) model. Parameters of the model are estimated using a nonharmonic Fourier expansion algorithm. The central idea of the algorithm is outlined, and the results, estimated on the basis of simulated data, are presented and compared with those obtained by the conventional methods of signal analysis. Preliminary work on various application possibilities of the MCE model in EEG data analysis is described. It is shown that the parameters of the MCE model reflect the essential information contained in an EEG segment. These parameters characterize the EEG signal in a more objective way because they are closer to the recent supposition of the nonlinear character of the brain's dynamic behavior.
DOUBLE POWER LAWS IN THE EVENT-INTEGRATED SOLAR ENERGETIC PARTICLE SPECTRUM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Lulu; Zhang, Ming; Rassoul, Hamid K., E-mail: lzhao@fit.edu
2016-04-10
A double power law or a power law with exponential rollover at a few to tens of MeV nucleon⁻¹ of the event-integrated differential spectra has been reported in many solar energetic particle (SEP) events. The rollover energies per nucleon of different elements correlate with a particle's charge-to-mass ratio (Q/A). The probable causes are suggested as residing in shock finite lifetimes, shock finite sizes, shock geometry, and an adiabatic cooling effect. In this work, we conduct a numerical simulation to investigate a particle's transport process in the inner heliosphere. We solve the focused transport equation using a time-backward Markov stochastic approach. The convection, magnetic focusing, adiabatic cooling effect, and pitch-angle scattering are included. The effects that the interplanetary turbulence imposes on the shape of the resulting SEP spectra are examined. By assuming a pure power-law differential spectrum at the Sun, a perfect double-power-law feature with a break energy ranging from 10 to 120 MeV nucleon⁻¹ is obtained at 1 au. We found that the double power law of the differential energy spectrum is a robust result of SEP interplanetary propagation. It works for many assumptions of interplanetary turbulence spectra that give various forms of momentum dependence of a particle's mean free path. The different spectral shapes in low-energy and high-energy ends are not just a transition from the convection-dominated propagation to diffusion-dominated propagation.
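A double power law of the kind discussed above can be written as a broken power law that is continuous at the break energy; the parametrization below is illustrative, not the paper's exact fitting form:

```python
def double_power_law(E, c, gamma1, gamma2, E_break):
    """Event-integrated spectrum with spectral index gamma1 below and
    gamma2 above a break energy, continuous at the break. An illustrative
    parametrization, not the paper's exact fitting form."""
    if E <= E_break:
        return c * E**(-gamma1)
    # matching factor keeps the two segments continuous at E_break
    return c * E_break**(gamma2 - gamma1) * E**(-gamma2)
```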
Reciprocal capacitance transients?
NASA Astrophysics Data System (ADS)
Gfroerer, Tim; Simov, Peter; Wanlass, Mark
2007-03-01
When the reverse bias across a semiconductor diode is changed, charge carriers move to accommodate the appropriate depletion thickness, producing a simultaneous change in the device capacitance. Transient capacitance measurements can reveal inhibited carrier motion due to trapping, where the depth of the trap can be evaluated using the temperature-dependent escape rate. However, when we employ this technique on a GaAs0.72P0.28 n+/p diode (which is a candidate for incorporation in multi-junction solar cells), we observe a highly non-exponential response under a broad range of experimental conditions. Double exponential functions give good fits, but lead to non-physical results. The deduced rates depend on the observation time window and fast and slow rates, which presumably correspond to deep and shallow levels, have identical activation energies. Meanwhile, we have discovered a universal linear relationship between the inverse of the capacitance and time. An Arrhenius plot of the slope of the reciprocal of the transient yields an activation energy of approximately 0.4 eV, independent of the observation window and other experimental conditions. The reciprocal behavior leads us to hypothesize that hopping, rather than escape into high-mobility bands, may govern the transport of trapped holes in this system.
Control of Growth Rate by Initial Substrate Concentration at Values Below Maximum Rate
Gaudy, Anthony F.; Obayashi, Alan; Gaudy, Elizabeth T.
1971-01-01
The hyperbolic relationship between specific growth rate, μ, and substrate concentration, proposed by Monod and used since as the basis for the theory of steady-state growth in continuous-flow systems, was tested experimentally in batch cultures. Use of a Flavobacterium sp. exhibiting a high saturation constant for growth in glucose minimal medium allowed direct measurement of growth rate and substrate concentration throughout the growth cycle in medium containing a rate-limiting initial concentration of glucose. Specific growth rates were also measured for a wide range of initial glucose concentrations. A plot of specific growth rate versus initial substrate concentration was found to fit the hyperbolic equation. However, the instantaneous relationship between specific growth rate and substrate concentration during growth, which is stated by the equation, was not observed. Well defined exponential growth phases were developed at initial substrate concentrations below that required for support of the maximum exponential growth rate and a constant doubling time was maintained until 50% of the substrate had been used. It is suggested that the external substrate concentration initially present “sets” the specific growth rate by establishing a steady-state internal concentration of substrate, possibly through control of the number of permeation sites. PMID:5137579
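The Monod hyperbola tested above has a simple closed form; a minimal sketch, with the doubling time that stays constant during the exponential phase:

```python
import math

def monod_rate(S, mu_max, Ks):
    """Monod hyperbola: specific growth rate mu = mu_max * S / (Ks + S)."""
    return mu_max * S / (Ks + S)

def doubling_time(mu):
    """Doubling time of exponential growth at specific growth rate mu."""
    return math.log(2) / mu

half = monod_rate(5.0, mu_max=1.0, Ks=5.0)  # at S = Ks the rate is half-maximal
```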
Exponential growth and selection in self-replicating materials from DNA origami rafts
NASA Astrophysics Data System (ADS)
He, Xiaojin; Sha, Ruojie; Zhuo, Rebecca; Mi, Yongli; Chaikin, Paul M.; Seeman, Nadrian C.
2017-10-01
Self-replication and evolution under selective pressure are inherent phenomena in life, but few artificial systems exhibit them. We have designed a system of DNA origami rafts that exponentially replicates a seed pattern, doubling the number of copies in each diurnal-like cycle of temperature and ultraviolet illumination, producing more than 7 million copies in 24 cycles. We demonstrate environmental selection in growing populations by incorporating pH-sensitive binding in two subpopulations. In one species, pH-sensitive triplex DNA bonds enable parent-daughter templating, while in the second species, triplex binding inhibits the formation of duplex DNA templating. At pH 5.3, the replication rate of species I is ~1.3-1.4 times faster than that of species II. At pH 7.8, the replication rates are reversed. When mixed together in the same vial, the progeny of species I replicate preferentially at pH 7.8, while at pH 5.3 the progeny of species II take over the system. This addressable selectivity should be adaptable to the selection and evolution of multi-component self-replicating materials in the nanoscopic-to-microscopic size range.
Inventory control of raw material using silver meal heuristic method in PR. Trubus Alami Malang
NASA Astrophysics Data System (ADS)
Ikasari, D. M.; Lestari, E. R.; Prastya, E.
2018-03-01
The purpose of this study was to compare the total inventory cost calculated using the method applied by PR. Trubus Alami with the Silver Meal Heuristic (SMH) method. The study began by forecasting cigarette demand from July 2016 to June 2017 (48 weeks) using the additive decomposition forecasting method. Additive decomposition was chosen because it has the lowest Mean Absolute Deviation (MAD) and Mean Squared Deviation (MSD) values compared to other methods such as multiplicative decomposition, moving average, single exponential smoothing, and double exponential smoothing. The forecasting results were then converted into raw material needs and further processed using the SMH method to obtain the inventory cost. As expected, the results show that the order frequency using the SMH method was smaller than that of the method applied by Trubus Alami, which affected the total inventory cost. The results suggest that using the SMH method gave a 29.41% lower inventory cost, a difference of IDR 21,290,622. The findings therefore indicate that PR. Trubus Alami should apply the SMH method if the company wants to reduce its total inventory cost.
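The Silver-Meal heuristic extends each order to cover additional future periods only while the average cost per period keeps falling; a minimal sketch (the demand series and cost parameters are illustrative, not the company's data):

```python
def silver_meal(demand, setup_cost, holding_cost):
    """Silver-Meal lot-sizing heuristic: each order covers the next T
    periods, extending T while the average cost per period decreases.
    Returns lot sizes per period (zero in non-ordering periods)."""
    n = len(demand)
    orders = [0] * n
    i = 0
    while i < n:
        T = 1
        best = setup_cost           # cost per period when covering one period
        hold = 0.0
        while i + T < n:
            hold += holding_cost * T * demand[i + T]  # demand carried T periods
            avg = (setup_cost + hold) / (T + 1)
            if avg > best:          # average cost rose: stop extending
                break
            best, T = avg, T + 1
        orders[i] = sum(demand[i:i + T])
        i += T
    return orders

# illustrative flat demand of 10 units/week, setup 100, holding 1/unit/week
orders = silver_meal([10] * 8, 100.0, 1.0)
```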
Scattering of acoustic evanescent waves by circular cylinders: Partial wave series solution
NASA Astrophysics Data System (ADS)
Marston, Philip L.
2002-05-01
Evanescent acoustical waves occur in a variety of situations such as when sound is incident on a fluid interface beyond the critical angle and when flexural waves on a plate are subsonic with respect to the surrounding fluid. The scattering by circular cylinders at normal incidence was calculated to give insight into the consequences on the scattering of the evanescence of the incident wave. To analyze the scattering, it is necessary to express the incident wave using a modified expansion involving cylindrical functions. For plane evanescent waves, the expansion becomes a double summation with products of modified and ordinary Bessel functions. The resulting modified series is found for the scattering by a fluid cylinder in an unbounded medium. The perfectly soft and rigid cases are also examined. Unlike the case of an ordinary incident wave, the counterpropagating partial waves of the same angular order have unequal magnitudes when the incident wave is evanescent. This is a consequence of the exponential dependence of the incident wave amplitude on the transverse coordinate. The associated exponential dependence of the scattering on the location of a scatterer was previously demonstrated [T. J. Matula and P. L. Marston, J. Acoust. Soc. Am. 93, 1192-1195 (1993)].
Ertas, Gokhan; Onaygil, Can; Akin, Yasin; Kaya, Handan; Aribal, Erkin
2016-12-01
To investigate the accuracy of diffusion coefficients and diffusion coefficient ratios of breast lesions and of glandular breast tissue from mono- and stretched-exponential models for quantitative diagnosis in diffusion-weighted magnetic resonance imaging (MRI). We analyzed 170 pathologically confirmed lesions (85 benign and 85 malignant) imaged using a 3.0T MR scanner. Small regions of interest (ROIs) focusing on the highest signal intensity were obtained for lesions and for glandular tissue of the contralateral breast. The apparent diffusion coefficient (ADC) and distributed diffusion coefficient (DDC) were estimated by performing nonlinear fittings using mono- and stretched-exponential models, respectively. Coefficient ratios were calculated by dividing the lesion coefficient by the glandular tissue coefficient. The stretched-exponential model provides significantly better fits than the monoexponential model (P < 0.001): 65% of the better fits for glandular tissue and 71% for lesions. High correlation was found between the models in diffusion coefficients (0.99-0.81) and coefficient ratios (0.94). The highest diagnostic accuracy was found for the DDC ratio (area under the curve [AUC] = 0.93) when compared with lesion DDC, ADC ratio, and lesion ADC (AUC = 0.91, 0.90, 0.90), but with no statistically significant difference (P > 0.05). At optimal thresholds, the DDC ratio achieves 93% sensitivity, 80% specificity, and 87% overall diagnostic accuracy, while the ADC ratio yields 89% sensitivity, 78% specificity, and 83% overall diagnostic accuracy. The stretched-exponential model fits signal intensity measurements from both lesion and glandular tissue ROIs better. Although the DDC ratio estimated using the model shows a higher diagnostic accuracy than the ADC ratio, lesion DDC, and ADC, the difference is not statistically significant. J. Magn. Reson. Imaging 2016;44:1633-1641. © 2016 International Society for Magnetic Resonance in Medicine.
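The mono- and stretched-exponential signal models compared above differ only in the stretching exponent alpha; a sketch of the model function and the coefficient ratio used for diagnosis:

```python
import math

def stretched_exp_signal(b, S0, DDC, alpha):
    """Stretched-exponential DWI signal model, S(b) = S0 * exp(-(b*DDC)**alpha);
    alpha = 1 recovers the monoexponential (ADC) model."""
    return S0 * math.exp(-(b * DDC)**alpha)

def ddc_ratio(lesion_ddc, glandular_ddc):
    """Lesion-to-glandular-tissue coefficient ratio used for diagnosis."""
    return lesion_ddc / glandular_ddc
```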
Accounting for inherent variability of growth in microbial risk assessment.
Marks, H M; Coleman, M E
2005-04-15
Risk assessments of pathogens need to account for the growth of small numbers of cells under varying conditions. To determine the possible risks when only small numbers of cells are present, stochastic growth models are needed that capture the distribution of the number of cells over replicate trials of the same scenario or environmental conditions. This paper provides a simple stochastic growth model, accounting only for inherent cell-growth variability and assuming constant growth kinetic parameters, for an initially small number of cells assumed to be transitioning from a stationary to an exponential phase. Two basic sets of microbial assumptions are considered: serial, where cells are assumed to pass through a lag phase before entering the exponential growth phase; and parallel, where the lag and exponential phases are assumed to develop in parallel. The model first determines the distribution of the time when growth commences and then models the conditional distribution of the number of cells. For the latter distribution, a Weibull distribution is found to provide a simple approximation to the conditional distribution of relative growth, so the model developed in this paper can easily be implemented in risk assessments using commercial software packages.
Infinite-disorder critical points of models with stretched exponential interactions
NASA Astrophysics Data System (ADS)
Juhász, Róbert
2014-09-01
We show that an interaction decaying as a stretched exponential function of distance, J(l) ~ e^{-c l^a}, is able to alter the universality class of short-range systems having an infinite-disorder critical point. To do so, we study the low-energy properties of the random transverse-field Ising chain with the above form of interaction by a strong-disorder renormalization group (SDRG) approach. We find that the critical behavior of the model is controlled by infinite-disorder fixed points different from those of the short-range model if 0 < a < 1/2. In this range, the critical exponents calculated analytically by a simplified SDRG scheme are found to vary with a, while, for a > 1/2, the model belongs to the same universality class as its short-range variant. The entanglement entropy of a block of size L increases logarithmically with L at the critical point but, unlike the short-range model, the prefactor is dependent on disorder in the range 0 < a < 1/2. Numerical results obtained by an improved SDRG scheme are found to be in agreement with the analytical predictions. The same fixed points are expected to describe the critical behavior of, among others, the random contact process with stretched exponentially decaying activation rates.
Global exponential stability for switched memristive neural networks with time-varying delays.
Xin, Youming; Li, Yuxia; Cheng, Zunshui; Huang, Xia
2016-08-01
This paper considers the problem of exponential stability for switched memristive neural networks (MNNs) with time-varying delays. Different from most of the existing papers, we model a memristor as a continuous system, and view switched MNNs as switched neural networks with uncertain time-varying parameters. Based on average dwell time technique, mode-dependent average dwell time technique and multiple Lyapunov-Krasovskii functional approach, two conditions are derived to design the switching signal and guarantee the exponential stability of the considered neural networks, which are delay-dependent and formulated by linear matrix inequalities (LMIs). Finally, the effectiveness of the theoretical results is demonstrated by two numerical examples. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Compère, A.; Lemaître, A.; Delsate, N.
2012-01-01
Nowadays the scientific community considers that more than a third of the asteroids are double. The study of the stability of these systems is quite complex, because of their irregular shapes and tumbling rotations, and requires a full body-full body approach. A particular case is analysed here, when the secondary body is sufficiently small and distant from the primary to be considered as a point mass satellite. Gravitational resonances (between the revolution of the satellite and the rotation of the asteroid) of a small body in fast or slow rotation around a rigid ellipsoid are studied. The same model can be used for the motion of a probe around an irregular asteroid. The gravitational potential induced by the primary body is modelled by the MacMillan potential. The stability of the satellite is measured thanks to the MEGNO indicator (Mean Exponential Growth Factor of Nearby Orbits). We present stability maps in the plane (b/d, c/d), where d, b, and c are the three semi-axes of the ellipsoid shaping the asteroid. Special stable conic-like curves are detected on these maps and explained by an analytical model, based on a simplification of the MacMillan potential for some specific resonances (1 : 1 and 2 : 1). The efficiency of the MEGNO to detect stability is confirmed.
Zhang, Liping; Zheng, Yanling; Wang, Kai; Zhang, Xueliang; Zheng, Yujian
2014-06-01
In this paper, by using a particle swarm optimization algorithm to solve the optimal parameter estimation problem, an improved Nash nonlinear grey Bernoulli model termed PSO-NNGBM(1,1) is proposed. To test the forecasting performance, the optimized model is applied for forecasting the incidence of hepatitis B in Xinjiang, China. Four models, traditional GM(1,1), grey Verhulst model (GVM), original nonlinear grey Bernoulli model (NGBM(1,1)) and Holt-Winters exponential smoothing method, are also established for comparison with the proposed model under the criteria of mean absolute percentage error and root mean square percent error. The prediction results show that the optimized NNGBM(1,1) model is more accurate and performs better than the traditional GM(1,1), GVM, NGBM(1,1) and Holt-Winters exponential smoothing method. Copyright © 2014. Published by Elsevier Ltd.
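For context, the baseline GM(1,1) grey model that the optimized NNGBM(1,1) is compared against can be sketched as follows (a textbook implementation fitted by ordinary least squares, not the paper's code):

```python
import math

def gm11_forecast(x0, steps=1):
    """Classical GM(1,1) grey forecasting model, fitted by ordinary least
    squares on the whitened equation x0(k) = -a*z(k) + b. A textbook sketch
    of the baseline model, not the paper's optimized NNGBM(1,1)."""
    n = len(x0)
    x1 = [sum(x0[:k+1]) for k in range(n)]              # accumulated (AGO) series
    z = [0.5 * (x1[k] + x1[k-1]) for k in range(1, n)]  # background values
    m = n - 1
    sz, szz = sum(z), sum(v*v for v in z)
    sy = sum(x0[1:])
    szy = sum(v*y for v, y in zip(z, x0[1:]))
    denom = m*szz - sz*sz
    a = (sz*sy - m*szy) / denom                         # development coefficient
    b = (szz*sy - sz*szy) / denom                       # grey input
    def x1_hat(k):                                      # 0-based accumulated fit
        return (x0[0] - b/a) * math.exp(-a*k) + b/a
    return [x0[0]] + [x1_hat(k) - x1_hat(k-1) for k in range(1, n + steps)]

# GM(1,1) tracks near-exponential series closely; illustrative 5% growth data
data = [100 * 1.05**k for k in range(8)]
pred = gm11_forecast(data, steps=1)  # pred[-1] is the one-step-ahead forecast
```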
NASA Astrophysics Data System (ADS)
Susanti, Ana; Suhartono; Jati Setyadi, Hario; Taruk, Medi; Haviluddin; Pamilih Widagdo, Putut
2018-03-01
Currency availability in Bank Indonesia can be examined through the inflow and outflow of currency. The objective of this research is to forecast the inflow and outflow of currency at each Representative Office (RO) of BI in East Java using a hybrid of exponential smoothing, based on a state space approach, and a calendar variation model. The hybrid model is expected to generate more accurate forecasts. Two studies are discussed in this research. The first examines the hybrid model using simulated data containing trend, seasonal, and calendar variation patterns. The second applies the hybrid model to forecasting the inflow and outflow of currency at each RO of BI in East Java. The first study indicates that the exponential smoothing model cannot capture the calendar variation pattern, yielding RMSE values ten times the standard deviation of the error. The second indicates that the hybrid model can capture trend, seasonal, and calendar variation patterns, yielding RMSE values approaching the standard deviation of the error. In the applied study, the hybrid model gives more accurate forecasts for five variables: the inflow of currency in Surabaya, Malang, and Jember, and the outflow of currency in Surabaya and Kediri. Conversely, the time series regression model performs better for three variables: the outflow of currency in Malang and Jember, and the inflow of currency in Kediri.
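Holt's linear (double) exponential smoothing, the kind of smoothing component underlying such hybrid models, can be sketched as follows; this omits the calendar-variation terms of the paper's hybrid model:

```python
def holt_linear(series, alpha=0.5, beta=0.3, horizon=1):
    """Holt's linear (double) exponential smoothing: recursive level and
    trend updates, then straight-line extrapolation. A minimal sketch of
    the smoothing component only, without the calendar-variation terms
    of the paper's hybrid model."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        level = alpha*y + (1 - alpha)*(level + trend)
        trend = beta*(level - prev_level) + (1 - beta)*trend
    return [level + (h + 1)*trend for h in range(horizon)]

forecast = holt_linear([1, 3, 5, 7, 9], horizon=2)  # perfectly linear input
```

On exactly linear data the method reproduces the line, so the two-step forecast continues it; calendar effects would appear as systematic residuals that the smoothing alone cannot capture.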
Exponentially growing tearing modes in Rijnhuizen Tokamak Project plasmas.
Salzedas, F; Schüller, F C; Oomens, A A M
2002-02-18
The local measurement of the island width w around the resonant surface allowed a direct test of the extended Rutherford model [P. H. Rutherford, PPPL Report-2277 (1985)] describing the evolution of radiation-induced tearing modes prior to disruptions of tokamak plasmas. It is found that this model accounts very well for the observed exponential growth and supports radiation losses as the main driving mechanism. The model implies that the effective perpendicular electron heat conductivity in the island is smaller than the global one. Comparison of the local measurements of w with the perturbed magnetic field B showed that w ∝ B^(1/2) was valid for widths up to 18% of the minor radius.
NASA Astrophysics Data System (ADS)
Adame, J.; Warzel, S.
2015-11-01
In this note, we use ideas of Farhi et al. [Int. J. Quantum. Inf. 6, 503 (2008) and Quantum Inf. Comput. 11, 840 (2011)] who link a lower bound on the run time of their quantum adiabatic search algorithm to an upper bound on the energy gap above the ground-state of the generators of this algorithm. We apply these ideas to the quantum random energy model (QREM). Our main result is a simple proof of the conjectured exponential vanishing of the energy gap of the QREM.
Disentangling the f(R)-duality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Broy, Benedict J.; Pedro, Francisco G.; Westphal, Alexander
2015-03-16
Motivated by UV realisations of Starobinsky-like inflation models, we study generic exponential plateau-like potentials to understand whether an exact f(R)-formulation may still be obtained when the asymptotic shift-symmetry of the potential is broken for larger field values. Potentials which break the shift symmetry with rising exponentials at large field values only allow for corresponding f(R)-descriptions with a leading order term R^n with 1
Exponentially Stabilizing Robot Control Laws
NASA Technical Reports Server (NTRS)
Wen, John T.; Bayard, David S.
1990-01-01
A new class of exponentially stabilizing control laws for joint-level control of robotic manipulators is introduced. In the case of set-point control, the approach offers the simplicity of the proportional/derivative control architecture. In the case of tracking control, the approach provides several important alternatives to the computed-torque method as far as computational requirements and convergence are concerned. The new control laws can be modified in a simple fashion to obtain asymptotically stable adaptive control when the robot model and/or payload mass properties are unknown.
Shoot litter breakdown and zinc dynamics of an aquatic plant, Schoenoplectus californicus.
Arreghini, Silvana; de Cabo, Laura; Serafini, Roberto José María; Fabrizio de Iorio, Alicia
2018-07-03
Decomposition of plant debris is an important process in determining the structure and function of aquatic ecosystems. The aims were to find a mathematical model fitting the decomposition process of Schoenoplectus californicus shoots containing different Zn concentrations; to compare the decomposition rates; and to assess metal accumulation/mobilization during decomposition. A litterbag technique was applied with shoots containing three levels of Zn: collected from an unpolluted river (RIV) and from experimental populations at low (LoZn) and high (HiZn) Zn supply. The double exponential model explained S. californicus shoot decomposition; a higher initial proportion of refractory material in RIV detritus determined a lower decay rate. Until 68 days, RIV and LoZn detritus behaved as a source of metal, releasing soluble/weakly bound zinc into the water; after 68 days, they became a sink. HiZn detritus, however, showed rapid release into the water during the first 8 days, changed to the sink condition up to 68 days, and then returned to the source condition up to 369 days. Knowledge of the role of detritus (sink/source) will allow a correct management of the vegetation used for zinc removal and provide a valuable tool for environmental remediation and rehabilitation planning.
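A double exponential decay of this kind, a labile pool plus a refractory pool, can be sketched and fitted as below. The pool fraction and rate constants are hypothetical, not the values estimated for S. californicus:

```python
import numpy as np
from scipy.optimize import curve_fit

def double_exp(t, a, k_fast, k_slow):
    """Remaining mass fraction: labile fraction a decays at k_fast,
    refractory fraction (1 - a) decays at k_slow."""
    return a * np.exp(-k_fast * t) + (1 - a) * np.exp(-k_slow * t)

# synthetic litterbag sampling days and noiseless "observations"
t = np.array([0, 8, 30, 68, 120, 240, 369], dtype=float)
mass = double_exp(t, 0.4, 0.05, 0.002)

(a, k_fast, k_slow), _ = curve_fit(double_exp, t, mass,
                                   p0=[0.5, 0.1, 0.01],
                                   bounds=([0, 0, 0], [1, 1, 1]))
print(round(a, 2), round(k_fast, 2), round(k_slow, 3))
```

With noiseless data the fit recovers the generating parameters; with real litterbag data, the relative size of the two rate constants distinguishes the labile and refractory fractions.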
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, S.; Li, Y.; Liu, C.
2015-08-15
This paper presents a statistical theory for the initial onset of multipactor breakdown in coaxial transmission lines, taking both the nonuniform electric field and random electron emission velocity into account. A general numerical method is first developed to construct the joint probability density function based on the approximate equation of the electron trajectory. The nonstationary dynamics of the multipactor process on both surfaces of coaxial lines are modelled based on the probability of various impacts and their corresponding secondary emission. The resonant assumption of the classical theory on the independent double-sided and single-sided impacts is replaced by the consideration of their interaction. As a result, the time evolutions of the electron population for exponential growth and absorption on both the inner and outer conductor, in response to applied voltages above and below the multipactor breakdown level, are obtained to investigate the exact mechanism of multipactor discharge in coaxial lines. Furthermore, the multipactor threshold predictions of the presented model are compared with experimental results using the measured secondary emission yield of the tested samples, and show reasonable agreement. Finally, the detailed impact scenario reveals that single-surface multipactor is more likely to occur with a higher outer-to-inner conductor radius ratio.
Atmospheric-radiation boundary conditions for high-frequency waves in time-distance helioseismology
NASA Astrophysics Data System (ADS)
Fournier, D.; Leguèbe, M.; Hanson, C. S.; Gizon, L.; Barucq, H.; Chabassier, J.; Duruflé, M.
2017-12-01
The temporal covariance between seismic waves measured at two locations on the solar surface is the fundamental observable in time-distance helioseismology. Above the acoustic cut-off frequency (about 5.3 mHz), waves are not trapped in the solar interior and the covariance function can be used to probe the upper atmosphere. We wish to implement appropriate radiative boundary conditions for computing the propagation of high-frequency waves in the solar atmosphere. We consider recently developed and published radiative boundary conditions for atmospheres in which the sound speed is constant and the density decreases exponentially with radius. We compute the cross-covariance function using a finite element method in spherical geometry and in the frequency domain. The ratio between the first- and second-skip amplitudes in the time-distance diagram is used as a diagnostic to compare boundary conditions and to compare with observations. We find that a boundary condition applied 500 km above the photosphere and derived under the approximation of small angles of incidence accurately reproduces the "infinite atmosphere" solution for high-frequency waves. When the radiative boundary condition is applied 2 Mm above the photosphere, we find that the choice of atmospheric model affects the time-distance diagram. In particular, the time-distance diagram exhibits a double-ridge structure when using a Vernazza-Avrett-Loeser atmospheric model.
CO₂ sorption kinetics of scaled-up polyethylenimine-functionalized mesoporous silica sorbent.
Al-Marri, M J; Khader, M M; Tawfik, M; Qi, G; Giannelis, E P
2015-03-31
Two CO2 solid sorbents based on polyethylenimine, PEI (M(n) ∼ 423 and 10K), impregnated into mesoporous silica (MPS) foam prepared in kilogram quantities via a scale-up process were synthesized and systematically characterized by a range of analytical and surface techniques. The mesoporous silica sorbent impregnated with lower molecular weight PEI, PEI-423/MPS, showed higher capacity toward CO2 sorption than the sorbent functionalized with the higher molecular weight PEI (PEI-10K/MPS). On the other hand, PEI-10K/MPS exhibited higher thermal stability than PEI-423/MPS. The kinetics of CO2 adsorption on both PEI/MPS fitted well with a double-exponential model. According to this model CO2 adsorption can be divided into two steps: the first is fast and is attributed to CO2 adsorption on the sorbent surface; the second is slower and can be related to the diffusion of CO2 within and between the mesoporous particles. In contrast, the desorption process obeyed first-order kinetics with activation energies of 64.3 and 140.7 kJ mol(-1) for PEI-423/MPS and PEI-10K/MPS, respectively. These studies suggest that the selection of amine is critical as it affects not only sorbent capacity and stability but also the energy penalty associated with sorbent regeneration.
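The two-step interpretation of the double-exponential fit, fast surface adsorption followed by slower intraparticle diffusion, can be sketched with the saturating form below. The capacities and rate constants are illustrative assumptions, not the measured PEI/MPS values:

```python
import numpy as np

def uptake(t, q_fast=2.0, k_fast=0.5, q_slow=1.0, k_slow=0.02):
    """Double-exponential CO2 uptake: a fast surface term and a slow
    diffusion-limited term (illustrative per-minute rate constants)."""
    fast = q_fast * (1 - np.exp(-k_fast * t))
    slow = q_slow * (1 - np.exp(-k_slow * t))
    return fast, slow

t = np.linspace(0, 300, 601)          # minutes
fast, slow = uptake(t)
total = fast + slow

# early uptake is dominated by the fast surface-adsorption step
share_fast_early = fast[t <= 10][-1] / total[t <= 10][-1]
print(round(share_fast_early, 2))
```

The split makes the physical reading of the fit explicit: most of the signal in the first minutes comes from the fast term, while the slow term governs the long approach to total capacity.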
Testing predictions of the quantum landscape multiverse 2: the exponential inflationary potential
NASA Astrophysics Data System (ADS)
Di Valentino, Eleonora; Mersini-Houghton, Laura
2017-03-01
The 2015 Planck data release tightened the region of the allowed inflationary models. Inflationary models with convex potentials have now been ruled out since they produce a large tensor to scalar ratio. Meanwhile the same data offers interesting hints on possible deviations from the standard picture of CMB perturbations. Here we revisit the predictions of the theory of the origin of the universe from the landscape multiverse for the case of exponential inflation, for two reasons: firstly to check the status of the anomalies associated with this theory, in the light of the recent Planck data; secondly, to search for a counterexample whereby new physics modifications may bring convex inflationary potentials, thought to have been ruled out, back into the region of potentials allowed by data. Using the exponential inflation as an example of convex potentials, we find that the answer to both tests is positive: modifications to the perturbation spectrum and to the Newtonian potential of the universe originating from the quantum entanglement, bring the exponential potential, back within the allowed region of current data; and, the series of anomalies previously predicted in this theory, is still in good agreement with current data. Hence our finding for this convex potential comes at the price of allowing for additional thermal relic particles, equivalently dark radiation, in the early universe.
NASA Astrophysics Data System (ADS)
Straub, K. M.; Ganti, V. K.; Paola, C.; Foufoula-Georgiou, E.
2010-12-01
Stratigraphy preserved in alluvial basins houses the most complete record of information necessary to reconstruct past environmental conditions. Indeed, the character of the sedimentary record is inextricably related to the surface processes that formed it. In this presentation we explore how the signals of surface processes are recorded in stratigraphy through the use of physical and numerical experiments. We focus on linking surface processes to stratigraphy in 1D by quantifying the probability distributions of processes that govern the evolution of depositional systems to the probability distribution of preserved bed thicknesses. In this study we define a bed as a package of sediment bounded above and below by erosional surfaces. In a companion presentation we document heavy-tailed statistics of erosion and deposition from high-resolution temporal elevation data recorded during a controlled physical experiment. However, the heavy tails in the magnitudes of erosional and depositional events are not preserved in the experimental stratigraphy. Similar to many bed thickness distributions reported in field studies we find that an exponential distribution adequately describes the thicknesses of beds preserved in our experiment. We explore the generation of exponential bed thickness distributions from heavy-tailed surface statistics using 1D numerical models. These models indicate that when the full distribution of elevation fluctuations (both erosional and depositional events) is symmetrical, the resulting distribution of bed thicknesses is exponential in form. Finally, we illustrate that a predictable relationship exists between the coefficient of variation of surface elevation fluctuations and the scale-parameter of the resulting exponential distribution of bed thicknesses.
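The 1D construction described above, beds as positive increments between erosional surfaces, can be sketched numerically. The Laplace step distribution and the slight aggradation drift are assumptions for illustration, not the experiment's measured statistics:

```python
import numpy as np

rng = np.random.default_rng(1)
# surface elevation: symmetric steps (Laplace) with a small net-deposition drift
steps = rng.laplace(loc=0.02, scale=1.0, size=20000)
eta = np.cumsum(steps)

# a horizon survives only if no later erosion cuts below it:
# preserved[t] = min of eta over all times >= t (running min from the end)
preserved = np.minimum.accumulate(eta[::-1])[::-1]

# beds are the positive increments between successive erosional surfaces
inc = np.diff(preserved)
beds = inc[inc > 0]

# for an exponential distribution, mean and standard deviation coincide
cv_ratio = beds.mean() / beds.std()
print(beds.size, cv_ratio)
```

Even though the surface steps are heavier-tailed than exponential, the preserved bed thicknesses come out with mean and standard deviation of comparable size, consistent with the roughly exponential form reported for the experiment.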
Graham, John H; Robb, Daniel T; Poe, Amy R
2012-01-01
Distributed robustness is thought to influence the buffering of random phenotypic variation through the scale-free topology of gene regulatory, metabolic, and protein-protein interaction networks. If this hypothesis is true, then the phenotypic response to the perturbation of particular nodes in such a network should be proportional to the number of links those nodes make with neighboring nodes. This suggests a probability distribution approximating an inverse power-law of random phenotypic variation. Zero phenotypic variation, however, is impossible, because random molecular and cellular processes are essential to normal development. Consequently, a more realistic distribution should have a y-intercept close to zero in the lower tail, a mode greater than zero, and a long (fat) upper tail. The double Pareto-lognormal (DPLN) distribution is an ideal candidate distribution. It consists of a mixture of a lognormal body and upper and lower power-law tails. If our assumptions are true, the DPLN distribution should provide a better fit to random phenotypic variation in a large series of single-gene knockout lines than other skewed or symmetrical distributions. We fit a large published data set of single-gene knockout lines in Saccharomyces cerevisiae to seven different probability distributions: DPLN, right Pareto-lognormal (RPLN), left Pareto-lognormal (LPLN), normal, lognormal, exponential, and Pareto. The best model was judged by the Akaike Information Criterion (AIC). Phenotypic variation among gene knockouts in S. cerevisiae fits a double Pareto-lognormal (DPLN) distribution better than any of the alternative distributions, including the right Pareto-lognormal and lognormal distributions. A DPLN distribution is consistent with the hypothesis that developmental stability is mediated, in part, by distributed robustness, the resilience of gene regulatory, metabolic, and protein-protein interaction networks. 
Alternatively, multiplicative cell growth, and the mixing of lognormal distributions having different variances, may generate a DPLN distribution.
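The AIC model-ranking step used above can be sketched with closed-form maximum-likelihood fits. For brevity this compares only lognormal vs. exponential on synthetic lognormal data, not the full DPLN likelihood; the sample size and lognormal shape are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.lognormal(mean=0.0, sigma=0.6, size=2000)  # skewed "phenotypic variation"

# closed-form MLEs and log-likelihoods for each candidate model
lx = np.log(x)
mu, sig = lx.mean(), lx.std()
ll_lognorm = np.sum(-np.log(x * sig * np.sqrt(2 * np.pi))
                    - (lx - mu) ** 2 / (2 * sig ** 2))

lam = 1.0 / x.mean()
ll_expon = x.size * np.log(lam) - lam * x.sum()

def aic(k, ll):
    """Akaike Information Criterion: 2*(parameters) - 2*(log-likelihood)."""
    return 2 * k - 2 * ll

print(aic(2, ll_lognorm) < aic(1, ll_expon))  # lognormal wins on lognormal data
```

The same procedure, with the DPLN and Pareto-lognormal likelihoods substituted in, reproduces the model-selection logic of the study.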
Probing Gamma-ray Emission of Geminga and Vela with Non-stationary Models
NASA Astrophysics Data System (ADS)
Chai, Yating; Cheng, Kwong-Sang; Takata, Jumpei
2016-06-01
It is generally believed that the high-energy emissions from isolated pulsars are emitted by relativistic electrons/positrons accelerated in outer magnetospheric accelerators (outer gaps) via a curvature radiation mechanism, which has a simple exponential cut-off spectrum. However, many gamma-ray pulsars detected by the Fermi LAT (Large Area Telescope) cannot be fitted by a simple exponential cut-off spectrum; instead a sub-exponential cut-off is more appropriate. It is proposed that realistic outer gaps are non-stationary, and that the observed spectrum is a superposition of different stationary states that are controlled by the currents injected from the inner and outer boundaries. The Vela and Geminga pulsars have the largest fluxes among all observed targets, which allows us to carry out very detailed phase-resolved spectral analysis. We have divided the Vela and Geminga pulsars into 19 (the off-pulse of Vela was not included) and 33 phase bins, respectively. We find that most phase-resolved spectra still cannot be fitted by a simple exponential spectrum: in fact, a sub-exponential spectrum is necessary. We conclude that non-stationary states exist even down to the very fine phase bins.
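The difference between a simple exponential cut-off (b = 1) and a sub-exponential one (b < 1) can be sketched directly. The photon index and cut-off energy below are illustrative, not fitted Vela or Geminga values:

```python
import numpy as np

def spectrum(E, gamma=1.5, E_c=3.0, b=1.0):
    """Power law with a generalized cutoff: dN/dE ~ E^-gamma * exp(-(E/E_c)^b).
    b = 1 is the simple exponential cutoff; b < 1 is sub-exponential."""
    return E ** -gamma * np.exp(-(E / E_c) ** b)

E = np.logspace(-1, 2, 200)        # energy grid (e.g. GeV), illustrative
simple = spectrum(E, b=1.0)
sub = spectrum(E, b=0.5)

# a sub-exponential tail falls off more slowly well above the cutoff energy
ratio = sub[E > 30] / simple[E > 30]
print(np.all(ratio > 1))
```

For E/E_c > 1, (E/E_c)^0.5 < (E/E_c)^1, so the b = 0.5 spectrum retains more flux at high energies, which is the spectral signature the phase-resolved fits detect.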
Adaptation in the auditory midbrain of the barn owl (Tyto alba) induced by tonal double stimulation.
Singheiser, Martin; Ferger, Roland; von Campenhausen, Mark; Wagner, Hermann
2012-02-01
During hunting, the barn owl typically listens to several successive sounds as generated, for example, by rustling mice. As auditory cells exhibit adaptive coding, the earlier stimuli may influence the detection of the later stimuli. This situation was mimicked with two double-stimulus paradigms, and adaptation was investigated in neurons of the barn owl's central nucleus of the inferior colliculus. Each double-stimulus paradigm consisted of a first or reference stimulus and a second stimulus (probe). In one paradigm (second level tuning), the probe level was varied, whereas in the other paradigm (inter-stimulus interval tuning), the stimulus interval between the first and second stimulus was changed systematically. Neurons were stimulated with monaural pure tones at the best frequency, while the response was recorded extracellularly. The responses to the probe were significantly reduced when the reference stimulus and probe had the same level and the inter-stimulus interval was short. This indicated response adaptation, which could be compensated for by an increase of the probe level of 5-7 dB over the reference level, if the latter was in the lower half of the dynamic range of a neuron's rate-level function. Recovery from adaptation could be best fitted with a double exponential showing a fast (1.25 ms) and a slow (800 ms) component. These results suggest that neurons in the auditory system show dynamic coding properties to tonal double stimulation that might be relevant for faithful upstream signal propagation. Furthermore, the overall stimulus level of the masker also seems to affect the recovery capabilities of auditory neurons. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
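A double-exponential recovery curve with the reported fast (≈1.25 ms) and slow (≈800 ms) time constants can be sketched as follows. The component weights are assumptions, since the abstract does not give them:

```python
import numpy as np

def recovery(dt, w_fast=0.5, tau_fast=1.25, tau_slow=800.0):
    """Fractional recovery from adaptation after an inter-stimulus interval dt (ms).
    Two exponential components with weights summing to 1 (illustrative weights)."""
    w_slow = 1 - w_fast
    return 1 - w_fast * np.exp(-dt / tau_fast) - w_slow * np.exp(-dt / tau_slow)

dt = np.array([1.0, 10.0, 100.0, 1000.0, 5000.0])  # inter-stimulus intervals, ms
r = recovery(dt)
print(r)
```

The fast component restores roughly half the response within a few milliseconds, while full recovery requires intervals of seconds, matching the two-timescale behaviour described above.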
Anomalous T2 relaxation in normal and degraded cartilage.
Reiter, David A; Magin, Richard L; Li, Weiguo; Trujillo, Juan J; Pilar Velasco, M; Spencer, Richard G
2016-09-01
To compare the ordinary monoexponential model with three anomalous relaxation models (the stretched Mittag-Leffler, stretched exponential, and biexponential functions) using both simulated and experimental cartilage relaxation data. Monte Carlo simulations were used to examine both the ability to identify a given model under high signal-to-noise ratio (SNR) conditions and the accuracy and precision of parameter estimates under the more modest SNR that would be encountered clinically. Experimental transverse relaxation data were analyzed from normal and enzymatically degraded cartilage samples under high SNR and rapid echo sampling to compare each model. Both simulation and experimental results showed improvement in signal representation with the anomalous relaxation models. The stretched exponential model consistently showed the lowest mean squared error in experimental data and closely represents the signal decay over multiple decades of the decay time (e.g., 1-10 ms, 10-100 ms, and >100 ms). The stretched exponential parameter α_se showed an inverse correlation with biochemically derived cartilage proteoglycan content. Experimental results obtained at high field suggest potential application of α_se as a measure of matrix integrity. Simulations reflecting more clinical imaging conditions indicate the ability to robustly estimate α_se and distinguish between normal and degraded tissue, highlighting its potential as a biomarker for human studies. Magn Reson Med 76:953-962, 2016. © 2015 Wiley Periodicals, Inc.
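The stretched-exponential decay S(t) = exp(-(t/T2)^α) and its advantage over a forced monoexponential fit can be sketched on noiseless synthetic data; the α and T2 below are illustrative, not the cartilage estimates:

```python
import numpy as np

t = np.linspace(0.2, 200, 400)            # ms, dense echo sampling
alpha_true, T2 = 0.7, 40.0
sig = np.exp(-(t / T2) ** alpha_true)     # stretched-exponential decay

# forced monoexponential fit via log-linear least squares
slope, intercept = np.polyfit(t, np.log(sig), 1)
mono = np.exp(intercept + slope * t)
sse_mono = np.sum((sig - mono) ** 2)

# the stretched form linearises as log(-log S) = alpha*log t - alpha*log T2,
# so alpha is recovered from a straight-line fit in these coordinates
alpha_fit, _ = np.polyfit(np.log(t), np.log(-np.log(sig)), 1)
print(alpha_fit, sse_mono > 1e-3)
```

The monoexponential leaves systematic residuals across the decay, while the linearised stretched fit recovers α exactly on noiseless data; with noise, a nonlinear fit would be used instead.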
Modeling the degradation kinetics of ascorbic acid.
Peleg, Micha; Normand, Mark D; Dixon, William R; Goulette, Timothy R
2018-06-13
Most published reports on ascorbic acid (AA) degradation during food storage and heat preservation suggest that it follows first-order kinetics. Deviations from this pattern include Weibullian decay, and exponential drop approaching finite nonzero retention. Almost invariably, the degradation rate constant's temperature-dependence followed the Arrhenius equation, and hence the simpler exponential model too. A formula and freely downloadable interactive Wolfram Demonstration to convert the Arrhenius model's energy of activation, Ea, to the exponential model's c parameter, or vice versa, are provided. The AA's isothermal and non-isothermal degradation can be simulated with freely downloadable interactive Wolfram Demonstrations in which the model's parameters can be entered and modified by moving sliders on the screen. Where the degradation is known a priori to follow first or other fixed order kinetics, one can use the endpoints method, and in principle the successive points method too, to estimate the reaction's kinetic parameters from considerably fewer AA concentration determinations than in the traditional manner. Freeware to do the calculations by either method has been recently made available on the Internet. Once obtained in this way, the kinetic parameters can be used to reconstruct the entire degradation curves and predict those at different temperature profiles, isothermal or dynamic. Comparison of the predicted concentration ratios with experimental ones offers a way to validate or refute the kinetic model and the assumptions on which it is based.
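One way to relate the two parameterizations is to match the local slope of ln k at a reference temperature, which gives c ≈ Ea/(R·T_ref²). This is a generic equivalence, and the authors' Wolfram Demonstration may use a slightly different convention; the numeric values below are hypothetical:

```python
import numpy as np

R = 8.314  # J mol^-1 K^-1

def c_from_Ea(Ea, T_ref):
    """Match d(ln k)/dT of the Arrhenius and exponential models at T_ref."""
    return Ea / (R * T_ref ** 2)

Ea, T_ref = 80e3, 373.15   # hypothetical: 80 kJ/mol near 100 degrees C
c = c_from_Ea(Ea, T_ref)

# check: both models predict nearly the same rate ratio for a small temperature step
T = T_ref + 5
arrhenius_ratio = np.exp(-Ea / (R * T) + Ea / (R * T_ref))
exponential_ratio = np.exp(c * (T - T_ref))
rel_err = abs(arrhenius_ratio - exponential_ratio) / arrhenius_ratio
print(c, rel_err)
```

Near the reference temperature the two models agree to well under a percent, which is why the simpler exponential model serves as a practical stand-in for Arrhenius over narrow temperature ranges.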
Comparative Analyses of Creep Models of a Solid Propellant
NASA Astrophysics Data System (ADS)
Zhang, J. B.; Lu, B. J.; Gong, S. F.; Zhao, S. P.
2018-05-01
Creep experiments on samples of a solid propellant under five different stresses are carried out at 293.15 K and 323.15 K. To express the creep properties of this solid propellant, five viscoelastic models are considered: the three-parameter solid, three-parameter fluid, four-parameter solid, four-parameter fluid, and exponential models. Model parameters are determined for each stress by nonlinear least-squares fitting, and the fitted models are used to analyze the creep properties. The study shows that the four-parameter solid model best expresses the creep behaviour of the propellant samples. However, the three-parameter solid and exponential models cannot reproduce the initial value of the creep process well, while the modified four-parameter models are found to agree well with the acceleration characteristics of the creep process.
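A creep curve for a four-parameter solid can be sketched as below. Naming conventions for spring-dashpot networks vary across texts; here one common four-parameter solid (two Kelvin-Voigt elements in series) is assumed, with parameter values that are purely illustrative:

```python
import numpy as np

def creep_4p_solid(t, sigma, E1, eta1, E2, eta2):
    """Creep strain of two Kelvin-Voigt elements in series under constant
    stress sigma (one common 'four-parameter solid'; conventions vary)."""
    return sigma * ((1 - np.exp(-E1 * t / eta1)) / E1
                    + (1 - np.exp(-E2 * t / eta2)) / E2)

t = np.linspace(0, 1e4, 500)  # s
eps = creep_4p_solid(t, sigma=1.0, E1=50.0, eta1=1e3, E2=5.0, eta2=5e4)

creeping = bool(np.all(np.diff(eps) > 0))      # strain grows monotonically
bounded = bool(eps[-1] < 1.0 / 50.0 + 1.0 / 5.0)  # approaches sigma*(1/E1 + 1/E2)
print(creeping, bounded)
```

The two retardation times give a fast initial transient plus a slow approach to a finite strain, the solid-like behaviour that distinguishes this model from the four-parameter fluid, whose strain grows without bound.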
Zhou, Jingwen; Xu, Zhenghong; Chen, Shouwen
2013-04-01
The thuringiensin abiotic degradation processes in aqueous solution under different conditions, with a pH range of 5.0-9.0 and a temperature range of 10-40°C, were systematically investigated by an exponential decay model and a radial basis function (RBF) neural network model, respectively. The half-lives of thuringiensin calculated by the exponential decay model ranged from 2.72 d to 16.19 d under the different conditions mentioned above. Furthermore, an RBF model with an accuracy of 0.1 and a SPREAD value of 5 was employed to model the degradation processes. The results showed that the model could simulate and predict the degradation processes well. Both the half-lives and the prediction data showed that thuringiensin is an easily degradable antibiotic, which could be an important factor in the evaluation of its safety. Copyright © 2012 Elsevier Ltd. All rights reserved.
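The quoted half-lives follow from the exponential decay model C(t) = C0·exp(-k·t) via t½ = ln 2 / k. The rate constants below are back-calculated for illustration only:

```python
import numpy as np

def half_life(k):
    """Half-life of exponential (first-order) decay C(t) = C0*exp(-k*t)."""
    return np.log(2.0) / k

# rate constants chosen to reproduce the reported extremes (illustrative)
k_fast, k_slow = 0.2548, 0.0428   # d^-1
print(round(half_life(k_fast), 2), round(half_life(k_slow), 2))
```

A fitted k thus translates directly into a half-life, which is the quantity compared across the pH and temperature conditions.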
Zuthi, Mst Fazana Rahman; Guo, Wenshan; Ngo, Huu Hao; Nghiem, Duc Long; Hai, Faisal I; Xia, Siqing; Li, Jianxin; Li, Jixiang; Liu, Yi
2017-08-01
This study aimed to develop a practical semi-empirical mathematical model of membrane fouling that accounts for cake formation on the membrane and its pore blocking as the major processes of membrane fouling. In the developed model, the concentration of mixed liquor suspended solid is used as a lumped parameter to describe the formation of cake layer including the biofilm. The new model considers the combined effect of aeration and backwash on the foulants' detachment from the membrane. New exponential coefficients are also included in the model to describe the exponential increase of transmembrane pressure that typically occurs after the initial stage of an MBR operation. The model was validated using experimental data obtained from a lab-scale aerobic sponge-submerged membrane bioreactor (MBR), and the simulation of the model agreed well with the experimental findings. Copyright © 2017 Elsevier Ltd. All rights reserved.
Discharging dynamics in an electrolytic cell
NASA Astrophysics Data System (ADS)
Feicht, Sarah E.; Frankel, Alexandra E.; Khair, Aditya S.
2016-07-01
We analyze the dynamics of a discharging electrolytic cell comprised of a binary symmetric electrolyte between two planar, parallel blocking electrodes. When a voltage is initially applied, ions in the electrolyte migrate towards the electrodes, forming electrical double layers. After the system reaches steady state and the external current decays to zero, the applied voltage is switched off and the cell discharges, with the ions eventually returning to a uniform spatial concentration. At voltages on the order of the thermal voltage V_T = k_B T/q ≈ 25 mV, where k_B is Boltzmann's constant, T is temperature, and q is the charge of a proton, experiments on surfactant-doped nonpolar fluids observe that the temporal evolution of the external current during charging and discharging is not symmetric [V. Novotny and M. A. Hopper, J. Electrochem. Soc. 126, 925 (1979), 10.1149/1.2129195; P. Kornilovitch and Y. Jeon, J. Appl. Phys. 109, 064509 (2011), 10.1063/1.3554445]. In fact, at sufficiently large voltages (several V_T), the current during discharging is no longer monotonic: it displays a "reverse peak" before decaying in magnitude to zero. We analyze the dynamics of discharging by solving the Poisson-Nernst-Planck equations governing ion transport via asymptotic and numerical techniques in three regimes. First, in the "linear regime" when the applied voltage V is formally much less than V_T, the charging and discharging currents are antisymmetric in time; however, the potential and charge density profiles during charging and discharging are asymmetric. The current evolution is on the RC timescale of the cell, λ_D L/D, where L is the width of the cell, D is the diffusivity of ions, and λ_D is the Debye length. Second, in the (experimentally relevant) thin-double-layer limit ε = λ_D/L ≪ 1, there is a "weakly nonlinear" regime defined by V_T ≲ V ≲ V_T ln(1/ε), where the bulk salt concentration is uniform; thus the RC timescale of the evolution of the current magnitude persists.
However, the nonlinear, voltage-dependent capacitance of the double layer is responsible for a break in the temporal antisymmetry of the charging and discharging currents. Third, the reverse peak in the discharging current develops in a "strongly nonlinear" regime V ≳ V_T ln(1/ε), driven by neutral salt adsorption into the double layers and consequent bulk depletion during charging. The strongly nonlinear regime features current evolution over three timescales: the current decays in magnitude on the double-layer relaxation timescale, λ_D²/D; it then grows exponentially in time towards the reverse peak on the diffusion timescale, L²/D, indicating that the reverse peak is the result of fast diffusion of ions from the double layer to the bulk. Following the reverse peak, the current decays exponentially to zero on the RC timescale. Notably, the current at the reverse peak and the time of the reverse peak saturate at large voltages V ≫ V_T ln(1/ε). We provide semi-analytic expressions for the saturated reverse-peak time and current, which can be used to infer charge-carrier diffusivity and concentration from experiments.
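The three timescales invoked above (λ_D²/D, λ_D·L/D, and L²/D) can be computed for a representative cell. The concentration, cell width, and diffusivity below are assumed illustrative values, not those of the surfactant-doped fluids in the cited experiments:

```python
import numpy as np

kB, q, eps0 = 1.380649e-23, 1.602177e-19, 8.8541878e-12

def debye_length(n0, eps_r=80.0, T=298.0):
    """Debye screening length for a binary symmetric monovalent electrolyte."""
    return np.sqrt(eps_r * eps0 * kB * T / (2 * n0 * q ** 2))

n0 = 6.022e23                 # ions/m^3, i.e. a 1 mM monovalent salt (assumed)
lam_D = debye_length(n0)
L, D = 1e-4, 1e-9             # 100 um cell, typical ionic diffusivity (m^2/s)

tau_debye = lam_D ** 2 / D    # double-layer relaxation
tau_rc = lam_D * L / D        # RC charging/discharging timescale
tau_diff = L ** 2 / D         # bulk diffusion timescale
print(lam_D, tau_debye, tau_rc, tau_diff)
```

In the thin-double-layer limit these scales are widely separated (here roughly 10⁻⁷ s, 10⁻³ s, and 10 s), which is what allows the three regimes to be distinguished asymptotically.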
Sodium 22+ washout from cultured rat cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kino, M.; Nakamura, A.; Hopp, L.
1986-10-01
The washout of Na⁺ isotopes from tissues and cells is quite complex and not well defined. To further gain insight into this process, we have studied ²²Na⁺ washout from cultured Wistar rat skin fibroblasts and vascular smooth muscle cells (VSMCs). In these preparations, ²²Na⁺ washout is described by a general three-exponential function. The exponential factor of the fastest component (k1) and the initial exchange rate constant (kie) of cultured fibroblasts decrease in magnitude in response to incubation in K⁺-deficient medium or in the presence of ouabain and increase in magnitude when the cells are incubated in a Ca²⁺-deficient medium. As the magnitude of the kie declines (in the presence of ouabain) to the level of the exponential factor of the middle component (k2), ²²Na⁺ washout is adequately described by a two-exponential function. When the kie is further diminished (in the presence of both ouabain and phloretin) to the range of the exponential factor of the slowest component (k3), the washout of ²²Na⁺ is apparently monoexponential. Calculations of the cellular Na⁺ concentrations, based on the ²²Na⁺ activity in the cells at the initiation of the washout experiments and the medium specific activity, agree with atomic absorption spectrometry measurements of the cellular concentration of this ion. Thus, all three components of ²²Na⁺ washout from cultured rat cells are of cellular origin. Using the exponential parameters, compartmental analyses of two models (in parallel and in series) with three cellular Na⁺ pools were performed. The results indicate that, independent of the model chosen, the relative size of the largest Na⁺ pool is 92-93% in fibroblasts and approximately 96% in VSMCs. This pool is most likely to represent the cytosol.
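A three-exponential washout of this form, and the collapse toward fewer apparent components when the fast rate constant is suppressed, can be sketched as below. Amplitudes and rate constants are illustrative, not the measured fibroblast values:

```python
import numpy as np

def washout(t, A, k):
    """Multi-exponential tracer washout: sum of components A_i * exp(-k_i * t)."""
    return sum(a * np.exp(-ki * t) for a, ki in zip(A, k))

# illustrative amplitudes and rate constants (min^-1); the largest pool
# holds ~92% of cell Na+, echoing the compartmental-analysis result
A = np.array([0.05, 0.03, 0.92])
k = np.array([0.5, 0.05, 0.005])

t = np.linspace(0, 10, 50)
full = washout(t, A, k)
# with the fastest component suppressed (e.g. by an inhibitor), early decay slows
blocked = washout(t, A, k * np.array([0.1, 1.0, 1.0]))
print(full[5] < blocked[5])
```

When the fastest rate constant is pulled down toward k2, the curve becomes experimentally indistinguishable from a two-exponential, which is the behaviour reported under ouabain.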
Time prediction of failure a type of lamps by using general composite hazard rate model
NASA Astrophysics Data System (ADS)
Riaman; Lesmana, E.; Subartini, B.; Supian, S.
2018-03-01
This paper discusses estimation of a basic survival model to obtain the predicted mean failure time of a type of lamp. The estimation concerns a parametric model, the general composite hazard rate model. The base random-time model is the exponential distribution, which has a constant hazard function. We discuss an example of survival model estimation for a composite hazard function, using an exponential model as its basis. The model is estimated via its parameters, through construction of the survival function and the empirical cumulative distribution function. The fitted model is then used to predict the mean failure time for this type of lamp. The data are grouped into several intervals with the average failure value in each interval, and the mean failure time of the model is calculated for each interval; the p-value obtained from the test is 0.3296.
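With an exponential base model, the mean failure time equals the integral of the survival function, E[T] = ∫S(t)dt = 1/λ. A numeric sketch with a hypothetical rate:

```python
import numpy as np

def survival(t, lam):
    """Exponential survival function S(t) = exp(-lam*t); the hazard is the constant lam."""
    return np.exp(-lam * t)

lam = 1.0 / 1200.0                      # hypothetical rate: mean lamp life 1200 h
t = np.linspace(0.0, 12000.0, 120001)   # fine grid out to 10 mean lifetimes
dt = t[1] - t[0]
mean_life = np.sum(survival(t, lam)) * dt   # E[T] = integral of S(t) dt
print(mean_life)
```

The numeric integral reproduces 1/λ, so for a composite hazard the same integral of the constructed survival function yields the predicted mean failure time.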
NASA Astrophysics Data System (ADS)
Abas, Norzaida; Daud, Zalina M.; Yusof, Fadhilah
2014-11-01
A stochastic rainfall model is presented for the generation of hourly rainfall data in an urban area in Malaysia. In view of the high temporal and spatial variability of rainfall within the tropical rain belt, the Spatial-Temporal Neyman-Scott Rectangular Pulse model was used. The model, which is governed by the Neyman-Scott process, employs a reasonable number of parameters to represent the physical attributes of rainfall. A common approach is to attach each attribute to a mathematical distribution. With respect to rain cell intensity, this study proposes the use of a mixed exponential distribution. The performance of the proposed model was compared to a model that employs the Weibull distribution. Hourly and daily rainfall data from four stations in the Damansara River basin in Malaysia were used as input to the models, and simulations of hourly series were performed for an independent site within the basin. The performance of the models was assessed based on how closely the statistical characteristics of the simulated series resembled the statistics of the observed series. The findings obtained based on graphical representation revealed that the statistical characteristics of the simulated series for both models compared reasonably well with the observed series. However, a further assessment using the AIC, BIC and RMSE showed that the proposed model yields better results. The results of this study indicate that for tropical climates, the proposed model, using a mixed exponential distribution, is the best choice for generation of synthetic data for ungauged sites or for sites with insufficient data within the limit of the fitted region.
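Sampling rain-cell intensities from a mixed (two-component) exponential distribution is straightforward; the weight and component means below are arbitrary illustrations, not fitted Damansara basin parameters:

```python
import numpy as np

rng = np.random.default_rng(7)

def mixed_exponential(size, w=0.7, mean1=1.0, mean2=8.0):
    """Sample a two-component mixed exponential: with probability w draw from
    the mean1 component, otherwise from the mean2 component."""
    pick = rng.random(size) < w
    return np.where(pick,
                    rng.exponential(mean1, size),
                    rng.exponential(mean2, size))

x = mixed_exponential(100_000)
theory_mean = 0.7 * 1.0 + 0.3 * 8.0     # mixture mean = w*mean1 + (1-w)*mean2
cv = x.std() / x.mean()                  # mixture is over-dispersed: CV > 1
print(x.mean(), cv)
```

The coefficient of variation above 1 is the extra flexibility a mixed exponential offers over a single exponential for intermittent, bursty cell intensities.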
Mathematical methods in biological dosimetry: the 1996 Iranian accident.
Voisin, P; Assaei, R G; Heidary, A; Varzegar, R; Zakeri, F; Durand, V; Sorokine-Durm, I
2000-11-01
To report 18 months of cytogenetic follow-up of an Iranian worker accidentally overexposed to 192Ir, together with mathematical extrapolation and comparison with clinical data. Unstable chromosome aberrations were measured using conventional cytogenetic tests by French and Iranian biological dosimetry laboratories on five occasions after the exposure. The decrease in dicentrics over time was analysed mathematically. In addition, Dolphin and Qdr extrapolations were applied to the data to check the exposure estimates. FISH determination of translocation yields was performed twice by the French laboratory and the results compared with the Dolphin and Qdr corrected values. Dose estimates based on dicentrics decreased from 3.1 +/- 0.4 Gy at 5 days after the accident to 0.8 +/- 0.2 Gy at 529 days. This decrease could be fitted by a double-exponential regression with an inflexion point, after about 40 days, between the rapid and slow phases of dicentric loss. Dose estimates of 3.4 +/- 0.4 Gy for the Qdr model and 3.6 +/- 0.5 Gy for the Dolphin model were calculated during the post-exposure period and were remarkably stable. FISH translocation data at 26 and 61 days appeared consistent with the Dolphin and Qdr estimates. Dose correction by the Qdr and Dolphin models and translocation scoring appeared consistent with the clinical data and provided better information about the radiation injury than did crude estimates from dicentric scoring alone. Estimation by the Dolphin model of the irradiated fraction of the body seemed unreliable: it correlated better with the fraction of originally irradiated lymphocytes.
Leachate flow around a well in MSW landfill: Analysis of field tests using Richards model.
Slimani, R; Oxarango, L; Sbartai, B; Tinet, A-J; Olivier, F; Dias, D
2017-05-01
During the lifespan of a Municipal Solid Waste landfill, its leachate drainage system may become clogged. Then, as a consequence of rainfall, leachate generation and possibly leachate injection, the moisture content in the landfill increases to the point that a leachate mound can form, and pumping the leachate becomes necessary. This paper presents an original analysis of leachate pumping and injection in an instrumented well. The water table level around the well is monitored by nine piezometers, which allow the leachate flow behaviour to be captured. A numerical model based on the Richards equation and an exponential relationship between saturated hydraulic conductivity and depth is used to analyze the landfill response to pumping and injection. Decreasing permeability with depth appears to have a major influence on the behaviour of the leachate flow. It could have a drastic negative impact on pumping efficiency, with a maximum quasi-stationary pumping rate limited to approximately 1 m³/h for the tested well and a radius of influence of less than 20 m. The numerical model provides a reasonable description of both pumping and injection tests. However, an anomalous behaviour is observed at the transition between the pumping and recovery phases. This could be due to a limitation of the Richards model in that it neglects the gas phase and other double-porosity heterogeneous effects.
NASA Astrophysics Data System (ADS)
Małoszewski, P.; Zuber, A.
1982-06-01
Three new lumped-parameter models have been developed for the interpretation of environmental radioisotope data in groundwater systems. Two of these models combine other simpler models, i.e. the piston flow model is combined either with the exponential model (exponential distribution of transit times) or with the linear model (linear distribution of transit times). The third model is based on a new solution to the dispersion equation which represents real systems more adequately than the conventional solution generally applied so far. The applicability of the models was tested by the reinterpretation of several known case studies (Modry Dul, Cheju Island, Rasche Spring and Grafendorf). It has been shown that two of these models, i.e. the exponential-piston flow model and the dispersive model, give better fits than other simpler models. Thus, the obtained values of turnover times are more reliable, whereas the additional fitting parameter gives some information about the structure of the system. In the examples considered, in spite of a lower number of fitting parameters, the new models gave practically the same fits as the multiparameter finite-state mixing-cell models. It has been shown that in the case of a constant tracer input, prior physical knowledge of the groundwater system is indispensable for determining the turnover time. The piston flow model commonly used for age determinations by the 14C method is an approximation applicable only in cases of low dispersion. In some cases the stable-isotope method aids in the interpretation of systems containing mixed waters of different ages. However, when the 14C method is used for mixed-water systems, a serious error may arise from neglecting the different bicarbonate contents of the individual water components.
NASA Astrophysics Data System (ADS)
Kulakhmetov, Marat; Gallis, Michael; Alexeenko, Alina
2016-05-01
Quasi-classical trajectory (QCT) calculations are used to study state-specific ro-vibrational energy exchange and dissociation in the O2 + O system. Atom-diatom collisions with energy between 0.1 and 20 eV are calculated with a double many body expansion potential energy surface by Varandas and Pais [Mol. Phys. 65, 843 (1988)]. Inelastic collisions favor mono-quantum vibrational transitions at translational energies above 1.3 eV although multi-quantum transitions are also important. Post-collision vibrational favoring decreases first exponentially and then linearly as Δv increases. Vibrationally elastic collisions (Δv = 0) favor small ΔJ transitions while vibrationally inelastic collisions have equilibrium post-collision rotational distributions. Dissociation exhibits both vibrational and rotational favoring. New vibrational-translational (VT), vibrational-rotational-translational (VRT) energy exchange, and dissociation models are developed based on QCT observations and maximum entropy considerations. Full set of parameters for state-to-state modeling of oxygen is presented. The VT energy exchange model describes 22 000 state-to-state vibrational cross sections using 11 parameters and reproduces vibrational relaxation rates within 30% in the 2500-20 000 K temperature range. The VRT model captures 80 × 106 state-to-state ro-vibrational cross sections using 19 parameters and reproduces vibrational relaxation rates within 60% in the 5000-15 000 K temperature range. The developed dissociation model reproduces state-specific and equilibrium dissociation rates within 25% using just 48 parameters. The maximum entropy framework makes it feasible to upscale ab initio simulation to full nonequilibrium flow calculations.
Evidence for a scale-limited low-frequency earthquake source process
NASA Astrophysics Data System (ADS)
Chestler, S. R.; Creager, K. C.
2017-04-01
We calculate the seismic moments for 34,264 low-frequency earthquakes (LFEs) beneath the Olympic Peninsula, Washington. LFE moments range from 1.4 × 1010 to 1.9 × 1012 N m (Mw = 0.7-2.1). While regular earthquakes follow a power law moment-frequency distribution with a b value near 1 (the number of events increases by a factor of 10 for each unit increase in Mw), we find that while for large LFEs the b value is 6, for small LFEs it is <1. The magnitude-frequency distribution for all LFEs is best fit by an exponential distribution with a mean seismic moment (characteristic moment) of 2.0 × 1011 N m. The moment-frequency distributions for each of the 43 LFE families, or spots on the plate interface where LFEs repeat, can also be fit by exponential distributions. An exponential moment-frequency distribution implies a scale-limited source process. We consider two end-member models where LFE moment is limited by (1) the amount of slip or (2) slip area. We favor the area-limited model. Based on the observed exponential distribution of LFE moment and geodetically observed total slip, we estimate that the total area that slips within an LFE family has a diameter of 300 m. Assuming an area-limited model, we estimate the slips, subpatch diameters, stress drops, and slip rates for LFEs during episodic tremor and slip events. We allow for LFEs to rupture smaller subpatches within the LFE family patch. Models with 1-10 subpatches produce slips of 0.1-1 mm, subpatch diameters of 80-275 m, and stress drops of 30-1000 kPa. While one subpatch is often assumed, we believe 3-10 subpatches are more likely.
Water diffusion in silicate glasses: the effect of glass structure
NASA Astrophysics Data System (ADS)
Kuroda, M.; Tachibana, S.
2016-12-01
Water diffusion in silicate melts (glasses) is one of the main controlling factors of magmatism in a volcanic system. Water diffusivity in silicate glasses depends on its own concentration, but the mechanism causing this dependence has not been fully understood. In order to construct a general model for water diffusion in various silicate glasses, we performed water diffusion experiments in silica glass and proposed a new water diffusion model [Kuroda et al., 2015]. In the model, water diffusivity is controlled by the concentration of both the main diffusing species (i.e. molecular water) and the diffusion pathways, which are determined by the concentrations of hydroxyl groups and network-modifier cations. The model explains well the water diffusivity in various silicate glasses from silica glass to basalt glass. However, the pre-exponential factors of water diffusivity in various glasses vary by five orders of magnitude, although the pre-exponential factor should ideally represent the jump frequency and jump distance of molecular water and show a much smaller variation. Here, we attribute the large variation of pre-exponential factors to a glass-structure dependence of the activation energy for molecular water diffusion. It is known that the activation energy depends on the water concentration [Nowak and Behrens, 1997]. The concentration of hydroxyls, which cut the Si-O-Si network in the glass structure, increases with water concentration, lowering the activation energy for water diffusion, probably because of the more fragmented structure. Network-modifier cations are likely to play the same role as water. Taking the effect of glass structure into account, we find that the variation of pre-exponential factors of water diffusivity in silicate glasses can be much smaller than five orders of magnitude, implying that the diffusion of molecular water in silicate glasses is controlled by the same atomic process.
Mathematical modeling of drying of pretreated and untreated pumpkin.
Tunde-Akintunde, T Y; Ogunlakin, G O
2013-08-01
In this study, drying characteristics of pretreated and untreated pumpkin were examined in a hot-air dryer at air temperatures within a range of 40-80 °C and a constant air velocity of 1.5 m/s. The drying was observed to be in the falling-rate period, and thus liquid diffusion is the main mechanism of moisture movement from the internal regions to the product surface. The experimental drying data for the pumpkin fruits were fitted to the Exponential, General exponential, Logarithmic, Page, Midilli-Kucuk and Parabolic models, and the statistical validity of the tested models was determined by non-linear regression analysis. The Parabolic model had the highest R(2) and the lowest χ(2) and RMSE values, indicating that the Parabolic model is appropriate to describe the dehydration behavior of pumpkin.
Cosmological models constructed by van der Waals fluid approximation and volumetric expansion
NASA Astrophysics Data System (ADS)
Samanta, G. C.; Myrzakulov, R.
The universe is modeled with a van der Waals fluid approximation, in which the van der Waals equation of state contains a single parameter ωv. Analytical solutions to the Einstein field equations are obtained by assuming that the mean scale factor of the metric follows volumetric exponential and power-law expansions. The model describes a rapid expansion in which the acceleration grows exponentially and the van der Waals fluid behaves like an inflationary fluid in the initial epoch of the universe. The model also describes that, as time proceeds, the acceleration remains positive but decreases to zero, and the van der Waals fluid approximation behaves like the present accelerated phase of the universe. Finally, it is observed that the model contains a type-III future singularity for volumetric power-law expansion.
Ghatage, Dhairyasheel; Chatterji, Apratim
2013-10-01
We introduce a method to obtain steady-state uniaxial exponential-stretching flow of a fluid (akin to extensional flow) in the incompressible limit, which enables us to study the response of suspended macromolecules to the flow by computer simulations. The flow field in this flow is defined by v(x) = εx, where v(x) is the velocity of the fluid and ε is the stretch flow gradient. To eliminate the effect of confining boundaries, we produce the flow in a channel of uniform square cross section with periodic boundary conditions in directions perpendicular to the flow, but simultaneously maintain uniform density of fluid along the length of the tube. In experiments a perfect elongational flow is obtained only along the axis of symmetry in a four-roll geometry or a filament-stretching rheometer. We can reproduce flow conditions very similar to extensional flow near the axis of symmetry by exponential-stretching flow; we do this by adding the right amounts of fluid along the length of the flow in our simulations. The fluid particles added along the length of the tube are the same fluid particles which exit the channel due to the flow; thus mass conservation is maintained in our model by default. We also suggest a scheme for possible realization of exponential-stretching flow in experiments. To establish our method as a useful tool to study various soft matter systems in extensional flow, we embed (i) spherical colloids with excluded volume interactions (modeled by the Weeks-Chandler potential) as well as (ii) a bead-spring model of star polymers in the fluid to study their responses to the exponential-stretched flow and show that the responses of macromolecules in the two flows are very similar. We demonstrate that the variation of number density of the suspended colloids along the direction of flow is in tune with our expectations. 
We also conclude from our study of the deformation of star polymers with different numbers of arms f that the critical flow gradient ε(c) at which the star undergoes the coil-to-stretch transition is independent of f for f = 2,5,10, and 20.
Evidence of the Exponential Decay Emission in the Swift Gamma-ray Bursts
NASA Technical Reports Server (NTRS)
Sakamoto, T.; Sato, G.; Hill, J.E.; Krimm, H.A.; Yamazaki, R.; Takami, K.; Swindell, S.; Osborne, J.P.
2007-01-01
We present a systematic study of the steep decay emission of gamma-ray bursts (GRBs) observed by the Swift X-Ray Telescope (XRT). In contrast to the analysis in recent literature, instead of extrapolating the Burst Alert Telescope (BAT) data down into the XRT energy range, we extrapolated the XRT data up to the BAT energy range, 15-25 keV, to produce a composite BAT-XRT light curve. Based on our composite light-curve fitting, we have confirmed the existence of an exponential decay component which smoothly connects the BAT prompt data to the XRT steep decay for several GRBs. We also find that the XRT steep decay for some of the bursts can be well fitted by a combination of a power law with an exponential decay model. We discuss the possibility that this exponential component is emission from an external shock and a sign of the deceleration of the outflow during the prompt phase.
Fast and accurate fitting and filtering of noisy exponentials in Legendre space.
Bao, Guobin; Schild, Detlev
2014-01-01
The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the mean squares sum of the differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on the average, more precise compared to least-squares-fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares-fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic for conventional lowpass filters.
Quantum Loop Expansion to High Orders, Extended Borel Summation, and Comparison with Exact Results
NASA Astrophysics Data System (ADS)
Noreen, Amna; Olaussen, Kåre
2013-07-01
We compare predictions of the quantum loop expansion to (essentially) infinite orders with (essentially) exact results in a simple quantum mechanical model. We find that there are exponentially small corrections to the loop expansion, which cannot be explained by any obvious “instanton”-type corrections. It is not the mathematical occurrence of exponential corrections but their seeming lack of any physical origin which we find surprising and puzzling.
Non-exponential kinetics of unfolding under a constant force.
Bell, Samuel; Terentjev, Eugene M
2016-11-14
We examine the population dynamics of naturally folded globular polymers, with a super-hydrophobic "core" inserted at a prescribed point in the polymer chain, unfolding under an application of external force, as in AFM force-clamp spectroscopy. This acts as a crude model for a large class of folded biomolecules with hydrophobic or hydrogen-bonded cores. We find that the introduction of super-hydrophobic units leads to a stochastic variation in the unfolding rate, even when the positions of the added monomers are fixed. This leads to the average non-exponential population dynamics, which is consistent with a variety of experimental data and does not require any intrinsic quenched disorder that was traditionally thought to be at the origin of non-exponential relaxation laws.
NASA Technical Reports Server (NTRS)
Koontz, Steve; Atwell, William; Reddell, Brandon; Rojdev, Kristina
2010-01-01
Analysis of both satellite and surface neutron monitor data demonstrate that the widely utilized Exponential model of solar particle event (SPE) proton kinetic energy spectra can seriously underestimate SPE proton flux, especially at the highest kinetic energies. The more recently developed Band model produces better agreement with neutron monitor data ground level events (GLEs) and is believed to be considerably more accurate at high kinetic energies. Here, we report the results of modeling and simulation studies in which the radiation transport code FLUKA (FLUktuierende KAskade) is used to determine the changes in total ionizing dose (TID) and single-event environments (SEE) behind aluminum, polyethylene, carbon, and titanium shielding masses when the assumed form (i. e., Band or Exponential) of the solar particle event (SPE) kinetic energy spectra is changed. FLUKA simulations have fully three dimensions with an isotropic particle flux incident on a concentric spherical shell shielding mass and detector structure. The effects are reported for both energetic primary protons penetrating the shield mass and secondary particle showers caused by energetic primary protons colliding with shielding mass nuclei. Our results, in agreement with previous studies, show that use of the Exponential form of the event
Arima model and exponential smoothing method: A comparison
NASA Astrophysics Data System (ADS)
Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri
2013-04-01
This study shows the comparison between the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making predictions. The comparison focuses on the ability of both methods to make forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, three different time series are used in the comparison process: The Price of Crude Palm Oil (RM/tonne), Exchange Rates of Ringgit Malaysia (RM) against the Great Britain Pound (GBP), and The Price of SMR 20 Rubber Type (cents/kg). The forecasting accuracy of each model is then measured by examining the prediction error using the Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model can produce a better prediction for long-term forecasting with limited data sources, but cannot produce a better prediction for a time series with a narrow range from one point to another, as in the Exchange Rates series. On the contrary, the Exponential Smoothing Method produces better forecasts for the Exchange Rates, which have a narrow range from one point to another in the time series, while it cannot produce a better prediction for a longer forecasting period.
A Parametric Study of Fine-scale Turbulence Mixing Noise
NASA Technical Reports Server (NTRS)
Khavaran, Abbas; Bridges, James; Freund, Jonathan B.
2002-01-01
The present paper is a study of aerodynamic noise spectra from model functions that describe the source. The study is motivated by the need to improve the spectral shape of the MGBK jet noise prediction methodology at high frequency. The predicted spectral shape usually appears less broadband than measurements and faster decaying at high frequency. Theoretical representation of the source is based on Lilley's equation. Numerical simulations of high-speed subsonic jets as well as some recent turbulence measurements reveal a number of interesting statistical properties of turbulence correlation functions that may have a bearing on radiated noise. These studies indicate that an exponential spatial function may be a more appropriate representation of a two-point correlation compared to its Gaussian counterpart. The effect of source non-compactness on spectral shape is discussed. It is shown that source non-compactness could well be the differentiating factor between the Gaussian and exponential model functions. In particular, the fall-off of the noise spectra at high frequency is studied and it is shown that a non-compact source with an exponential model function results in a broader spectrum and better agreement with data. An alternate source model that represents the source as a covariance of the convective derivative of fine-scale turbulence kinetic energy is also examined.
The use of models by ecologists and environmental managers, to inform environmental management and decision-making, has grown exponentially in the past 50 years. Due to logistical, economic and theoretical benefits, model users are frequently transferring preexisting models to n...
Deng, Jie; Fishbein, Mark H; Rigsby, Cynthia K; Zhang, Gang; Schoeneman, Samantha E; Donaldson, James S
2014-11-01
Non-alcoholic fatty liver disease (NAFLD) is the most common cause of chronic liver disease in children. The gold standard for diagnosis is liver biopsy. MRI is a non-invasive imaging method to provide quantitative measurement of hepatic fat content. The methodology is particularly appealing for the pediatric population because of its rapidity and radiation-free imaging techniques. To develop a multi-point Dixon MRI method with multi-interference models (multi-fat-peak modeling and bi-exponential T2* correction) for accurate hepatic fat fraction (FF) and T2* measurements in pediatric patients with NAFLD. A phantom study was first performed to validate the accuracy of the MRI fat fraction measurement by comparing it with the chemical fat composition of the ex-vivo pork liver-fat homogenate. The most accurate model determined from the phantom study was used for fat fraction and T2* measurements in 52 children and young adults referred from the pediatric hepatology clinic with suspected or identified NAFLD. Separate T2* values of water (T2*W) and fat (T2*F) components derived from the bi-exponential fitting were evaluated and plotted as a function of fat fraction. In ten patients undergoing liver biopsy, we compared histological analysis of liver fat fraction with MRI fat fraction. In the phantom study the 6-point Dixon with 5-fat-peak, bi-exponential T2* modeling demonstrated the best precision and accuracy in fat fraction measurements compared with other methods. This model was further calibrated with chemical fat fraction and applied in patients, where similar patterns were observed as in the phantom study that conventional 2-point and 3-point Dixon methods underestimated fat fraction compared to the calibrated 6-point 5-fat-peak bi-exponential model (P < 0.0001). With increasing fat fraction, T2*W (27.9 ± 3.5 ms) decreased, whereas T2*F (20.3 ± 5.5 ms) increased; and T2*W and T2*F became increasingly more similar when fat fraction was higher than 15-20%. 
Histological fat fraction measurements in ten patients were highly correlated with calibrated MRI fat fraction measurements (Pearson correlation coefficient r = 0.90 with P = 0.0004). Liver MRI using multi-point Dixon with multi-fat-peak and bi-exponential T2* modeling provided accurate fat quantification in children and young adults with non-alcoholic fatty liver disease and may be used to screen at-risk or affected individuals and to monitor disease progress noninvasively.
Instantaneous and dynamical decoherence
NASA Astrophysics Data System (ADS)
Polonyi, Janos
2018-04-01
Two manifestations of decoherence, called instantaneous and dynamical, are investigated. The former reflects the suppression of the interference between the components of the current state while the latter reflects that within the initial state. These types of decoherence are computed in the case of the Brownian motion and the harmonic and anharmonic oscillators within the semiclassical approximation. A remarkable phenomenon, namely the opposite orientation of the time arrow of the dynamical variables compared to that of the quantum fluctuations generates a double exponential time dependence of the dynamical decoherence in the presence of a harmonic force. For the weakly anharmonic oscillator the dynamical decoherence is found to depend in a singular way on the amount of the anharmonicity.
NASA Astrophysics Data System (ADS)
Baldovin, F.; Robledo, A.
2002-10-01
We uncover the dynamics at the chaos threshold μ∞ of the logistic map and find that it consists of trajectories made of intertwined power laws that reproduce the entire period-doubling cascade that occurs for μ<μ∞. We corroborate this structure analytically via the Feigenbaum renormalization-group (RG) transformation and find that the sensitivity to initial conditions has precisely the form of a q exponential, of which we determine the q index and the q-generalized Lyapunov coefficient λq. Our results are an unequivocal validation of the applicability of the nonextensive generalization of Boltzmann-Gibbs statistical mechanics to critical points of nonlinear maps.
Ouyang, Wenjun; Subotnik, Joseph E
2017-05-07
Using the Anderson-Holstein model, we investigate charge transfer dynamics between a molecule and a metal surface for two extreme cases. (i) With a large barrier, we show that the dynamics follow a single exponential decay as expected; (ii) without any barrier, we show that the dynamics are more complicated. On the one hand, if the metal-molecule coupling is small, single exponential dynamics persist. On the other hand, when the coupling between the metal and the molecule is large, the dynamics follow a biexponential decay. We analyze the dynamics using the Smoluchowski equation, develop a simple model, and explore the consequences of biexponential dynamics for a hypothetical cyclic voltammetry experiment.
On the performance of exponential integrators for problems in magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Einkemmer, Lukas; Tokman, Mayya; Loffeld, John
2017-02-01
Exponential integrators have been introduced as an efficient alternative to explicit and implicit methods for integrating large stiff systems of differential equations. Over the past decades these methods have been studied theoretically and their performance evaluated on a range of test problems. While the results of these investigations show that exponential integrators can provide significant computational savings, research on validating this hypothesis for large-scale systems and on understanding which classes of problems can particularly benefit from the new techniques is in its initial stages. Resistive magnetohydrodynamic (MHD) modeling is widely used in studying the large-scale behavior of laboratory and astrophysical plasmas. In many problems, numerical solution of the MHD equations is a challenging task due to the temporal stiffness of this system in the parameter regimes of interest. In this paper we evaluate the performance of exponential integrators on large MHD problems and compare them to a state-of-the-art implicit time integrator. Both the variable and constant time step exponential methods of EPIRK type are used to simulate magnetic reconnection and the Kelvin-Helmholtz instability in plasma. Performance of these methods, which are part of the EPIC software package, is compared to the variable time step, variable order BDF scheme included in the CVODE (part of SUNDIALS) library. We study performance of the methods on parallel architectures and with respect to the magnitudes of important parameters such as the Reynolds, Lundquist, and Prandtl numbers. We find that the exponential integrators provide superior or equal performance in most circumstances and conclude that further development of exponential methods for MHD problems is warranted and can lead to significant computational advantages for large-scale stiff systems of differential equations such as MHD.
NASA Astrophysics Data System (ADS)
Schneider, Markus P. A.
This dissertation contributes to two areas in economics: the understanding of the distribution of earned income and Bayesian analysis of distributional data. Recently, physicists claimed that the distribution of earned income is exponential (see Yakovenko, 2009). The first chapter explores the perspective that the economy is a statistical mechanical system, and the implication for labor market outcomes is considered critically. The robustness of the empirical results that led to the physicists' claims, the significance of the exponential distribution in statistical mechanics, and the case for a conservation law in economics are discussed. The conclusion reached is that the physicists' conception of the economy is too narrow even within their chosen framework, but that their overall approach is insightful. The dual labor market theory of segmented labor markets is invoked to understand why the observed distribution may be a mixture of distributional components, corresponding to the different generating mechanisms described in Reich et al. (1973). The application of informational entropy in chapter II connects this work to Bayesian analysis and maximum entropy econometrics. The analysis follows E. T. Jaynes's treatment of Wolf's dice data, but is applied to the distribution of earned income based on CPS data. The results are calibrated to account for rounded survey responses using a simple simulation, and respond to the graphical analyses by physicists. The results indicate that neither the income distribution of all respondents nor that of the subpopulation used by physicists appears to be exponential. The empirics do support the claim that a mixture with exponential and log-normal distributional components fits the data. In the final chapter, a log-linear model is used to fit the exponential to the earned income distribution.
Separating the CPS data by gender and marital status reveals that the exponential is only an appropriate model for a limited number of subpopulations, namely the never married and women. The estimated parameter for never-married men's incomes is significantly different from the parameter estimated for never-married women, implying that either the combined distribution is not exponential or that the individual distributions are not exponential. However, it substantiates the existence of a persistent gender income gap among the never-married. References: Reich, M., D. M. Gordon, and R. C. Edwards (1973). A Theory of Labor Market Segmentation. Quarterly Journal of Economics 63, 359-365. Yakovenko, V. M. (2009). Econophysics, Statistical Mechanics Approach to. In R. A. Meyers (Ed.), Encyclopedia of Complexity and System Science. Springer.
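The exact log-linear specification used in the dissertation is not given in the abstract; a minimal sketch of one common variant exploits the fact that for an exponential distribution log P(X > x) = -x/T is linear in x, so the scale T can be recovered by least squares on the empirical complementary CDF (the sample and the scale T = 40000 below are invented for illustration):

```python
import math, random

def fit_exponential_loglinear(data):
    """Estimate the scale T of an exponential distribution from the
    log-linear relation log P(X > x) = -x/T, via ordinary least squares
    on (income, log empirical CCDF) points."""
    xs = sorted(data)
    n = len(xs)
    # drop the largest observation, whose empirical CCDF would be 1/n -> 0
    pts = [(x, math.log(1.0 - i / n)) for i, x in enumerate(xs[:-1])]
    mx = sum(x for x, _ in pts) / len(pts)
    my = sum(y for _, y in pts) / len(pts)
    slope = (sum((x - mx) * (y - my) for x, y in pts)
             / sum((x - mx) ** 2 for x, _ in pts))
    return -1.0 / slope

random.seed(1)
T_true = 40000.0  # hypothetical "temperature" (mean income), not an estimate from CPS
sample = [random.expovariate(1.0 / T_true) for _ in range(5000)]
T_hat = fit_exponential_loglinear(sample)
print(T_hat)
```

On genuinely exponential data the fitted T lands near the true scale; on a mixture with a log-normal component the same plot bends away from a straight line, which is the kind of diagnostic the dissertation's graphical analyses address.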
Dehghani, Nima; Hatsopoulos, Nicholas G.; Haga, Zach D.; Parker, Rebecca A.; Greger, Bradley; Halgren, Eric; Cash, Sydney S.; Destexhe, Alain
2012-01-01
Self-organized critical states are found in many natural systems, from earthquakes to forest fires, and they have also been observed in neural systems, particularly in neuronal cultures. However, the presence of critical states in the awake brain remains controversial. Here, we compared avalanche analyses performed on different in vivo preparations during wakefulness, slow-wave sleep, and REM sleep, using high-density electrode arrays in cat motor cortex (96 electrodes), monkey motor and premotor cortex, and human temporal cortex (96 electrodes) in epileptic patients. In neuronal avalanches defined from units (up to 160 single units), the size of avalanches never clearly scaled as a power law, but rather scaled exponentially or displayed intermediate scaling. We also analyzed the dynamics of local field potentials (LFPs) and in particular LFP negative peaks (nLFPs) among the different electrodes (up to 96 sites in temporal cortex or up to 128 sites in adjacent motor and premotor cortices). In this case, the avalanches defined from nLFPs displayed power-law scaling in double-logarithmic representations, as reported previously in monkey. However, avalanches defined from positive LFP (pLFP) peaks, which are less directly related to neuronal firing, also displayed apparent power-law scaling. Closer examination of this scaling using the more reliable cumulative distribution function (CDF) and other rigorous statistical measures did not confirm power-law scaling. The same pattern was seen for cats, monkey, and human, as well as for different brain states of wakefulness and sleep. We also tested alternative distributions. Multiple exponential fitting yielded optimal fits of the avalanche dynamics with bi-exponential distributions. Collectively, these results show no clear evidence for power-law scaling or self-organized critical states in the awake and sleeping brain of mammals, from cat to man. PMID:22934053
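The statistical machinery behind such comparisons can be sketched in miniature: fit both an exponential and a power law to a set of avalanche sizes by maximum likelihood and compare log-likelihoods (a toy version of the standard Clauset-style comparison; the cutoff, exponents, and synthetic samples below are invented, not the paper's data):

```python
import math, random

S_MIN = 1.0  # lower cutoff for avalanche sizes

def loglik_exponential(sizes):
    """Max log-likelihood of a shifted exponential p(s) = lam*exp(-lam*(s - S_MIN))."""
    lam = len(sizes) / sum(s - S_MIN for s in sizes)
    return sum(math.log(lam) - lam * (s - S_MIN) for s in sizes)

def loglik_powerlaw(sizes):
    """Max log-likelihood of a continuous power law
    p(s) = ((alpha - 1)/S_MIN) * (s/S_MIN)**(-alpha), alpha by MLE."""
    n = len(sizes)
    alpha = 1.0 + n / sum(math.log(s / S_MIN) for s in sizes)
    return sum(math.log((alpha - 1.0) / S_MIN) - alpha * math.log(s / S_MIN)
               for s in sizes)

rng = random.Random(7)
exp_sizes = [S_MIN + rng.expovariate(1.0) for _ in range(2000)]
pl_sizes = [S_MIN * rng.random() ** (-1.0 / 1.5) for _ in range(2000)]  # alpha = 2.5

# The higher log-likelihood favors that distributional form.
print(loglik_exponential(exp_sizes) - loglik_powerlaw(exp_sizes))
print(loglik_powerlaw(pl_sizes) - loglik_exponential(pl_sizes))
```

Apparent straightness on a double-logarithmic plot is a weak criterion; likelihood-based comparisons of this kind, applied to the CDF rather than binned histograms, are what distinguish genuine power laws from exponential or bi-exponential alternatives.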
Effective equilibrium picture in the xy model with exponentially correlated noise
NASA Astrophysics Data System (ADS)
Paoluzzi, Matteo; Marconi, Umberto Marini Bettolo; Maggi, Claudio
2018-02-01
We study the effect of exponentially correlated noise on the xy model in the limit of small correlation time, discussing the order-disorder transition in the mean field and the topological transition in two dimensions. We map the steady states of the nonequilibrium dynamics into an effective equilibrium theory. In the mean field, the critical temperature increases with the noise correlation time τ, indicating that memory effects promote ordering. This finding is confirmed by numerical simulations. The topological transition temperature in two dimensions remains untouched. However, finite-size effects induce a crossover in the vortex proliferation that is confirmed by numerical simulations.
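Exponentially correlated noise of this kind is typically realized as an Ornstein-Uhlenbeck process; a minimal generator (with illustrative parameters, not those of the paper) is:

```python
import math, random

def ou_noise(n_steps, dt, tau, d, seed=0):
    """Sample a stationary Ornstein-Uhlenbeck process whose
    autocorrelation is (D/tau) * exp(-|t - t'| / tau)."""
    rng = random.Random(seed)
    a = math.exp(-dt / tau)            # one-step memory factor
    std = math.sqrt(d / tau)           # stationary standard deviation
    eta = std * rng.gauss(0.0, 1.0)    # start from the stationary law
    out = []
    for _ in range(n_steps):
        out.append(eta)
        eta = a * eta + std * math.sqrt(1.0 - a * a) * rng.gauss(0.0, 1.0)
    return out

def autocorr(x, lag):
    m = sum(x) / len(x)
    var = sum((v - m) ** 2 for v in x) / len(x)
    cov = sum((x[i] - m) * (x[i + lag] - m) for i in range(len(x) - lag)) / (len(x) - lag)
    return cov / var

tau, dt = 1.0, 0.1
noise = ou_noise(200_000, dt, tau, d=1.0)
print(autocorr(noise, 10))  # should sit near exp(-10*dt/tau) = exp(-1)
```

The exact one-step update (rather than naive Euler) keeps the sampled process stationary for any dt; the small-correlation-time limit studied in the paper corresponds to tau much smaller than the model's intrinsic timescales.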
Ultra-large distance modification of gravity from Lorentz symmetry breaking at the Planck scale
NASA Astrophysics Data System (ADS)
Gorbunov, Dmitry S.; Sibiryakov, Sergei M.
2005-09-01
We present an extension of the Randall-Sundrum model in which, due to spontaneous Lorentz symmetry breaking, the graviton mixes with bulk vector fields and becomes quasilocalized. The masses of the KK modes comprising the four-dimensional graviton are naturally exponentially small. This allows the Lorentz-breaking scale to be pushed as high as a few tenths of the Planck mass. The model contains no ghosts or tachyons and does not exhibit the van Dam-Veltman-Zakharov discontinuity. The gravitational attraction between static point masses becomes gradually weaker with increasing separation and is replaced by repulsion (antigravity) at exponentially large distances.
Proportional Feedback Control of Energy Intake During Obesity Pharmacotherapy.
Hall, Kevin D; Sanghvi, Arjun; Göbel, Britta
2017-12-01
Obesity pharmacotherapies result in an exponential time course for energy intake whereby large early decreases dissipate over time. This pattern of declining drug efficacy to decrease energy intake results in a weight loss plateau within approximately 1 year. This study aimed to elucidate the physiology underlying the exponential decay of drug effects on energy intake. Placebo-subtracted energy intake time courses were examined during long-term obesity pharmacotherapy trials for 14 different drugs or drug combinations within the theoretical framework of a proportional feedback control system regulating human body weight. Assuming each obesity drug had a relatively constant effect on average energy intake and did not affect other model parameters, our model correctly predicted that long-term placebo-subtracted energy intake was linearly related to early reductions in energy intake according to a prespecified equation with no free parameters. The simple model explained about 70% of the variance between drug studies with respect to the long-term effects on energy intake, although a significant proportional bias was evident. The exponential decay over time of obesity pharmacotherapies to suppress energy intake can be interpreted as a relatively constant effect of each drug superimposed on a physiological feedback control system regulating body weight. © 2017 The Obesity Society.
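The paper's fitted parameter values are not given in the abstract, but the qualitative mechanism can be sketched with a generic linear proportional-feedback model: a constant drug effect on intake superimposed on an appetite feedback proportional to lost weight, which yields an exponentially decaying placebo-subtracted intake and a weight plateau (all coefficients below are assumptions for illustration, not the authors' estimates):

```python
# Minimal proportional-feedback sketch. State x = weight change from baseline (kg):
#   RHO * dx/dt = -E - K*x - EPS*x
# where E is the constant drug effect on intake (kcal/d), -K*x is the appetite
# feedback (intake rises as weight falls), and EPS*x is the expenditure change.
RHO = 7700.0   # kcal per kg of body-weight change (assumed energy density)
K = 95.0       # kcal/d of extra intake per kg lost (assumed feedback gain)
EPS = 25.0     # kcal/d of expenditure change per kg (assumed)

def simulate(e_drug, days, dt=0.25):
    x, intake = 0.0, []
    for _ in range(int(days / dt)):
        delta_ei = -e_drug - K * x          # placebo-subtracted energy intake
        intake.append(delta_ei)
        x += dt * (delta_ei - EPS * x) / RHO
    return x, intake

x_final, intake = simulate(e_drug=500.0, days=1500.0)
print(x_final)     # plateau near -E/(K + EPS)
print(intake[-1])  # long-term intake effect near -E*EPS/(K + EPS)
```

The initial intake reduction equals the full drug effect -E, but it relaxes with time constant RHO/(K + EPS) (about 64 days here) to -E*EPS/(K + EPS), so the long-term intake effect is a fixed fraction of the early one, matching the paper's prespecified linear relation with no free parameters.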
The Mass-dependent Star Formation Histories of Disk Galaxies: Infall Model Versus Observations
NASA Astrophysics Data System (ADS)
Chang, R. X.; Hou, J. L.; Shen, S. Y.; Shu, C. G.
2010-10-01
We introduce a simple model to explore the star formation histories of disk galaxies. We assume that the disk originates and grows by continuous gas infall. The gas infall rate is parameterized by a Gaussian formula with one free parameter: the infall-peak time tp. The Kennicutt star formation law is adopted to describe how much cold gas turns into stars. The gas outflow process is also considered in our model. We find that, at a given galactic stellar mass M*, a model adopting a late infall-peak time tp results in blue colors, low metallicity, a high specific star formation rate (SFR), and a high gas fraction, while the gas outflow rate mainly influences the gas-phase metallicity and the star formation efficiency mainly influences the gas fraction. Motivated by the local observed scaling relations, we "construct" a mass-dependent model by assuming that a low-mass galaxy has a later infall-peak time tp and a larger gas outflow rate than massive systems. It is shown that this model agrees not only with the local observations but also with the observed correlation between specific SFR (SFR/M*) and galactic stellar mass M* at intermediate redshifts z < 1. A comparison between the Gaussian-infall model and the exponential-infall model is also presented. It shows that the exponential-infall model predicts a higher SFR at early stages and a lower SFR at later times than the Gaussian-infall model. Our results suggest that the Gaussian infall rate may be more reasonable in describing the gas cooling process than the exponential infall rate, especially for low-mass systems.
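The Gaussian-versus-exponential comparison can be checked in a toy setting: feed gas through a Kennicutt-like law dM*/dt ∝ Mg^1.4 under either infall history, normalized to the same total accreted mass (units, timescales, and efficiency below are made up, and outflow is ignored, so this is only a qualitative sketch, not the paper's model):

```python
import math

T_END = 13.0   # Gyr, assumed disk age
EFF = 1.0      # assumed efficiency in the Kennicutt-like law SFR = EFF * Mg**1.4

def gaussian_infall(t, tp=6.0, sig=3.0):
    return math.exp(-0.5 * ((t - tp) / sig) ** 2)

def exponential_infall(t, tau=3.0):
    return math.exp(-t / tau)

def sfr_history(infall, dt=0.001):
    """Euler-integrate dMg/dt = f(t) - SFR with SFR = EFF * Mg**1.4,
    with the infall normalized to unit total accreted mass."""
    n = int(T_END / dt)
    norm = sum(infall(i * dt) for i in range(n)) * dt
    mg, sfr = 0.0, []
    for i in range(n):
        s = EFF * mg ** 1.4
        sfr.append(s)
        mg += dt * (infall(i * dt) / norm - s)
    return sfr

sfr_g = sfr_history(gaussian_infall)
sfr_e = sfr_history(exponential_infall)
i_early, i_late = 1000, 10000          # t = 1 Gyr and t = 10 Gyr
print(sfr_e[i_early] > sfr_g[i_early])  # exponential infall: higher early SFR
print(sfr_e[i_late] < sfr_g[i_late])    # and lower SFR at late times
```

Because the exponential infall front-loads the gas supply while the Gaussian delivers it around tp, the star formation history inherits the same ordering, which is the effect the abstract describes.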
Exponential Modelling for Mutual-Cohering of Subband Radar Data
NASA Astrophysics Data System (ADS)
Siart, U.; Tejero, S.; Detlefsen, J.
2005-05-01
Increasing resolution and accuracy is an important issue in almost any type of radar sensor application. However, both resolution and accuracy are strongly related to the available signal bandwidth and energy. Nowadays, several sensors operating in different frequency bands often become available on a sensor platform. It is an attractive goal to exploit the potential of advanced signal modelling and optimization procedures by making proper use of information stemming from different frequency bands at the RF signal level. An important prerequisite for optimal use of signal energy is coherence between all contributing sensors. Coherent multi-sensor platforms are very expensive and are thus not generally available. This paper presents an approach for accurately estimating object radar responses using subband measurements at different RF frequencies. An exponential model approach makes it possible to compensate for the lack of mutual coherence between independently operating sensors. Mutual coherence is recovered from the a priori information that both sensors have common scattering centers in view. Minimizing the total squared deviation between measured data and a full-range exponential signal model leads to more accurate pole angles and pole magnitudes compared to single-band optimization. The model parameters (range and magnitude of point scatterers) after this full-range optimization are also more accurate than the parameters obtained from a commonly used super-resolution procedure (root-MUSIC) applied to the non-coherent subband data.
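The underlying exponential signal model treats the object response as a sum of complex exponentials, one per scattering center. The toy sketch below (single band, invented geometry, and no mutual-coherence recovery, so only the signal model itself) synthesizes such data and reads the scatterer ranges back off the range profile:

```python
import cmath, math

C = 3e8                      # propagation speed, m/s
F0, BW, N = 9e9, 1e9, 128    # assumed carrier, bandwidth, and sample count
DF = BW / N

def synthesize(scatterers):
    """Frequency-domain response of point scatterers: each range r
    contributes a complex exponential exp(-j*4*pi*f*r/C)."""
    return [sum(a * cmath.exp(-4j * math.pi * (F0 + n * DF) * r / C)
                for a, r in scatterers) for n in range(N)]

def range_profile(samples):
    """Magnitude of the correlation with exp(+j*2*pi*n*m/N); bin m
    corresponds to range m * C / (2 * BW)."""
    return [abs(sum(s * cmath.exp(2j * math.pi * n * m / N)
                    for n, s in enumerate(samples))) for m in range(N)]

# Two scatterers placed exactly on range bins (bin width C/(2*BW) = 0.15 m)
data = synthesize([(1.0, 1.05), (0.6, 3.00)])
prof = range_profile(data)
bins = sorted(range(N), key=lambda m: prof[m], reverse=True)[:2]
ranges = sorted(m * C / (2 * BW) for m in bins)
print(ranges)
```

In the paper's setting, two such subband data sets share the scatterer ranges but not a common phase reference; fitting one full-range exponential model to both is what restores mutual coherence and sharpens the pole estimates.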
On the nature of dissipative Timoshenko systems at light of the second spectrum of frequency
NASA Astrophysics Data System (ADS)
Almeida Júnior, D. S.; Ramos, A. J. A.
2017-12-01
In the present work, we prove that there exists a relation between a physical inconsistency known as the second spectrum of frequency, or non-physical spectrum, and the exponential decay of a dissipative Timoshenko system where the damping mechanism acts on the angle of rotation. The so-called second spectrum is addressed in the stabilization scenario and, in particular, we show that the second spectrum of the classical Timoshenko model can be truncated by taking a damping mechanism. Also, we show that dissipative Timoshenko-type systems which are free of the second spectrum [based on important physical and historical observations made by Elishakoff (Advances in mathematical modeling and experimental methods for materials and structures, solid mechanics and its applications, Springer, Berlin, pp 249-254, 2010), Elishakoff et al. (ASME Am Soc Mech Eng Appl Mech Rev 67(6):1-11, 2015) and Elishakoff et al. (Int J Solids Struct 109:143-151, 2017)] are exponentially stable for any values of the coefficients of the system. In this direction, we provide physical explanations of why weakly dissipative Timoshenko systems decay exponentially when the velocities of wave propagation are equal, as proved in pioneering works by Soufyane (C R Acad Sci 328(8):731-734, 1999) and also by Muñoz Rivera and Racke (Discrete Contin Dyn Syst B 9:1625-1639, 2003). Therefore, the second spectrum of the classical Timoshenko beam model plays an important role in explaining some results on exponential decay, and our investigations suggest paying attention to the eventual consequences of this spectrum in the stabilization setting for dissipative Timoshenko-type systems.
Speranza, B; Bevilacqua, A; Mastromatteo, M; Sinigaglia, M; Corbo, M R
2010-08-01
The objective of the current study was to examine the interactions between Pseudomonas putida and Escherichia coli O157:H7 in coculture studies on fish-burgers packed in air and under different modified atmospheres (30:40:30 O2:CO2:N2, 5:95 O2:CO2, and 50:50 O2:CO2) throughout storage at 8 degrees C. The lag-exponential model was applied to describe the microbial growth. To give a quantitative measure of the occurring microbial interactions, two simple parameters were developed: the combined interaction index (CII) and the partial interaction index (PII). Under air, the interaction was significant (P < 0.05) only within the exponential growth phase (CII, 1.72), whereas under the modified atmospheres, the interactions were highly significant (P < 0.001) and occurred both in the exponential and in the stationary phase (CII ranged from 0.33 to 1.18). PII values for E. coli O157:H7 were lower than those calculated for Ps. putida. The interactions occurring in the system affected both the E. coli O157:H7 and the pseudomonad subpopulations. The packaging atmosphere proved to be a key element. The article provides some useful information on the interactions occurring between E. coli O157:H7 and Ps. putida on fish-burgers. The proposed index successfully describes the competitive growth of both micro-organisms, also giving a quantitative measure of a qualitative phenomenon.
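The lag-exponential model is not written out in the abstract; a minimal three-phase version in the spirit of Buchanan-type primary models (flat lag phase, exponential growth at rate mu, stationary plateau; all parameter values below are hypothetical) is:

```python
def lag_exponential(t, n0, nmax, mu, lag):
    """Growth curve in log10 counts: constant during the lag phase, then
    linear in log scale (exponential in counts) at rate mu, capped at the
    stationary level nmax."""
    if t <= lag:
        return n0
    return min(nmax, n0 + mu * (t - lag))

# Hypothetical fit: 3 log CFU/g inoculum, 4 h lag, 0.25 log/h, 9 log ceiling
curve = [lag_exponential(t, n0=3.0, nmax=9.0, mu=0.25, lag=4.0) for t in range(48)]
print(curve[0], curve[8], curve[40])
```

Interaction indices like CII and PII are then computed by comparing the fitted parameters (lag, mu, plateau) of each organism grown alone versus in coculture; their exact definitions are not given in the abstract.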
NASA Astrophysics Data System (ADS)
Brown, J. S.; Shaheen, S. E.
2018-04-01
Disorder in organic semiconductors has made it challenging to achieve performance gains; this is a result of the many competing and often nuanced mechanisms affecting charge transport. In this article, we attempt to illuminate one of these mechanisms in the hope of aiding experimentalists in exceeding current performance thresholds. Using a heuristic exponential function, energetic correlation has been added to the Gaussian disorder model (GDM). The new model is grounded in the concept that energetic correlations can arise in materials without strong dipoles or dopants, but may be a result of an incomplete crystal formation process. The proposed correlation has been used to explain the exponential tail states often observed in these materials; it is also better able to capture the field dependence of the carrier mobility, commonly known as the Poole-Frenkel dependence, when compared to the GDM. Investigation of simulated current transients shows that the exponential tail states do not necessitate Montroll and Scher fits. Montroll and Scher fits take the form of two distinct power-law curves that share a common constant in their exponents; they are clearly observed as straight lines when the current transient is plotted on a log-log scale. Typically, these fits have been found appropriate for describing amorphous silicon and other disordered materials which display exponential tail states. Furthermore, we observe that the proposed correlation function leads to domains of energetically similar sites separated by boundaries where the site energies exhibit stochastic deviation. These boundary sites are found to be the source of the extended exponential tail states, and are responsible for high charge visitation frequency, which may be associated with the molecular turnover number and ultimately the material stability.
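The paper's specific heuristic correlation function is not given in the abstract, but the generic construction can be sketched on a 1D lattice: draw iid Gaussian (GDM) site energies, then impose an exponential correlation by filtering with an exp(-|d|/l) kernel (the lattice size and correlation length below are arbitrary illustration choices):

```python
import math, random

def correlated_energies(n_sites, corr_len, seed=3):
    """Filter iid Gaussian site energies with an exponential kernel so
    that neighboring sites acquire energetically similar values."""
    rng = random.Random(seed)
    eta = [rng.gauss(0.0, 1.0) for _ in range(n_sites)]
    reach = int(6 * corr_len)            # truncate the kernel tail
    offsets = list(range(-reach, reach + 1))
    kern = [math.exp(-abs(d) / corr_len) for d in offsets]
    return [sum(k * eta[(i + d) % n_sites] for d, k in zip(offsets, kern))
            for i in range(n_sites)]

def autocorr(e, lag):
    m = sum(e) / len(e)
    var = sum((v - m) ** 2 for v in e) / len(e)
    return sum((e[i] - m) * (e[(i + lag) % len(e)] - m)
               for i in range(len(e))) / (len(e) * var)

energies = correlated_energies(20_000, corr_len=5.0)
print(autocorr(energies, 1), autocorr(energies, 20))
```

Nearby sites end up strongly correlated while distant sites decorrelate, producing the domains of energetically similar sites, separated by stochastically deviating boundaries, that the article associates with extended exponential tail states.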
Wang, Gang; Yuan, Jianli; Wang, Xizhi; Xiao, Sa; Huang, Wenbing
2004-11-01
Taking into account the individual growth form (allometry) in a plant population and the effects of intraspecific competition on allometry under the population self-thinning condition, and adopting Ogawa's allometric equation 1/y = 1/(ax^b) + 1/c as the expression of complex allometry, a generalized model describing the change of r (the self-thinning exponent in the self-thinning equation log M = K + r log N, where M is mean plant mass, K is a constant, and N is population density) was constructed. Meanwhile, with reference to the changing process of population density for a type B survival curve, the exponent r was calculated using the software MATHEMATICA 4.0. The results of the numerical simulation show that (1) the value of the self-thinning exponent r is mainly determined by the allometric parameters; of the three allometric parameters, it is most sensitive to changes in b, with a and c taking second place; (2) the exponent r changes continuously from about -3 to the asymptote -1; the slope of -3/2 is a transient value in the population self-thinning process; (3) it is not a 'law' that the slope of the self-thinning trajectory equals or approaches -3/2, and the long-running dispute in ecological research over whether or not the exponent r equals -3/2 is meaningless. So future studies on the plant self-thinning process should focus on investigating how neighbor competition affects the phenotypic plasticity of plant individuals, what the relationship between the allometry mode and the self-thinning trajectory of a plant population is and, in the light of evolution, how plants have adapted to competition pressure by plastic individual growth.
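The paper's full simulation (which couples the allometry to a type B survival curve) cannot be reconstructed from the abstract, but the core step, obtaining a size-dependent thinning slope r = d log M / d log N from Ogawa's allometry, can be sketched. Here density is tied to plant size by a simple space-filling assumption N ∝ x^(-2), and all parameter values are invented:

```python
import math

A, B, C = 1.0, 3.0, 50.0  # hypothetical Ogawa parameters: 1/y = 1/(A*x**B) + 1/C

def mass(x):
    """Ogawa's complex allometry: mass ~ A*x**B for small x, saturating at C."""
    return 1.0 / (1.0 / (A * x ** B) + 1.0 / C)

def thinning_slope(x, d=2.0, h=1e-5):
    """r = dlogM/dlogN along a trajectory where density N ~ x**(-d),
    with dlogM/dlogx taken by a central finite difference."""
    dlogm = (math.log(mass(x * (1 + h))) - math.log(mass(x * (1 - h)))) / (2 * h)
    return dlogm / (-d)   # d log N / d log x = -d

slopes = [thinning_slope(x) for x in (0.1, 1.0, 5.0, 20.0)]
print(slopes)  # drifts continuously from -B/2 toward 0 as mass saturates
```

Even this crude version shows the slope is set by the allometric parameters and varies continuously along the trajectory rather than sitting at a fixed -3/2; the specific drift from about -3 to -1 reported in the paper requires their survival-curve coupling, which is not reproduced here.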
Feasibility of quasi-random band model in evaluating atmospheric radiance
NASA Technical Reports Server (NTRS)
Tiwari, S. N.; Mirakhur, N.
1980-01-01
The use of the quasi-random band model in evaluating upwelling atmospheric radiation is investigated. The spectral transmittance and total band absorptance are evaluated for selected molecular bands by using the line-by-line model, the quasi-random band model, the exponential sum fit method, and empirical correlations, and these are compared with the available experimental results. The atmospheric transmittance and upwelling radiance were calculated by using the line-by-line and quasi-random band models and were compared with the results of an existing program called LOWTRAN. The results obtained by the exponential sum fit and empirical relations were not in good agreement with experimental results, and their use cannot be justified for atmospheric studies. The line-by-line model was found to be the best model for atmospheric applications, but it is not practical because of high computational costs. The results of the quasi-random band model compare well with the line-by-line and experimental results. The use of the quasi-random band model is recommended for evaluation of atmospheric radiation.
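The exponential sum fit method mentioned above represents a band transmittance as T(u) ≈ Σ w_i exp(-k_i u) in the absorber amount u. A toy sketch of the fitting step is below, using synthetic data and absorption coefficients fixed on a known grid so the weights follow from ordinary least squares (the grid and weights are invented for illustration):

```python
import math

def solve(mat, rhs):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(rhs)
    a = [row[:] + [rhs[i]] for i, row in enumerate(mat)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n + 1):
                a[r][c] -= f * a[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (a[r][n] - sum(a[r][c] * x[c] for c in range(r + 1, n))) / a[r][r]
    return x

K = [0.1, 1.0, 10.0]               # fixed grid of absorption coefficients
W_TRUE = [0.5, 0.3, 0.2]           # hypothetical true weights
us = [0.1 * j for j in range(51)]  # absorber amounts
T = [sum(w * math.exp(-k * u) for w, k in zip(W_TRUE, K)) for u in us]

# Least squares for the weights: design matrix A[j][i] = exp(-K[i]*u_j)
A = [[math.exp(-k * u) for k in K] for u in us]
ata = [[sum(A[j][i] * A[j][l] for j in range(len(us))) for l in range(3)]
       for i in range(3)]
atb = [sum(A[j][i] * T[j] for j in range(len(us))) for i in range(3)]
w_fit = solve(ata, atb)
print(w_fit)  # recovers W_TRUE on noise-free data
```

The appeal of the representation is that each term behaves like a gray absorber, so the fitted sum can be reused inside multiple-scattering or inhomogeneous-path calculations; the paper's finding is that, for the bands tested, this approximation was nonetheless too coarse compared with the quasi-random band model.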
Parameterization guidelines and considerations for hydrologic models
R. W. Malone; G. Yagow; C. Baffaut; M.W Gitau; Z. Qi; Devendra Amatya; P.B. Parajuli; J.V. Bonta; T.R. Green
2015-01-01
 Imparting knowledge of the physical processes of a system to a model and determining a set of parameter values for a hydrologic or water quality model application (i.e., parameterization) are important and difficult tasks. An exponential...
Cellular automata model for use with real freeway data
DOT National Transportation Integrated Search
2002-01-01
The exponential rate of increase in freeway traffic is expanding the need for accurate and : realistic methods to model and predict traffic flow. Traffic modeling and simulation facilitates an : examination of both microscopic and macroscopic views o...