A Decreasing Failure Rate, Mixed Exponential Model Applied to Reliability.
1981-06-01
Trident missile systems have been observed. The mixed exponential distribution has been shown to fit the life data for the electronic equipment on … these systems. This paper discusses some of the estimation problems which occur with the decreasing failure rate mixed exponential distribution when … the assumption of constant or increasing failure rate seemed to be incorrect. However, the design of this electronic equipment indicated that…
Central Limit Theorem for Exponentially Quasi-local Statistics of Spin Models on Cayley Graphs
NASA Astrophysics Data System (ADS)
Reddy, Tulasi Ram; Vadlamani, Sreekar; Yogeshwaran, D.
2018-04-01
Central limit theorems for linear statistics of lattice random fields (including spin models) are usually proven under suitable mixing conditions or quasi-associativity. Many interesting examples of spin models do not satisfy mixing conditions, and on the other hand, it does not seem easy to show a central limit theorem for local statistics via quasi-associativity. In this work, we prove general central limit theorems for local statistics and exponentially quasi-local statistics of spin models on discrete Cayley graphs with polynomial growth. Further, we supplement these results by proving similar central limit theorems for random fields on discrete Cayley graphs taking values in a countable space, but under the stronger assumptions of α-mixing (for local statistics) and exponential α-mixing (for exponentially quasi-local statistics). All our central limit theorems assume a suitable variance lower bound, like many others in the literature. We illustrate our general central limit theorem with specific examples of lattice spin models and statistics arising in computational topology, statistical physics and random networks. Examples of clustering spin models include quasi-associated spin models with fast decaying covariances like the off-critical Ising model, level sets of Gaussian random fields with fast decaying covariances like the massive Gaussian free field, and determinantal point processes with fast decaying kernels. Examples of local statistics include intrinsic volumes, face counts, and component counts of random cubical complexes, while exponentially quasi-local statistics include nearest neighbour distances in spin models and Betti numbers of sub-critical random cubical complexes.
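Schematically, the central limit theorems above take the following normalized form (a sketch only; the paper's precise hypotheses, such as clustering of the spin model and the variance lower bound, are assumed rather than restated here):

```latex
% For a (exponentially quasi-)local statistic T_n of the spin model on balls B_n,
% assuming Var(T_n) >= c|B_n| for some c > 0:
\[
  \frac{T_n - \mathbb{E}[T_n]}{\sqrt{\operatorname{Var}(T_n)}}
  \;\xrightarrow{d}\; \mathcal{N}(0,1), \qquad n \to \infty.
\]
```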
The impacts of precipitation amount simulation on hydrological modeling in Nordic watersheds
NASA Astrophysics Data System (ADS)
Li, Zhi; Brissette, Francois; Chen, Jie
2013-04-01
Stochastic modeling of daily precipitation is very important for hydrological modeling, especially when no observed data are available. Precipitation is usually modeled by a two-component model: occurrence generation and amount simulation. For occurrence simulation, the most common method is the first-order two-state Markov chain, due to its simplicity and good performance. However, various probability distributions have been reported to simulate precipitation amount, and spatiotemporal differences exist in the applicability of different distribution models. Therefore, assessing the applicability of different distribution models is necessary in order to provide more accurate precipitation information. Six precipitation probability distributions (exponential, Gamma, Weibull, skewed normal, mixed exponential, and hybrid exponential/Pareto distributions) are directly and indirectly evaluated on their ability to reproduce the original observed time series of precipitation amount. Data from 24 weather stations and two watersheds (Chute-du-Diable and Yamaska watersheds) in the province of Quebec (Canada) are used for this assessment. Various indices and statistics, such as the mean, variance, frequency distribution and extreme values, are used to quantify the performance in simulating the precipitation and discharge. Performance in reproducing key statistics of the precipitation time series is well correlated to the number of parameters of the distribution function, and the three-parameter precipitation models outperform the other models, with the mixed exponential distribution being the best at simulating daily precipitation. The advantage of using more complex precipitation distributions is not as clear-cut when the simulated time series are used to drive a hydrological model. While the advantage of using functions with more parameters is not nearly as obvious, the mixed exponential distribution nonetheless appears to be the best candidate for hydrological modeling. The implications of choosing a distribution function with respect to hydrological modeling and climate change impact studies are also discussed.
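The two-component generator structure described above is easy to sketch in code: a first-order two-state Markov chain decides wet/dry occurrence, and a mixed exponential draws the wet-day amount. The parameter values below are illustrative assumptions, not the fitted values from the Quebec stations.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed illustrative parameters (not fitted to the paper's data).
p01, p11 = 0.3, 0.6            # P(wet | dry), P(wet | wet) for the Markov chain
w, mu1, mu2 = 0.7, 2.0, 12.0   # mixed exponential: weight and component means (mm)

def simulate_daily_precip(n_days):
    """Daily precipitation: Markov-chain occurrence + mixed exponential amounts."""
    wet = False
    amounts = np.zeros(n_days)
    for t in range(n_days):
        wet = rng.random() < (p11 if wet else p01)
        if wet:
            mu = mu1 if rng.random() < w else mu2  # pick a mixture component
            amounts[t] = rng.exponential(mu)
    return amounts

series = simulate_daily_precip(365 * 30)
print("mean daily:", series.mean(), "mean wet-day:", series[series > 0].mean())
```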
Modeling of magnitude distributions by the generalized truncated exponential distribution
NASA Astrophysics Data System (ADS)
Raschke, Mathias
2015-01-01
The probability distribution of the magnitude can be modeled by an exponential distribution according to the Gutenberg-Richter relation. Two alternatives are the truncated exponential distribution (TED) and the cutoff exponential distribution (CED). The TED is frequently used in seismic hazard analysis although it has a weak point: when two TEDs with equal parameters except the upper bound magnitude are mixed, the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters except the upper bound magnitude. This weakness is a principal problem, as seismic regions are constructed scientific objects and not natural units. We overcome it by generalizing the abovementioned exponential distributions into the generalized truncated exponential distribution (GTED), in which identical exponential distributions are mixed by the probability distribution of the cutoff points. This distribution model is flexible in the vicinity of the upper bound magnitude and is equal to the exponential distribution for smaller magnitudes. Additionally, the exponential distributions TED and CED are special cases of the GTED. We discuss possible ways of estimating its parameters and introduce the normalized spacing for this purpose. Furthermore, we present methods for geographic aggregation and differentiation of the GTED and demonstrate the potential and universality of our simple approach by applying it to empirical data. The considerable improvement of the GTED over the TED is indicated by a large difference between the corresponding values of the Akaike information criterion.
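For reference, the TED being generalized here has the standard Gutenberg-Richter-type density below; the second line indicates schematically how the GTED mixes over cutoff points (a sketch of the construction described above, not the paper's full notation):

```latex
\[
  f_{\mathrm{TED}}(m) \;=\;
  \frac{\beta\, e^{-\beta (m - m_{\min})}}{1 - e^{-\beta (m_{\max} - m_{\min})}},
  \qquad m_{\min} \le m \le m_{\max},
\]
\[
  F_{\mathrm{GTED}}(m) \;=\; \int F_{\mathrm{TED}}(m;\, m_{\max}) \,\mathrm{d}G(m_{\max}),
\]
% where G is the mixing distribution placed on the cutoff points m_max.
```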
Quark mixing and exponential form of the Cabibbo-Kobayashi-Maskawa matrix
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhukovsky, K. V., E-mail: zhukovsk@phys.msu.ru; Dattoli, D., E-mail: dattoli@frascati.enea.i
2008-10-15
Various forms of representation of the mixing matrix are discussed. An exponential parametrization e^A of the Cabibbo-Kobayashi-Maskawa matrix is considered in the context of the unitarity requirement, this parametrization being the most general form of the mixing matrix. An explicit representation for the exponential mixing matrix in terms of the first and second degrees of the matrix A exclusively is obtained. This representation makes it possible to calculate this exponential mixing matrix readily in any order of the expansion in the small parameter λ. The generation of new unitary parametric representations of the mixing matrix with the aid of the exponential matrix is demonstrated.
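The "first and second degrees of A exclusively" structure can be made plausible via Cayley-Hamilton. As a hedged sketch (not necessarily the paper's exact derivation): for a 3×3 anti-Hermitian A (so that e^A is unitary) whose eigenvalues happen to be {0, +iμ, -iμ}, one has A³ = -μ²A, so every higher power of A collapses onto A and A², and resumming the exponential series gives

```latex
\[
  e^{A} \;=\; I \;+\; \frac{\sin\mu}{\mu}\,A \;+\; \frac{1-\cos\mu}{\mu^{2}}\,A^{2},
\]
% which follows from A^3 = -\mu^2 A (Cayley-Hamilton for eigenvalues {0, +/- i mu})
% by collecting the odd series terms onto A and the even ones onto A^2.
```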
NASA Astrophysics Data System (ADS)
Allen, Linda J. S.
2016-09-01
Dr. Chowell and colleagues emphasize the importance of considering a variety of modeling approaches to characterize the growth of an epidemic during the early stages [1]. A fit of data from the 2009 H1N1 influenza pandemic and the 2014-2015 Ebola outbreak to models indicates sub-exponential growth, in contrast to the classic, homogeneous-mixing SIR model with exponential growth. With incidence rate βSI/N and S approximately equal to the total population size N, the number of new infections in an SIR epidemic model grows exponentially, as in the differential equation dI/dt = βSI/N − γI ≈ (β − γ)I.
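A minimal numerical contrast between this exponential regime and the sub-exponential growth reported by Chowell et al. can be made with the generalized-growth equation dC/dt = rC^p, where p = 1 recovers exponential growth; the values of r and p below are illustrative assumptions.

```python
from scipy.integrate import solve_ivp

def ggm(t, C, r, p):
    """Generalized-growth model: dC/dt = r * C**p (p = 1 gives exponential growth)."""
    return r * C**p

for p in (1.0, 0.7):  # exponential vs sub-exponential early growth
    sol = solve_ivp(ggm, (0, 30), [1.0], args=(0.5, p))
    print(f"p = {p}: cumulative cases at day 30 ~ {sol.y[0, -1]:.0f}")
```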
NASA Astrophysics Data System (ADS)
Solomon, D. Kip; Genereux, David P.; Plummer, L. Niel; Busenberg, Eurybiades
2010-04-01
We tested three models of mixing between old interbasin groundwater flow (IGF) and young, locally derived groundwater in a lowland rain forest in Costa Rica using a large suite of environmental tracers. We focus on the young fraction of water using the transient tracers CFC-11, CFC-12, CFC-113, SF6, 3H, and bomb 14C. We measured 3He, but 3H/3He dating is generally problematic due to the presence of mantle 3He. Because of their unique concentration histories in the atmosphere, combinations of transient tracers are sensitive not only to subsurface travel times but also to mixing between waters having different travel times. Samples fall into three distinct categories: (1) young waters that plot along a piston flow line, (2) old samples that have near-zero concentrations of the transient tracers, and (3) mixtures of 1 and 2. We have modeled the concentrations of the transient tracers using (1) a binary mixing model (BMM) of old and young water with the young fraction transported via piston flow, (2) an exponential mixing model (EMM) with a distribution of groundwater travel times characterized by a mean value, and (3) an exponential mixing model for the young fraction followed by binary mixing with an old fraction (EMM/BMM). In spite of the mathematical differences in the mixing models, they all lead to a similar conceptual model of young (0 to 10 year) groundwater that is locally derived mixing with old (>1000 years) groundwater that is recharged beyond the surface water boundary of the system.
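A rough numerical sketch of how the EMM and BMM combine for a transient tracer follows; the atmospheric input curve, mean travel time, and old fraction are hypothetical placeholders, and radioactive decay terms are omitted for brevity.

```python
import numpy as np

years = np.arange(1940, 2011)
# Hypothetical atmospheric input history of a transient tracer (not real data).
c_in = np.interp(years, [1940, 1975, 2000, 2010], [0.0, 200.0, 550.0, 540.0])

def emm(tau_mean, sample_year=2010, dt=0.5):
    """Exponential mixing model: convolve the input history with the
    exponential travel-time distribution g(tau) = exp(-tau/tau_mean)/tau_mean."""
    tau = np.arange(0.0, 200.0, dt)
    g = np.exp(-tau / tau_mean) / tau_mean * dt
    c_hist = np.interp(sample_year - tau, years, c_in, left=0.0)
    return float(np.sum(c_hist * g))

def bmm(c_young, f_old):
    """Binary mixing with an old end-member carrying zero transient tracer."""
    return (1.0 - f_old) * c_young

c_young = emm(tau_mean=5.0)
print("EMM:", c_young, " EMM/BMM with 40% old water:", bmm(c_young, f_old=0.4))
```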
NASA Astrophysics Data System (ADS)
Zhukovsky, K. V.
2017-09-01
The exponential form of the Pontecorvo-Maki-Nakagawa-Sakata mixing matrix for neutrinos is considered in the context of the fundamental representation of the SU(3) group. The logarithm of the mixing matrix is obtained. Based on the most recent experimental data on neutrino mixing, the exact values of the entries of the exponential matrix are calculated. The exact values of its real and imaginary parts are determined; these are responsible, respectively, for the mixing without CP violation and for the pure CP-violation effect. The hypothesis of complementarity for quarks and neutrinos is confirmed. The factorization of the exponential mixing matrix, which allows the separation of the mixing and of the CP violation itself in the form of a product of rotations around the real and imaginary axes, is demonstrated.
NASA Astrophysics Data System (ADS)
Abas, Norzaida; Daud, Zalina M.; Yusof, Fadhilah
2014-11-01
A stochastic rainfall model is presented for the generation of hourly rainfall data in an urban area in Malaysia. In view of the high temporal and spatial variability of rainfall within the tropical rain belt, the Spatial-Temporal Neyman-Scott Rectangular Pulse model was used. The model, which is governed by the Neyman-Scott process, employs a reasonable number of parameters to represent the physical attributes of rainfall. A common approach is to attach each attribute to a mathematical distribution. With respect to rain cell intensity, this study proposes the use of a mixed exponential distribution. The performance of the proposed model was compared to a model that employs the Weibull distribution. Hourly and daily rainfall data from four stations in the Damansara River basin in Malaysia were used as input to the models, and simulations of hourly series were performed for an independent site within the basin. The performance of the models was assessed based on how closely the statistical characteristics of the simulated series resembled the statistics of the observed series. Findings based on graphical representations revealed that the statistical characteristics of the simulated series for both models compared reasonably well with the observed series. However, a further assessment using the AIC, BIC and RMSE showed that the proposed model yields better results. The results of this study indicate that for tropical climates, the proposed model, using a mixed exponential distribution, is the best choice for generation of synthetic data for ungauged sites or for sites with insufficient data within the limit of the fitted region.
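Because this entry and several others in this collection lean on fitting a mixed exponential to positive amounts, a minimal EM sketch for the two-component case may be useful; the initialization and fixed iteration count are deliberately simplistic assumptions.

```python
import numpy as np

def fit_mixed_exponential(x, n_iter=500):
    """EM for f(x) = w/mu1*exp(-x/mu1) + (1-w)/mu2*exp(-x/mu2), x > 0."""
    w, mu1, mu2 = 0.5, 0.5 * x.mean(), 2.0 * x.mean()  # crude initialization
    for _ in range(n_iter):
        d1 = w / mu1 * np.exp(-x / mu1)
        d2 = (1 - w) / mu2 * np.exp(-x / mu2)
        r = d1 / (d1 + d2)                 # E-step: component responsibilities
        w = r.mean()                       # M-step: mixture weight
        mu1 = (r * x).sum() / r.sum()      # M-step: component means
        mu2 = ((1 - r) * x).sum() / (1 - r).sum()
    return w, mu1, mu2

rng = np.random.default_rng(0)
x = np.where(rng.random(5000) < 0.7,
             rng.exponential(2.0, 5000), rng.exponential(12.0, 5000))
print(fit_mixed_exponential(x))  # should recover roughly (0.7, 2.0, 12.0)
```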
Fermion masses and mixing in general warped extra dimensional models
NASA Astrophysics Data System (ADS)
Frank, Mariana; Hamzaoui, Cherif; Pourtolami, Nima; Toharia, Manuel
2015-06-01
We analyze fermion masses and mixing in a general warped extra dimensional model, where all the Standard Model (SM) fields, including the Higgs, are allowed to propagate in the bulk. In this context, a slightly broken flavor symmetry imposed universally on all fermion fields, without distinction, can generate the full flavor structure of the SM, including quarks, charged leptons and neutrinos. For quarks and charged leptons, the exponential sensitivity of their wave functions to small flavor breaking effects yields hierarchical masses and mixing, as is usual in warped models with fermions in the bulk. In the neutrino sector, the exponential wave-function factors can be flavor blind and thus insensitive to the small flavor symmetry breaking effects, directly linking their masses and mixing angles to the flavor symmetric structure of the five-dimensional neutrino Yukawa couplings. The Higgs must be localized in the bulk, and the model is more successful in generalized warped scenarios where the metric background solution is different from five-dimensional anti-de Sitter (AdS5). We study these features in two simple frameworks, flavor complementarity and flavor democracy, which provide specific predictions and correlations between quarks and leptons, testable as more precise data in the neutrino sector become available.
Nathenson, Manuel; Donnelly-Nolan, Julie M.; Champion, Duane E.; Lowenstern, Jacob B.
2007-01-01
Medicine Lake volcano has had 4 eruptive episodes in its postglacial history (since 13,000 years ago) comprising 16 eruptions. Time intervals between events within the episodes are relatively short, whereas time intervals between the episodes are much longer. An updated radiocarbon chronology for these eruptions is presented that uses paleomagnetic data to constrain the choice of calibrated ages. This chronology is used with exponential, Weibull, and mixed-exponential probability distributions to model the data for time intervals between eruptions. The mixed exponential distribution is the best match to the data and provides estimates for the conditional probability of a future eruption given the time since the last eruption. The probability of an eruption at Medicine Lake volcano within the next year is 0.00028.
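The conditional probability used above can be written explicitly; the two-component survival function below is an assumed illustration of a mixed exponential interevent model, with p the mixture weight and τ₁, τ₂ the component means:

```latex
\[
  S(t) = p\,e^{-t/\tau_1} + (1-p)\,e^{-t/\tau_2}, \qquad
  P\big(\text{eruption in } (t, t+\Delta t] \;\big|\; \text{quiet for } t\big)
  = \frac{S(t) - S(t+\Delta t)}{S(t)}.
\]
```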
Cocho, Germinal; Miramontes, Pedro; Mansilla, Ricardo; Li, Wentian
2014-12-01
We examine in detail the relationship between exponential correlation functions and Markov models in a bacterial genome. Despite the well known fact that Markov models generate sequences whose correlation functions decay exponentially, simply constructed Markov models based on nearest-neighbor dimer (first-order), trimer (second-order), up to hexamer (fifth-order) statistics, treating the DNA sequence as homogeneous, all fail to predict the value of the exponential decay rate. Even reading-frame-specific Markov models (both first- and fifth-order) could not explain the fact that the exponential decay is very slow. Starting with the in-phase coding-DNA-sequence (CDS), we investigated correlation within a fixed-codon-position subsequence, and in artificially constructed sequences by packing CDSs with out-of-phase spacers, as well as altering CDS length distribution by imposing an upper limit. From these targeted analyses, we conclude that the correlation in the bacterial genomic sequence is mainly due to a mixing of heterogeneous statistics at different codon positions, and the decay of correlation is due to the possible out-of-phase between neighboring CDSs. There are also small contributions to the correlation from bases at the same codon position, as well as from non-coding sequences. These show that the seemingly simple exponential correlation functions in the bacterial genome hide a complexity in correlation structure which is not suitable for modeling by a Markov chain in a homogeneous sequence. Other results include the use of the (absolute value) second largest eigenvalue to represent the 16 correlation functions and the prediction of a 10-11 base periodicity from the hexamer frequencies.
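The role of the second largest eigenvalue mentioned above comes from the standard fact that pair correlations of a homogeneous first-order Markov chain decay geometrically in |λ₂|. A toy 4-state example (with an assumed transition matrix, not one estimated from a genome):

```python
import numpy as np

# Toy first-order Markov transition matrix over the bases (A, C, G, T).
P = np.array([[0.4, 0.2, 0.2, 0.2],
              [0.2, 0.4, 0.2, 0.2],
              [0.2, 0.2, 0.4, 0.2],
              [0.2, 0.2, 0.2, 0.4]])
lam2 = sorted(np.abs(np.linalg.eigvals(P)))[-2]
print("per-base decay factor:", lam2)  # correlations ~ lam2**k at distance k
```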
Spectral Gap Estimates in Mean Field Spin Glasses
NASA Astrophysics Data System (ADS)
Ben Arous, Gérard; Jagannath, Aukosh
2018-05-01
We show that mixing for local, reversible dynamics of mean field spin glasses is exponentially slow in the low temperature regime. We introduce a notion of free energy barriers for the overlap, and prove that their existence implies that the spectral gap is exponentially small, and thus that mixing is exponentially slow. We then exhibit sufficient conditions on the equilibrium Gibbs measure which guarantee the existence of these barriers, using the notion of replicon eigenvalue and 2D Guerra-Talagrand bounds. We show how these sufficient conditions cover large classes of Ising spin models for reversible nearest-neighbor dynamics and spherical models for Langevin dynamics. Finally, in the case of Ising spins, Panchenko's recent rigorous calculation (Panchenko in Ann Probab 46(2):865-896, 2018) of the free energy for a system of "two real replica" enables us to prove a quenched LDP for the overlap distribution, which gives us a wider criterion for slow mixing directly related to the Franz-Parisi-Virasoro approach (Franz et al. in J Phys I 2(10):1869-1880, 1992; Kurchan et al. J Phys I 3(8):1819-1838, 1993). This condition holds in a wider range of temperatures.
NASA Astrophysics Data System (ADS)
Małoszewski, P.; Zuber, A.
1982-06-01
Three new lumped-parameter models have been developed for the interpretation of environmental radioisotope data in groundwater systems. Two of these models combine other simpler models, i.e. the piston flow model is combined either with the exponential model (exponential distribution of transit times) or with the linear model (linear distribution of transit times). The third model is based on a new solution to the dispersion equation which represents real systems more adequately than the conventional solution generally applied so far. The applicability of the models was tested by the reinterpretation of several known case studies (Modry Dul, Cheju Island, Rasche Spring and Grafendorf). It has been shown that two of these models, i.e. the exponential-piston flow model and the dispersive model, give better fitting than other simpler models. Thus, the obtained values of turnover times are more reliable, whereas the additional fitting parameter gives some information about the structure of the system. In the examples considered, in spite of a lower number of fitting parameters, the new models gave practically the same fitting as the multiparameter finite state mixing-cell models. It has been shown that in the case of a constant tracer input, prior physical knowledge of the groundwater system is indispensable for determining the turnover time. The piston flow model commonly used for age determinations by the 14C method is an approximation applicable only in cases of low dispersion. In some cases the stable-isotope method aids in the interpretation of systems containing mixed waters of different ages. However, when the 14C method is used for mixed-water systems, a serious mistake may arise from neglecting the different bicarbonate contents in the particular water components.
Compatible estimators of the components of change for a rotating panel forest inventory design
Francis A. Roesch
2007-01-01
This article presents two approaches for estimating the components of forest change utilizing data from a rotating panel sample design. One approach uses a variant of the exponentially weighted moving average estimator and the other approach uses mixed estimation. Three general transition models were each combined with a single compatibility model for the mixed...
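The EWMA-type estimator mentioned above has, in its simplest scalar form (a generic textbook form, not necessarily the exact variant used in the article):

```latex
\[
  \hat{y}_t \;=\; \lambda\, y_t \;+\; (1-\lambda)\,\hat{y}_{t-1},
  \qquad 0 < \lambda \le 1,
\]
% an exponentially weighted moving average: the weight on the panel
% observation k periods back decays geometrically as (1-\lambda)^k.
```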
Improving deep convolutional neural networks with mixed maxout units.
Zhao, Hui-Zhen; Liu, Fu-Xian; Li, Long-Yue
2017-01-01
Motivated by insights from the maxout-units-based deep Convolutional Neural Network (CNN) that "non-maximal features are unable to deliver" and "feature mapping subspace pooling is insufficient," we present a novel mixed variant of the recently introduced maxout unit called a mixout unit. Specifically, we do so by calculating the exponential probabilities of feature mappings gained by applying different convolutional transformations over the same input and then calculating the expected values according to their exponential probabilities. Moreover, we introduce the Bernoulli distribution to balance the maximum values with the expected values of the feature mappings subspace. Finally, we design a simple model to verify the pooling ability of mixout units and a Mixout-units-based Network-in-Network (NiN) model to analyze the feature learning ability of the mixout models. We argue that our proposed units improve the pooling ability and that mixout models can achieve better feature learning and classification performance.
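Reading directly from the description above, a mixout unit can be sketched in a few lines of NumPy. The Bernoulli parameter p_max and the tensor shapes are assumptions of this sketch, not values from the paper.

```python
import numpy as np

def mixout(features, rng, p_max=0.5):
    """Sketch of a mixout unit over k feature mappings (axis 0): softmax
    ('exponential probabilities') weights give an expected feature map, and a
    Bernoulli draw balances it against the classic maxout maximum."""
    e = np.exp(features - features.max(axis=0, keepdims=True))
    probs = e / e.sum(axis=0, keepdims=True)       # exponential probabilities
    expected = (probs * features).sum(axis=0)      # expected value of the subspace
    maximum = features.max(axis=0)                 # maxout output
    b = rng.random(maximum.shape) < p_max          # Bernoulli mixing mask
    return np.where(b, maximum, expected)

rng = np.random.default_rng(1)
feats = rng.normal(size=(4, 8, 8))  # k = 4 convolutional feature maps of one unit
print(mixout(feats, rng).shape)
```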
NASA Astrophysics Data System (ADS)
Merler, Stefano
2016-09-01
Characterizing the early growth profile of an epidemic outbreak is key for predicting the likely trajectory of the number of cases and for designing adequate control measures. Epidemic profiles characterized by exponential growth have been widely observed in the past and a grounding theoretical framework for the analysis of infectious disease dynamics was provided by the pioneering work of Kermack and McKendrick [1]. In particular, exponential growth stems from the assumption that pathogens spread in homogeneous mixing populations; that is, individuals of the population mix uniformly and randomly with each other. However, this assumption was readily recognized as highly questionable [2], and sub-exponential profiles of epidemic growth have been observed in a number of epidemic outbreaks, including HIV/AIDS, foot-and-mouth disease, measles and, more recently, Ebola [3,4].
The generalized truncated exponential distribution as a model for earthquake magnitudes
NASA Astrophysics Data System (ADS)
Raschke, Mathias
2015-04-01
The random distribution of small, medium and large earthquake magnitudes follows an exponential distribution (ED) according to the Gutenberg-Richter relation. But a magnitude distribution is truncated in the range of very large magnitudes because the earthquake energy is finite, and the upper tail of the exponential distribution does not fit observations well. Hence the truncated exponential distribution (TED) is frequently applied for the modelling of magnitude distributions in seismic hazard and risk analysis. The TED has a weak point: when two TEDs with equal parameters, except the upper bound magnitude, are mixed, the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters, except the upper bound magnitude. This weakness is a principal problem, as seismic regions are constructed scientific objects and not natural units. It also applies to alternative distribution models. The presented generalized truncated exponential distribution (GTED) overcomes this weakness. The ED and the TED are special cases of the GTED. Different issues of statistical inference are also discussed, and an example of empirical data is presented in the current contribution.
Huang, Haiying; Du, Qiaosheng; Kang, Xibing
2013-11-01
In this paper, a class of neutral high-order stochastic Hopfield neural networks with Markovian jump parameters and mixed time delays is investigated. The jumping parameters are modeled as a continuous-time finite-state Markov chain. First, the existence of an equilibrium point for the addressed neural networks is studied. By utilizing the Lyapunov stability theory, stochastic analysis theory and the linear matrix inequality (LMI) technique, new delay-dependent stability criteria are presented in terms of linear matrix inequalities to guarantee that the neural networks are globally exponentially stable in the mean square. Numerical simulations are carried out to illustrate the main results.
NASA Astrophysics Data System (ADS)
Yuan, Manman; Wang, Weiping; Luo, Xiong; Li, Lixiang; Kurths, Jürgen; Wang, Xiao
2018-03-01
This paper is concerned with the exponential lag function projective synchronization of memristive multidirectional associative memory neural networks (MMAMNNs). First, we propose a new model of MMAMNNs with mixed time-varying delays. In the proposed approach, the mixed delays include time-varying discrete delays and distributed time delays. Second, we design two kinds of hybrid controllers. Traditional control methods lack the capability of reflecting variable synaptic weights. In this paper, the controllers are carefully designed to confirm the process of different types of synchronization in the MMAMNNs. Third, sufficient criteria guaranteeing the synchronization of the system are derived based on the drive-response concept. Finally, the effectiveness of the proposed mechanism is validated with numerical experiments.
Exponential Mixing of the 3D Stochastic Navier-Stokes Equations Driven by Mildly Degenerate Noises
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albeverio, Sergio; Debussche, Arnaud, E-mail: arnaud.debussche@bretagne.ens-cachan.fr; Xu Lihu, E-mail: Lihu.Xu@brunel.ac.uk
2012-10-15
We prove the strong Feller property and exponential mixing for the 3D stochastic Navier-Stokes equation driven by mildly degenerate noises (i.e. all but finitely many Fourier modes being forced) via a Kolmogorov equation approach.
An exactly solvable, spatial model of mutation accumulation in cancer
NASA Astrophysics Data System (ADS)
Paterson, Chay; Nowak, Martin A.; Waclaw, Bartlomiej
2016-12-01
One of the hallmarks of cancer is the accumulation of driver mutations which increase the net reproductive rate of cancer cells and allow them to spread. This process has been studied in mathematical models of well mixed populations, and in computer simulations of three-dimensional spatial models. But the computational complexity of these more realistic, spatial models makes it difficult to simulate realistically large and clinically detectable solid tumours. Here we describe an exactly solvable mathematical model of a tumour featuring replication, mutation and local migration of cancer cells. The model predicts a quasi-exponential growth of large tumours, even if different fragments of the tumour grow sub-exponentially due to nutrient and space limitations. The model reproduces clinically observed tumour growth times using biologically plausible rates of cell birth, death, and migration. We also show that the expected number of accumulated driver mutations increases exponentially in time if the average fitness gain per driver is constant, and that it reaches a plateau if the gains decrease over time. We discuss the realism of the underlying assumptions and possible extensions of the model.
Ultra-large distance modification of gravity from Lorentz symmetry breaking at the Planck scale
NASA Astrophysics Data System (ADS)
Gorbunov, Dmitry S.; Sibiryakov, Sergei M.
2005-09-01
We present an extension of the Randall-Sundrum model in which, due to spontaneous Lorentz symmetry breaking, the graviton mixes with bulk vector fields and becomes quasilocalized. The masses of the KK modes comprising the four-dimensional graviton are naturally exponentially small. This allows one to push the Lorentz breaking scale up to a few tenths of the Planck mass. The model does not contain ghosts or tachyons and does not exhibit the van Dam-Veltman-Zakharov discontinuity. The gravitational attraction between static point masses becomes gradually weaker with increasing separation and is replaced by repulsion (antigravity) at exponentially large distances.
MIP models for connected facility location: A theoretical and computational study
Gollowitzer, Stefan; Ljubić, Ivana
2011-01-01
This article comprises the first theoretical and computational study on mixed integer programming (MIP) models for the connected facility location problem (ConFL). ConFL combines facility location and Steiner trees: given a set of customers, a set of potential facility locations and some inter-connection nodes, ConFL searches for the minimum-cost way of assigning each customer to exactly one open facility, and connecting the open facilities via a Steiner tree. The costs needed for building the Steiner tree, facility opening costs and the assignment costs need to be minimized. We model ConFL using seven compact and three mixed integer programming formulations of exponential size. We also show how to transform ConFL into the Steiner arborescence problem. A full hierarchy between the models is provided. For two exponential size models we develop a branch-and-cut algorithm. An extensive computational study is based on two benchmark sets of randomly generated instances with up to 1300 nodes and 115,000 edges. We empirically compare the presented models with respect to the quality of obtained bounds and the corresponding running time. We report optimal values for all but 16 instances for which the obtained gaps are below 0.6%. PMID:25009366
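Schematically, a cut-based exponential-size formulation of the kind studied here looks as follows (a sketch with assumed notation: x for Steiner-tree edges, y for open facilities, z for assignments, r an assumed root node):

```latex
\begin{align*}
\min\;& \textstyle\sum_{e \in E} c_e x_e + \sum_{i \in F} f_i y_i
        + \sum_{i \in F}\sum_{j \in C} a_{ij} z_{ij} \\
\text{s.t.}\;& \textstyle\sum_{i \in F} z_{ij} = 1 \quad \forall j \in C
        && \text{(each customer assigned once)} \\
& z_{ij} \le y_i \quad \forall i \in F,\ j \in C
        && \text{(assign only to open facilities)} \\
& \textstyle\sum_{e \in \delta(S)} x_e \ge y_i \quad
        \forall S \subseteq V \setminus \{r\},\ i \in S \cap F
        && \text{(connectivity cuts, exponentially many)} \\
& x,\, y,\, z \in \{0,1\}.
\end{align*}
```

The exponentially many cut constraints are exactly what the branch-and-cut algorithm separates on the fly.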
Anomalous Diffusion in a Trading Model
NASA Astrophysics Data System (ADS)
Khidzir, Sidiq Mohamad; Wan Abdullah, Wan Ahmad Tajuddin
2009-07-01
The trading model of Chakrabarti et al. [1] yields a wealth distribution with mixed exponential and power-law character. Motivated by the study of the dynamics behind the flow of money, similar to the work done by Brockmann [2, 3], we track the flow of money in this trading model and observe anomalous diffusion in the form of long waiting times and Lévy flights.
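A minimal kinetic wealth-exchange simulation in the spirit of the Chakrabarti-type model discussed above is sketched below; the uniform random saving propensities and all numerical values are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(7)
N, steps = 1000, 200_000
m = np.ones(N)          # initial wealth of each agent
lam = rng.random(N)     # fixed saving propensity per agent (assumed uniform)

for _ in range(steps):
    i, j = rng.integers(N, size=2)
    if i == j:
        continue
    eps = rng.random()
    # Each agent saves a fraction lam of its wealth; the rest is pooled and
    # randomly re-split, conserving total wealth in every trade.
    pool = (1 - lam[i]) * m[i] + (1 - lam[j]) * m[j]
    m[i], m[j] = lam[i] * m[i] + eps * pool, lam[j] * m[j] + (1 - eps) * pool

print("mean wealth:", m.mean(), "max/mean:", m.max() / m.mean())
```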
Zuthi, Mst Fazana Rahman; Guo, Wenshan; Ngo, Huu Hao; Nghiem, Duc Long; Hai, Faisal I; Xia, Siqing; Li, Jianxin; Li, Jixiang; Liu, Yi
2017-08-01
This study aimed to develop a practical semi-empirical mathematical model of membrane fouling that accounts for cake formation on the membrane and its pore blocking as the major processes of membrane fouling. In the developed model, the concentration of mixed liquor suspended solid is used as a lumped parameter to describe the formation of cake layer including the biofilm. The new model considers the combined effect of aeration and backwash on the foulants' detachment from the membrane. New exponential coefficients are also included in the model to describe the exponential increase of transmembrane pressure that typically occurs after the initial stage of an MBR operation. The model was validated using experimental data obtained from a lab-scale aerobic sponge-submerged membrane bioreactor (MBR), and the simulation of the model agreed well with the experimental findings.
Nathenson, Manuel; Clynne, Michael A.; Muffler, L.J. Patrick
2012-01-01
Chronologies for eruptive activity of the Lassen Volcanic Center and for eruptions from the regional mafic vents in the surrounding area of the Lassen segment of the Cascade Range are here used to estimate probabilities of future eruptions. For the regional mafic volcanism, the ages of many vents are known only within broad ranges, and two models are developed that should bracket the actual eruptive ages. These chronologies are used with exponential, Weibull, and mixed-exponential probability distributions to match the data for time intervals between eruptions. For the Lassen Volcanic Center, the probability of an eruption in the next year is 1.4×10⁻⁴ for the exponential distribution and 2.3×10⁻⁴ for the mixed exponential distribution. For the regional mafic vents, the exponential distribution gives a probability of an eruption in the next year of 6.5×10⁻⁴, but the mixed exponential distribution indicates that the current probability, 12,000 years after the last event, could be significantly lower. For the exponential distribution, the highest probability is for an eruption from a regional mafic vent. Data on areas and volumes of lava flows and domes of the Lassen Volcanic Center and of eruptions from the regional mafic vents provide constraints on the probable sizes of future eruptions. Probabilities of lava-flow coverage are similar for the Lassen Volcanic Center and for regional mafic vents, whereas the probable eruptive volumes for the mafic vents are generally smaller. Data have been compiled for large explosive eruptions (>≈5 km³ in deposit volume) in the Cascade Range during the past 1.2 m.y. in order to estimate probabilities of eruption. For erupted volumes >≈5 km³, the rate of occurrence since 13.6 ka is much higher than for the entire period, and we use these data to calculate the annual probability of a large eruption at 4.6×10⁻⁴. For erupted volumes ≥10 km³, the rate of occurrence has been reasonably constant from 630 ka to the present, giving more confidence in the estimate, and we use those data to calculate the annual probability of a large eruption in the next year at 1.4×10⁻⁵.
Mathematical models to characterize early epidemic growth: A Review
Chowell, Gerardo; Sattenspiel, Lisa; Bansal, Shweta; Viboud, Cécile
2016-01-01
There is a long tradition of using mathematical models to generate insights into the transmission dynamics of infectious diseases and assess the potential impact of different intervention strategies. The increasing use of mathematical models for epidemic forecasting has highlighted the importance of designing reliable models that capture the baseline transmission characteristics of specific pathogens and social contexts. More refined models are needed however, in particular to account for variation in the early growth dynamics of real epidemics and to gain a better understanding of the mechanisms at play. Here, we review recent progress on modeling and characterizing early epidemic growth patterns from infectious disease outbreak data, and survey the types of mathematical formulations that are most useful for capturing a diversity of early epidemic growth profiles, ranging from sub-exponential to exponential growth dynamics. Specifically, we review mathematical models that incorporate spatial details or realistic population mixing structures, including meta-population models, individual-based network models, and simple SIR-type models that incorporate the effects of reactive behavior changes or inhomogeneous mixing. In this process, we also analyze simulation data stemming from detailed large-scale agent-based models previously designed and calibrated to study how realistic social networks and disease transmission characteristics shape early epidemic growth patterns, general transmission dynamics, and control of international disease emergencies such as the 2009 A/H1N1 influenza pandemic and the 2014-15 Ebola epidemic in West Africa. PMID:27451336
Effects of turbulent hyporheic mixing on reach-scale solute transport
NASA Astrophysics Data System (ADS)
Roche, K. R.; Li, A.; Packman, A. I.
2017-12-01
Turbulence rapidly mixes solutes and fine particles into coarse-grained streambeds. Both hyporheic exchange rates and spatial variability of hyporheic mixing are known to be controlled by turbulence, but it is unclear how turbulent mixing influences mass transport at the scale of stream reaches. We used a process-based particle-tracking model to simulate local- and reach-scale solute transport for a coarse-bed stream. Two vertical mixing profiles, one with a smooth transition from in-stream to hyporheic transport conditions and a second with enhanced turbulent transport at the sediment-water interface, were fit to steady-state subsurface concentration profiles observed in laboratory experiments. The mixing profile with enhanced interfacial transport better matched the observed concentration profiles and overall mass retention in the streambed. The best-fit mixing profiles were then used to simulate upscaled solute transport in a stream. Enhanced mixing coupled in-stream and hyporheic solute transport, causing solutes exchanged into the shallow subsurface to have travel times similar to the water column. This extended the exponential region of the in-stream solute breakthrough curve, and delayed the onset of the heavy power-law tailing induced by deeper and slower hyporheic porewater velocities. Slopes of observed power-law tails were greater than those predicted from stochastic transport theory, and also changed in time. In addition, rapid hyporheic transport velocities truncated the hyporheic residence time distribution by causing mass to exit the stream reach via subsurface advection, yielding strong exponential tempering in the in-stream breakthrough curves at the timescale of advective hyporheic transport through the reach. These results show that strong turbulent mixing across the sediment-water interface violates the conventional separation of surface and subsurface flows used in current models for solute transport in rivers. Instead, the full distribution of flow and mixing over the surface-subsurface continuum must be explicitly considered to properly interpret solute transport in coarse-bed streams.
Weblog patterns and human dynamics with decreasing interest
NASA Astrophysics Data System (ADS)
Guo, J.-L.; Fan, C.; Guo, Z.-H.
2011-06-01
In order to describe the phenomenon that people's interest in an activity is high at the beginning and then gradually decreases until reaching a balance, we propose a model of the attenuation of interest which reflects the fact that people's interest becomes more stable after a long time. We give a rigorous analysis of this model using non-homogeneous Poisson processes. Our analysis indicates that the interval distribution of arrival times is a mixed distribution with exponential and power-law features, i.e., a power law with an exponential cutoff. We then collect blogs from ScienceNet.cn and carry out an empirical study of the interarrival time distribution. The empirical results agree well with the theoretical analysis, obeying a power law with an exponential cutoff, that is, a special kind of Gamma distribution. These empirical results verify the model by providing evidence for a new class of phenomena in human dynamics. It can be concluded that, besides power-law distributions, there are other distributions in human dynamics. These findings demonstrate the variety of human behavior dynamics.
Gabriel, Jan; Petrov, Oleg V; Kim, Youngsik; Martin, Steve W; Vogel, Michael
2015-09-01
We use ⁷Li NMR to study the ionic jump motion in ternary 0.5Li2S+0.5[(1-x)GeS2+xGeO2] glassy lithium ion conductors. Exploring the "mixed glass former effect" in this system led to the assumption of a homogeneous and random variation of diffusion barriers. We exploit the fact that, by combining traditional line-shape analysis with novel field-cycling relaxometry, it is possible to measure the spectral density of the ionic jump motion over broad frequency and temperature ranges and, thus, to determine the distribution of activation energies. Two models are employed to parameterize the ⁷Li NMR data, namely, the multi-exponential autocorrelation function model and the power-law waiting times model. Careful evaluation of both models indicates a broadly inhomogeneous energy landscape for both the single (x=0.0) and the mixed (x=0.1) network former glasses. The multi-exponential autocorrelation function model can be well described by a Gaussian distribution of activation barriers. The applicability of the methods used and their sensitivity to microscopic details of ionic motion are discussed.
Dynamic Latent Trait Models with Mixed Hidden Markov Structure for Mixed Longitudinal Outcomes.
Zhang, Yue; Berhane, Kiros
2016-01-01
We propose a general Bayesian joint modeling approach to model mixed longitudinal outcomes from the exponential family while taking into account any differential misclassification that may exist among categorical outcomes. Under this framework, outcomes observed without measurement error are related to latent trait variables through generalized linear mixed effect models. The misclassified outcomes are related to the latent class variables, which represent unobserved real states, using mixed hidden Markov models (MHMM). In addition to enabling the estimation of prevalence, transition and misclassification probabilities, MHMMs capture cluster-level heterogeneity. A transition modeling structure allows the latent trait and latent class variables to depend on observed predictors at the same time period and also on latent trait and latent class variables at previous time periods for each individual. Simulation studies are conducted to make comparisons with traditional models in order to illustrate the gains from the proposed approach. The new approach is applied to data from the Southern California Children's Health Study (CHS) to jointly model questionnaire-based asthma state and multiple lung function measurements in order to gain better insight into the underlying biological mechanism that governs the inter-relationship between asthma state and lung function development.
In College and in Recovery: Reasons for Joining a Collegiate Recovery Program
ERIC Educational Resources Information Center
Laudet, Alexandre B.; Harris, Kitty; Kimball, Thomas; Winters, Ken C.; Moberg, D. Paul
2016-01-01
Objective: Collegiate Recovery Programs (CRPs), a campus-based peer support model for students recovering from substance abuse problems, have grown exponentially in the past decade, yet remain unexplored. Methods: This mixed-methods study examines students' reasons for CRP enrollment to guide academic institutions and referral sources. Students (N =…
Transient modeling in simulation of hospital operations for emergency response.
Paul, Jomon Aliyas; George, Santhosh K; Yi, Pengfei; Lin, Li
2006-01-01
Rapid estimates of hospital capacity after an event that may cause a disaster can assist disaster-relief efforts. Due to the dynamics of hospitals following such an event, it is necessary to model the behavior of the system accurately. A transient modeling approach using simulation and exponential functions is presented, along with its applications in an earthquake situation. The parameters of the exponential model are regressed using outputs from designed simulation experiments. The developed model is capable of representing transient patient waiting times during a disaster. Most importantly, the modeling approach allows real-time capacity estimation of hospitals of various sizes and capabilities. Further, this research analyzes the effects of priority-based routing of patients within the hospital and the effects of various patient mixes on patient waiting times. The model guides patients based on the severity of injuries and queues the patients requiring critical care depending on their remaining survivability time. The model also accounts for the impact of prehospital transport time on patient waiting time.
A Simple Model of Cirrus Horizontal Inhomogeneity and Cloud Fraction
NASA Technical Reports Server (NTRS)
Smith, Samantha A.; DelGenio, Anthony D.
1998-01-01
A simple model of horizontal inhomogeneity and cloud fraction in cirrus clouds has been formulated on the basis that all internal horizontal inhomogeneity in the ice mixing ratio is due to variations in the cloud depth, which are assumed to be Gaussian. The use of such a model was justified by the observed relationship between the normalized variability of the ice water mixing ratio (and extinction) and the normalized variability of cloud depth. Using radar cloud depth data as input, the model reproduced well the in-cloud ice water mixing ratio histograms obtained from horizontal runs during the FIRE2 cirrus campaign. For totally overcast cases the histograms were almost Gaussian, but as cloud fraction decreased they changed to exponential distributions which peaked at the lowest nonzero ice value for cloud fractions below 90%. Cloud fractions predicted by the model were always within 28% of the observed value. The predicted average ice water mixing ratios were within 34% of the observed values. This model could be used in a GCM to produce the ice mixing ratio probability distribution function and to estimate cloud fraction. It only requires basic meteorological parameters, the depth of the saturated layer, and the standard deviation of cloud depth as input.
Auxiliary Parameter MCMC for Exponential Random Graph Models
NASA Astrophysics Data System (ADS)
Byshkin, Maksym; Stivala, Alex; Mira, Antonietta; Krause, Rolf; Robins, Garry; Lomi, Alessandro
2016-11-01
Exponential random graph models (ERGMs) are a well-established family of statistical models for analyzing social networks. Computational complexity has so far limited the appeal of ERGMs for the analysis of large social networks. Efficient computational methods are highly desirable in order to extend the empirical scope of ERGMs. In this paper we report results of a research project on the development of snowball sampling methods for ERGMs. We propose an auxiliary parameter Markov chain Monte Carlo (MCMC) algorithm for sampling from the relevant probability distributions. The method is designed to decrease the number of allowed network states without worsening the mixing of the Markov chains, and suggests a new approach for the developments of MCMC samplers for ERGMs. We demonstrate the method on both simulated and actual (empirical) network data and show that it reduces CPU time for parameter estimation by an order of magnitude compared to current MCMC methods.
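For context, an ERGM places an exponential-family distribution over graphs x with sufficient statistics s(x):

```latex
\[
  P_\theta(X = x) \;=\; \frac{\exp\!\big(\theta^{\top} s(x)\big)}{\kappa(\theta)},
  \qquad \kappa(\theta) = \sum_{x' \in \mathcal{X}} \exp\!\big(\theta^{\top} s(x')\big),
\]
% the normalizer \kappa(\theta) sums over all graphs in \mathcal{X} and is
% intractable, which is why MCMC sampling -- and hence good chain mixing --
% is central to ERGM estimation.
```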
A Grobner Basis Solution for Lightning Ground Flash Fraction Retrieval
NASA Technical Reports Server (NTRS)
Solakiewicz, Richard; Attele, Rohan; Koshak, William
2011-01-01
A Bayesian inversion method was previously introduced for retrieving the fraction of ground flashes in a set of flashes observed from a (low earth orbiting or geostationary) satellite lightning imager. The method employed a constrained mixed exponential distribution model to describe the lightning optical measurements. To obtain the optimum model parameters, a scalar function was minimized by a numerical method. In order to improve this optimization, we introduce a Grobner basis solution to obtain analytic representations of the model parameters that serve as a refined initialization scheme to the numerical optimization. Using the Grobner basis, we show that there are exactly 2 solutions involving the first 3 moments of the (exponentially distributed) data. When the mean of the ground flash optical characteristic (e.g., such as the Maximum Group Area, MGA) is larger than that for cloud flashes, then a unique solution can be obtained.
NASA Astrophysics Data System (ADS)
Livorati, André L. P.; Palmero, Matheus S.; Díaz-I, Gabriel; Dettmann, Carl P.; Caldas, Iberê L.; Leonel, Edson D.
2018-02-01
We study the dynamics of an ensemble of noninteracting particles constrained by two infinitely heavy walls, where one of them is moving periodically in time, while the other is fixed. The system presents mixed dynamics, where the accessible region for the particle to diffuse chaotically is bordered by an invariant spanning curve. Statistical analysis for the root mean square velocity, considering high and low velocity ensembles, leads the dynamics to the same steady state plateau for long times. A transport investigation of the dynamics via escape basins reveals that, depending on the initial velocity ensemble, the decay rates of the survival probability present different shapes and bumps, in a mix of exponential, power law and stretched exponential decays. After an analysis of step-size averages, we found that the stable manifolds play the role of a preferential path for faster escape, being responsible for the bumps and different shapes of the survival probability.
NASA Astrophysics Data System (ADS)
Shateyi, Stanford; Marewo, Gerald T.
2018-05-01
We numerically investigate a mixed convection model for a magnetohydrodynamic (MHD) Jeffery fluid flowing over an exponentially stretching sheet. The influence of thermal radiation and chemical reaction is also considered in this study. The governing non-linear coupled partial differential equations are reduced to a set of coupled non-linear ordinary differential equations by using similarity functions. This new set of ordinary differential equations is solved numerically using the Spectral Quasi-Linearization Method. A parametric study of the physical parameters involved is carried out and displayed in tabular and graphical forms. It is observed that the velocity is enhanced with increasing values of the Deborah number, buoyancy and thermal radiation parameters. Furthermore, the temperature and species concentration are decreasing functions of the Deborah number. The skin friction coefficient increases with increasing values of the magnetic parameter and relaxation time. Heat and mass transfer rates increase with increasing values of the Deborah number and buoyancy parameters.
ERIC Educational Resources Information Center
Baker-Doyle, Kira J.
2015-01-01
Social network research on teachers and schools has risen exponentially in recent years as an innovative method to reveal the role of social networks in education. However, scholars are still exploring ways to incorporate traditional quantitative methods of Social Network Analysis (SNA) with qualitative approaches to social network research. This…
A Parametric Study of Fine-scale Turbulence Mixing Noise
NASA Technical Reports Server (NTRS)
Khavaran, Abbas; Bridges, James; Freund, Jonathan B.
2002-01-01
The present paper is a study of aerodynamic noise spectra from model functions that describe the source. The study is motivated by the need to improve the spectral shape of the MGBK jet noise prediction methodology at high frequency. The predicted spectral shape usually appears less broadband than measurements and decays faster at high frequency. Theoretical representation of the source is based on Lilley's equation. Numerical simulations of high-speed subsonic jets as well as some recent turbulence measurements reveal a number of interesting statistical properties of turbulence correlation functions that may have a bearing on radiated noise. These studies indicate that an exponential spatial function may be a more appropriate representation of a two-point correlation than its Gaussian counterpart. The effect of source non-compactness on spectral shape is discussed. It is shown that source non-compactness could well be the differentiating factor between the Gaussian and exponential model functions. In particular, the fall-off of the noise spectra at high frequency is studied, and it is shown that a non-compact source with an exponential model function results in a broader spectrum and better agreement with data. An alternate source model that represents the source as a covariance of the convective derivative of the fine-scale turbulence kinetic energy is also examined.
NASA Technical Reports Server (NTRS)
Koshak, William; Solakiewicz, Richard
2012-01-01
The ability to estimate the fraction of ground flashes in a set of flashes observed by a satellite lightning imager, such as the future GOES-R Geostationary Lightning Mapper (GLM), would likely improve operational and scientific applications (e.g., severe weather warnings, lightning nitrogen oxides studies, and global electric circuit analyses). A Bayesian inversion method, called the Ground Flash Fraction Retrieval Algorithm (GoFFRA), was recently developed for estimating the ground flash fraction. The method uses a constrained mixed exponential distribution model to describe a particular lightning optical measurement called the Maximum Group Area (MGA). To obtain the optimum model parameters (one of which is the desired ground flash fraction), a scalar function must be minimized. This minimization is difficult because of two problems: (1) Label Switching (LS), and (2) Parameter Identity Theft (PIT). The LS problem is well known in the literature on mixed exponential distributions, and the PIT problem was discovered in this study. Each problem occurs when one allows the numerical minimizer to freely roam through the parameter search space; this allows certain solution parameters to interchange roles, which leads to fundamental ambiguities and solution error. A major accomplishment of this study is that we have employed a state-of-the-art genetic-based global optimization algorithm called Differential Evolution (DE) that constrains the parameter search in such a way as to remove both the LS and PIT problems. To test the performance of the GoFFRA when DE is employed, we applied it to analyze simulated MGA datasets that we generated from known mixed exponential distributions. Moreover, we evaluated the GoFFRA/DE method by applying it to analyze actual MGAs derived from low-Earth orbiting lightning imaging sensor data; the actual MGA data were classified as either ground or cloud flash MGAs using National Lightning Detection Network™ (NLDN) data. Solution error plots are provided for both the simulations and actual data analyses.
Xu, Yifan; Sun, Jiayang; Carter, Rebecca R; Bogie, Kath M
2014-05-01
Stereophotogrammetric digital imaging enables rapid and accurate detailed 3D wound monitoring. This rich data source was used to develop a statistically validated model to provide personalized predictive healing information for chronic wounds. 147 valid wound images were obtained from a sample of 13 category III/IV pressure ulcers from 10 individuals with spinal cord injury. Statistical comparison of several models indicated the best fit for the clinical data was a personalized mixed-effects exponential model (pMEE), with initial wound size and time as predictors and observed wound size as the response variable. Random effects capture personalized differences. Other models are only valid when wound size constantly decreases. This is often not achieved for clinical wounds. Our model accommodates this reality. Two criteria to determine effective healing time outcomes are proposed: the r-fold wound size reduction time, t(r-fold), is defined as the time when wound size reduces to 1/r of initial size; t(δ) is defined as the time when the rate of wound healing/size change reduces to a predetermined threshold δ < 0. Healing rate differs from patient to patient. Model development and validation indicate that accurate monitoring of wound geometry can adaptively predict healing progression and that larger wounds heal more rapidly. Accuracy of the prediction curve in the current model improves with each additional evaluation. Routine assessment of wounds using detailed stereophotogrammetric imaging can provide personalized predictions of wound healing time. Application of a valid model will help the clinical team to determine wound management care pathways. Published by Elsevier Ltd.
NASA Technical Reports Server (NTRS)
King, Sun-Kun
1996-01-01
The variances of the quantum-mechanical noise in a two-input-port Michelson interferometer within the framework of the Loudon-Ni model were solved exactly in two general cases: (1) one coherent state input and one squeezed state input, and (2) two photon-number-state inputs. The low-intensity limit, exponentially decaying signals, and the noise due to mixing are discussed briefly.
Genetic demixing and evolution in linear stepping stone models
NASA Astrophysics Data System (ADS)
Korolev, K. S.; Avlund, Mikkel; Hallatschek, Oskar; Nelson, David R.
2010-04-01
Results for mutation, selection, genetic drift, and migration in a one-dimensional continuous population are reviewed and extended. The population is described by a continuous limit of the stepping stone model, which leads to the stochastic Fisher-Kolmogorov-Petrovsky-Piscounov equation with additional terms describing mutations. Although the stepping stone model was first proposed for population genetics, it is closely related to “voter models” of interest in nonequilibrium statistical mechanics. The stepping stone model can also be regarded as an approximation to the dynamics of a thin layer of actively growing pioneers at the frontier of a colony of micro-organisms undergoing a range expansion on a Petri dish. The population tends to segregate into monoallelic domains. This segregation slows down genetic drift and selection because these two evolutionary forces can only act at the boundaries between the domains; the effects of mutation, however, are not significantly affected by the segregation. Although fixation in the neutral well-mixed (or “zero-dimensional”) model occurs exponentially in time, it occurs only algebraically fast in the one-dimensional model. An unusual sublinear increase is also found in the variance of the spatially averaged allele frequency with time. If selection is weak, selective sweeps occur exponentially fast in both well-mixed and one-dimensional populations, but the time constants are different. The relatively unexplored problem of evolutionary dynamics at the edge of an expanding circular colony is studied as well. We also review how the observed patterns of genetic diversity can be used for statistical inference and highlight the differences between the well-mixed and one-dimensional models. Although the focus is on two alleles or variants, q-allele Potts-like models of gene segregation are considered as well. Most of the analytical results are checked with simulations and could be tested against recent spatial experiments on range expansions of inoculations of Escherichia coli and Saccharomyces cerevisiae.
Long time stability of small-amplitude Breathers in a mixed FPU-KG model
NASA Astrophysics Data System (ADS)
Paleari, Simone; Penati, Tiziano
2016-12-01
In the limit of small couplings in the nearest neighbor interaction, and small total energy, we apply the resonant normal form result of a previous paper of ours to a finite but arbitrarily large mixed Fermi-Pasta-Ulam Klein-Gordon chain, i.e., with both linear and nonlinear terms in both the on-site and interaction potential, with periodic boundary conditions. An existence and orbital stability result for Breathers of such a normal form, which turns out to be a generalized discrete nonlinear Schrödinger model with exponentially decaying all neighbor interactions, is first proved. Exploiting such a result as an intermediate step, a long time stability theorem for the true Breathers of the KG and FPU-KG models, in the anti-continuous limit, is proven.
Apparent power-law distributions in animal movements can arise from intraspecific interactions
Breed, Greg A.; Severns, Paul M.; Edwards, Andrew M.
2015-01-01
Lévy flights have gained prominence for analysis of animal movement. In a Lévy flight, step-lengths are drawn from a heavy-tailed distribution such as a power law (PL), and a large number of empirical demonstrations have been published. Others, however, have suggested that animal movement is ill fit by PL distributions or contend a state-switching process better explains apparent Lévy flight movement patterns. We used a mix of direct behavioural observations and GPS tracking to understand step-length patterns in females of two related butterflies. We initially found movement in one species (Euphydryas editha taylori) was best fit by a bounded PL, evidence of a Lévy flight, while the other (Euphydryas phaeton) was best fit by an exponential distribution. Subsequent analyses introduced additional candidate models and used behavioural observations to sort steps based on intraspecific interactions (interactions were rare in E. phaeton but common in E. e. taylori). These analyses showed a mixed-exponential is favoured over the bounded PL for E. e. taylori and that when step-lengths were sorted into states based on the influence of harassing conspecific males, both states were best fit by simple exponential distributions. The direct behavioural observations allowed us to infer the underlying behavioural mechanism is a state-switching process driven by intraspecific interactions rather than a Lévy flight. PMID:25519992
NASA Astrophysics Data System (ADS)
Bakhshi Khaniki, Hossein; Rajasekaran, Sundaramoorthy
2018-05-01
This study presents a comprehensive investigation of the mechanical behavior of non-uniform bi-directional functionally graded beam sensors in the framework of modified couple stress theory. Material variation is modelled through both the length and thickness directions using power-law, sigmoid and exponential functions. Moreover, the beam cross-section is assumed to vary linearly, exponentially or parabolically along the length using power-law and sigmoid varying functions. Using these assumptions, a general model for microbeams is presented and formulated by employing Hamilton’s principle. Governing equations are solved using a mixed finite element method with the Lagrangian interpolation technique, Gaussian quadrature method and Wilson’s Lagrangian multiplier method. It is shown that by using bi-directional functionally graded materials in nonuniform microbeams, the mechanical behavior of such structures can be affected noticeably, and the scale parameter has a significant effect in changing the rigidity of nonuniform bi-directional functionally graded beams.
Modeling of mixing processes: Fluids, particulates, and powders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ottino, J.M.; Hansen, S.
Work under this grant involves two main areas: (1) Mixing of Viscous Liquids, this first area comprising aggregation, fragmentation and dispersion, and (2) Mixing of Powders. In order to produce a coherent self-contained picture, we report primarily on results obtained under (1), and within this area, mostly on computational studies of particle aggregation in regular and chaotic flows. Numerical simulations show that the average cluster size of compact clusters grows algebraically, while the average cluster size of fractal clusters grows exponentially; companion mathematical arguments are used to describe the initial growth of average cluster size and polydispersity. It is found that when the system is well mixed and the capture radius independent of mass, the polydispersity is constant for long times and the cluster size distribution is self-similar. Furthermore, our simulations indicate that the fractal nature of the clusters is dependent upon the mixing.
In vivo growth of 60 non-screening detected lung cancers: a computed tomography study.
Mets, Onno M; Chung, Kaman; Zanen, Pieter; Scholten, Ernst T; Veldhuis, Wouter B; van Ginneken, Bram; Prokop, Mathias; Schaefer-Prokop, Cornelia M; de Jong, Pim A
2018-04-01
Current pulmonary nodule management guidelines are based on nodule volume doubling time, which assumes exponential growth behaviour. However, this is a theory that has never been validated in vivo in the routine-care target population. This study evaluates growth patterns of untreated solid and subsolid lung cancers of various histologies in a non-screening setting. Growth behaviour of pathology-proven lung cancers from two academic centres that were imaged at least three times before diagnosis (n=60) was analysed using dedicated software. Random-intercept random-slope mixed-models analysis was applied to test which growth pattern most accurately described lung cancer growth. Individual growth curves were plotted per pathology subgroup and nodule type. We confirmed that growth in both subsolid and solid lung cancers is best explained by an exponential model. However, subsolid lesions generally progress slower than solid ones. Baseline lesion volume was not related to growth, indicating that smaller lesions do not grow slower compared to larger ones. By showing that lung cancer conforms to exponential growth we provide the first experimental basis in the routine-care setting for the assumption made in volume doubling time analysis. Copyright ©ERS 2018.
Human mobility in space from three modes of public transportation
NASA Astrophysics Data System (ADS)
Jiang, Shixiong; Guan, Wei; Zhang, Wenyi; Chen, Xu; Yang, Liu
2017-10-01
Human mobility patterns have drawn much attention from researchers for decades, given their importance for urban planning and traffic management. In this study, taxi GPS trajectories and smart card transaction data of subway and bus from Beijing are utilized to model human mobility in space. The original datasets are cleaned and processed to attain the displacement of each trip according to the origin and destination locations. Then, the Akaike information criterion is adopted to screen out the best fitting distribution for each mode from candidate ones. The results indicate that displacements of taxi trips follow the exponential distribution. Besides, the exponential distribution also fits displacements of bus trips well. However, their exponents are significantly different. Displacements of subway trips show great specialties and can be well fitted by the gamma distribution. It is obvious that human mobility of each mode is different. To explore the overall human mobility, the three datasets are mixed up to form a fusion dataset according to the annual ridership proportions. Finally, the fusion displacements follow the power-law distribution with an exponential cutoff. It is innovative to combine different transportation modes to model human mobility in the city.
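For instance, an exponential displacement law p(d) = λ·e^(-λd) can be fit per mode by maximum likelihood and scored with AIC against other candidate distributions (a minimal sketch; the displacement values are hypothetical):

    import numpy as np

    # Hypothetical trip displacements (km); real data would come from taxi GPS
    # or smart-card origin-destination pairs.
    d = np.array([1.2, 3.4, 0.8, 5.1, 2.2, 7.9, 0.5, 4.3])

    # MLE for the exponential law: the rate is the reciprocal of the mean.
    lam = 1.0 / d.mean()
    log_like = np.sum(np.log(lam) - lam * d)
    aic = 2 * 1 - 2 * log_like  # one fitted parameter
    print(lam, aic)  # compare AIC across candidate distributions per mode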
Comparing Exponential and Exponentiated Models of Drug Demand in Cocaine Users
Strickland, Justin C.; Lile, Joshua A.; Rush, Craig R.; Stoops, William W.
2016-01-01
Drug purchase tasks provide rapid and efficient measurement of drug demand. Zero values (i.e., prices with zero consumption) present a quantitative challenge when using exponential demand models that exponentiated models may resolve. We aimed to replicate and advance the utility of using an exponentiated model by demonstrating construct validity (i.e., association with real-world drug use) and generalizability across drug commodities. Participants (N = 40 cocaine-using adults) completed Cocaine, Alcohol, and Cigarette Purchase Tasks evaluating hypothetical consumption across changes in price. Exponentiated and exponential models were fit to these data using different treatments of zero consumption values, including retaining zeros or replacing them with 0.1, 0.01, or 0.001. Excellent model fits were observed with the exponentiated model. Means and precision fluctuated with different replacement values when using the exponential model, but were consistent for the exponentiated model. The exponentiated model provided the strongest correlation between derived demand intensity (Q0) and self-reported free consumption in all instances (Cocaine r = .88; Alcohol r = .97; Cigarette r = .91). Cocaine demand elasticity was positively correlated with alcohol and cigarette elasticity. Exponentiated parameters were associated with real-world drug use (e.g., weekly cocaine use), whereas these correlations were less consistent for exponential parameters. Our findings show that the selection of zero replacement values affects demand parameters and their association with drug-use outcomes when using the exponential model, but not the exponentiated model. This work supports the adoption of the exponentiated demand model by replicating improved fit and consistency, in addition to demonstrating construct validity and generalizability. PMID:27929347
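For concreteness, a sketch of the exponentiated demand form Q = Q0·10^(k(e^(-α·Q0·C) - 1)) (with the span parameter k held fixed, a common practice), fit to hypothetical purchase-task data in which zero-consumption prices are retained untransformed:

    import numpy as np
    from scipy.optimize import curve_fit

    def exponentiated_demand(C, Q0, alpha, k=2.0):
        # Consumption is modeled directly (not log-consumption), so zero
        # values need no replacement before fitting.
        return Q0 * 10 ** (k * (np.exp(-alpha * Q0 * C) - 1))

    prices = np.array([0.0, 0.5, 1, 2, 5, 10, 20, 50], dtype=float)
    consumption = np.array([10, 9, 8, 6, 3, 1, 0, 0], dtype=float)  # zeros kept

    (Q0_hat, alpha_hat), _ = curve_fit(exponentiated_demand, prices,
                                       consumption, p0=[10.0, 0.01])
    print(Q0_hat, alpha_hat)  # demand intensity and elasticity parameters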
Decay of random correlation functions for unimodal maps
NASA Astrophysics Data System (ADS)
Baladi, Viviane; Benedicks, Michael; Maume-Deschamps, Véronique
2000-10-01
Since the pioneering results of Jakobson and subsequent work by Benedicks-Carleson and others, it is known that quadratic maps f_a(χ) = a - χ² admit a unique absolutely continuous invariant measure for a positive measure set of parameters a. For topologically mixing f_a, Young and Keller-Nowicki independently proved exponential decay of correlation functions for this a.c.i.m. and smooth observables. We consider random compositions of small perturbations f + ω_t, with f = f_a or another unimodal map satisfying certain nonuniform hyperbolicity axioms, and ω_t chosen independently and identically in [-ɛ, ɛ]. Baladi-Viana showed exponential mixing of the associated Markov chain, i.e., averaging over all random itineraries. We obtain stretched exponential bounds for the random correlation functions of Lipschitz observables for the sample measure μ_ω of almost every itinerary.
NASA Astrophysics Data System (ADS)
Isa, Siti Suzilliana Putri Mohamed; Arifin, Norihan Md.; Nazar, Roslinda; Bachok, Norfifah; Ali, Fadzilah Md
2017-12-01
A theoretical study that describes the magnetohydrodynamic mixed convection boundary layer flow with heat transfer over an exponentially stretching sheet with an exponential temperature distribution has been presented herein. This study is conducted in the presence of convective heat exchange at the surface and its surroundings. The system is controlled by viscous dissipation and internal heat generation effects. The governing nonlinear partial differential equations are converted into ordinary differential equations by a similarity transformation. The converted equations are then solved numerically using the shooting method. The results related to skin friction coefficient, local Nusselt number, velocity and temperature profiles are presented for several sets of values of the parameters. The effects of the governing parameters on the features of the flow and heat transfer are examined in detail in this study.
A mathematical model for generating bipartite graphs and its application to protein networks
NASA Astrophysics Data System (ADS)
Nacher, J. C.; Ochiai, T.; Hayashida, M.; Akutsu, T.
2009-12-01
Complex systems arise in many different contexts, from large communication systems and transportation infrastructures to molecular biology. Most of these systems can be organized into networks composed of nodes and interacting edges. Here, we present a theoretical model that constructs bipartite networks with the particular feature that the degree distribution can be tuned depending on the probability rate of fundamental processes. We then use this model to investigate protein-domain networks. A protein can be composed of up to hundreds of domains. Each domain represents a conserved sequence segment with specific functional tasks. We analyze the distribution of domains in Homo sapiens and Arabidopsis thaliana organisms, and the statistical analysis shows that while (a) the number of domain types shared by k proteins exhibits a power-law distribution, (b) the number of proteins composed of k types of domains follows an exponential distribution. The proposed mathematical model generates bipartite graphs and predicts the emergence of this mixing of (a) power-law and (b) exponential distributions. Our theoretical and computational results show that this model requires (1) a growth process and (2) a copy mechanism.
MIXREG: a computer program for mixed-effects regression analysis with autocorrelated errors.
Hedeker, D; Gibbons, R D
1996-05-01
MIXREG is a program that provides estimates for a mixed-effects regression model (MRM) for normally-distributed response data including autocorrelated errors. This model can be used for analysis of unbalanced longitudinal data, where individuals may be measured at a different number of timepoints, or even at different timepoints. Autocorrelated errors of a general form or following an AR(1), MA(1), or ARMA(1,1) form are allowable. This model can also be used for analysis of clustered data, where the mixed-effects model assumes data within clusters are dependent. The degree of dependency is estimated jointly with estimates of the usual model parameters, thus adjusting for clustering. MIXREG uses maximum marginal likelihood estimation, utilizing both the EM algorithm and a Fisher-scoring solution. For the scoring solution, the covariance matrix of the random effects is expressed in its Gaussian decomposition, and the diagonal matrix reparameterized using the exponential transformation. Estimation of the individual random effects is accomplished using an empirical Bayes approach. Examples illustrating usage and features of MIXREG are provided.
NASA Technical Reports Server (NTRS)
Valisetty, R. R.; Chamis, C. C.
1987-01-01
A computer code is presented for the sublaminate/ply level analysis of composite structures. This code is useful for obtaining stresses in regions affected by delaminations, transverse cracks, and discontinuities related to inherent fabrication anomalies, geometric configurations, and loading conditions. Particular attention is focussed on those layers or groups of layers (sublaminates) which are immediately affected by the inherent flaws. These layers are analyzed as homogeneous bodies in equilibrium and in isolation from the rest of the laminate. The theoretical model used to analyze the individual layers allows the relevant stresses and displacements near discontinuities to be represented in the form of pure exponential-decay-type functions which are selected to eliminate the exponential-precision-related difficulties in sublaminate/ply level analysis. Thus, sublaminate analysis can be conducted without any restriction on the maximum number of layers, delaminations, transverse cracks, or other types of discontinuities. In conjunction with the strain energy release rate (SERR) concept and composite micromechanics, this computational procedure is used to model select cases of end-notch and mixed-mode fracture specimens. The computed stresses are in good agreement with those from a three-dimensional finite element analysis. Also, SERRs compare well with limited available experimental data.
Meulenbroek, Bernard; Ebert, Ute; Schäfer, Lothar
2005-11-04
The dynamics of ionization fronts that generate a conducting body are in the simplest approximation equivalent to viscous fingering without regularization. Going beyond this approximation, we suggest that ionization fronts can be modeled by a mixed Dirichlet-Neumann boundary condition. We derive exact uniformly propagating solutions of this problem in 2D and construct a single partial differential equation governing small perturbations of these solutions. For some parameter value, this equation can be solved analytically, which shows rigorously that the uniformly propagating solution is linearly convectively stable and that the asymptotic relaxation is universal and exponential in time.
Sanchez-Salas, Rafael; Olivier, Fabien; Prapotnich, Dominique; Dancausa, José; Fhima, Mehdi; David, Stéphane; Secin, Fernando P; Ingels, Alexandre; Barret, Eric; Galiano, Marc; Rozet, François; Cathelineau, Xavier
2016-01-01
Prostate-specific antigen (PSA) doubling time relies on an exponential kinetic pattern. This pattern has never been validated in the setting of intermittent androgen deprivation (IAD). The objective is to analyze the prognostic significance for PCa of recurrent patterns in PSA kinetics in patients undergoing IAD. A retrospective study was conducted on 377 patients treated with IAD. The on-treatment period (ONTP) consisted of gonadotropin-releasing hormone agonist injections combined with an oral androgen receptor antagonist. The off-treatment period (OFTP) began when PSA was lower than 4 ng/ml. ONTP resumed when PSA was higher than 20 ng/ml. PSA values of each OFTP were fitted with three basic patterns: exponential (PSA(t) = λ·e^(αt)), linear (PSA(t) = a·t), and power law (PSA(t) = a·t^c). Univariate and multivariate Cox regression models analyzed predictive factors for oncologic outcomes. Only 45% of the analyzed OFTPs were exponential. Linear and power-law PSA kinetics represented 7.5% and 7.7%, respectively. The remaining fraction of analyzed OFTPs (40%) exhibited complex kinetics. Exponential PSA kinetics during the first OFTP was significantly associated with worse oncologic outcome. The estimated 10-year cancer-specific survival (CSS) was 46% for exponential versus 80% for nonexponential PSA kinetics patterns. The corresponding 10-year probability of castration-resistant prostate cancer (CRPC) was 69% and 31% for the two patterns, respectively. Limitations include retrospective design and mixed indications for IAD. PSA kinetics fitted an exponential pattern in approximately half of the OFTPs. First-OFTP exponential PSA kinetics was associated with a shorter time to CRPC and worse CSS. © 2015 Wiley Periodicals, Inc.
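As an illustration (with made-up PSA values, not the study's data), the three candidate patterns above can be fit to an off-treatment PSA series by nonlinear least squares and compared on residual error:

    import numpy as np
    from scipy.optimize import curve_fit

    exponential = lambda t, lam, a: lam * np.exp(a * t)  # PSA(t) = lam*e^(a*t)
    linear      = lambda t, a: a * t                     # PSA(t) = a*t
    power_law   = lambda t, a, c: a * t ** c             # PSA(t) = a*t^c

    t = np.array([1.0, 3.0, 6.0, 9.0, 12.0])     # months off treatment
    psa = np.array([0.5, 1.1, 2.6, 5.8, 13.0])   # hypothetical ng/ml values

    for name, f, p0 in [("exponential", exponential, (0.3, 0.3)),
                        ("linear", linear, (1.0,)),
                        ("power law", power_law, (0.5, 1.5))]:
        params, _ = curve_fit(f, t, psa, p0=p0, maxfev=10000)
        sse = float(np.sum((psa - f(t, *params)) ** 2))
        print(name, params, sse)  # lowest residual identifies the pattern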
Towards enhancing and delaying disturbances in free shear flows
NASA Technical Reports Server (NTRS)
Criminale, W. O.; Jackson, T. L.; Lasseigne, D. G.
1994-01-01
The family of shear flows comprising the jet, wake, and the mixing layer are subjected to perturbations in an inviscid incompressible fluid. By modeling the basic mean flows as parallel with piecewise linear variations for the velocities, complete and general solutions to the linearized equations of motion can be obtained in closed form as functions of all space variables and time when posed as an initial value problem. The results show that there is a continuous as well as the discrete spectrum that is more familiar in stability theory and therefore there can be both algebraic and exponential growth of disturbances in time. These bases make it feasible to consider control of such flows. To this end, the possibility of enhancing the disturbances in the mixing layer and delaying the onset in the jet and wake is investigated. It is found that growth of perturbations can be delayed to a considerable degree for the jet and the wake but, by comparison, cannot be enhanced in the mixing layer. By using moving coordinates, a method for demonstrating the predominant early and long time behavior of disturbances in these flows is given for continuous velocity profiles. It is shown that the early time transients are always algebraic whereas the asymptotic limit is that of an exponential normal mode. Numerical treatment of the new governing equations confirm the conclusions reached by use of the piecewise linear basic models. Although not pursued here, feedback mechanisms designed for control of the flow could be devised using the results of this work.
New method to calculate the N2 evolution from mixed venous blood during the N2 washout.
Han, D; Jeng, D R; Cruz, J C; Flores, X F; Mallea, J M
2001-08-01
Modeling the normalized phase III slope (Sn) from N2 expirograms of the multibreath N2 washout is a challenge for researchers. Experimental measurements show that Sn increases with the number of breaths. Previously, we predicted Sn by setting the concentration (atm) of mixed venous blood (Fbi,N2) to a constant value of 0.3 after the fifth breath to calculate the amount of N2 transferred from the blood to the alveoli. As a consequence, the predicted curve of the Sn values showed a maximum before the quasi-steady state was reached. In this paper, we present a way of calculating the amount of N2 transferred from the blood to the alveoli by setting Fbi,N2 in the following way: in the first six breaths Fbi,N2 is kept constant at the initial value of 0.8, because circulation time needs at least 30 s to alter it. Thereafter, a single exponential function with respect to the number of breaths is used: Fbi = 0.8 exp[0.112(6 - n)], in which n is the breath number. The predicted Sn values were compared with experimental data from the literature. The assumption of an exponential decay in the N2 evolved from mixed venous blood is important in determining the shape of the Sn curve, but new experimental data are needed to determine the validity of the model. We concluded that this new approach to calculating the N2 evolution from the blood is more meaningful physiologically.
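In code, the breath-by-breath mixed venous N2 fraction described above is simply (constants exactly as given in the abstract):

    import numpy as np

    def mixed_venous_fn2(n):
        # Held at 0.8 for the first six breaths (circulation delay),
        # then decaying exponentially with breath number.
        return 0.8 if n <= 6 else 0.8 * np.exp(0.112 * (6 - n))

    for n in (1, 6, 10, 20, 40):
        print(n, round(mixed_venous_fn2(n), 4))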
Effects of Frequency Spreads on Beam Breakup Instabilities in Linear Accelerators
1989-05-11
… (i - 1)A/(N - 1), i = 1, …, N. In keeping with the above model [cf. Eq. (4)], we assume that there is an equal probability (1/N) for each of these … solution (5), which contains two approximations to (10) and (11) [cf. the N → ∞ limit, and then asymptotic analysis], could accurately model the numerical … that the transition time T1 = O(1) is the time when the phase-mixing factor A0·t is roughly balanced by the BBU exponentiation factor 1.64·W^(1/3) [cf. …
Jurgens, Bryant C.; Böhlke, J.K.; Eberts, Sandra M.
2012-01-01
TracerLPM is an interactive Excel® (2007 or later) workbook program for evaluating groundwater age distributions from environmental tracer data by using lumped parameter models (LPMs). Lumped parameter models are mathematical models of transport based on simplified aquifer geometry and flow configurations that account for effects of hydrodynamic dispersion or mixing within the aquifer, well bore, or discharge area. Five primary LPMs are included in the workbook: piston-flow model (PFM), exponential mixing model (EMM), exponential piston-flow model (EPM), partial exponential model (PEM), and dispersion model (DM). Binary mixing models (BMM) can be created by combining primary LPMs in various combinations. Travel time through the unsaturated zone can be included as an additional parameter. TracerLPM also allows users to enter age distributions determined from other methods, such as particle tracking results from numerical groundwater-flow models or from other LPMs not included in this program. Tracers of both young groundwater (anthropogenic atmospheric gases and isotopic substances indicating post-1940s recharge) and much older groundwater (carbon-14 and helium-4) can be interpreted simultaneously so that estimates of the groundwater age distribution for samples with a wide range of ages can be constrained. TracerLPM is organized to permit a comprehensive interpretive approach consisting of hydrogeologic conceptualization, visual examination of data and models, and best-fit parameter estimation. Groundwater age distributions can be evaluated by comparing measured and modeled tracer concentrations in two ways: (1) multiple tracers analyzed simultaneously can be evaluated against each other for concordance with modeled concentrations (tracer-tracer application) or (2) tracer time-series data can be evaluated for concordance with modeled trends (tracer-time application). Groundwater-age estimates can also be obtained for samples with a single tracer measurement at one point in time; however, prior knowledge of an appropriate LPM is required because the mean age is often non-unique. LPM output concentrations depend on model parameters and sample date. All of the LPMs have a parameter for mean age. The EPM, PEM, and DM have an additional parameter that characterizes the degree of age mixing in the sample. BMMs have a parameter for the fraction of the first component in the mixture. An LPM, together with its parameter values, provides a description of the age distribution or the fractional contribution of water for every age of recharge contained within a sample. For the PFM, the age distribution is a unit pulse at one distinct age. For the other LPMs, the age distribution can be much broader and span decades, centuries, millennia, or more. For a sample with a mixture of groundwater ages, the reported interpretation of tracer data includes the LPM name, the mean age, and the values of any other independent model parameters. TracerLPM also can be used for simulating the responses of wells, springs, streams, or other groundwater discharge receptors to nonpoint-source contaminants that are introduced in recharge, such as nitrate. This is done by combining an LPM or user-defined age distribution with information on contaminant loading at the water table. 
Information on historic contaminant loading can be used to help evaluate a model's ability to match real world conditions and understand observed contaminant trends, while information on future contaminant loading scenarios can be used to forecast potential contaminant trends.
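As a minimal sketch of how one of these LPMs works (the exponential mixing model on discrete annual steps, ignoring tracer decay and unsaturated-zone lag; this is an illustration, not the TracerLPM implementation):

    import numpy as np

    def emm_output(c_in, mean_transit_time):
        # Exponential mixing model: discharge is a mixture of recharge years
        # weighted by g(tau) = exp(-tau/T)/T, with T the mean transit time.
        tau = np.arange(len(c_in), dtype=float)
        g = np.exp(-tau / mean_transit_time) / mean_transit_time
        g /= g.sum()  # normalize over the finite simulated window
        # Convolve the recharge (input) history with the age distribution.
        return np.convolve(c_in, g)[: len(c_in)]

    years = np.arange(1940, 2021)
    nitrate_in = np.where(years < 1970, 0.1, 0.9)  # hypothetical recharge history
    print(emm_output(nitrate_in, mean_transit_time=30.0)[-1])

The same convolution structure underlies the tracer-time application described above: modeled output trends are compared against a measured time series to constrain the mean transit time.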
Lump solutions and interaction phenomenon to the third-order nonlinear evolution equation
NASA Astrophysics Data System (ADS)
Kofane, T. C.; Fokou, M.; Mohamadou, A.; Yomba, E.
2017-11-01
In this work, the lump solution and the kink solitary wave solution of the (2 + 1)-dimensional third-order evolution equation are obtained through symbolic computation with Maple using the Hirota bilinear method. We have assumed that the lump solution is centered at the origin when t = 0. By combining a positive quadratic function with an exponential function, as well as with a hyperbolic cosine function, interaction solutions such as lump-exponential and lump-hyperbolic-cosine are presented. A completely non-elastic interaction between a lump and a kink soliton is observed, showing that a lump solution can be swallowed by a kink soliton.
NASA Astrophysics Data System (ADS)
Wang, Yue; Wang, Ping; Liu, Xiaoxia; Cao, Tian
2018-03-01
The performance of a decode-and-forward dual-hop mixed radio frequency/free-space optical (RF/FSO) system in an urban area is studied. The RF link is modeled by the Nakagami-m distribution and the FSO link is described by composite exponentiated Weibull (EW) fading channels with nonzero boresight pointing errors (NBPE). For comparison, the average bit error rate (ABER) results without pointing errors (PE) and those with zero boresight pointing errors (ZBPE) are also provided. The closed-form expression for the ABER in the RF link is derived with the help of the hypergeometric function, and that in the FSO link is obtained by Meijer's G and generalized Gauss-Laguerre quadrature functions. Then, the end-to-end ABERs with binary phase shift keying modulation are achieved on the basis of the computed ABER results of the RF and FSO links. The end-to-end ABER performance is further analyzed for different Nakagami-m parameters, turbulence strengths, receiver aperture sizes and boresight displacements. The results show that with ZBPE and NBPE considered, the FSO link suffers a severe ABER degradation and becomes the dominant limitation of the mixed RF/FSO system in urban areas. However, aperture averaging can bring significant ABER improvement to this system. Monte Carlo simulation is provided to confirm the validity of the analytical ABER expressions.
Why fast magnetic reconnection is so prevalent
NASA Astrophysics Data System (ADS)
Boozer, Allen H.
2018-02-01
Evolving magnetic fields are shown to generically reach a state of fast magnetic reconnection in which magnetic field line connections change and magnetic energy is released at an Alfvénic rate. This occurs even in plasmas with zero resistivity; only the finiteness of the mass of the lightest charged particle, an electron, is required. The speed and prevalence of Alfvénic or fast magnetic reconnection imply that its cause must be contained within the ideal evolution equation for magnetic fields, ∂B/∂t = ∇ × (u⊥ × B), where u⊥ is the velocity of the magnetic field lines. For a generic u⊥, neighbouring magnetic field lines develop a separation that increases exponentially, as e^(σ(ℓ,t)) with the distance ℓ along a line. This exponentially enhances the sensitivity of the evolution to non-ideal effects. An analogous effect, the importance of stirring to produce a large-scale flow and enhance mixing, has been recognized by cooks through many millennia, but the importance of the large-scale flow to reconnection is customarily ignored. In part this is due to the sixty-year focus of reconnection theory on two-coordinate models, which eliminate the exponential enhancement that is generic with three coordinates. A simple three-coordinate model is developed, which could be used to address many unanswered questions.
A mixing evolution model for bidirectional microblog user networks
NASA Astrophysics Data System (ADS)
Yuan, Wei-Guo; Liu, Yun
2015-08-01
Microblogs have been widely used as a new form of online social networking. Based on user profile data collected from Sina Weibo, we find that the number of bidirectional friends of microblog users approximately follows a lognormal distribution. We then build two microblog user networks with real bidirectional relationships, both of which exhibit not only small-world and scale-free properties but also some special features, such as a double power-law degree distribution, disassortativity, and hierarchical and rich-club structure. Moreover, by detecting the community structures of the two real networks, we find that both of their community scales follow an exponential distribution. Based on the empirical analysis, we present a novel evolution network model with mixed connection rules, including lognormal fitness preferential and random attachment, nearest-neighbor interconnection within the same community, and global random associations across different communities. The simulation results show that our model is consistent with the real networks in many topological features.
Microplate-based method for high-throughput screening of microalgae growth potential.
Van Wagenen, Jon; Holdt, Susan Løvstad; De Francisci, Davide; Valverde-Pérez, Borja; Plósz, Benedek Gy; Angelidaki, Irini
2014-10-01
Microalgae cultivation conditions in microplates will differ from large-scale photobioreactors in crucial parameters such as light profile, mixing and gas transfer. Hence volumetric productivity (P(v)) measurements made in microplates cannot be directly scaled up. Here we demonstrate that it is possible to use microplates to measure characteristic exponential growth rates and determine the specific growth rate light intensity dependency (μ-I curve), which is useful as the key input for several models that predict P(v). Nannochloropsis salina and Chlorella sorokiniana specific growth rates were measured by repeated batch culture in microplates supplied with continuous light at different intensities. Exponential growth unlimited by gas transfer or self-shading was observable for a period of several days using fluorescence, which is an order of magnitude more sensitive than optical density. The microplate datasets were comparable to similar datasets obtained in photobioreactors and were used as an input for the Huesemann model to accurately predict P(v). Copyright © 2014 Elsevier Ltd. All rights reserved.
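For example, the characteristic exponential growth rate at a given light intensity can be recovered as the slope of log fluorescence versus time (a sketch with hypothetical readings; repeating it across intensities yields the μ-I curve):

    import numpy as np

    def specific_growth_rate(t_days, fluorescence):
        # Exponential phase: ln F = ln F0 + mu * t, so mu is the slope
        # of a linear fit to log-fluorescence.
        return np.polyfit(t_days, np.log(fluorescence), 1)[0]

    t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    f = np.array([50.0, 110.0, 260.0, 610.0, 1400.0])  # hypothetical readings
    print(specific_growth_rate(t, f))  # ~0.83 per day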
Crowther, Michael J; Look, Maxime P; Riley, Richard D
2014-09-28
Multilevel mixed effects survival models are used in the analysis of clustered survival data, such as repeated events, multicenter clinical trials, and individual participant data (IPD) meta-analyses, to investigate heterogeneity in baseline risk and covariate effects. In this paper, we extend parametric frailty models including the exponential, Weibull and Gompertz proportional hazards (PH) models and the log logistic, log normal, and generalized gamma accelerated failure time models to allow any number of normally distributed random effects. Furthermore, we extend the flexible parametric survival model of Royston and Parmar, modeled on the log-cumulative hazard scale using restricted cubic splines, to include random effects while also allowing for non-PH (time-dependent effects). Maximum likelihood is used to estimate the models utilizing adaptive or nonadaptive Gauss-Hermite quadrature. The methods are evaluated through simulation studies representing clinically plausible scenarios of a multicenter trial and IPD meta-analysis, showing good performance of the estimation method. The flexible parametric mixed effects model is illustrated using a dataset of patients with kidney disease and repeated times to infection and an IPD meta-analysis of prognostic factor studies in patients with breast cancer. User-friendly Stata software is provided to implement the methods. Copyright © 2014 John Wiley & Sons, Ltd.
Impulsive synchronization of stochastic reaction-diffusion neural networks with mixed time delays.
Sheng, Yin; Zeng, Zhigang
2018-07-01
This paper discusses impulsive synchronization of stochastic reaction-diffusion neural networks with Dirichlet boundary conditions and hybrid time delays. By virtue of inequality techniques, theories of stochastic analysis, linear matrix inequalities, and the contradiction method, sufficient criteria are proposed to ensure exponential synchronization of the addressed stochastic reaction-diffusion neural networks with mixed time delays via a designed impulsive controller. Compared with some recent studies, the neural network models herein are more general, some restrictions are relaxed, and the obtained conditions enhance and generalize some published ones. Finally, two numerical simulations are performed to substantiate the validity and merits of the developed theoretical analysis. Copyright © 2018 Elsevier Ltd. All rights reserved.
Turbulent combustion in aluminum-air clouds for different scale explosion fields
NASA Astrophysics Data System (ADS)
Kuhl, Allen L.; Balakrishnan, Kaushik; Bell, John B.; Beckner, Vincent E.
2017-01-01
This paper explores "scaling issues" associated with Al particle combustion in explosions. The basic idea is the following: in this non-premixed combustion system, the global burning rate is controlled by rate of turbulent mixing of fuel (Al particles) with air. From similarity considerations, the turbulent mixing rates should scale with the explosion length and time scales. However, the induction time for ignition of Al particles depends on an Arrhenius function, which is independent of the explosion length and time. To study this, we have performed numerical simulations of turbulent combustion in unconfined Al-SDF (shock-dispersed-fuel) explosion fields at different scales. Three different charge masses were assumed: 1-g, 1-kg and 1-T Al-powder charges. We found that there are two combustion regimes: an ignition regime—where the burning rate decays as a power-law function of time, and a turbulent combustion regime—where the burning rate decays exponentially with time. This exponential dependence is typical of first order reactions and the more general concept of Life Functions that control the dynamics of evolutionary systems. Details of the combustion model are described. Results, including mean and rms profiles in combustion cloud and fuel consumption histories, are presented.
Statistical steady states in turbulent droplet condensation
NASA Astrophysics Data System (ADS)
Bec, Jeremie; Krstulovic, Giorgio; Siewert, Christoph
2017-11-01
We investigate the general problem of turbulent condensation. Using direct numerical simulations, we show that the fluctuations of the supersaturation field offer different conditions for the growth of droplets, which evolve in time due to turbulent transport and mixing. This leads us to propose a Lagrangian stochastic model consisting of a set of integro-differential equations for the joint evolution of the squared radius and the supersaturation along droplet trajectories. The model has two parameters fixed by the total amount of water and the thermodynamic properties, as well as the Lagrangian integral timescale of the turbulent supersaturation. The model reproduces very well the droplet size distributions obtained from direct numerical simulations and their time evolution. A noticeable result is that, after a stage where the squared radius simply diffuses, the system converges exponentially fast to a statistical steady state independent of the initial conditions. The main mechanism involved in this convergence is a loss of memory induced by a significant number of droplets undergoing complete evaporation before growing again. The statistical steady state is characterised by an exponential tail in the droplet mass distribution.
Lamination and mixing in laminar flows driven by Lorentz body forces
NASA Astrophysics Data System (ADS)
Rossi, L.; Doorly, D.; Kustrin, D.
2012-01-01
We present a new approach to the design of mixers. This approach relies on a sequence of tailored flows coupled with a new procedure to quantify the local degree of striation, called lamination. Lamination reflects the distance over which molecular diffusion needs to act to finalise mixing. Novel in situ mixing is achieved by the tailored sequence of flows, which is shown to have the property that material lines and lamination grow exponentially, according to processes akin to the well-known baker's map. The degree of mixing (stirring coefficient) likewise shows exponential growth before the stirring rate saturates. Such saturation happens when the typical striation thickness is smaller than the diffusion length scale. Moreover, without molecular diffusion, the predicted striation thickness would become smaller than the size of an atom of hydrogen within 40 flow turnover times. In fact, we conclude that about 3 minutes, i.e. 15 turnover times, are sufficient to mix species with very low diffusivities, e.g. suspensions of virus, bacteria, human cells, and DNA.
Parametric resonant triad interactions in a free shear layer
NASA Technical Reports Server (NTRS)
Mallier, R.; Maslowe, S. A.
1993-01-01
We investigate the weakly nonlinear evolution of a triad of nearly neutral modes superimposed on a mixing layer with velocity profile ū = U_m + tanh y. The perturbation consists of a plane wave and a pair of oblique waves, each inclined at approximately 60 degrees to the mean flow direction. Because the evolution occurs on a relatively fast time scale, the critical layer dynamics dominate the process and the amplitude evolution of the oblique waves is governed by an integro-differential equation. The long-time solution of this equation predicts very rapid (exponential of an exponential) amplification, and we discuss the pertinence of this result to vortex pairing phenomena in mixing layers.
Performance analysis for mixed FSO/RF Nakagami-m and Exponentiated Weibull dual-hop airborne systems
NASA Astrophysics Data System (ADS)
Jing, Zhao; Shang-hong, Zhao; Wei-hu, Zhao; Ke-fan, Chen
2017-06-01
In this paper, the performance of mixed free-space optical (FSO)/radio frequency (RF) systems is presented based on decode-and-forward relaying. The exponentiated Weibull fading channel with pointing error effects is adopted for the atmospheric fluctuation of the FSO channel, and the RF link undergoes Nakagami-m fading. We derive the analytical expression for the cumulative distribution function (CDF) of the equivalent signal-to-noise ratio (SNR). New mathematical expressions for the outage probability and average bit error rate (BER) are developed based on the Meijer G-function. The analytical results accurately match the Monte Carlo simulation results. The outage and BER performance of the mixed system with a decode-and-forward relay is investigated under atmospheric turbulence and pointing error conditions. The effect of aperture averaging is evaluated in all atmospheric turbulence conditions as well.
NASA Astrophysics Data System (ADS)
Baidillah, Marlin R.; Takei, Masahiro
2017-06-01
A nonlinear normalization model, called the exponential model, for electrical capacitance tomography (ECT) with external electrodes under gap permittivity conditions has been developed. The exponential normalization model is proposed based on the inherently nonlinear relationship between the mixture permittivity and the measured capacitance due to the gap permittivity of the inner wall. The parameters of the exponential equation are derived using an exponential fitting curve based on simulation, and a scaling function is added to adjust to the experimental system condition. The exponential normalization model was applied to two-dimensional low- and high-contrast dielectric distribution phantoms in simulation and experimental studies. The proposed normalization model has been compared with other normalization models, i.e. the Parallel, Series, Maxwell and Böttcher models. Based on the comparison of image reconstruction results, the exponential model reliably predicts the nonlinear normalization of measured capacitance for low- and high-contrast dielectric distributions.
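The abstract does not give the exact functional form, but the idea can be sketched as fitting an exponential capacitance-permittivity curve on simulated calibration data and inverting it to normalize measurements (all parameter values and the model form here are hypothetical):

    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical exponential relation between normalized permittivity x and
    # measured capacitance C; a, b, c are fit on simulated calibration data.
    def cap_model(x, a, b, c):
        return a * np.exp(b * x) + c

    x_sim = np.linspace(0.0, 1.0, 11)        # simulated permittivity levels
    C_sim = 2.0 * np.exp(1.8 * x_sim) + 0.3  # simulated capacitances
    (a, b, c), _ = curve_fit(cap_model, x_sim, C_sim, p0=(1.0, 1.0, 0.0))

    def normalize(C_meas, scale=1.0):
        # Invert the fitted curve; 'scale' mimics the paper's scaling function
        # that adjusts simulation-derived parameters to the experimental rig.
        return np.log((scale * C_meas - c) / a) / b

    print(normalize(cap_model(0.42, a, b, c)))  # recovers ~0.42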
Theoretical Studies of Spectroscopic Line Mixing in Remote Sensing Applications
NASA Astrophysics Data System (ADS)
Ma, Q.
2015-12-01
The phenomenon of collisional transfer of intensity due to line mixing has increasing importance for atmospheric monitoring. From a theoretical point of view, all relevant information about the collisional processes is contained in the relaxation matrix, where the diagonal elements give half-widths and shifts, and the off-diagonal elements correspond to line interferences. For simple systems such as those consisting of diatom-atom or diatom-diatom pairs, accurate fully quantum calculations based on interaction potentials are feasible. However, fully quantum calculations become unrealistic for more complex systems. On the other hand, the semi-classical Robert-Bonamy (RB) formalism, which has been widely used to calculate half-widths and shifts for decades, fails in calculating the off-diagonal matrix elements. As a result, in order to simulate atmospheric spectra where the effects of line mixing are important, semi-empirical fitting or scaling laws such as the ECS and IOS models are commonly used. Recently, while scrutinizing the development of the RB formalism, we found that its authors applied the isolated line approximation in evaluating matrix elements of the Liouville scattering operator given in exponential form. Since the criterion for this assumption is so stringent, it is not valid for many systems of interest in atmospheric applications. Furthermore, it is this assumption that blocks the possibility of calculating the whole relaxation matrix at all. By eliminating this unjustified approximation, and accurately evaluating matrix elements of the exponential operators, we have developed a more capable formalism. With this new formalism, we are now able not only to reduce uncertainties in calculated half-widths and shifts, but also to remove a once insurmountable obstacle to calculating the whole relaxation matrix. This implies that we can address line mixing with the semi-classical theory based on interaction potentials between molecular absorber and molecular perturber. We have applied this formalism to address line mixing for Raman and infrared spectra of molecules such as N2, C2H2, CO2, NH3, and H2O. By carrying out rigorous calculations, our calculated relaxation matrices are in good agreement with both experimental data and results derived from the ECS model.
Katz, B.G.; Chelette, A.R.; Pratt, T.R.
2004-01-01
Concerns regarding ground-water contamination in the Woodville Karst Plain have arisen due to a steady increase in nitrate-N concentrations (0.25-0.90 mg/l) during the past 30 years in Wakulla Springs, a large regional discharge point for water (9.6 m3/s) from the Upper Floridan aquifer (UFA). Multiple isotopic and chemical tracers were used with geochemical and lumped-parameter models (exponential mixing (EM), dispersion, and combined exponential piston flow) to assess: (1) the sources and extent of nitrate contamination of ground water and springs, and (2) mean transit times (ages) of ground water. δ15N-NO3 values (1.7-13.8‰) indicated that nitrate in ground water originated from localized sources of inorganic fertilizer and human/animal wastes. Nitrate in spring waters (δ15N-NO3 = 5.3-8.9‰) originated from both inorganic and organic N sources. Elevated nitrate-N concentrations (>1.0 mg/l) were associated with shallow wells (open intervals less than 15 m below land surface); elevated nitrate concentrations in deeper wells are consistent with mixtures of water from shallow and deep zones in the UFA, as indicated from geochemical mixing models and the distribution of mean transit times (5-90 years) estimated using lumped-parameter flow models. Ground water with mean transit times of 10 years or less tended to have higher dissolved organic carbon concentrations, lower dissolved solids, and lower calcite saturation indices than older waters, indicating mixing with nearby surface water that directly recharges the aquifer through sinkholes. Significantly higher values of pH, magnesium, dolomite saturation index, and phosphate in springs and deep water (>45 m) relative to a shallow zone (<45 m) were associated with longer ground-water transit times (50-90 years). Chemical differences with depth in the aquifer result from deep regional flow of water recharged through low-permeability sediments (clays and clayey sands of the Hawthorn Formation) that overlie the UFA upgradient from the karst plain.
Extracting a mix parameter from 2D radiography of variable density flow
NASA Astrophysics Data System (ADS)
Kurien, Susan; Doss, Forrest; Livescu, Daniel
2017-11-01
A methodology is presented for extracting quantities related to the statistical description of the mixing state from the 2D radiographic image of a flow. X-ray attenuation through a target flow is given by the Beer-Lambert law which exponentially damps the incident beam intensity by a factor proportional to the density, opacity and thickness of the target. By making reasonable assumptions for the mean density, opacity and effective thickness of the target flow, we estimate the contribution of density fluctuations to the attenuation. The fluctuations thus inferred may be used to form the correlation of density and specific-volume, averaged across the thickness of the flow in the direction of the beam. This correlation function, denoted by b in RANS modeling, quantifies turbulent mixing in variable density flows. The scheme is tested using DNS data computed for variable-density buoyancy-driven mixing. We quantify the deficits in the extracted value of b due to target thickness, Atwood number, and modeled noise in the incident beam. This analysis corroborates the proposed scheme to infer the mix parameter from thin targets at moderate to low Atwood numbers. The scheme is then applied to an image of counter-shear flow obtained from experiments at the National Ignition Facility. US Department of Energy.
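A rough sketch of the inversion described above, assuming a known uniform opacity kappa and effective target thickness L (both hypothetical inputs here), and using the in-plane mean as a proxy for the ensemble mean:

    import numpy as np

    def mix_parameter_from_radiograph(I, I0, kappa, L):
        # Beer-Lambert: I = I0 * exp(-kappa * rho * L), so each pixel gives a
        # path-averaged density rho = -ln(I / I0) / (kappa * L).
        rho = -np.log(I / I0) / (kappa * L)
        rho_p = rho - rho.mean()                  # density fluctuation
        v_p = 1.0 / rho - (1.0 / rho).mean()      # specific-volume fluctuation
        return -np.mean(rho_p * v_p)              # b = -<rho' v'>

    rng = np.random.default_rng(2)
    I0 = 1.0
    I = I0 * np.exp(-rng.uniform(0.4, 0.6, (64, 64)))  # synthetic radiograph
    print(mix_parameter_from_radiograph(I, I0, kappa=1.0, L=0.5))

This line-of-sight-averaged estimate is exactly where the deficits quantified in the abstract (target thickness, Atwood number, beam noise) enter.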
Vertical variation of mixing within porous sediment beds below turbulent flows
Chandler, I. D.; Pearson, J. M.; van Egmond, R.
2016-01-01
River ecosystems are influenced by contaminants in the water column, in the pore water, and adsorbed to sediment particles. When exchange across the sediment-water interface (hyporheic exchange) is included in modeling, the mixing coefficient is often assumed to be constant with depth below the interface. Novel fiber-optic fluorometers have been developed and combined with a modified EROSIMESS system to quantify the vertical variation in the mixing coefficient with depth below the sediment-water interface. The study considered a range of particle diameters and bed shear velocities, with the permeability Péclet number PeK between 1000 and 77,000 and the shear Reynolds number Re* between 5 and 600. Different parameterizations of both an interface exchange coefficient and a spatially variable in-sediment mixing coefficient are explored. The variation of in-sediment mixing is described by an exponential function applicable over the full range of parameter combinations tested. The empirical relationship enables estimates of the depth to which concentrations of pollutants will penetrate into the bed sediment, allowing the region where exchange will occur faster than molecular diffusion to be determined. PMID:27635104
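As a sketch of what such an exponential depth dependence implies (the decay constants here are hypothetical, not the paper's fitted values), the depth at which in-sediment mixing falls to the molecular level follows directly:

    import numpy as np

    def in_sediment_mixing(z, D_interface, k):
        # Assumed exponential decay of the mixing coefficient with depth z
        # below the sediment-water interface (parameters hypothetical).
        return D_interface * np.exp(-k * z)

    D_mol = 1e-9   # molecular diffusivity, m^2/s (order of magnitude)
    D0, k = 1e-6, 50.0
    z_star = np.log(D0 / D_mol) / k  # depth where mixing drops to molecular level
    print(z_star)  # ~0.14 m for these illustrative parameters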
Performance of mixed RF/FSO systems in exponentiated Weibull distributed channels
NASA Astrophysics Data System (ADS)
Zhao, Jing; Zhao, Shang-Hong; Zhao, Wei-Hu; Liu, Yun; Li, Xuan
2017-12-01
This paper presented the performances of asymmetric mixed radio frequency (RF)/free-space optical (FSO) system with the amplify-and-forward relaying scheme. The RF channel undergoes Nakagami- m channel, and the Exponentiated Weibull distribution is adopted for the FSO component. The mathematical formulas for cumulative distribution function (CDF), probability density function (PDF) and moment generating function (MGF) of equivalent signal-to-noise ratio (SNR) are achieved. According to the end-to-end statistical characteristics, the new analytical expressions of outage probability are obtained. Under various modulation techniques, we derive the average bit-error-rate (BER) based on the Meijer's G function. The evaluation and simulation are provided for the system performance, and the aperture average effect is discussed as well.
Scale dependence of entrainment-mixing mechanisms in cumulus clouds
Lu, Chunsong; Liu, Yangang; Niu, Shengjie; ...
2014-12-17
This work empirically examines the dependence of entrainment-mixing mechanisms on the averaging scale in cumulus clouds using in situ aircraft observations during the Routine Atmospheric Radiation Measurement Aerial Facility Clouds with Low Optical Water Depths Optical Radiative Observations (RACORO) field campaign. A new measure of homogeneous mixing degree is defined that can encompass all types of mixing mechanisms. Analysis of the dependence of the homogeneous mixing degree on the averaging scale shows that, on average, the homogeneous mixing degree decreases with increasing averaging scale, suggesting that apparent mixing mechanisms gradually shift from homogeneous toward extreme inhomogeneous mixing with increasing scale. The scale dependence can be well quantified by an exponential function, providing a first attempt at developing a scale-dependent parameterization for the entrainment-mixing mechanism. The influences of three factors on the scale dependence are further examined: droplet-free filament properties (size and fraction), microphysical properties (mean volume radius and liquid water content of cloud droplet size distributions adjacent to droplet-free filaments), and relative humidity of entrained dry air. It is found that the decreasing rate of homogeneous mixing degree with increasing averaging scale becomes larger with larger droplet-free filament size and fraction, larger mean volume radius and liquid water content, or higher relative humidity. The results underscore the necessity and possibility of considering averaging scale in the representation of entrainment-mixing processes in atmospheric models.
Stability of post-fertilization traveling waves
NASA Astrophysics Data System (ADS)
Flores, Gilberto; Plaza, Ramón G.
This paper studies the stability of a family of traveling wave solutions to the system proposed by Lane et al. [D.C. Lane, J.D. Murray, V.S. Manoranjan, Analysis of wave phenomena in a morphogenetic mechanochemical model and an application to post-fertilization waves on eggs, IMA J. Math. Appl. Med. Biol. 4 (4) (1987) 309-331], to model a pair of mechanochemical phenomena known as post-fertilization waves on eggs. The waves consist of an elastic deformation pulse on the egg's surface, and a free calcium concentration front. The family is indexed by a coupling parameter measuring contraction stress effects on the calcium concentration. This work establishes the spectral, linear and nonlinear orbital stability of these post-fertilization waves for small values of the coupling parameter. The usual methods for the spectral and evolution equations cannot be applied because of the presence of mixed partial derivatives in the elastic equation. Nonetheless, exponential decay of the directly constructed semigroup on the complement of the zero eigenspace is established. We show that small perturbations of the waves yield solutions to the nonlinear equations decaying exponentially to a phase-modulated traveling wave.
Markov chains at the interface of combinatorics, computing, and statistical physics
NASA Astrophysics Data System (ADS)
Streib, Amanda Pascoe
The fields of statistical physics, discrete probability, combinatorics, and theoretical computer science have converged around efforts to understand random structures and algorithms. Recent activity in the interface of these fields has enabled tremendous breakthroughs in each domain and has supplied a new set of techniques for researchers approaching related problems. This thesis makes progress on several problems in this interface whose solutions all build on insights from multiple disciplinary perspectives. First, we consider a dynamic growth process arising in the context of DNA-based self-assembly. The assembly process can be modeled as a simple Markov chain. We prove that the chain is rapidly mixing for large enough bias in regions of Z^d. The proof uses a geometric distance function and a variant of path coupling in order to handle distances that can be exponentially large. We also provide the first results in the case of fluctuating bias, where the bias can vary depending on the location of the tile, which arises in the nanotechnology application. Moreover, we use intuition from statistical physics to construct a choice of the biases for which the Markov chain M_mon requires exponential time to converge. Second, we consider a related problem regarding the convergence rate of biased permutations that arises in the context of self-organizing lists. The Markov chain M_nn in this case is a nearest-neighbor chain that allows adjacent transpositions, and the rate of these exchanges is governed by various input parameters. It was conjectured that the chain is always rapidly mixing when the inversion probabilities are positively biased, i.e., we put the nearest-neighbor pair x < y in order with bias 1/2 ≤ p_xy ≤ 1 and out of order with bias 1 − p_xy. The chain M_mon was known to have connections to a simplified version of this biased card-shuffling. We provide new connections between M_nn and M_mon by using simple combinatorial bijections, and we prove that M_nn is always rapidly mixing for two general classes of positively biased {p_xy}. More significantly, we also prove that the general conjecture is false by exhibiting values for the p_xy, with 1/2 ≤ p_xy ≤ 1 for all x < y, for which the transposition chain requires exponential time to converge. Finally, we consider a model of colloids, which are binary mixtures of molecules with one type of molecule suspended in another. It is believed that at low density typical configurations will be well-mixed throughout, while at high density they will separate into clusters. This clustering has proved elusive to verify, since all local sampling algorithms are known to be inefficient at high density, and in fact a new nonlocal algorithm was recently shown to require exponential time in some cases. We characterize the high and low density phases for a general family of discrete interfering binary mixtures by showing that they exhibit a "clustering property" at high density and not at low density. The clustering property states that there will be a region that has very high area, very small perimeter, and high density of one type of molecule. Special cases of interfering binary mixtures include the Ising model at fixed magnetization and independent sets.
A monitoring tool for performance improvement in plastic surgery at the individual level.
Maruthappu, Mahiben; Duclos, Antoine; Orgill, Dennis; Carty, Matthew J
2013-05-01
The assessment of performance in surgery is expanding significantly. Application of relevant frameworks to plastic surgery, however, has been limited. In this article, the authors present two robust graphic tools commonly used in other industries that may serve to monitor individual surgeon operative time while factoring in patient- and surgeon-specific elements. The authors reviewed performance data from all bilateral reduction mammaplasties performed at their institution by eight surgeons between 1995 and 2010. Operative time was used as a proxy for performance. Cumulative sum charts and exponentially weighted moving average charts were generated using a train-test analytic approach, and used to monitor surgical performance. Charts mapped crude, patient case-mix-adjusted, and case-mix and surgical-experience-adjusted performance. Operative time was found to decline from 182 minutes to 118 minutes with surgical experience (p < 0.001). Cumulative sum and exponentially weighted moving average charts were generated using 1995 to 2007 data (1053 procedures) and tested on 2008 to 2010 data (246 procedures). The sensitivity and accuracy of these charts were significantly improved by adjustment for case mix and surgeon experience. The consideration of patient- and surgeon-specific factors is essential for correct interpretation of performance in plastic surgery at the individual surgeon level. Cumulative sum and exponentially weighted moving average charts represent accurate methods of monitoring operative time to control and potentially improve surgeon performance over the course of a career.
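A minimal sketch of the exponentially weighted moving average (EWMA) chart described, with an illustrative smoothing constant λ = 0.2 and 3-sigma limits rather than the authors' tuned values, trained on early cases and applied to later ones:

```python
import numpy as np

def ewma_chart(x, baseline_mu, baseline_sigma, lam=0.2, L=3.0):
    """EWMA chart: z_i = lam*x_i + (1-lam)*z_{i-1}; a point is flagged
    when z_i leaves mu +/- L times the asymptotic EWMA std deviation."""
    z, flagged = baseline_mu, []
    sz = baseline_sigma * np.sqrt(lam / (2.0 - lam))
    for i, xi in enumerate(x):
        z = lam * xi + (1.0 - lam) * z
        if abs(z - baseline_mu) > L * sz:
            flagged.append(i)
    return flagged

# train on early-career operative times, test on later ones (toy minutes)
train = np.array([182, 178, 175, 170, 168, 165, 160, 158], float)
test = np.array([150, 140, 132, 125, 120, 118], float)
print(ewma_chart(test, train.mean(), train.std(ddof=1)))
```

Here a flag signals a sustained shift in smoothed operative time relative to the training baseline, which in this setting reflects improvement with experience rather than deterioration.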
The social architecture of capitalism
NASA Astrophysics Data System (ADS)
Wright, Ian
2005-02-01
A dynamic model of the social relations between workers and capitalists is introduced. The model self-organises into a dynamic equilibrium with statistical properties that are in close qualitative and in many cases quantitative agreement with a broad range of known empirical distributions of developed capitalism, including the power-law firm size distribution, the Laplace firm and GDP growth distribution, the lognormal firm demises distribution, the exponential recession duration distribution, the lognormal-Pareto income distribution, and the gamma-like firm rate-of-profit distribution. Normally these distributions are studied in isolation, but this model unifies and connects them within a single causal framework. The model also generates business cycle phenomena, including fluctuating wage and profit shares in national income about values consistent with empirical studies. The generation of an approximately lognormal-Pareto income distribution and an exponential-Pareto wealth distribution demonstrates that the power-law regime of the income distribution can be explained by an additive process on a power-law network that models the social relation between employers and employees organised in firms, rather than a multiplicative process that models returns to investment in financial markets. A testable consequence of the model is the conjecture that the rate-of-profit distribution is consistent with a parameter-mix of a ratio of normal variates with means and variances that depend on a firm size parameter that is distributed according to a power-law.
NASA Astrophysics Data System (ADS)
Sargsyan, M. Z.; Poghosyan, H. M.
2018-04-01
A dynamical problem for a rectangular strip with variable coefficients of elasticity is solved by an asymptotic method. It is assumed that the strip is orthotropic, the elasticity coefficients are exponential functions of y, and mixed boundary conditions are posed. The solution of the inner problem is obtained using Bessel functions.
Phase mixing of Alfvén waves in axisymmetric non-reflective magnetic plasma configurations
NASA Astrophysics Data System (ADS)
Petrukhin, N. S.; Ruderman, M. S.; Shurgalina, E. G.
2018-02-01
We study damping of phase-mixed Alfvén waves propagating in non-reflective axisymmetric magnetic plasma configurations. We derive the general equation describing the attenuation of the Alfvén wave amplitude and then apply the general theory to a particular case with exponentially diverging magnetic field lines. The condition that the configuration is non-reflective determines the variation of the plasma density along the magnetic field lines. Density profiles decreasing exponentially with height are not among the non-reflective profiles; however, we find non-reflective profiles that approximate an exponentially decreasing density fairly well. We calculate the variation of the total wave energy flux with height for various values of shear viscosity. We find that, for a substantial amount of wave energy to be dissipated in the lower corona, the shear viscosity must be increased by seven orders of magnitude over the value given by classical plasma theory. An important result is that the efficiency of the wave damping strongly depends on the density variation with height: the stronger the density decrease, the weaker the wave damping. On the basis of this result, we suggest a physical explanation of the enhanced wave damping in equilibrium configurations with exponentially diverging magnetic field lines.
Seo, Nieun; Chung, Yong Eun; Park, Yung Nyun; Kim, Eunju; Hwang, Jinwoo; Kim, Myeong-Jin
2018-07-01
To compare the ability of diffusion-weighted imaging (DWI) parameters acquired from three different models for the diagnosis of hepatic fibrosis (HF). Ninety-five patients underwent DWI using nine b-values at 3 T. The hepatic apparent diffusion coefficient (ADC) from a mono-exponential model, the true diffusion coefficient (D_t), pseudo-diffusion coefficient (D_p) and perfusion fraction (f) from a bi-exponential model, and the distributed diffusion coefficient (DDC) and intravoxel heterogeneity index (α) from a stretched exponential model were compared with the pathological HF stage. For the stretched exponential model, parameters were also obtained using a dataset of six b-values (DDC#, α#). The diagnostic performances of the parameters for HF staging were evaluated with Obuchowski measures and receiver operating characteristic (ROC) analysis. The measurement variability of DWI parameters was evaluated using the coefficient of variation (CoV). Diagnostic accuracy for HF staging was highest for DDC# (Obuchowski measures, 0.770 ± 0.03), and it was significantly higher than that of ADC (0.597 ± 0.05, p < 0.001), D_t (0.575 ± 0.05, p < 0.001) and f (0.669 ± 0.04, p = 0.035). The parameters from stretched exponential DWI and D_p showed higher areas under the ROC curve (AUCs) for determining significant fibrosis (≥F2) and cirrhosis (F = 4) than the other parameters. However, D_p showed significantly higher measurement variability (CoV, 74.6%) than DDC# (16.1%, p < 0.001) and α# (15.1%, p < 0.001). Stretched exponential DWI is a promising method for HF staging with good diagnostic performance and fewer b-value acquisitions, allowing shorter acquisition time. • Stretched exponential DWI provides a precise and accurate model for HF staging. • Stretched exponential DWI parameters are more reliable than D_p from the bi-exponential DWI model. • Acquisition of six b-values is sufficient to obtain accurate DDC and α.
Verma, Sadhna; Sarkar, Saradwata; Young, Jason; Venkataraman, Rajesh; Yang, Xu; Bhavsar, Anil; Patil, Nilesh; Donovan, James; Gaitonde, Krishnanath
2016-05-01
The purpose of this study was to compare high b-value (b = 2000 s/mm(2)) acquired diffusion-weighted imaging (aDWI) with computed DWI (cDWI) obtained using four diffusion models: mono-exponential (ME), intra-voxel incoherent motion (IVIM), stretched exponential (SE), and diffusional kurtosis (DK), with respect to lesion visibility, conspicuity, contrast, and ability to predict significant prostate cancer (PCa). Ninety-four patients underwent 3 T MRI including acquisition of b = 2000 s/mm(2) aDWI and low b-value DWI. High b = 2000 s/mm(2) cDWI was obtained using the ME, IVIM, SE, and DK models. All images were scored on quality independently by three radiologists. Lesions were identified on all images and graded for lesion conspicuity. For a subset of lesions for which pathological truth was established, lesion-to-background contrast ratios (LBCRs) were computed and binomial generalized linear mixed model analysis was conducted to compare the clinically significant PCa predictive capabilities of all DWI. For all readers and all models, cDWI demonstrated higher ratings for image quality and lesion conspicuity than aDWI, except DK (p < 0.001). The LBCRs of ME, IVIM, and SE were significantly higher than the LBCR of aDWI (p < 0.001). Receiver operating characteristic curves obtained from binomial generalized linear mixed model analysis demonstrated higher areas under the curve for ME, SE, IVIM, and aDWI compared to DK or PSAD alone in predicting significant PCa. High b-value cDWI using the ME, IVIM, and SE diffusion models provides better image quality, lesion conspicuity, and increased LBCR than high b-value aDWI. Using cDWI can potentially provide comparable sensitivity and specificity for detecting significant PCa as high b-value aDWI without increased scan times and image degradation artifacts.
NASA Astrophysics Data System (ADS)
Leifeld, Philip
2018-10-01
Academic collaboration in the social sciences is characterized by a polarization between hermeneutic and nomological researchers. This polarization is expressed in different publication strategies. The present article analyzes the complete co-authorship networks in a social science discipline in two separate countries over five years using an exponential random graph model. It examines whether and how assortative mixing in publication strategies is present and leads to a polarization in scientific collaboration. In the empirical analysis, assortative mixing is found to play a role in shaping the topology of the network and significantly explains collaboration. Co-authorship edges are more prevalent within each of the groups, but this mixing pattern does not fully account for the extent of polarization. Instead, a thought experiment reveals that other components of the complex system dampen or amplify polarization in the data-generating process and that microscopic interventions targeting behavior change with regard to assortativity would be hindered by the resilience of the system. The resilience to interventions is quantified in a series of simulations on the effect of microscopic behavior on macroscopic polarization. The empirical study controls for geographic proximity, supervision, and topical similarity (using a vector space model), and the interplay of these factors is likely responsible for this resilience. The paper also predicts the co-authorship network in one country based on the model of collaborations in the other country.
The importance of fluctuations in fluid mixing.
Kadau, Kai; Rosenblatt, Charles; Barber, John L; Germann, Timothy C; Huang, Zhibin; Carlès, Pierre; Alder, Berni J
2007-05-08
A ubiquitous example of fluid mixing is the Rayleigh-Taylor instability, in which a heavy fluid initially sits atop a light fluid in a gravitational field. The subsequent development of the unstable interface between the two fluids is marked by several stages. At first, each interface mode grows exponentially with time before transitioning to a nonlinear regime characterized by more complex hydrodynamic mixing. Unfortunately, traditional continuum modeling of this process has generally been in poor agreement with experiment. Here, we indicate that the natural, random fluctuations of the flow field present in any fluid, which are neglected in continuum models, can lead to qualitatively and quantitatively better agreement with experiment. We performed billion-particle atomistic simulations and magnetic levitation experiments with unprecedented control of initial interface conditions. A comparison between our simulations and experiments reveals good agreement in terms of the growth rate of the mixing front as well as the new observation of droplet breakup at later times. These results improve our understanding of many fluid processes, including interface phenomena that occur, for example, in supernovae, the detachment of droplets from a faucet, and ink jet printing. Such instabilities are also relevant to the possible energy source of inertial confinement fusion, in which a millimeter-sized capsule is imploded to initiate nuclear fusion reactions between deuterium and tritium. Our results suggest that the applicability of continuum models would be greatly enhanced by explicitly including the effects of random fluctuations.
NASA Astrophysics Data System (ADS)
Liu, Lei; Li, Yaning
2018-07-01
A methodology was developed to use a hyperelastic softening model to predict the constitutive behavior and the spatial damage propagation of nonlinear materials with damage-induced softening under mixed-mode loading. A user subroutine (ABAQUS/VUMAT) was developed for numerical implementation of the model. A 3D-printed wavy soft rubbery interfacial layer was used as a material system to verify and validate the methodology. The Arruda-Boyce hyperelastic model is incorporated with the softening model to capture the nonlinear pre- and post-damage behavior of the interfacial layer under mixed Mode I/II loads. To characterize the model parameters of the 3D-printed rubbery interfacial layer, a series of scarf-joint specimens was designed, which enabled systematic variation of stress triaxiality via a single geometric parameter, the slant angle. It was found that the important model parameter m is exponentially related to the stress triaxiality. Compact tension specimens of the sinusoidal wavy interfacial layer with different waviness were designed and fabricated via multi-material 3D printing. Finite element (FE) simulations were conducted to predict the spatial damage propagation of the material within the wavy interfacial layer. Compact tension experiments were performed to verify the model prediction. The results show that the model is able to accurately predict the damage propagation of the 3D-printed rubbery interfacial layer under complicated stress states without pre-defined failure criteria.
NASA Astrophysics Data System (ADS)
Wang, Jin; Sun, Tao; Fu, Anmin; Xu, Hao; Wang, Xinjie
2018-05-01
Degradation in drylands is a critically important global issue that threatens ecosystems and the environment in many ways. Researchers have tried to use remote sensing data and meteorological data to perform residual trend analysis and identify human-induced vegetation changes. However, complex interactions between vegetation and climate, soil units and topography have not yet been considered. Data used in the study included annual accumulated Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m normalized difference vegetation index (NDVI) from 2002 to 2013, accumulated rainfall from September to August, a digital elevation model (DEM) and soil units. This paper presents linear mixed-effects (LME) modeling methods for the NDVI-rainfall relationship. We developed linear mixed-effects models that considered the random effects of sample points nested in soil units for nested two-level modeling and single-level modeling of soil units and sample points, respectively. Additionally, three functions, including the exponential function (exp), the power function (power), and the constant plus power function (CPP), were tested to remove heterogeneity, and three correlation structures, including the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)] and the compound symmetry structure (CS), were used to address the spatiotemporal correlations. The nested two-level model considering both heteroscedasticity (CPP) and spatiotemporal correlation [ARMA(1,1)] showed the best performance (AMR = 0.1881, RMSE = 0.2576, adj-R² = 0.9593). Variations between soil units and sample points that may affect the NDVI-rainfall relationship should be included in model structures, and linear mixed-effects modeling achieves this in an effective and accurate way.
On the Prony series representation of stretched exponential relaxation
NASA Astrophysics Data System (ADS)
Mauro, John C.; Mauro, Yihong Z.
2018-09-01
Stretched exponential relaxation is a ubiquitous feature of homogeneous glasses. The stretched exponential decay function can be derived from the diffusion-trap model, which predicts certain critical values of the fractional stretching exponent, β. In practical implementations of glass relaxation models, it is computationally convenient to represent the stretched exponential function as a Prony series of simple exponentials. Here, we perform a comprehensive mathematical analysis of the Prony series approximation of the stretched exponential relaxation, including optimized coefficients for certain critical values of β. The fitting quality of the Prony series is analyzed as a function of the number of terms in the series. With a sufficient number of terms, the Prony series can accurately capture the time evolution of the stretched exponential function, including its "fat tail" at long times. However, it is unable to capture the divergence of the first-derivative of the stretched exponential function in the limit of zero time. We also present a frequency-domain analysis of the Prony series representation of the stretched exponential function and discuss its physical implications for the modeling of glass relaxation behavior.
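A minimal sketch of the Prony-series approximation analyzed here: the stretched exponential exp(-(t/τ)^β) is fitted by a sum of simple exponentials with log-spaced relaxation times, the weights obtained by non-negative least squares (node placement and the number of terms are illustrative choices, not the paper's optimized coefficients):

```python
import numpy as np
from scipy.optimize import nnls

beta, n_terms = 3.0 / 5.0, 8          # a critical stretching exponent
t = np.logspace(-3, 3, 400)           # times in units of tau
target = np.exp(-t**beta)             # stretched exponential, tau = 1

taus = np.logspace(-3, 3, n_terms)    # log-spaced Prony relaxation times
A = np.exp(-t[:, None] / taus[None, :])
w, _ = nnls(A, target)                # non-negative Prony weights

prony = A @ w
print("weights:", w)
print("max abs error:", np.abs(prony - target).max())
```

Increasing n_terms drives the maximum error down, consistent with the observation that a sufficient number of terms captures even the fat tail at long times.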
NASA Astrophysics Data System (ADS)
Zaigham Zia, Q. M.; Ullah, Ikram; Waqas, M.; Alsaedi, A.; Hayat, T.
2018-03-01
This research elaborates the Soret-Dufour characteristics of mixed convective, radiative Casson liquid flow over an exponentially heated surface. Novel features of an exponential space-dependent heat source are introduced. Appropriate variables convert the partial differential system into sets of ordinary differential equations. A homotopic scheme is employed to construct analytic solutions. The behavior of various embedded variables on the velocity, temperature and concentration distributions is plotted graphically and analyzed in detail. Besides, skin-friction coefficients and heat and mass transfer rates are computed and interpreted. The results signify the pronounced characteristics of temperature corresponding to the convective and radiation variables. The concentration shows opposite responses to the Soret and Dufour variables.
Yang, Wengui; Yu, Wenwu; Cao, Jinde; Alsaadi, Fuad E; Hayat, Tasawar
2018-02-01
This paper investigates the stability and lag synchronization for memristor-based fuzzy Cohen-Grossberg bidirectional associative memory (BAM) neural networks with mixed delays (asynchronous time delays and continuously distributed delays) and impulses. By applying the inequality analysis technique, homeomorphism theory and some suitable Lyapunov-Krasovskii functionals, some new sufficient conditions for the uniqueness and global exponential stability of equilibrium point are established. Furthermore, we obtain several sufficient criteria concerning globally exponential lag synchronization for the proposed system based on the framework of Filippov solution, differential inclusion theory and control theory. In addition, some examples with numerical simulations are given to illustrate the feasibility and validity of obtained results. Copyright © 2017 Elsevier Ltd. All rights reserved.
Theoretical Studies of Spectroscopic Line Mixing in Remote Sensing Applications
NASA Technical Reports Server (NTRS)
Ma, Q.; Boulet, C.; Tipping, R. H.
2015-01-01
The phenomenon of collisional transfer of intensity due to line mixing has an increasing importance for atmospheric monitoring. From a theoretical point of view, all relevant information about the collisional processes is contained in the relaxation matrix, where the diagonal elements give half-widths and shifts and the off-diagonal elements correspond to line interferences. For simple systems such as diatom-atom or diatom-diatom pairs, accurate fully quantum calculations based on interaction potentials are feasible. However, fully quantum calculations become unrealistic for more complex systems. On the other hand, the semi-classical Robert-Bonamy (RB) formalism, which has been widely used to calculate half-widths and shifts for decades, fails in calculating the off-diagonal matrix elements. As a result, in order to simulate atmospheric spectra where the effects of line mixing are important, semi-empirical fitting or scaling laws such as the ECS (Energy-Corrected Sudden) and IOS (Infinite-Order Sudden) models are commonly used. Recently, while scrutinizing the development of the RB formalism, we found that its authors applied the isolated-line approximation in evaluating matrix elements of the Liouville scattering operator given in exponential form. The criterion for this assumption is so stringent that it is not valid for many systems of interest in atmospheric applications. Furthermore, it is this assumption that precludes calculation of the whole relaxation matrix. By eliminating this unjustified approximation and accurately evaluating matrix elements of the exponential operators, we have developed a more capable formalism. With this new formalism, we are able not only to reduce uncertainties in calculated half-widths and shifts, but also to remove a once insurmountable obstacle to calculating the whole relaxation matrix. This implies that we can address line mixing with the semi-classical theory based on interaction potentials between the molecular absorber and molecular perturber. We have applied this formalism to address line mixing for Raman and infrared spectra of molecules such as N2, C2H2, CO2, NH3, and H2O. By carrying out rigorous calculations, our calculated relaxation matrices are in good agreement with both experimental data and results derived from the ECS model.
Division of Labor, Bet Hedging, and the Evolution of Mixed Biofilm Investment Strategies
McNally, Luke; Ratcliff, William C.
2017-01-01
Bacterial cells, like many other organisms, face a tradeoff between longevity and fecundity. Planktonic cells are fast growing and fragile, while biofilm cells are often slower growing but stress resistant. Here we ask why bacterial lineages invest simultaneously in both fast- and slow-growing types. We develop a population dynamic model of lineage expansion across a patchy environment and find that mixed investment is favored across a broad range of environmental conditions, even when transmission is entirely via biofilm cells. This mixed strategy is favored because of a division of labor where exponentially dividing planktonic cells can act as an engine for the production of future biofilm cells, which grow more slowly. We use experimental evolution to test our predictions and show that phenotypic heterogeneity is persistent even under selection for purely planktonic or purely biofilm transmission. Furthermore, simulations suggest that maintenance of a biofilm subpopulation serves as a cost-effective hedge against environmental uncertainty, which is also consistent with our experimental findings. PMID:28790201
Characterizing the reproduction number of epidemics with early subexponential growth dynamics
Viboud, Cécile; Simonsen, Lone; Moghadas, Seyed M.
2016-01-01
Early estimates of the transmission potential of emerging and re-emerging infections are increasingly used to inform public health authorities on the level of risk posed by outbreaks. Existing methods to estimate the reproduction number generally assume exponential growth in case incidence in the first few disease generations, before susceptible depletion sets in. In reality, outbreaks can display subexponential (i.e. polynomial) growth in the first few disease generations, owing to clustering in contact patterns, spatial effects, inhomogeneous mixing, reactive behaviour changes or other mechanisms. Here, we introduce the generalized growth model to characterize the early growth profile of outbreaks and estimate the effective reproduction number, with no need for explicit assumptions about the shape of epidemic growth. We demonstrate this phenomenological approach using analytical results and simulations from mechanistic models, and provide validation against a range of empirical disease datasets. Our results suggest that subexponential growth in the early phase of an epidemic is the rule rather than the exception. Mechanistic simulations show that slight modifications to the classical susceptible–infectious–removed model result in subexponential growth, and in turn a rapid decline in the reproduction number within three to five disease generations. For empirical outbreaks, the generalized growth model consistently outperforms the exponential model for a variety of directly and indirectly transmitted disease datasets (pandemic influenza, measles, smallpox, bubonic plague, cholera, foot-and-mouth disease, HIV/AIDS and Ebola), with model estimates supporting subexponential growth dynamics. The rapid decline in effective reproduction number predicted by analytical results and observed in real and synthetic datasets within three to five disease generations contrasts with the expectation of an invariant reproduction number in epidemics obeying exponential growth. The generalized growth concept also provides a compelling argument for the unexpected extinction of certain emerging disease outbreaks during the early ascending phase. Overall, our approach promotes a more reliable and data-driven characterization of the early epidemic phase, which is important for accurate estimation of the reproduction number and prediction of disease impact. PMID:27707909
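The generalized growth model referred to here solves dC/dt = r·C(t)^p, where p = 1 recovers exponential growth and p < 1 gives polynomial (subexponential) growth; a sketch of fitting it to synthetic early-phase incidence (parameter values illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def ggm(t, r, p, c0=1.0):
    """Cumulative incidence solving dC/dt = r * C**p: p = 1 is
    exponential growth, p < 1 gives polynomial (subexponential) growth."""
    p = np.clip(p, 0.0, 0.999)
    return (c0 ** (1.0 - p) + r * (1.0 - p) * t) ** (1.0 / (1.0 - p))

t = np.arange(30.0)                                  # days since onset
rng = np.random.default_rng(1)
cases = ggm(t, 0.8, 0.6) * rng.lognormal(0.0, 0.05, t.size)  # synthetic
(r_hat, p_hat), _ = curve_fit(ggm, t, cases, p0=[0.5, 0.8])
print(r_hat, p_hat)   # p_hat well below 1 signals subexponential growth
```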
Exponential order statistic models of software reliability growth
NASA Technical Reports Server (NTRS)
Miller, D. R.
1985-01-01
Failure times of a software reliability growth process are modeled as order statistics of independent, non-identically distributed exponential random variables. The Jelinski-Moranda, Goel-Okumoto, Littlewood, Musa-Okumoto logarithmic, and power law models are all special cases of exponential order statistic models, but there are many additional examples as well. Various characterizations, properties and examples of this class of models are developed and presented.
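A sketch of the model class in its simplest special case, with illustrative Jelinski-Moranda-style parameters (N initial faults, each failing at per-fault rate φ): failure times are the sorted values of independent exponential draws.

```python
import numpy as np

rng = np.random.default_rng(42)

def eos_failure_times(rates):
    """Exponential order statistic model: observed failure times are
    the order statistics of independent Exp(rate_i) variables."""
    draws = rng.exponential(1.0 / np.asarray(rates))
    return np.sort(draws)

# Jelinski-Moranda-style special case: identical per-fault rates
N, phi = 50, 0.1
print(eos_failure_times(np.full(N, phi))[:5])   # first five failures
```

Non-identical rates (for example, rates decaying geometrically across faults) recover other members of the class named in the abstract.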
Wong, Oi Lei; Lo, Gladys G.; Chan, Helen H. L.; Wong, Ting Ting; Cheung, Polly S. Y.
2016-01-01
Background: The purpose of this study is to statistically assess whether the bi-exponential intravoxel incoherent motion (IVIM) model characterizes the diffusion-weighted imaging (DWI) signal of malignant breast tumors better than the mono-exponential Gaussian diffusion model. Methods: 3 T DWI data of 29 malignant breast tumors were retrospectively included. Linear least-square mono-exponential fitting and segmented least-square bi-exponential fitting were used for apparent diffusion coefficient (ADC) and IVIM parameter quantification, respectively. The F-test and Akaike Information Criterion (AIC) were used to statistically assess the preference for the mono-exponential or bi-exponential model using region-of-interest (ROI)-averaged and voxel-wise analyses. Results: For ROI-averaged analysis, 15 tumors were significantly better fitted by the bi-exponential function and 14 tumors exhibited mono-exponential behavior. The calculated ADC, D (true diffusion coefficient) and f (pseudo-diffusion fraction) showed no significant differences between mono-exponential and bi-exponential preferable tumors. Voxel-wise analysis revealed that 27 tumors contained more voxels exhibiting mono-exponential DWI decay, while only 2 tumors presented more bi-exponential decay voxels. ADC was consistently and significantly larger than D for both ROI-averaged and voxel-wise analyses. Conclusions: Although the presence of the IVIM effect in malignant breast tumors could be suggested, statistical assessment shows that bi-exponential fitting does not necessarily better represent the DWI signal decay in breast cancer under a clinically typical acquisition protocol and signal-to-noise ratio (SNR). Our study indicates the importance of statistically examining the breast cancer DWI signal characteristics in practice. PMID:27709078
McNair, James N; Newbold, J Denis
2012-05-07
Most ecological studies of particle transport in streams that focus on fine particulate organic matter or benthic invertebrates use the Exponential Settling Model (ESM) to characterize the longitudinal pattern of particle settling on the bed. The ESM predicts that if particles are released into a stream, the proportion that have not yet settled will decline exponentially with transport time or distance and will be independent of the release elevation above the bed. To date, no credible basis in fluid mechanics has been established for this model, nor has it been rigorously tested against more-mechanistic alternative models. One alternative is the Local Exchange Model (LEM), which is a stochastic advection-diffusion model that includes both longitudinal and vertical spatial dimensions and is based on classical fluid mechanics. The LEM predicts that particle settling will be non-exponential in the near field but will become exponential in the far field, providing a new theoretical justification for far-field exponential settling that is based on plausible fluid mechanics. We review properties of the ESM and LEM and compare these with available empirical evidence. Most evidence supports the prediction of both models that settling will be exponential in the far field but contradicts the ESM's prediction that a single exponential distribution will hold for all transport times and distances. Copyright © 2012 Elsevier Ltd. All rights reserved.
A Tutorial on Multilevel Survival Analysis: Methods, Models and Applications
Austin, Peter C.
2017-01-01
Data that have a multilevel structure occur frequently across a range of disciplines, including epidemiology, health services research, public health, education and sociology. We describe three families of regression models for the analysis of multilevel survival data. First, Cox proportional hazards models with mixed effects incorporate cluster-specific random effects that modify the baseline hazard function. Second, piecewise exponential survival models partition the duration of follow-up into mutually exclusive intervals and fit a model that assumes that the hazard function is constant within each interval. This is equivalent to a Poisson regression model that incorporates the duration of exposure within each interval. By incorporating cluster-specific random effects, generalised linear mixed models can be used to analyse these data. Third, after partitioning the duration of follow-up into mutually exclusive intervals, one can use discrete time survival models that use a complementary log–log generalised linear model to model the occurrence of the outcome of interest within each interval. Random effects can be incorporated to account for within-cluster homogeneity in outcomes. We illustrate the application of these methods using data consisting of patients hospitalised with a heart attack. We illustrate the application of these methods using three statistical programming languages (R, SAS and Stata). PMID:29307954
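A sketch of the piecewise exponential construction described above, assuming hypothetical column names and cut points: follow-up is split into intervals, and events are modeled as Poisson counts with a log-exposure offset (a cluster random intercept could be added with a mixed-model routine).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def split_follow_up(df, cuts):
    """Expand one row per subject into one row per (subject, interval)."""
    rows = []
    for _, r in df.iterrows():
        start = 0.0
        for j, end in enumerate(cuts):
            if r["time"] <= start:
                break
            exposure = min(r["time"], end) - start
            event = int(r["event"] and r["time"] <= end)
            rows.append({"interval": j, "x": r["x"],
                         "exposure": exposure, "event": event})
            start = end
    return pd.DataFrame(rows)

df = pd.DataFrame({"time": [2.5, 1.0, 4.0], "event": [1, 0, 1],
                   "x": [0, 1, 1]})                    # toy survival data
pp = split_follow_up(df, cuts=[1.0, 3.0, 5.0])
X = pd.get_dummies(pp["interval"], prefix="int", dtype=float)
X["x"] = pp["x"]                      # interval dummies = baseline hazard
fit = sm.GLM(pp["event"], X, family=sm.families.Poisson(),
             offset=np.log(pp["exposure"])).fit()
print(np.exp(fit.params["x"]))        # hazard ratio for covariate x
```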
Selby, Edward A; Kranzler, Amy; Panza, Emily; Fehling, Kara B
2016-04-01
Influenced by chaos theory, the emotional cascade model proposes that rumination and negative emotion may promote each other in a self-amplifying cycle that increases over time. Accordingly, exponential-compounding effects may better describe the relationship between rumination and negative emotion when they occur in impulsive persons, and predict impulsive behavior. Forty-seven community and undergraduate participants who reported frequent engagement in impulsive behaviors monitored their ruminative thoughts and negative emotion multiple times daily for two weeks using digital recording devices. Hypotheses were tested using cross-lagged mixed model analyses. Findings indicated that rumination predicted subsequent elevations in rumination that lasted over extended periods of time. Rumination and negative emotion predicted increased levels of each other at subsequent assessments, and exponential functions for these associations were supported. Results also supported a synergistic effect between rumination and negative emotion, predicting larger elevations in subsequent rumination and negative emotion than when one variable alone was elevated. Finally, there were synergistic effects of rumination and negative emotion in predicting number of impulsive behaviors subsequently reported. These findings are consistent with the emotional cascade model in suggesting that momentary rumination and negative emotion progressively propagate and magnify each other over time in impulsive people, promoting impulsive behavior. © 2014 Wiley Periodicals, Inc.
Psychophysics of time perception and intertemporal choice models
NASA Astrophysics Data System (ADS)
Takahashi, Taiki; Oono, Hidemi; Radford, Mark H. B.
2008-03-01
Intertemporal choice and the psychophysics of time perception have been attracting attention in econophysics and neuroeconomics. Several models have been proposed for intertemporal choice: exponential discounting, general hyperbolic discounting (exponential discounting with logarithmic time perception of the Weber-Fechner law; a q-exponential discount model based on Tsallis's statistics), simple hyperbolic discounting, and Stevens' power-law-exponential discounting (exponential discounting with Stevens' power-law time perception). In order to examine the fitness of the models for behavioral data, we estimated the parameters and AICc (Akaike Information Criterion with small-sample correction) of the intertemporal choice models by assessing the points of subjective equality (indifference points) at seven delays. Our results show that the order of goodness-of-fit for both group and individual data was [Weber-Fechner discounting (general hyperbola) > Stevens' power-law discounting > simple hyperbolic discounting > exponential discounting], indicating that human time perception in intertemporal choice may follow the Weber-Fechner law. Implications of the results for neuropsychopharmacological treatments of addiction and biophysical processes underlying temporal discounting and time perception are discussed.
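For concreteness, the discount functions being compared can be written down directly; the q-exponential form nests the exponential model (q → 1) and simple hyperbolic discounting (q = 0). Parameter values below are illustrative:

```python
import numpy as np

def exponential(D, k):
    """Classical exponential discounting: V = exp(-k * D)."""
    return np.exp(-k * D)

def hyperbolic(D, k):
    """Simple hyperbolic discounting: V = 1 / (1 + k * D)."""
    return 1.0 / (1.0 + k * D)

def q_exponential(D, k, q):
    """Tsallis q-exponential discounting: recovers exponential
    discounting as q -> 1 and simple hyperbolic at q = 0."""
    if np.isclose(q, 1.0):
        return exponential(D, k)
    return (1.0 + (1.0 - q) * k * D) ** (-1.0 / (1.0 - q))

D = np.array([1, 7, 30, 90, 365], dtype=float)   # delays in days
for f, args in [(exponential, (0.01,)), (hyperbolic, (0.01,)),
                (q_exponential, (0.01, 0.5))]:
    print(f.__name__, np.round(f(D, *args), 3))
```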
Yang, Rujun; Su, Han; Qu, Shenglu; Wang, Xuchen
2017-05-03
The iron binding capacities (IBC) of fulvic acid (FA) and humic acid (HA) were determined in the salinity range from 5 to 40. The results indicated that IBC decreased while salinity increased. In addition, dissolved iron (dFe), FA and HA were also determined along the Yangtze River estuary's increasing salinity gradient from 0.14 to 33. The loss rates of dFe, FA and HA in the Yangtze River estuary were up to 96%, 74%, and 67%, respectively. The decreases in dFe, FA and HA, as well as the change in IBC of humic substances (HS) along the salinity gradient in the Yangtze River estuary, were all well described by a first-order exponential attenuation model: y(dFe/FA/HA, S) = a₀ × exp(kS) + y₀. These results indicate that flocculation of FA and HA along the salinity gradient resulted in removal of dFe. Furthermore, the exponential attenuation model described in this paper can be applied in the major estuaries of the world where most of the removal of dFe and HS occurs where freshwater and seawater mix.
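A sketch of fitting the reported attenuation form with nonlinear least squares, on synthetic values standing in for the estuary data (a negative fitted k corresponds to removal along the salinity gradient):

```python
import numpy as np
from scipy.optimize import curve_fit

def attenuation(S, a0, k, y0):
    """First-order exponential attenuation with salinity S."""
    return a0 * np.exp(k * S) + y0

S = np.array([0.14, 2, 5, 10, 15, 20, 25, 33])       # salinity gradient
rng = np.random.default_rng(3)
dfe = attenuation(S, 90.0, -0.25, 4.0) + rng.normal(0, 1.0, S.size)
(a0, k, y0), _ = curve_fit(attenuation, S, dfe, p0=[80, -0.1, 5])
print(a0, k, y0)   # k < 0: flocculation removes dFe as salinity rises
```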
Cade, W Todd; Nabar, Sharmila R; Keyser, Randall E
2004-05-01
The purpose of this study was to determine the reproducibility of the indirect Fick method for the measurement of mixed venous carbon dioxide partial pressure (P(v)CO(2)) and venous carbon dioxide content (C(v)CO(2)) for estimation of cardiac output (Q(c)), using the exponential rise method of carbon dioxide rebreathing, during non-steady-state treadmill exercise. Ten healthy participants (eight female and two male) performed three incremental, maximal exercise treadmill tests to exhaustion within 1 week. Non-invasive Q(c) measurements were evaluated at rest, during each 3-min stage, and at peak exercise, across three identical treadmill tests, using the exponential rise technique for measuring mixed venous PCO(2) and CCO(2) and estimating the venous-arterial carbon dioxide content difference (C(v-a)CO(2)). Measurements were divided into measured or estimated variables [heart rate (HR), oxygen consumption (VO(2)), volume of expired carbon dioxide (VCO(2)), end-tidal carbon dioxide (P(ET)CO(2)), arterial carbon dioxide partial pressure (P(a)CO(2)), venous carbon dioxide partial pressure (P(v)CO(2)), and C(v-a)CO(2)] and cardiorespiratory variables derived from the measured variables [Q(c), stroke volume (V(s)), and arteriovenous oxygen difference (C(a-v)O(2))]. In general, the derived cardiorespiratory variables demonstrated acceptable (R=0.61) to high (R>0.80) reproducibility, especially at higher intensities and peak exercise. Measured variables, excluding P(a)CO(2) and C(v-a)CO(2), also demonstrated acceptable (R=0.6 to 0.79) to high reliability. The current study demonstrated acceptable to high reproducibility of the exponential rise indirect Fick method in measurement of mixed venous PCO(2) and CCO(2) for estimation of Q(c) during incremental treadmill exercise testing, especially at high-intensity and peak exercise.
The Supermarket Model with Bounded Queue Lengths in Equilibrium
NASA Astrophysics Data System (ADS)
Brightwell, Graham; Fairthorne, Marianne; Luczak, Malwina J.
2018-04-01
In the supermarket model, there are n queues, each with a single server. Customers arrive in a Poisson process with arrival rate λn, where λ = λ(n) ∈ (0,1). Upon arrival, a customer selects d = d(n) servers uniformly at random, and joins the queue of a least-loaded server amongst those chosen. Service times are independent exponentially distributed random variables with mean 1. In this paper, we analyse the behaviour of the supermarket model in the regime where λ(n) = 1 − n^(−α) and d(n) = ⌊n^β⌋, where α and β are fixed numbers in (0, 1]. For suitable pairs (α, β), our results imply that, in equilibrium, with probability tending to 1 as n → ∞, the proportion of queues with length equal to k = ⌈α/β⌉ is at least 1 − 2n^(−α+(k−1)β), and there are no longer queues. We further show that the process is rapidly mixing when started in a good state, and give bounds on the speed of mixing for more general initial conditions.
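A sketch of the power-of-d dynamics analysed here, simulated via the embedded jump chain (event type drawn by its relative rate); n, λ and d are illustrative, and the jump-chain histogram is only an approximation to the time-stationary law:

```python
import numpy as np

def supermarket(n, lam, d, n_events, seed=0):
    """Power-of-d choices: arrivals (total rate lam*n) join the shortest
    of d sampled queues; each nonempty queue serves at rate 1."""
    rng = np.random.default_rng(seed)
    q = np.zeros(n, dtype=int)
    for _ in range(n_events):
        busy = np.count_nonzero(q)
        if rng.random() < (lam * n) / (lam * n + busy):   # arrival event
            c = rng.choice(n, size=d, replace=False)
            q[c[np.argmin(q[c])]] += 1                    # join shortest
        else:                                             # departure event
            q[rng.choice(np.flatnonzero(q))] -= 1
    return np.bincount(q)             # histogram of queue lengths

print(supermarket(n=1000, lam=0.9, d=2, n_events=200_000))
```

Even for d = 2 the long-queue tail collapses relative to d = 1, which is the qualitative effect the regime d(n) = ⌊n^β⌋ pushes to its extreme.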
Inouye, David I.; Ravikumar, Pradeep; Dhillon, Inderjit S.
2016-01-01
We develop Square Root Graphical Models (SQR), a novel class of parametric graphical models that provides multivariate generalizations of univariate exponential family distributions. Previous multivariate graphical models (Yang et al., 2015) did not allow positive dependencies for the exponential and Poisson generalizations. However, in many real-world datasets, variables clearly have positive dependencies. For example, the airport delay time in New York—modeled as an exponential distribution—is positively related to the delay time in Boston. With this motivation, we give an example of our model class derived from the univariate exponential distribution that allows for almost arbitrary positive and negative dependencies with only a mild condition on the parameter matrix—a condition akin to the positive definiteness of the Gaussian covariance matrix. Our Poisson generalization allows for both positive and negative dependencies without any constraints on the parameter values. We also develop parameter estimation methods using node-wise regressions with ℓ1 regularization and likelihood approximation methods using sampling. Finally, we demonstrate our exponential generalization on a synthetic dataset and a real-world dataset of airport delay times. PMID:27563373
Salje, Ekhard K H; Planes, Antoni; Vives, Eduard
2017-10-01
Crackling noise can be initiated by competing or coexisting mechanisms. These mechanisms can combine to generate an approximate scale invariant distribution that contains two or more contributions. The overall distribution function can be analyzed, to a good approximation, using maximum-likelihood methods and assuming that it follows a power law although with nonuniversal exponents depending on a varying lower cutoff. We propose that such distributions are rather common and originate from a simple superposition of crackling noise distributions or exponential damping.
Zeng, Qiang; Shi, Feina; Zhang, Jianmin; Ling, Chenhan; Dong, Fei; Jiang, Biao
2018-01-01
Purpose: To present a new modified tri-exponential model for diffusion-weighted imaging (DWI) to detect the strictly diffusion-limited compartment, and to compare it with the conventional bi- and tri-exponential models. Methods: Multi-b-value DWI with 17 b-values up to 8,000 s/mm² was performed on six volunteers. The corrected Akaike information criteria (AICc) and squared predicted errors (SPE) were calculated to compare the three models. Results: The mean f0 values ranged from 11.9% to 18.7% in white matter ROIs and from 1.2% to 2.7% in gray matter ROIs. In all white matter ROIs, the AICcs of the modified tri-exponential model were the lowest (p < 0.05 for five ROIs), indicating that the new model has the best fit among these models; the SPEs of the bi-exponential model were the highest (p < 0.05), suggesting that the bi-exponential model is unable to predict the signal intensity at ultra-high b-values. The mean ADC_very-slow values were extremely low in white matter (1–7 × 10⁻⁶ mm²/s), but not in gray matter (251–445 × 10⁻⁶ mm²/s), indicating that the conventional tri-exponential model fails to represent a special compartment. Conclusions: The strictly diffusion-limited compartment may be an important component in white matter. The new model fits better than the other two models, and may provide additional information. PMID:29535599
A connection between mix and adiabat in ICF capsules
NASA Astrophysics Data System (ADS)
Cheng, Baolian; Kwan, Thomas; Wang, Yi-Ming; Yi, Sunghuan (Austin); Batha, Steven
2016-10-01
We study the relationship between instability-induced mix, preheat, and the adiabat of the deuterium-tritium (DT) fuel in fusion capsule experiments. Our studies show that hydrodynamic instability not only directly affects the implosion, hot-spot shape and mix, but also affects the thermodynamics of the capsule, such as the adiabat of the DT fuel, and in turn the energy partition between the pusher shell (cold DT) and the hot spot. The adiabat of the DT fuel is sensitive to the amount of mix caused by Richtmyer-Meshkov (RM) and Rayleigh-Taylor (RT) instabilities at the material interfaces because of its exponential dependence on the fuel entropy. An upper limit on the mix allowed while maintaining a low adiabat of the DT fuel is derived. Additionally, we demonstrate that the use of a high adiabat for the DT fuel in theoretical analysis, with the aid of 1D code simulations, can explain some aspects of the 3D effects and mix in the capsule experiments. Furthermore, from the observed neutron images and our physics model, we can infer the adiabat of the DT fuel in the capsule and determine the possible amount of mix in the hot spot (LA-UR-16-24880). This work was conducted under the auspices of the U.S. Department of Energy by the Los Alamos National Laboratory under Contract No. W-7405-ENG-36.
Power law versus exponential state transition dynamics: application to sleep-wake architecture.
Chu-Shore, Jesse; Westover, M Brandon; Bianchi, Matt T
2010-12-02
Despite the common experience that interrupted sleep has a negative impact on waking function, the features of human sleep-wake architecture that best distinguish sleep continuity versus fragmentation remain elusive. In this regard, there is growing interest in characterizing sleep architecture using models of the temporal dynamics of sleep-wake stage transitions. In humans and other mammals, the state transitions defining sleep and wake bout durations have been described with exponential and power law models, respectively. However, sleep-wake stage distributions are often complex, and distinguishing between exponential and power law processes is not always straightforward. Although mono-exponential distributions are distinct from power law distributions, multi-exponential distributions may in fact resemble power laws by appearing linear on a log-log plot. To characterize the parameters that may allow these distributions to mimic one another, we systematically fitted multi-exponential-generated distributions with a power law model, and power law-generated distributions with multi-exponential models. We used the Kolmogorov-Smirnov method to investigate goodness of fit for the "incorrect" model over a range of parameters. The "zone of mimicry" of parameters that increased the risk of mistakenly accepting power law fitting resembled empiric time constants obtained in human sleep and wake bout distributions. Recognizing this uncertainty in model distinction impacts interpretation of transition dynamics (self-organizing versus probabilistic), and the generation of predictive models for clinical classification of normal and pathological sleep architecture.
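A sketch of the mimicry effect at issue: durations drawn from a two-exponential mixture can look nearly linear on a log-log survival plot over a limited range, inviting a spurious power-law fit (mixture parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

# two-exponential mixture of bout durations (minutes, illustrative)
n = 100_000
short = rng.random(n) < 0.7
bouts = np.where(short, rng.exponential(1.0, n), rng.exponential(20.0, n))

# empirical survival function sampled on a log-spaced grid
x = np.logspace(-1, 2, 20)
surv = np.array([(bouts > xi).mean() for xi in x])

# near-linearity on log-log axes is the power-law signature being mimicked
slope, intercept = np.polyfit(np.log(x), np.log(surv), 1)
print("apparent power-law exponent:", slope)
```

A Kolmogorov-Smirnov comparison of the two candidate fits, as the abstract describes, is the sharper way to decide; the point of the sketch is that eyeballing the log-log plot is not.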
Hosseinzadeh, M; Ghoreishi, M; Narooei, K
2016-06-01
In this study, the hyperelastic models of demineralized and deproteinized bovine cortical femur bone were investigated and appropriate models were developed. Using uniaxial compression test data, the strain energy versus stretch was calculated and appropriate hyperelastic strain energy functions were fitted to the data in order to calculate the material parameters. To obtain the mechanical behavior in other loading conditions, the hyperelastic strain energy equations were investigated for pure shear and equi-biaxial tension loadings. The results showed that the Mooney-Rivlin and Ogden models cannot predict the mechanical response of demineralized and deproteinized bovine cortical femur bone accurately, while the general exponential-exponential and general exponential-power-law models show good agreement with the experimental results. To investigate the sensitivity of the hyperelastic models, a variation of 10% in the material parameters was performed and the results indicated acceptable stability for the general exponential-exponential and general exponential-power-law models. Finally, the uniaxial tension and compression of cortical femur bone were studied using the finite element method in a VUMAT user subroutine of the ABAQUS software, and the computed stress-stretch curves showed good agreement with the experimental data. Copyright © 2016 Elsevier Ltd. All rights reserved.
Turbulent Mixing in Exponential Transverse Jets
1990-09-30
parameter. The flame length of the jets is a direct measurement of the molecular scale mixing rate. ACCOMPLISHMENTS: From observations of the trajectory...and cross-sectional size of the vortices, as well as the flame length, our measurements reveal the following: i) Under acceleration, the roll up and... flame lengths are a weak maximum when the acceleration parameter α is about unity. For large α, flame lengths slowly decline with increasing α, in
Robust and efficient estimation with weighted composite quantile regression
NASA Astrophysics Data System (ADS)
Jiang, Xuejun; Li, Jingzhi; Xia, Tian; Yan, Wanfeng
2016-09-01
In this paper we introduce a weighted composite quantile regression (CQR) estimation approach and study its application in nonlinear models such as exponential models and ARCH-type models. The weighted CQR is augmented by using a data-driven weighting scheme. With the error distribution unspecified, the proposed estimators share robustness from quantile regression and achieve nearly the same efficiency as the oracle maximum likelihood estimator (MLE) for a variety of error distributions including the normal, mixed-normal, Student's t, Cauchy distributions, etc. We also suggest an algorithm for the fast implementation of the proposed methodology. Simulations are carried out to compare the performance of different estimators, and the proposed approach is used to analyze the daily S&P 500 Composite index, which verifies the effectiveness and efficiency of our theoretical results.
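A rough illustration of the estimator's core idea follows: minimize the equally weighted composite check loss for a simple linear model with heavy-tailed errors. The paper's data-driven weighting scheme and fast algorithm are not reproduced; all data and settings are synthetic.

```python
# Equally weighted composite quantile regression (CQR) for a linear model,
# by direct minimization of the summed check losses across K quantile levels.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.standard_t(df=3, size=n)   # heavy-tailed errors

taus = np.arange(1, 10) / 10.0                     # K = 9 quantile levels

def check(u, tau):                                 # quantile check loss
    return u * (tau - (u < 0))

def cqr_loss(theta):
    # theta = [common slope, K quantile-specific intercepts]
    b1, a = theta[0], theta[1:]
    return sum(check(y - a[k] - b1 * x, t).sum() for k, t in enumerate(taus))

theta0 = np.r_[0.0, np.quantile(y, taus)]
res = minimize(cqr_loss, theta0, method="Powell", options={"maxiter": 50000})
print("CQR slope estimate:", round(res.x[0], 3))   # should be near 1.5
```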
A Stochastic Super-Exponential Growth Model for Population Dynamics
NASA Astrophysics Data System (ADS)
Avila, P.; Rekker, A.
2010-11-01
A super-exponential growth model with environmental noise has been studied analytically. A super-exponential growth rate is a property of dynamical systems exhibiting endogenous nonlinear positive feedback, i.e., of self-reinforcing systems. Environmental noise acts on the growth rate multiplicatively and is assumed to be Gaussian white noise in the Stratonovich interpretation. An analysis of the stochastic super-exponential growth model, with derivations of exact analytical formulae for the conditional probability density and the mean value of the population abundance, is presented. Interpretations and various applications of the results are discussed.
Lindsey, J C; Ryan, L M
1994-01-01
The three-state illness-death model provides a useful way to characterize data from a rodent tumorigenicity experiment. Most parametrizations proposed recently in the literature assume discrete time for the death process and either discrete or continuous time for the tumor onset process. We compare these approaches with a third alternative that uses a piecewise continuous model on the hazards for tumor onset and death. All three models assume proportional hazards to characterize tumor lethality and the effect of dose on tumor onset and death rate. All of the models can easily be fitted using an Expectation Maximization (EM) algorithm. The piecewise continuous model is particularly appealing in this context because the complete data likelihood corresponds to a standard piecewise exponential model with tumor presence as a time-varying covariate. It can be shown analytically that differences between the parameter estimates given by each model are explained by varying assumptions about when tumor onsets, deaths, and sacrifices occur within intervals. The mixed-time model is seen to be an extension of the grouped data proportional hazards model [Mutat. Res. 24:267-278 (1981)]. We argue that the continuous-time model is preferable to the discrete- and mixed-time models because it gives reasonable estimates with relatively few intervals while still making full use of the available information. Data from the ED01 experiment illustrate the results. PMID:8187731
Dynamic autoinoculation and the microbial ecology of a deep water hydrocarbon irruption
Valentine, David L.; Mezić, Igor; Maćešić, Senka; Črnjarić-Žic, Nelida; Ivić, Stefan; Hogan, Patrick J.; Fonoberov, Vladimir A.; Loire, Sophie
2012-01-01
The irruption of gas and oil into the Gulf of Mexico during the Deepwater Horizon event fed a deep sea bacterial bloom that consumed hydrocarbons in the affected waters, formed a regional oxygen anomaly, and altered the microbiology of the region. In this work, we develop a coupled physical–metabolic model to assess the impact of mixing processes on these deep ocean bacterial communities and their capacity for hydrocarbon and oxygen use. We find that observed biodegradation patterns are well-described by exponential growth of bacteria from seed populations present at low abundance and that current oscillation and mixing processes played a critical role in distributing hydrocarbons and associated bacterial blooms within the northeast Gulf of Mexico. Mixing processes also accelerated hydrocarbon degradation through an autoinoculation effect, where water masses, in which the hydrocarbon irruption had caused blooms, later returned to the spill site with hydrocarbon-degrading bacteria persisting at elevated abundance. Interestingly, although the initial irruption of hydrocarbons fed successive blooms of different bacterial types, subsequent irruptions promoted consistency in the structure of the bacterial community. These results highlight an impact of mixing and circulation processes on biodegradation activity of bacteria during the Deepwater Horizon event and suggest an important role for mixing processes in the microbial ecology of deep ocean environments. PMID:22233808
Xu, Junzhong; Li, Ke; Smith, R. Adam; Waterton, John C.; Zhao, Ping; Ding, Zhaohua; Does, Mark D.; Manning, H. Charles; Gore, John C.
2016-01-01
Background: Diffusion-weighted MRI (DWI) signal attenuation is often not mono-exponential (i.e., non-Gaussian diffusion) at stronger diffusion weighting. Several non-Gaussian diffusion models have been developed and may provide new information or higher sensitivity compared with the conventional apparent diffusion coefficient (ADC) method. However, the relative merits of these models for detecting tumor therapeutic response are not fully clear. Methods: Conventional ADC and three widely used non-Gaussian models (bi-exponential, stretched exponential, and statistical model) were implemented and compared for assessing SW620 human colon cancer xenografts responding to barasertib, an agent known to induce apoptosis via polyploidy. The Bayesian Information Criterion (BIC) was used for model selection among the three non-Gaussian models. Results: Tumor volume, histology, conventional ADC, and all three non-Gaussian DWI models showed significant differences between control and treatment groups after four days of treatment. However, only the non-Gaussian models detected significant changes after two days of treatment. For every treatment or control group, over 65.7% of tumor voxels indicated that the bi-exponential model was strongly or very strongly preferred. Conclusion: Non-Gaussian DWI model-derived biomarkers are capable of detecting the chemotherapeutic response of tumors earlier than conventional ADC and tumor volume. The bi-exponential model provides better fitting than the statistical and stretched exponential models for the tumor and treatment models used in the current work. PMID:27919785
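A minimal sketch of the model-comparison step, assuming synthetic b-values and decay parameters rather than the study's data: fit mono-, bi-, and stretched-exponential attenuation curves and rank them by BIC.

```python
# Fit three DWI attenuation models to a synthetic signal and rank by BIC.
import numpy as np
from scipy.optimize import curve_fit

b = np.linspace(0, 3000, 16)                       # b-values, s/mm^2
rng = np.random.default_rng(2)
truth = 0.6 * np.exp(-b * 2.0e-3) + 0.4 * np.exp(-b * 0.3e-3)
sig = truth + rng.normal(0, 0.01, b.size)

models = {  # name: (function, initial guess, bounds)
    "mono": (lambda b, D: np.exp(-b * D), [1e-3], (0, np.inf)),
    "bi-exp": (lambda b, f, Df, Ds: f * np.exp(-b * Df) + (1 - f) * np.exp(-b * Ds),
               [0.5, 2e-3, 0.3e-3], (0, np.inf)),
    "stretched": (lambda b, D, a: np.exp(-(b * D) ** a),
                  [1e-3, 0.8], (0, [np.inf, 1.0])),
}

def bic(y, yhat, k):                               # Gaussian-error BIC
    n = y.size
    return n * np.log(np.sum((y - yhat) ** 2) / n) + k * np.log(n)

for name, (f, p0, bnd) in models.items():
    p, _ = curve_fit(f, b, sig, p0=p0, bounds=bnd, maxfev=20000)
    print(f"{name:10s} BIC = {bic(sig, f(b, *p), len(p)):8.1f}")
```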
1996-09-16
approaches are: • Adaptive filtering • Single exponential smoothing (Brown, 1963) • Linear exponential smoothing: Holt's two-parameter approach (Holt et al., 1960) • Winters' three-parameter method (Winters, 1960) • The Box-Jenkins methodology (ARIMA modeling) (Box and Jenkins, 1976). However, there are two very crucial disadvantages: the most important point in ARIMA modeling is model identification. As shown in...
Numerical renormalization group method for entanglement negativity at finite temperature
NASA Astrophysics Data System (ADS)
Shim, Jeongmin; Sim, H.-S.; Lee, Seung-Sup B.
2018-04-01
We develop a numerical method to compute the negativity, an entanglement measure for mixed states, between the impurity and the bath in quantum impurity systems at finite temperature. We construct a thermal density matrix by using the numerical renormalization group (NRG), and evaluate the negativity by implementing the NRG approximation that reduces computational cost exponentially. We apply the method to the single-impurity Kondo model and the single-impurity Anderson model. In the Kondo model, the negativity exhibits a power-law scaling at temperature much lower than the Kondo temperature and a sudden death at high temperature. In the Anderson model, the charge fluctuation of the impurity contributes to the negativity even at zero temperature when the on-site Coulomb repulsion of the impurity is finite, while at low temperature the negativity between the impurity spin and the bath exhibits the same power-law scaling behavior as in the Kondo model.
Analytically-derived sensitivities in one-dimensional models of solute transport in porous media
Knopman, D.S.
1987-01-01
Analytically-derived sensitivities are presented for parameters in one-dimensional models of solute transport in porous media. Sensitivities were derived by direct differentiation of closed-form solutions for each of the models, and by a time-integral method for two of the models. Models are based on the advection-dispersion equation and include adsorption and first-order chemical decay. Boundary conditions considered are: a constant step input of solute, a constant flux input of solute, and an exponentially decaying input of solute at the upstream boundary. A zero flux is assumed at the downstream boundary. Initial conditions include a constant and a spatially varying distribution of solute. One model simulates the mixing of solute in an observation well from individual layers in a multilayer aquifer system. Computer programs produce output files compatible with graphics software in which sensitivities are plotted as a function of either time or space. (USGS)
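The direct-differentiation approach is easy to illustrate symbolically. The sketch below differentiates the classic Ogata-Banks closed-form solution for a constant-concentration step input (assumed here as a stand-in for the report's models) with respect to the dispersion coefficient D.

```python
# Analytic sensitivity dc/dD of a 1-D advection-dispersion solution, via
# symbolic differentiation of the Ogata-Banks step-input solution.
import sympy as sp

x, t, v, D, c0 = sp.symbols("x t v D c0", positive=True)

c = (c0 / 2) * (sp.erfc((x - v * t) / (2 * sp.sqrt(D * t)))
                + sp.exp(v * x / D) * sp.erfc((x + v * t) / (2 * sp.sqrt(D * t))))

sens_D = sp.diff(c, D)                         # sensitivity of c to D
f = sp.lambdify((x, t, v, D, c0), sens_D)      # numeric version for plotting
print(f(10.0, 5.0, 1.0, 0.5, 1.0))             # sensitivity at one (x, t)
```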
Exponential model for option prices: Application to the Brazilian market
NASA Astrophysics Data System (ADS)
Ramos, Antônio M. T.; Carvalho, J. A.; Vasconcelos, G. L.
2016-03-01
In this paper we report an empirical analysis of the Ibovespa index of the São Paulo Stock Exchange and its respective option contracts. We compare the empirical data on the Ibovespa options with two option pricing models, namely the standard Black-Scholes model and an empirical model that assumes that the returns are exponentially distributed. It is found that at times near the option expiration date the exponential model performs better than the Black-Scholes model, in the sense that it fits the empirical data better than does the latter model.
Possible stretched exponential parametrization for humidity absorption in polymers.
Hacinliyan, A; Skarlatos, Y; Sahin, G; Atak, K; Aybar, O O
2009-04-01
Polymer thin films have irregular transient current characteristics under constant voltage. In hydrophilic and hydrophobic polymers, the irregularity is also known to depend on the humidity absorbed by the polymer sample. Different stretched exponential models are studied and it is shown that the absorption of humidity as a function of time can be adequately modelled by a class of these stretched exponential absorption models.
Universality in stochastic exponential growth.
Iyer-Biswas, Srividya; Crooks, Gavin E; Scherer, Norbert F; Dinner, Aaron R
2014-07-11
Recent imaging data for single bacterial cells reveal that their mean sizes grow exponentially in time and that their size distributions collapse to a single curve when rescaled by their means. An analogous result holds for the division-time distributions. A model is needed to delineate the minimal requirements for these scaling behaviors. We formulate a microscopic theory of stochastic exponential growth as a Master Equation that accounts for these observations, in contrast to existing quantitative models of stochastic exponential growth (e.g., the Black-Scholes equation or geometric Brownian motion). Our model, the stochastic Hinshelwood cycle (SHC), is an autocatalytic reaction cycle in which each molecular species catalyzes the production of the next. By finding exact analytical solutions to the SHC and the corresponding first passage time problem, we uncover universal signatures of fluctuations in exponential growth and division. The model makes minimal assumptions, and we describe how more complex reaction networks can reduce to such a cycle. We thus expect similar scalings to be discovered in stochastic processes resulting in exponential growth that appear in diverse contexts such as cosmology, finance, technology, and population growth.
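A minimal Gillespie simulation of a two-species Hinshelwood cycle conveys the flavor of the model: each species catalyzes production of the other, and the total copy number grows exponentially at a rate close to the geometric mean sqrt(k1*k2). Rates and the stopping threshold below are illustrative.

```python
# Gillespie simulation of a two-species stochastic Hinshelwood cycle:
# reaction 1 (rate k1*x1) makes one X2; reaction 2 (rate k2*x2) makes one X1.
import numpy as np

rng = np.random.default_rng(3)
k1, k2 = 1.0, 2.0
x1, x2, t = 10, 10, 0.0
ts, ns = [t], [x1 + x2]

while x1 + x2 < 10_000:
    a1, a2 = k1 * x1, k2 * x2            # reaction propensities
    atot = a1 + a2
    t += rng.exponential(1.0 / atot)     # waiting time to next event
    if rng.random() < a1 / atot:
        x2 += 1                          # X1-catalyzed birth of an X2
    else:
        x1 += 1                          # X2-catalyzed birth of an X1
    ts.append(t)
    ns.append(x1 + x2)

rate = np.polyfit(ts, np.log(ns), 1)[0]  # empirical exponential growth rate
print(f"fitted rate {rate:.3f} vs sqrt(k1*k2) = {np.sqrt(k1 * k2):.3f}")
```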
Vacuum statistics and stability in axionic landscapes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masoumi, Ali; Vilenkin, Alexander, E-mail: ali@cosmos.phy.tufts.edu, E-mail: vilenkin@cosmos.phy.tufts.edu
2016-03-01
We investigate vacuum statistics and stability in random axionic landscapes. For this purpose we developed an algorithm for a quick evaluation of the tunneling action, which in most cases is accurate within 10%. We find that stability of a vacuum is strongly correlated with its energy density, with lifetime rapidly growing as the energy density is decreased. On the other hand, the probability P(B) for a vacuum to have a tunneling action B greater than a given value declines as a slow power law in B. This is in sharp contrast with the studies of random quartic potentials, which found a fast exponential decline of P(B). Our results suggest that the total number of relatively stable vacua (say, with B>100) grows exponentially with the number of fields N and can get extremely large for N ≳ 100. The problem with this kind of model is that the stable vacua are concentrated near the absolute minimum of the potential, so the observed value of the cosmological constant cannot be explained without fine-tuning. To address this difficulty, we consider a modification of the model, where the axions acquire a quadratic mass term, due to their mixing with 4-form fields. This results in a larger landscape with a much broader distribution of vacuum energies. The number of relatively stable vacua in such models can still be extremely large.
Environmental Noise Could Promote Stochastic Local Stability of Behavioral Diversity Evolution
NASA Astrophysics Data System (ADS)
Zheng, Xiu-Deng; Li, Cong; Lessard, Sabin; Tao, Yi
2018-05-01
In this Letter, we investigate stochastic stability in a two-phenotype evolutionary game model for an infinite, well-mixed population undergoing discrete, nonoverlapping generations. We assume that the fitness of a phenotype is an exponential function of its expected payoff following random pairwise interactions whose outcomes randomly fluctuate with time. We show that the stochastic local stability of a constant interior equilibrium can be promoted by the random environmental noise even if the system may display a complicated nonlinear dynamics. This result provides a new perspective for a better understanding of how environmental fluctuations may contribute to the evolution of behavioral diversity.
An Irreversible Constitutive Law for Modeling the Delamination Process using Interface Elements
NASA Technical Reports Server (NTRS)
Goyal, Vinay K.; Johnson, Eric R.; Davila, Carlos G.; Jaunky, Navin; Ambur, Damodar (Technical Monitor)
2002-01-01
An irreversible constitutive law is postulated for the formulation of interface elements to predict initiation and progression of delamination in composite structures. An exponential function is used for the constitutive law such that it satisfies a multi-axial stress criterion for the onset of delamination, and satisfies a mixed mode fracture criterion for the progression of delamination. A damage parameter is included to prevent the restoration of the previous cohesive state between the interfacial surfaces. To demonstrate the irreversibility capability of the constitutive law, steady-state crack growth is simulated for quasi-static loading-unloading cycle of various fracture test specimens.
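A hedged, single-mode sketch of such an irreversible cohesive law is given below. The traction-separation envelope is a common Xu-Needleman-type exponential form, not necessarily the authors' exact law; the damage variable records the largest opening ever reached, so unloading returns along a secant to the origin and the prior cohesive state is never restored.

```python
# Single-mode sketch of an irreversible exponential cohesive law. The
# envelope is a common Xu-Needleman-type form (an assumption, not
# necessarily the authors' exact law); damage is the largest opening ever
# reached, so unloading returns along a secant to the origin.
import numpy as np

sigma_max, delta0 = 50.0, 0.01            # peak traction, critical opening

def traction(opening_history):
    d_max, out = 0.0, []
    for d in opening_history:
        d_max = max(d_max, d)
        t_env = sigma_max * (d_max / delta0) * np.exp(1.0 - d_max / delta0)
        # on the envelope while loading; secant unloading below it
        out.append(t_env if d >= d_max else t_env * d / d_max)
    return np.array(out)

# quasi-static loading / unloading / reloading cycle of the opening
path = np.concatenate([np.linspace(0.0, 0.02, 50),
                       np.linspace(0.02, 0.0, 50),
                       np.linspace(0.0, 0.03, 50)])
print(traction(path).round(2))            # reloading never exceeds envelope
```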
NASA Technical Reports Server (NTRS)
Ovchinnikov, Mikhail; Ackerman, Andrew S.; Avramov, Alexander; Cheng, Anning; Fan, Jiwen; Fridlind, Ann M.; Ghan, Steven; Harrington, Jerry; Hoose, Corinna; Korolev, Alexei;
2014-01-01
Large-eddy simulations of mixed-phase Arctic clouds by 11 different models are analyzed with the goal of improving understanding and model representation of processes controlling the evolution of these clouds. In a case based on observations from the Indirect and Semi-Direct Aerosol Campaign (ISDAC), it is found that ice number concentration, Ni, exerts significant influence on the cloud structure. Increasing Ni leads to a substantial reduction in liquid water path (LWP), in agreement with earlier studies. In contrast to previous intercomparison studies, all models here use the same ice particle properties (i.e., mass-size, mass-fall speed, and mass-capacitance relationships) and a common radiation parameterization. The constrained setup exposes the importance of ice particle size distributions (PSDs) in influencing cloud evolution. A clear separation in LWP and IWP predicted by models with bin and bulk microphysical treatments is documented and attributed primarily to the assumed shape of ice PSD used in bulk schemes. Compared to the bin schemes that explicitly predict the PSD, schemes assuming exponential ice PSD underestimate ice growth by vapor deposition and overestimate mass-weighted fall speed leading to an underprediction of IWP by a factor of two in the considered case. Sensitivity tests indicate LWP and IWP are much closer to the bin model simulations when a modified shape factor which is similar to that predicted by bin model simulation is used in bulk scheme. These results demonstrate the importance of representation of ice PSD in determining the partitioning of liquid and ice and the longevity of mixed-phase clouds.
The effect of particle properties on the depth profile of buoyant plastics in the ocean
NASA Astrophysics Data System (ADS)
Kooi, Merel; Reisser, Julia; Slat, Boyan; Ferrari, Francesco F.; Schmid, Moritz S.; Cunsolo, Serena; Brambini, Roberto; Noble, Kimberly; Sirks, Lys-Anne; Linders, Theo E. W.; Schoeneich-Argent, Rosanna I.; Koelmans, Albert A.
2016-10-01
Most studies on buoyant microplastics in the marine environment rely on sea surface sampling. Consequently, microplastic amounts can be underestimated, as turbulence leads to vertical mixing. Models that correct for vertical mixing are based on limited data. In this study we report measurements of the depth profile of buoyant microplastics in the North Atlantic subtropical gyre, from 0 to 5 m depth. Microplastics were separated into size classes (0.5-1.5 and 1.5-5.0 mm) and types (‘fragments’ and ‘lines’), and associated with a sea state. Microplastic concentrations decreased exponentially with depth, with both sea state and particle properties affecting the steepness of the decrease. Concentrations approached zero within 5 m depth, indicating that most buoyant microplastics are present on or near the surface. Plastic rise velocities were also measured, and were found to differ significantly for different sizes and shapes. Our results suggest that (1) surface samplers such as manta trawls underestimate total buoyant microplastic amounts by a factor of 1.04-30.0 and (2) estimations of depth-integrated buoyant plastic concentrations should be done across different particle sizes and types. Our findings can assist with improving buoyant ocean plastic vertical mixing models, mass balance exercises, impact assessments and mitigation strategies.
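The practical consequence of an exponential depth profile is easy to quantify. Assuming the widely used one-layer wind-mixing form N(z) = N0 exp(-z*wb/A0) (the class of model the measured profiles are meant to improve), the sketch below computes the fraction of the depth-integrated load visible to a surface trawl and the implied underestimation factor; all parameter values are illustrative.

```python
# One-layer wind-mixing correction for surface trawl sampling, assuming the
# exponential profile N(z) = N0 * exp(-z * wb / A0) (Kukulka-type model).
import numpy as np

def surface_fraction(wb, A0, trawl_depth=0.25):
    """Fraction of the depth-integrated plastic load in the top metres."""
    return 1.0 - np.exp(-trawl_depth * wb / A0)

for wb in (0.005, 0.02):                  # particle rise velocity, m/s
    for A0 in (0.001, 0.01):              # near-surface mixing, m^2/s
        f = surface_fraction(wb, A0)
        print(f"wb={wb:5.3f}, A0={A0:5.3f}: trawl sees {100*f:5.1f}%, "
              f"underestimation x{1/f:5.1f}")
```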
Analysis of risk factors in severity of rural truck crashes.
DOT National Transportation Integrated Search
2016-04-01
Trucks are a vital part of the logistics system in North Dakota. Recent energy developments have : generated exponential growth in the demand for truck services. With increased density of trucks in the : traffic mix, it is reasonable to expect some i...
Penalized nonparametric scalar-on-function regression via principal coordinates
Reiss, Philip T.; Miller, David L.; Wu, Pei-Shien; Hua, Wen-Yu
2016-01-01
A number of classical approaches to nonparametric regression have recently been extended to the case of functional predictors. This paper introduces a new method of this type, which extends intermediate-rank penalized smoothing to scalar-on-function regression. In the proposed method, which we call principal coordinate ridge regression, one regresses the response on leading principal coordinates defined by a relevant distance among the functional predictors, while applying a ridge penalty. Our publicly available implementation, based on generalized additive modeling software, allows for fast optimal tuning parameter selection and for extensions to multiple functional predictors, exponential family-valued responses, and mixed-effects models. In an application to signature verification data, principal coordinate ridge regression, with dynamic time warping distance used to define the principal coordinates, is shown to outperform a functional generalized linear model. PMID:29217963
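A minimal sketch of the method's two steps, on synthetic data: compute principal coordinates by classical multidimensional scaling of a distance matrix among the functional predictors, then ridge-regress the scalar response on the leading coordinates. The paper's GAM-based tuning and extensions are not reproduced.

```python
# Principal coordinate ridge regression in two steps: classical MDS on a
# distance matrix among predictors, then ridge regression on the leading
# coordinates. Data and distance are synthetic stand-ins.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)
n = 100
X = rng.normal(size=(n, 50))              # rows = discretized curves
y = X[:, :5].sum(axis=1) + rng.normal(0, 0.5, n)

D = squareform(pdist(X))                  # any relevant distance would do
J = np.eye(n) - np.ones((n, n)) / n       # centring matrix
B = -0.5 * J @ (D ** 2) @ J               # double-centred Gram matrix
vals, vecs = np.linalg.eigh(B)
top = np.argsort(vals)[::-1][:10]         # 10 leading principal coordinates
Z = vecs[:, top] * np.sqrt(np.clip(vals[top], 0, None))

model = Ridge(alpha=1.0).fit(Z, y)
print("in-sample R^2:", round(model.score(Z, y), 3))
```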
A numerical study of mixing in stationary, nonpremixed, turbulent reacting flows
NASA Astrophysics Data System (ADS)
Overholt, Matthew Ryan
1998-10-01
In this work a detailed numerical study is made of a statistically-stationary, non-premixed, turbulent reacting model flow known as Periodic Reaction Zones. The mixture fraction-progress variable approach is used, with a mean gradient in the mixture fraction and a model, single-step, reversible, finite-rate thermochemistry, yielding both stationary and local extinction behavior. The passive scalar is studied first, using a statistical forcing scheme to achieve stationarity of the velocity field. Multiple independent direct numerical simulations (DNS) are performed for a wide range of Reynolds numbers with a number of results including a bilinear model for scalar mixing jointly conditioned on the scalar and x2-component of velocity, Gaussian scalar probability density function tails which were anticipated to be exponential, and the quantification of the dissipation of scalar flux. A new deterministic forcing scheme for DNS is then developed which yields reduced fluctuations in many quantities and a more natural evolution of the velocity fields. This forcing method is used for the final portion of this work. DNS results for Periodic Reaction Zones are compared with the Conditional Moment Closure (CMC) model, the Quasi-Equilibrium Distributed Reaction (QEDR) model, and full probability density function (PDF) simulations using the Euclidean Minimum Spanning Tree (EMST) and the Interaction by Exchange with the Mean (IEM) mixing models. It is shown that CMC and QEDR results based on the local scalar dissipation match DNS wherever local extinction is not present. However, due to the large spatial variations of scalar dissipation, and hence local Damkohler number, local extinction is present even when the global Damkohler number is twenty-five times the critical value for extinction. Finally, in the PDF simulations the EMST mixing model closely reproduces CMC and DNS results when local extinction is not present, whereas the IEM model results in large error.
Kiskowski, Maria; Chowell, Gerardo
2016-01-01
The mechanisms behind the sub-exponential growth dynamics of the West Africa Ebola virus disease epidemic could be related to improved control of the epidemic and the result of reduced disease transmission in spatially constrained contact structures. An individual-based, stochastic network model is used to model immediate and delayed epidemic control in the context of social contact networks and investigate the extent to which the relative role of these factors may be determined during an outbreak. We find that in general, epidemics quickly establish a dynamic equilibrium of infections in the form of a wave of fixed size and speed traveling through the contact network. Both greater epidemic control and limited community mixing decrease the size of an infectious wave. However, for a fixed wave size, epidemic control (in contrast with limited community mixing) results in lower community saturation and a wave that moves more quickly through the contact network. We also found that the level of epidemic control has a disproportionately greater reductive effect on larger waves, so that a small wave requires nearly as much epidemic control as a larger wave to end an epidemic. PMID:26399855
NASA Astrophysics Data System (ADS)
Wang, Fei; Zhang, Yijun; Zheng, Dong; Xu, Liangtao; Zhang, Wenjuan; Meng, Qing
2017-10-01
A three-dimensional charge-discharge numerical model is used, in a semi-idealized mode, to simulate a thunderstorm cell. Characteristics of the graupel microphysics and vertical air motion associated with the lightning initiation are revealed, which could be useful in retrieving charge strength during lightning when no charge-discharge model is available. The results show that the vertical air motion at the lightning initiation sites (W_ini) has a cubic polynomial correlation with the maximum updraft of the storm cell (W_cell-max), with an adjusted regression coefficient R² of approximately 0.97. Meanwhile, the graupel mixing ratio at the lightning initiation sites (q_g-ini) has a linear correlation with the maximum graupel mixing ratio of the storm cell (q_g-cell-max) and the initiation height (z_ini), with coefficients of 0.86 and 0.85, respectively. These linear correlations are more significant during the middle and late stages of lightning activity. A zero-charge zone, namely the area with very low net charge density between the main positive and negative charge layers, appears above the area of q_g-cell-max and below the upper edge of the graupel region, and is found to be an important area for lightning initiation. Inside the zero-charge zone, a large electric field intensity forms, and the ratio of q_ice (ice crystal mixing ratio) to q_g (graupel mixing ratio) exhibits an exponential relationship to q_g-ini. These relationships provide valuable clues for more accurately locating the high-risk area of lightning initiation in thunderstorms when only dual-polarization radar data or outputs from numerical models without charging/discharging schemes are available. The results can also help understand the environmental conditions at lightning initiation sites.
Phenomenology of stochastic exponential growth
NASA Astrophysics Data System (ADS)
Pirjol, Dan; Jafarpour, Farshid; Iyer-Biswas, Srividya
2017-06-01
Stochastic exponential growth is observed in a variety of contexts, including molecular autocatalysis, nuclear fission, population growth, inflation of the universe, viral social media posts, and financial markets. Yet literature on modeling the phenomenology of these stochastic dynamics has predominantly focused on one model, geometric Brownian motion (GBM), which can be described as the solution of a Langevin equation with linear drift and linear multiplicative noise. Using recent experimental results on stochastic exponential growth of individual bacterial cell sizes, we motivate the need for a more general class of phenomenological models of stochastic exponential growth, which are consistent with the observation that the mean-rescaled distributions are approximately stationary at long times. We show that this behavior is not consistent with GBM, instead it is consistent with power-law multiplicative noise with positive fractional powers. Therefore, we consider this general class of phenomenological models for stochastic exponential growth, provide analytical solutions, and identify the important dimensionless combination of model parameters, which determines the shape of the mean-rescaled distribution. We also provide a prescription for robustly inferring model parameters from experimentally observed stochastic growth trajectories.
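The qualitative distinction is simple to reproduce with an Euler-Maruyama simulation of dX = kX dt + sigma X^a dW: under GBM (a = 1) the spread of the mean-rescaled sizes keeps growing, while for a fractional power a < 1 it plateaus, consistent with an approximately stationary rescaled distribution. Parameters below are illustrative.

```python
# Euler-Maruyama comparison of dX = k*X dt + sigma * X**a dW for a = 1 (GBM)
# and a < 1 (power-law multiplicative noise). Tracked: the spread (std) of
# the mean-rescaled sizes across an ensemble of trajectories.
import numpy as np

rng = np.random.default_rng(5)

def rescaled_spread(a, k=1.0, sigma=0.3, dt=1e-3, steps=5000, n=2000):
    x = np.ones(n)
    out = []
    for s in range(steps):
        x += k * x * dt + sigma * x**a * np.sqrt(dt) * rng.normal(size=n)
        x = np.maximum(x, 1e-12)          # keep trajectories positive
        if (s + 1) % 1000 == 0:
            out.append((x / x.mean()).std())
    return np.round(out, 3)

print("GBM       (a=1.0):", rescaled_spread(1.0))   # spread keeps growing
print("power-law (a=0.6):", rescaled_spread(0.6))   # spread plateaus
```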
Parameter estimation and order selection for an empirical model of VO2 on-kinetics.
Alata, O; Bernard, O
2007-04-27
In humans, VO2 on-kinetics are noisy numerical signals that reflect the pulmonary oxygen exchange kinetics at the onset of exercise. They are empirically modelled as a sum of an offset and delayed exponentials. The number of delayed exponentials, i.e., the order of the model, is commonly supposed to be 1 for low-intensity exercises and 2 for high-intensity exercises. As no ground truth has ever been provided to validate these postulates, physiologists still need statistical methods to verify their hypotheses about the number of exponentials of the VO2 on-kinetics, especially in the case of high-intensity exercises. Our objectives are first to develop accurate methods for estimating the parameters of the model at a fixed order, and then to propose statistical tests for selecting the appropriate order. In this paper, we provide, on simulated data, the performance of simulated annealing for estimating model parameters and the performance of information criteria for selecting the order. These simulated data are generated with both single-exponential and double-exponential models, and corrupted by additive white Gaussian noise. The performance is given at various signal-to-noise ratios (SNR). For parameter estimation, results show that confidence in the estimated parameters improves as the SNR of the response to be fitted increases. For model selection, results show that information criteria are well-suited statistical criteria for selecting the number of exponentials.
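A sketch of the estimation step, assuming a single delayed exponential and synthetic data: scipy's dual_annealing (a simulated-annealing-family global optimizer, standing in for the authors' implementation) minimizes the sum of squared errors; order selection would then compare one- and two-exponential fits via an information criterion.

```python
# Fit a single delayed-exponential VO2 on-response by global optimization.
import numpy as np
from scipy.optimize import dual_annealing

rng = np.random.default_rng(6)
t = np.arange(0.0, 300.0, 5.0)                     # time grid, s

def model(p, t):
    a0, a1, td, tau = p                            # offset, gain, delay, tau
    return a0 + a1 * (1 - np.exp(-(t - td) / tau)) * (t >= td)

truth = (0.5, 2.0, 15.0, 30.0)
y = model(truth, t) + rng.normal(0, 0.08, t.size)  # white Gaussian noise

sse = lambda p: np.sum((y - model(p, t)) ** 2)
bounds = [(0, 1), (0.5, 4), (0, 60), (5, 120)]
res = dual_annealing(sse, bounds, seed=7)
print("estimates (a0, a1, td, tau):", np.round(res.x, 2))
```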
Chowell, Gerardo; Viboud, Cécile
2016-10-01
The increasing use of mathematical models for epidemic forecasting has highlighted the importance of designing models that capture the baseline transmission characteristics in order to generate reliable epidemic forecasts. Improved models for epidemic forecasting could be achieved by identifying signature features of epidemic growth, which could inform the design of models of disease spread and reveal important characteristics of the transmission process. In particular, it is often taken for granted that the early growth phase of different growth processes in nature follow early exponential growth dynamics. In the context of infectious disease spread, this assumption is often convenient to describe a transmission process with mass action kinetics using differential equations and generate analytic expressions and estimates of the reproduction number. In this article, we carry out a simulation study to illustrate the impact of incorrectly assuming an exponential-growth model to characterize the early phase (e.g., 3-5 disease generation intervals) of an infectious disease outbreak that follows near-exponential growth dynamics. Specifically, we assess the impact on: 1) goodness of fit, 2) bias on the growth parameter, and 3) the impact on short-term epidemic forecasts. Designing transmission models and statistical approaches that more flexibly capture the profile of epidemic growth could lead to enhanced model fit, improved estimates of key transmission parameters, and more realistic epidemic forecasts.
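The bias described can be demonstrated with the generalized-growth model C'(t) = r C^p, whose closed form for p < 1 is C(t) = (C0^(1-p) + (1-p) r t)^(1/(1-p)). The sketch below generates sub-exponential cumulative counts and compares exponential and generalized-growth fits; all values are illustrative.

```python
# Generate sub-exponential cumulative incidence from the generalized-growth
# model C'(t) = r * C**p, then compare exponential and GGM fits.
import numpy as np
from scipy.optimize import curve_fit

def ggm(t, r, p, c0=1.0):
    # closed-form solution of C' = r * C**p for 0 <= p < 1
    return (c0 ** (1 - p) + (1 - p) * r * t) ** (1 / (1 - p))

def expo(t, r, c0=1.0):
    return c0 * np.exp(r * t)

t = np.arange(0.0, 30.0)                  # roughly 3-5 generation intervals
rng = np.random.default_rng(8)
data = ggm(t, r=0.7, p=0.7) * rng.lognormal(0.0, 0.05, t.size)

pe, _ = curve_fit(expo, t, data, p0=[0.3])
pg, _ = curve_fit(ggm, t, data, p0=[0.5, 0.9], bounds=([0, 0], [5, 0.999]))
for name, fit in [("exponential", expo(t, *pe)), ("GGM", ggm(t, *pg))]:
    print(f"{name:12s} RMSE = {np.sqrt(np.mean((fit - data) ** 2)):.2f}")
```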
Effect of water-based recovery on blood lactate removal after high-intensity exercise.
Lucertini, Francesco; Gervasi, Marco; D'Amen, Giancarlo; Sisti, Davide; Rocchi, Marco Bruno Luigi; Stocchi, Vilberto; Benelli, Piero
2017-01-01
This study assessed the effectiveness of water immersion to the shoulders in enhancing blood lactate removal during active and passive recovery after short-duration high-intensity exercise. Seventeen cyclists underwent active water- and land-based recoveries and passive water and land-based recoveries. The recovery conditions lasted 31 minutes each and started after the identification of each cyclist's blood lactate accumulation peak, induced by a 30-second all-out sprint on a cycle ergometer. Active recoveries were performed on a cycle ergometer at 70% of the oxygen consumption corresponding to the lactate threshold (the control for the intensity was oxygen consumption), while passive recoveries were performed with subjects at rest and seated on the cycle ergometer. Blood lactate concentration was measured 8 times during each recovery condition and lactate clearance was modeled over a negative exponential function using non-linear regression. Actual active recovery intensity was compared to the target intensity (one sample t-test) and passive recovery intensities were compared between environments (paired sample t-tests). Non-linear regression parameters (coefficients of the exponential decay of lactate; predicted resting lactates; predicted delta decreases in lactate) were compared between environments (linear mixed model analyses for repeated measures) separately for the active and passive recovery modes. Active recovery intensities did not differ significantly from the target oxygen consumption, whereas passive recovery resulted in a slightly lower oxygen consumption when performed while immersed in water rather than on land. The exponential decay of blood lactate was not significantly different in water- or land-based recoveries in either active or passive recovery conditions. In conclusion, water immersion at 29°C would not appear to be an effective practice for improving post-exercise lactate removal in either the active or passive recovery modes.
Bajzer, Željko; Gibbons, Simon J.; Coleman, Heidi D.; Linden, David R.
2015-01-01
Noninvasive breath tests for gastric emptying are important techniques for understanding the changes in gastric motility that occur in disease or in response to drugs. Mice are often used as an animal model; however, the gamma variate model currently used for data analysis does not always fit the data appropriately. The aim of this study was to determine appropriate mathematical models to better fit mouse gastric emptying data including when two peaks are present in the gastric emptying curve. We fitted 175 gastric emptying data sets with two standard models (gamma variate and power exponential), with a gamma variate model that includes stretched exponential and with a proposed two-component model. The appropriateness of the fit was assessed by the Akaike Information Criterion. We found that extension of the gamma variate model to include a stretched exponential improves the fit, which allows for a better estimation of T1/2 and Tlag. When two distinct peaks in gastric emptying are present, a two-component model is required for the most appropriate fit. We conclude that use of a stretched exponential gamma variate model and when appropriate a two-component model will result in a better estimate of physiologically relevant parameters when analyzing mouse gastric emptying data. PMID:26045615
Constraining f(T) teleparallel gravity by big bang nucleosynthesis: f(T) cosmology and BBN.
Capozziello, S; Lambiase, G; Saridakis, E N
2017-01-01
We use Big Bang Nucleosynthesis (BBN) observational data on the primordial abundance of light elements to constrain f(T) gravity. The three most studied viable f(T) models, namely the power law, the exponential, and the square-root exponential, are considered, and the BBN bounds are adopted in order to extract constraints on their free parameters. For the power-law model, we find that the constraints are in agreement with those obtained using late-time cosmological data. For the exponential and the square-root exponential models, we show that for reliable regions of parameter space they always satisfy the BBN bounds. We conclude that viable f(T) models can successfully satisfy the BBN constraints.
The matrix exponential in transient structural analysis
NASA Technical Reports Server (NTRS)
Minnetyan, Levon
1987-01-01
The primary usefulness of the presented theory is its ability to represent the effects of high-frequency linear response accurately, without requiring very small time steps in the analysis of dynamic response. The matrix exponential contains a series approximation to the dynamic model. However, unlike the usual analysis procedure, which truncates the high-frequency response, the approximation in the matrix exponential solution is in the time domain. If the series solution for the matrix exponential is truncated, the solution becomes inaccurate after a certain time; up to that time, however, the solution is extremely accurate, including all high-frequency effects. By taking finite time increments, the matrix exponential solution can compute the response very accurately. Use of the matrix exponential in structural dynamics is demonstrated by simulating the free vibration response of multi-degree-of-freedom models of cantilever beams.
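The core idea is that for a linear system xdot = Ax the update x(t+dt) = expm(A dt) x(t) is exact for any step size, so high-frequency content survives large steps. The sketch below uses a 3-DOF spring-mass chain as a stand-in for the report's cantilever-beam models.

```python
# Exact time stepping of a linear structural model with the matrix
# exponential: x(t+dt) = expm(A*dt) @ x(t).
import numpy as np
from scipy.linalg import expm

m, k = 1.0, 1.0e4
K = k * np.array([[2.0, -1.0, 0.0],
                  [-1.0, 2.0, -1.0],
                  [0.0, -1.0, 1.0]])       # stiffness of the chain
M = m * np.eye(3)

A = np.block([[np.zeros((3, 3)), np.eye(3)],
              [-np.linalg.solve(M, K), np.zeros((3, 3))]])

dt = 0.01                                  # large relative to the highest mode
Phi = expm(A * dt)                         # state-transition matrix, exact

x = np.r_[0.01, 0.0, 0.0, np.zeros(3)]     # initial displacement, zero velocity
for _ in range(5):
    x = Phi @ x
    print(np.round(x[:3], 5))              # displacements remain accurate
```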
NASA Astrophysics Data System (ADS)
Hu, Zhenhua; Gao, Shen; Xiang, Bowen
2016-01-01
An analytical expression for transient four-wave mixing (TFWM) in an inverted semiconductor with carrier-injection pumping was derived from both the density matrix equation and the complex stochastic stationary statistical method of incoherent light. Numerical analysis showed that the TFWM decay tends towards the limits of extreme homogeneous and inhomogeneous broadening in atoms: for low and for high carrier-density injection the decay time is inversely proportional to the one-half power of the net carrier density, while for moderate carrier-density injection it obeys either a usual exponential decay with a decay time inversely proportional to the one-half power of the net carrier density, or an unusual exponential decay with a decay time inversely proportional to the one-third power of the net carrier density. The results can be applied to studying ultrafast carrier dephasing in inverted semiconductors such as semiconductor laser amplifiers and semiconductor optical amplifiers.
Large deviations and mixing for dissipative PDEs with unbounded random kicks
NASA Astrophysics Data System (ADS)
Jakšić, V.; Nersesyan, V.; Pillet, C.-A.; Shirikyan, A.
2018-02-01
We study the problem of exponential mixing and large deviations for discrete-time Markov processes associated with a class of random dynamical systems. Under some dissipativity and regularisation hypotheses for the underlying deterministic dynamics and a non-degeneracy condition for the driving random force, we discuss the existence and uniqueness of a stationary measure and its exponential stability in the Kantorovich-Wasserstein metric. We next turn to the large deviations principle (LDP) and establish its validity for the occupation measures of the Markov processes in question. The proof is based on Kifer's criterion for non-compact spaces, a result on large-time asymptotics for generalised Markov semigroups, and a coupling argument. These tools combined together constitute a new approach to LDP for infinite-dimensional processes without the strong Feller property in a non-compact space. The results obtained can be applied to the two-dimensional Navier-Stokes system in a bounded domain and to the complex Ginzburg-Landau equation.
A multigrid solver for the semiconductor equations
NASA Technical Reports Server (NTRS)
Bachmann, Bernhard
1993-01-01
We present a multigrid solver for the exponential fitting method. The solver is applied to the current continuity equations of semiconductor device simulation in two dimensions. The exponential fitting method is based on a mixed finite element discretization using the lowest-order Raviart-Thomas triangular element. This discretization method yields a good approximation of front layers and guarantees current conservation. The corresponding stiffness matrix is an M-matrix. 'Standard' multigrid solvers, however, cannot be applied to the resulting system, as this is dominated by an unsymmetric part, which is due to the presence of strong convection in part of the domain. To overcome this difficulty, we explore the connection between Raviart-Thomas mixed methods and the nonconforming Crouzeix-Raviart finite element discretization. In this way we can construct nonstandard prolongation and restriction operators using easily computable weighted L²-projections based on suitable quadrature rules and the upwind effects of the discretization. The resulting multigrid algorithm shows very good results, even for real-world problems and for locally refined grids.
Hao, Xu; Yujun, Sun; Xinjie, Wang; Jin, Wang; Yao, Fu
2015-01-01
A multiple linear model was developed for individual tree crown width of Cunninghamia lanceolata (Lamb.) Hook in Fujian province, southeast China. Data were obtained from 55 sample plots of pure China-fir plantation stands. An ordinary least squares (OLS) regression was used to establish the crown width model. To adjust for correlations between observations from the same sample plots, we developed one-level linear mixed-effects (LME) models based on the multiple linear model, which take into account the random effects of plots. The best random-effects combinations for the LME models were determined by Akaike's information criterion, the Bayesian information criterion, and the -2 log-likelihood. Heteroscedasticity was reduced by three residual variance functions: the power function, the exponential function, and the constant plus power function. The spatial correlation was modeled by three correlation structures: the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)], and the compound symmetry structure (CS). Then, the LME model was compared to the multiple linear model using the absolute mean residual (AMR), the root mean square error (RMSE), and the adjusted coefficient of determination (adj-R²). For individual tree crown width models, the one-level LME model showed the best performance. An independent dataset was used to test the performance of the models and to demonstrate the advantage of calibrating LME models.
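A minimal sketch of the plot-level random effect, using statsmodels on synthetic data with hypothetical column names. The paper's residual variance functions and spatial correlation structures (power, AR(1), ARMA(1,1), CS) require richer machinery (e.g., R's nlme) and are not reproduced here.

```python
# Plot-level random intercept with statsmodels on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
plot_id = np.repeat(np.arange(20), 30)              # 20 plots, 30 trees each
dbh = rng.uniform(5.0, 40.0, plot_id.size)          # diameter at breast height
plot_effect = rng.normal(0.0, 0.4, 20)[plot_id]     # random plot effects
cw = 0.5 + 0.12 * dbh + plot_effect + rng.normal(0.0, 0.3, plot_id.size)
df = pd.DataFrame({"cw": cw, "dbh": dbh, "plot": plot_id})

lme = smf.mixedlm("cw ~ dbh", df, groups=df["plot"]).fit()
print(lme.summary())
```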
Koh, D.-C.; Plummer, Niel; Kip, Solomon D.; Busenberg, E.; Kim, Y.-J.; Chang, H.-W.
2006-01-01
Tritium/helium-3 (3H/3He) and chlorofluorocarbons (CFCs) were investigated as environmental tracers in ground water from Jeju Island (Republic of Korea), a basaltic volcanic island. Ground-water mixing was evaluated by comparing 3H and CFC-12 concentrations with lumped-parameter dispersion models, which distinguished old water recharged before the 1950s with negligible 3H and CFC-12 from younger water. Low 3H levels in a considerable number of samples cannot be explained by the mixing models, and were interpreted as binary mixing of old and younger water; a process also identified in alkalinity and pH of ground water. The ground-water CFC-12 age is much older in water from wells completed in confined zones of the hydro-volcanic Seogwipo Formation in coastal areas than in water from the basaltic aquifer. Major cation concentrations are much higher in young water with high nitrate than those in uncontaminated old water. Chemical evolution of ground water resulting from silicate weathering in basaltic rocks reaches the zeolite-smectite phase boundary. The calcite saturation state of ground water increases with the CFC-12 apparent (piston flow) age. In agricultural areas, the temporal trend of nitrate concentration in ground water is consistent with the known history of chemical fertilizer use on the island, but the increase of nitrate concentration in ground water is more abrupt after the late 1970s compared with the exponential growth of nitrogen inputs.
Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard
2016-10-01
In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including a quasi-likelihood, robust standard error estimation, negative binomial regression, and flexible piecewise modelling. All piecewise exponential regression models showed the presence of significant inherent overdispersion (p-value < 0.001). However, the flexible piecewise exponential model showed the smallest overdispersion parameter (3.2, versus 21.3 for non-flexible piecewise exponential models). We showed that there were no major differences between methods. However, using flexible piecewise regression modelling, with either a quasi-likelihood or robust standard errors, was the best approach, as it deals with both overdispersion due to model misspecification and true or inherent overdispersion.
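A compact sketch of the workflow on synthetic counts: fit a Poisson GLM, run a regression-based (Cameron-Trivedi style) score test for overdispersion, and refit with a negative binomial family. In the paper the counts are excess deaths with person-time offsets in a piecewise exponential setup; that structure is omitted here.

```python
# Poisson GLM, score test for overdispersion, negative binomial refit.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
n = 2000
x = rng.normal(size=n)
mu = np.exp(0.2 + 0.5 * x)
y = rng.negative_binomial(2.0, 2.0 / (2.0 + mu))    # overdispersed counts

X = sm.add_constant(x)
pois = sm.GLM(y, X, family=sm.families.Poisson()).fit()

# Cameron-Trivedi auxiliary OLS: ((y-mu)^2 - y)/mu on mu, no intercept;
# a clearly positive slope signals overdispersion.
muhat = pois.mu
aux = sm.OLS(((y - muhat) ** 2 - y) / muhat, muhat).fit()
print("overdispersion score test t =", round(float(aux.tvalues[0]), 2))

nb = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print("negative binomial coefficients:", np.round(nb.params, 3))
```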
Investigation of non-Gaussian effects in the Brazilian option market
NASA Astrophysics Data System (ADS)
Sosa-Correa, William O.; Ramos, Antônio M. T.; Vasconcelos, Giovani L.
2018-04-01
An empirical study of the Brazilian option market is presented in light of three option pricing models, namely the Black-Scholes model, the exponential model, and a model based on a power law distribution, the so-called q-Gaussian distribution or Tsallis distribution. It is found that the q-Gaussian model performs better than the Black-Scholes model in about one third of the option chains analyzed. But among these cases, the exponential model performs better than the q-Gaussian model in 75% of the time. The superiority of the exponential model over the q-Gaussian model is particularly impressive for options close to the expiration date, where its success rate rises above ninety percent.
Biased phylodynamic inferences from analysing clusters of viral sequences
Xiang, Fei; Frost, Simon D. W.
2017-01-01
Phylogenetic methods are being increasingly used to help understand the transmission dynamics of measurably evolving viruses, including HIV. Clusters of highly similar sequences are often observed, which appear to follow a ‘power law’ behaviour, with a small number of very large clusters. These clusters may help to identify subpopulations in an epidemic, and inform where intervention strategies should be implemented. However, clustering of samples does not necessarily imply the presence of a subpopulation with high transmission rates, as groups of closely related viruses can also occur due to non-epidemiological effects such as over-sampling. It is important to ensure that observed phylogenetic clustering reflects true heterogeneity in the transmitting population, and is not being driven by non-epidemiological effects. We qualify the effect of using a falsely identified ‘transmission cluster’ of sequences to estimate phylodynamic parameters including the effective population size and exponential growth rate under several demographic scenarios. Our simulation studies show that taking the maximum size cluster to re-estimate parameters from trees simulated under a randomly mixing, constant population size coalescent process systematically underestimates the overall effective population size. In addition, the transmission cluster wrongly resembles an exponential or logistic growth model 99% of the time. We also illustrate the consequences of false clusters in exponentially growing coalescent and birth-death trees, where again, the growth rate is skewed upwards. This has clear implications for identifying clusters in large viral databases, where a false cluster could result in wasted intervention resources. PMID:28852573
Groundwater mixing dynamics at a Canadian Shield mine
NASA Astrophysics Data System (ADS)
Douglas, M.; Clark, I. D.; Raven, K.; Bottomley, D.
2000-08-01
Temporal and spatial variations in geochemistry and isotopes in mine inflows at the Con Mine, Yellowknife, are studied to assess the impact of underground openings on deep groundwater flow in the Canadian Shield. Periodic sampling of inflow at 20 sites from 700 to 1615 m depth showed that salinities range from 1.4 to 290 g/l, with tritium detected at all depths. Three mixing end-members are identified: (1) Ca(Na)-Cl Shield brine; (2) glacial meltwater recharged at the margin of the retreating Laurentide ice sheet at ˜10 ka; and (3) modern meteoric water. Mixing fractions, calculated for inflows on five mine levels, illustrate the infiltration of modern water along specific fault planes. Tritium data for the modern component are corrected for mixing with brine and glacial waters and interpreted with an exponential-piston flow model. Results indicate that the mean transit time from surface to 1300 m depth is about 23 years in the early period after drift construction in 1979, but decreases to about 17 years in the past decade. The persistence of glacial meltwater in the subsurface to the present time, and the rapid circulation of modern meteoric water since the start of mining activities, underline the importance of gradient, in addition to permeability, as a control on deep groundwater flow in the Canadian Shield.
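A hedged sketch of the lumped-parameter calculation: convolve a tritium input history with the exponential-piston flow model (EPM) transit-time distribution, including radioactive decay. The input series below is a crude synthetic bomb-peak shape, not the actual Yellowknife record, and the EPM ratio parameter is illustrative.

```python
# Lumped-parameter exponential-piston flow model (EPM): convolve a tritium
# input history with the EPM transit-time distribution, including decay.
import numpy as np

LAMBDA = np.log(2) / 12.32                 # tritium decay constant, 1/yr

def epm_g(tau, T, eta):
    """EPM transit-time distribution: piston delay + exponential tail."""
    g = (eta / T) * np.exp(-eta * tau / T + eta - 1.0)
    return np.where(tau >= T * (1.0 - 1.0 / eta), g, 0.0)

years = np.arange(1950, 2001)
c_in = 5.0 + 500.0 * np.exp(-0.5 * (years - 1963.0) ** 2)  # TU, synthetic

def c_out(t_obs, T, eta, dtau=0.25):
    tau = np.arange(0.0, 60.0, dtau)
    c = np.interp(t_obs - tau, years, c_in, left=5.0, right=5.0)
    return np.sum(c * np.exp(-LAMBDA * tau) * epm_g(tau, T, eta)) * dtau

for T in (10, 17, 23):                     # candidate mean transit times, yr
    print(f"T = {T:2d} yr -> modeled 3H in 1995: {c_out(1995.0, T, 1.5):6.1f} TU")
```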
NASA Astrophysics Data System (ADS)
Ferdows, M.; Liu, D.
2017-02-01
The aim of this work is to study the mixed convection boundary layer flow from a horizontal surface embedded in a porous medium with exponential decaying internal heat generation (IHG). Boundary layer equations are reduced to two ordinary differential equations for the dimensionless stream function and temperature with two parameters: ɛ, the mixed convection parameter, and λ, the exponent of x. This problem is numerically solved with a system of parameters using built-in codes in Maple. The influences of these parameters on velocity and temperature profiles, and the Nusselt number, are thoroughly compared and discussed.
Relun, Anne; Grosbois, Vladimir; Alexandrov, Tsviatko; Sánchez-Vizcaíno, Jose M; Waret-Szkuta, Agnes; Molia, Sophie; Etter, Eric Marcel Charles; Martínez-López, Beatriz
2017-01-01
In most European countries, data regarding movements of live animals are routinely collected and can greatly aid predictive epidemic modeling. However, the use of complete movement datasets to conduct policy-relevant predictions has so far been limited by the massive amount of data that have to be processed (e.g., in intensive commercial systems) or by the restricted availability of timely and updated records on animal movements (e.g., in areas where small-scale or extensive production is predominant). The aim of this study was to use exponential random graph models (ERGMs) to reproduce, understand, and predict pig trade networks in different European production systems. Three trade networks were built by aggregating movements of pig batches among premises (farms and trade operators) over 2011 in Bulgaria, Extremadura (Spain), and Côtes-d'Armor (France), where small-scale, extensive, and intensive pig production are predominant, respectively. Three ERGMs were fitted to each network with various demographic and geographic attributes of the nodes as well as six internal network configurations. Several statistical and graphical diagnostic methods were applied to assess the goodness of fit of the models. For all systems, both exogenous (attribute-based) and endogenous (network-based) processes appeared to govern the structure of the pig trade network, and neither alone was capable of capturing all aspects of the network structure. Geographic mixing patterns strongly structured pig trade organization in the small-scale production system, whereas belonging to the same company or keeping pigs in the same housing system appeared to be key drivers of pig trade in the intensive and extensive production systems, respectively. Heterogeneous mixing between types of production also explained part of the network structure, whichever production system was considered. Limited information is thus needed to capture most of the global structure of pig trade networks. Such findings will be useful to simplify trade network analysis and better inform European policy makers on risk-based and more cost-effective prevention and control against swine diseases such as African swine fever, classical swine fever, or porcine reproductive and respiratory syndrome.
A Simulation To Model Exponential Growth.
ERIC Educational Resources Information Center
Appelbaum, Elizabeth Berman
2000-01-01
Describes a simulation using dice-tossing students in a population cluster to model the growth of cancer cells. This growth is recorded in a scatterplot and compared to an exponential function graph. (KHR)
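For readers who want a quick numerical version of the classroom activity, here is a minimal sketch; the specific rule (each "cell" rolls a die and divides on a 1 or 2) is one plausible choice, not necessarily the article's.

```python
# Numerical version of the dice-tossing growth simulation: each round every
# "cell" rolls a die and divides on a 1 or 2, so the expected growth factor
# is 1 + 2/6 = 4/3 per round. Compare with the deterministic exponential.
import random

random.seed(0)
cells, history = 6, []
for _ in range(15):
    history.append(cells)
    divisions = sum(1 for _ in range(cells) if random.randint(1, 6) <= 2)
    cells += divisions

for t, n in enumerate(history):
    print(f"round {t:2d}: simulated {n:5d}   exponential {6 * (4 / 3) ** t:8.1f}")
```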
NASA Astrophysics Data System (ADS)
Jeong, Chan-Yong; Kim, Hee-Joong; Hong, Sae-Young; Song, Sang-Hun; Kwon, Hyuck-In
2017-08-01
In this study, we show that the two-stage unified stretched-exponential model can more exactly describe the time-dependence of threshold voltage shift (ΔVTH) under long-term positive-bias-stresses compared to the traditional stretched-exponential model in amorphous indium-gallium-zinc oxide (a-IGZO) thin-film transistors (TFTs). ΔVTH is mainly dominated by electron trapping at short stress times, and the contribution of trap state generation becomes significant with an increase in the stress time. The two-stage unified stretched-exponential model can provide useful information not only for evaluating the long-term electrical stability and lifetime of the a-IGZO TFT but also for understanding the stress-induced degradation mechanism in a-IGZO TFTs.
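The abstract does not reproduce the model equations. A common single-stage form is ΔVTH(t) = ΔV0{1 − exp[−(t/τ)^β]}; a plausible reading of "two-stage unified" is a sum of two such terms, one for electron trapping and one for trap-state generation. The sketch below fits synthetic data with both forms; the two-term expression and all parameter values are assumptions for illustration.

```python
# Sketch: single vs. two-stage stretched-exponential fits to a threshold-
# voltage-shift transient. The two-term form is an assumed illustration;
# the paper's exact "unified" expression may differ.
import numpy as np
from scipy.optimize import curve_fit

def se1(t, dv0, tau, beta):
    return dv0 * (1.0 - np.exp(-(t / tau) ** beta))

def se2(t, dv1, tau1, b1, dv2, tau2, b2):
    # stage 1: fast electron trapping; stage 2: slow trap-state generation
    return se1(t, dv1, tau1, b1) + se1(t, dv2, tau2, b2)

t = np.logspace(0, 6, 60)                       # stress time, s
data = se2(t, 1.2, 3e2, 0.45, 0.8, 1e5, 0.7)    # synthetic "measurement"
data += np.random.default_rng(0).normal(0, 0.01, t.size)

p1, _ = curve_fit(se1, t, data, p0=[2, 1e3, 0.5], maxfev=20000)
p2, _ = curve_fit(se2, t, data, p0=[1, 1e2, 0.5, 1, 1e5, 0.5], maxfev=20000)

for name, f, p in [("1-stage", se1, p1), ("2-stage", se2, p2)]:
    rss = np.sum((data - f(t, *p)) ** 2)
    print(f"{name}: residual sum of squares = {rss:.2e}")
```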
NASA Astrophysics Data System (ADS)
Pasari, S.; Kundu, D.; Dikshit, O.
2012-12-01
Earthquake recurrence interval is one of the important ingredients in probabilistic seismic hazard assessment (PSHA) for any location. Exponential, gamma, Weibull and lognormal distributions are well-established probability models for recurrence interval estimation. However, they have certain shortcomings too. Thus, it is imperative to search for alternative, more sophisticated distributions. In this paper, we introduce a three-parameter (location, scale and shape) exponentiated exponential distribution and investigate the scope of this distribution as an alternative to the afore-mentioned distributions in earthquake recurrence studies. This distribution is a particular member of the exponentiated Weibull family. Despite its complicated form, it is widely accepted in medical and biological applications. Furthermore, it shares many physical properties with the gamma and Weibull families. Unlike the gamma distribution, the hazard function of the generalized exponential distribution can be easily computed even if the shape parameter is not an integer. To assess the plausibility of this model, a complete and homogeneous earthquake catalogue of 20 events (M ≥ 7.0) spanning the period 1846 to 1995 from the North-East Himalayan region (20-32 deg N and 87-100 deg E) has been used. The model parameters are estimated using the maximum likelihood estimator (MLE) and the method of moments estimator (MOME). No geological or geophysical evidence has been considered in this calculation. The estimated conditional probability becomes quite high after about a decade for an elapsed time of 17 years (i.e. 2012). Moreover, this study shows that the generalized exponential distribution fits the above data more closely than the conventional models, and hence it is tentatively concluded that the generalized exponential distribution can be effectively considered in earthquake recurrence studies.
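A minimal sketch of fitting the three-parameter exponentiated (generalized) exponential distribution, F(x) = (1 − exp(−λ(x − μ)))^α, by maximum likelihood, together with its closed-form hazard. The inter-event times below are invented, not the Himalayan catalogue.

```python
# MLE fit of the exponentiated exponential distribution and its hazard.
# Sample data are made up for illustration.
import numpy as np
from scipy.optimize import minimize

x = np.array([3.1, 5.4, 2.2, 9.7, 6.0, 4.8, 12.5, 7.3, 1.9, 8.8])  # years

def negloglik(theta):
    alpha, lam, mu = theta
    if alpha <= 0 or lam <= 0 or mu >= x.min():
        return np.inf
    z = np.exp(-lam * (x - mu))
    # log f(x) = log(alpha*lam) - lam*(x-mu) + (alpha-1)*log(1-z)
    return -np.sum(np.log(alpha * lam) - lam * (x - mu) + (alpha - 1) * np.log1p(-z))

res = minimize(negloglik, x0=[1.0, 0.2, 0.0], method="Nelder-Mead")
alpha, lam, mu = res.x
print("MLE (alpha, lambda, mu):", np.round(res.x, 3))

def hazard(t):
    z = np.exp(-lam * (t - mu))
    F = (1 - z) ** alpha
    f = alpha * lam * z * (1 - z) ** (alpha - 1)
    return f / (1 - F)          # easy to evaluate for non-integer alpha

print("hazard at t = 10 y:", hazard(10.0))
```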
Theory for Transitions Between Exponential and Stationary Phases: Universal Laws for Lag Time
NASA Astrophysics Data System (ADS)
Himeoka, Yusuke; Kaneko, Kunihiko
2017-04-01
The quantitative characterization of bacterial growth has attracted substantial attention since Monod's pioneering study. Theoretical and experimental works have uncovered several laws for describing the exponential growth phase, in which the number of cells grows exponentially. However, microorganism growth also exhibits lag, stationary, and death phases under starvation conditions, in which cell growth is highly suppressed, and for these phases quantitative laws or theories are markedly underdeveloped. In fact, the models commonly adopted for the exponential phase that consist of autocatalytic chemical components, including ribosomes, can only show exponential growth or decay in a population; thus, phases that halt growth are not realized. Here, we propose a simple, coarse-grained cell model that includes an extra class of macromolecular components in addition to the autocatalytic active components that facilitate cellular growth. These extra components form a complex with the active components to inhibit the catalytic process. Depending on the nutrient condition, the model exhibits typical transitions among the lag, exponential, stationary, and death phases. Furthermore, the lag time needed for growth recovery after starvation grows as the square root of the starvation time and is inversely related to the maximal growth rate. This is in agreement with experimental observations, in which the length of time of cell starvation is memorized in the slow accumulation of molecules. Moreover, the distribution of lag times among cells is skewed, with a long tail. If the starvation time is longer, an exponential tail appears, which is also consistent with experimental data. Our theory further predicts a strong dependence of lag time on the speed of substrate depletion, which can be tested experimentally. The present model and theoretical analysis provide universal growth laws beyond the exponential phase, offering insight into how cells halt growth without entering the death phase.
Investigation of Co-rotation Lag in Saturn's Dayside Magnetosphere and Comparison with the Nightside
NASA Astrophysics Data System (ADS)
Smith, E. J.; Dougherty, M. K.
2016-12-01
Two previous studies of co-rotation lag concentrated on 13 identical high-inclination Cassini orbits. In the first, measurements of the magnetospheric field azimuthal component, Bϕ, were restricted to the southern hemisphere, near midnight, from the equator and perikron to a maximum latitude of 70°. Comparison with the prevailing model of the magnetosphere-ionosphere interaction yielded the conclusions that the ionospheric conductivity, Σp, was independent of ionospheric co-latitude, θi, and that the ratio of magnetospheric to planetary field angular velocities, ω/Ωs, equaled 1 − exp(−Bθi), an unexpected exponential dependence on a single parameter. Both model parameters exhibited significant temporal variations from orbit to orbit, leading to variations in the ionospheric profiles of Pedersen current, Ip. The second 13-orbit study of Bϕ extended to the northern hemisphere, where lagging fields alternated with leading and co-rotating fields. It was concluded that the difference was actually a local-time dependence, with lagging fields only occurring after midnight and the mixed rotations before midnight. Again, Σp was independent of θi and ω/Ωs = 1 − exp(−Bθi). Both studies raised the questions: How general is the exponential dependence of 1 − ω/Ωs? Is it restricted to midnight, or does it hold as well in the dayside magnetosphere? What is the cause of this dependence that differs from the model? The analysis of Bϕ has been extended to four nearly-identical north-south orbits near noon. The results and conclusions of this third study will be reported.
Chakraborty, Saikat; Singh, Prasun Kumar; Paramashetti, Pawan
2017-08-01
A novel microreactor-based, energy-efficient process, which uses complete convective mixing in a macroreactor up to an optimal mixing time followed by no mixing in 200-400 μl microreactors, enhances glucose and reducing sugar yields by up to 35% and 29%, respectively, while saving 72-90% of the energy incurred on reactor mixing in the enzymatic hydrolysis of cellulose. Empirical exponential relations are provided for determining the optimal mixing time, during which convective mixing in the macroreactor promotes mass transport of the cellulase enzyme to the solid Avicel substrate, while the latter phase of no mixing in the microreactor suppresses product inhibition by preventing the inhibitors (glucose and cellobiose) from homogenizing across the reactor. Sugar yield increases linearly with the liquid-to-solid height ratio (rh), irrespective of substrate loading and microreactor size, since a large rh allows the inhibitors to diffuse in the liquid away from the solids, thus reducing product inhibition. Copyright © 2017 Elsevier Ltd. All rights reserved.
A charge carrier transport model for donor-acceptor blend layers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fischer, Janine, E-mail: janine.fischer@iapp.de; Widmer, Johannes; Koerner, Christian
2015-01-28
Highly efficient organic solar cells typically comprise donor-acceptor blend layers facilitating effective splitting of excitons. However, the charge carrier mobility in the blends can be substantially smaller than in neat materials, hampering the device performance. Currently available mobility models do not describe the transport in blend layers entirely. Here, we investigate hole transport in a model blend system consisting of the small molecule donor zinc phthalocyanine (ZnPc) and the acceptor fullerene C60 in different mixing ratios. The blend layer is sandwiched between p-doped organic injection layers, which prevent minority charge carrier injection and enable exploiting diffusion currents for the characterization of exponential tail states from a thickness variation of the blend layer using numerical drift-diffusion simulations. Trap-assisted recombination must be considered to correctly model the conductivity behavior of the devices, which are influenced by local electron currents in the active layer, even though the active layer is sandwiched in between p-doped contacts. We find that the density of deep tail states is largest in the devices with 1:1 mixing ratio (Et = 0.14 eV, Nt = 1.2 × 10^18 cm^−3), pointing towards lattice disorder as the transport-limiting process. A combined field- and charge-carrier-density-dependent mobility model is developed for this blend layer.
NASA Astrophysics Data System (ADS)
Rodriguez, Nicolas B.; McGuire, Kevin J.; Klaus, Julian
2017-04-01
Transit time distributions, residence time distributions and StorAge Selection functions are fundamental integrated descriptors of water storage, mixing, and release in catchments. In this contribution, we determined these time-variant functions in four neighboring forested catchments in the H.J. Andrews Experimental Forest, Oregon, USA by employing a two-year time series of 18O in precipitation and discharge. Previous studies in these catchments assumed stationary, exponentially distributed transit times, and complete mixing/random sampling to explore the influence of various catchment properties on the mean transit time. Here we relaxed such assumptions to relate transit time dynamics and the variability of StorAge Selection functions to catchment characteristics, catchment storage, and the seasonality of meteorological forcing. Conceptual models of the catchments, consisting of two reservoirs combined in series-parallel, were calibrated to discharge and stable isotope tracer data. We assumed randomly sampled/fully mixed conditions for each reservoir, which resulted in an incompletely mixed system overall. Based on the results we solved the Master Equation, which describes the dynamics of water ages in storage and in catchment outflows. Consistent across all catchments, we found that transit times were generally shorter during wet periods, indicating the contribution of shallow storage (soil, saprolite) to discharge. During extended dry periods, transit times increased significantly, indicating the contribution of deeper storage (bedrock) to discharge. Our work indicated that the strong seasonality of precipitation impacted transit times by leading to a dynamic selection of stored water ages, whereas catchment size was not a control on transit times. In general this work showed the usefulness of time-variant transit times combined with conceptual models and confirmed the existence of the catchment age-mixing behaviors emerging from other similar studies.
Self-charging of identical grains in the absence of an external field.
Yoshimatsu, R; Araújo, N A M; Wurm, G; Herrmann, H J; Shinbrot, T
2017-01-06
We investigate the electrostatic charging of an agitated bed of identical grains using simulations, mathematical modeling, and experiments. We simulate charging with a discrete-element model including electrical multipoles and find that infinitesimally small initial charges can grow exponentially rapidly. We propose a mathematical Turing model that defines conditions for exponential charging to occur and provides insights into the mechanisms involved. Finally, we confirm the predicted exponential growth in experiments using vibrated grains under microgravity, and we describe novel predicted spatiotemporal states that merit further study.
Something from nothing: self-charging of identical grains
NASA Astrophysics Data System (ADS)
Shinbrot, Troy; Yoshimatsu, Ryuta; Araújo, Nuno; Wurm, Gerhard; Herrmann, Hans
We investigate the electrostatic charging of an agitated bed of identical grains using simulations, mathematical modeling, and experiments. We simulate charging with a discrete-element model including electrical multipoles and find that infinitesimally small initial charges can grow exponentially rapidly. We propose a mathematical Turing model that defines conditions for exponential charging to occur and provides insights into the mechanisms involved. Finally, we confirm the predicted exponential growth in experiments using vibrated grains under microgravity, and we describe novel predicted spatiotemporal states that merit further study. I acknowledge support from NSF/DMR, award 1404792.
Bayesian inference based on dual generalized order statistics from the exponentiated Weibull model
NASA Astrophysics Data System (ADS)
Al Sobhi, Mashail M.
2015-02-01
Bayesian estimation for the two parameters and the reliability function of the exponentiated Weibull model are obtained based on dual generalized order statistics (DGOS). Also, Bayesian prediction bounds for future DGOS from exponentiated Weibull model are obtained. The symmetric and asymmetric loss functions are considered for Bayesian computations. The Markov chain Monte Carlo (MCMC) methods are used for computing the Bayes estimates and prediction bounds. The results have been specialized to the lower record values. Comparisons are made between Bayesian and maximum likelihood estimators via Monte Carlo simulation.
State of charge modeling of lithium-ion batteries using dual exponential functions
NASA Astrophysics Data System (ADS)
Kuo, Ting-Jung; Lee, Kung-Yen; Huang, Chien-Kang; Chen, Jau-Horng; Chiu, Wei-Li; Huang, Chih-Fang; Wu, Shuen-De
2016-05-01
A mathematical model is developed by fitting the discharging curve of LiFePO4 batteries and used to investigate the relationship between the state of charge and the closed-circuit voltage. The proposed mathematical model consists of dual exponential terms and a constant term, which can closely fit the characteristics of dual equivalent RC circuits representing a LiFePO4 battery. One exponential term represents the stable discharging behavior, the other represents the unstable discharging behavior, and the constant term represents the cut-off voltage.
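A sketch of the dual-exponential closed-circuit-voltage model stated above, V(SOC) = a·exp(b·SOC) + c·exp(d·SOC) + v0, fitted to a synthetic discharge curve; all coefficient values are illustrative, not the paper's.

```python
# Fit the dual-exponential closed-circuit-voltage model to a synthetic
# LiFePO4-like discharge curve. Coefficients below are invented.
import numpy as np
from scipy.optimize import curve_fit

def v_model(soc, a, b, c, d, v0):
    # one term for the stable plateau, one for the sharp low-SOC drop-off,
    # and a constant tied to the cut-off voltage
    return a * np.exp(b * soc) + c * np.exp(d * soc) + v0

soc = np.linspace(0.02, 1.0, 80)
v_true = v_model(soc, 0.15, 0.8, -0.9, -25.0, 3.15)
v_meas = v_true + np.random.default_rng(2).normal(0, 0.003, soc.size)

p, _ = curve_fit(v_model, soc, v_meas, p0=[0.1, 1.0, -1.0, -20.0, 3.2], maxfev=50000)
print("fitted (a, b, c, d, v0):", np.round(p, 3))
print("V at 50% SOC:", v_model(0.5, *p))
```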
Deng, Nan-jie; Dai, Wei
2013-01-01
Understanding how kinetics in the unfolded state affects protein folding is a fundamentally important yet less well-understood issue. Here we employ three different models to analyze the unfolded landscape and folding kinetics of the miniprotein Trp-cage. The first is a 208 μs explicit solvent molecular dynamics (MD) simulation from D. E. Shaw Research containing tens of folding events. The second is a Markov state model (MSM-MD) constructed from the same ultra-long MD simulation; MSM-MD can be used to generate thousands of folding events. The third is a Markov state model built from temperature replica exchange MD simulations in implicit solvent (MSM-REMD). All the models exhibit multiple folding pathways, and there is a good correspondence between the folding pathways from direct MD and those computed from the MSMs. The unfolded populations interconvert rapidly between extended and collapsed conformations on time scales ≤ 40 ns, compared with the folding time of ≈ 5 μs. The folding rates are independent of where the folding is initiated from within the unfolded ensemble. About 90% of the unfolded states are sampled within the first 40 μs of the ultra-long MD trajectory, which on average explores ~27% of the unfolded state ensemble between consecutive folding events. We clustered the folding pathways according to structural similarity into “tubes”, and kinetically partitioned the unfolded state into populations that fold along different tubes. From our analysis of the simulations and a simple kinetic model, we find that when the mixing within the unfolded state is comparable to or faster than folding, the folding waiting times for all the folding tubes are similar and the folding kinetics is essentially single exponential despite the presence of heterogeneous folding paths with non-uniform barriers. When the mixing is much slower than folding, different unfolded populations fold independently, leading to non-exponential kinetics. A kinetic partition of the Trp-cage unfolded state is constructed which reveals that different unfolded populations have almost the same probability to fold along any of the multiple folding paths. We are investigating whether the results for the kinetics in the unfolded state of the twenty-residue Trp-cage are representative of larger single-domain proteins. PMID:23705683
Emergence of power-law in a market with mixed models
NASA Astrophysics Data System (ADS)
Ali Saif, M.; Gade, Prashant M.
2007-10-01
We investigate the problem of wealth distribution from the viewpoint of asset exchange. The robust nature of Pareto's law across economies, ideologies and nations suggests that this could be an outcome of trading strategies. However, simple asset exchange models fail to reproduce this feature. A Yardsale (YS) model, in which the amount put on the bet is a fraction of the minimum of the two players' wealth, leads to condensation of wealth in the hands of a single agent, while the theft and fraud (TF) model, in which the amount to be exchanged is a fraction of the loser's wealth, leads to an exponential distribution of wealth. We show that if we allow a few agents to follow a different model than the others, i.e., some agents follow the TF model while the rest follow the YS model, this leads to a distribution with power-law tails. A similar effect is observed when one carries out transactions for a fraction of one's wealth using the TF model and uses the YS model for the rest. We also observe a power-law tail in the wealth distribution if we allow the agents to follow either of the models with some probability.
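A toy reproduction of the mixed-model economy described above; the population size, fraction of TF agents and bet fraction are guesses for illustration, not the paper's values.

```python
# Mixed yard-sale / theft-and-fraud exchange economy. A small minority of
# TF agents among YS traders produces a heavy (power-law-like) wealth tail.
import numpy as np

rng = np.random.default_rng(3)
N, steps, f_bet, tf_frac = 1000, 500_000, 0.1, 0.05
wealth = np.ones(N)
is_tf = rng.random(N) < tf_frac

for _ in range(steps):
    i, j = rng.integers(0, N, 2)
    if i == j:
        continue
    if is_tf[i] or is_tf[j]:                 # theft-and-fraud exchange
        loser, winner = (i, j) if rng.random() < 0.5 else (j, i)
        dw = f_bet * wealth[loser]           # stake: fraction of loser's wealth
    else:                                    # yard-sale exchange
        winner, loser = (i, j) if rng.random() < 0.5 else (j, i)
        dw = f_bet * min(wealth[i], wealth[j])  # stake: fraction of the poorer
    wealth[winner] += dw
    wealth[loser] -= dw

# A power-law tail shows up as a straight line in a log-log rank plot
w = np.sort(wealth)[::-1]
for rank in (1, 10, 100, 1000):
    print(f"rank {rank:4d}: wealth {w[rank - 1]:.3f}")
```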
2013-01-01
Background An inverse relationship between experience and risk of injury has been observed in many occupations. Due to statistical challenges, however, it has been difficult to characterize the role of experience on the hazard of injury. In particular, because the time observed up to injury is equivalent to the amount of experience accumulated, the baseline hazard of injury becomes the main parameter of interest, excluding Cox proportional hazards models as applicable methods for consideration. Methods Using a data set of 81,301 hourly production workers of a global aluminum company at 207 US facilities, we compared competing parametric models for the baseline hazard to assess whether experience affected the hazard of injury at hire and after later job changes. Specific models considered included the exponential, Weibull, and two (a hypothesis-driven and a data-driven) two-piece exponential models to formally test the null hypothesis that experience does not impact the hazard of injury. Results We highlighted the advantages of our comparative approach and the interpretability of our selected model: a two-piece exponential model that allowed the baseline hazard of injury to change with experience. Our findings suggested a 30% increase in the hazard in the first year after job initiation and/or change. Conclusions Piecewise exponential models may be particularly useful in modeling risk of injury as a function of experience and have the additional benefit of interpretability over other similarly flexible models. PMID:23841648
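A minimal sketch of the selected model: a piecewise-constant (two-piece exponential) hazard with a change point at one year, closed-form MLEs (events divided by person-time in each piece), and a likelihood-ratio test against the single exponential. The simulated cohort below is invented, not the aluminum-company data.

```python
# Two-piece exponential hazard: closed-form MLEs and a likelihood-ratio test.
import numpy as np

rng = np.random.default_rng(4)
n, cut = 5000, 1.0
# simulate: hazard 0.26/y in the first year on the job, 0.20/y afterwards
t1 = rng.exponential(1 / 0.26, n)
time = np.where(t1 < cut, t1, cut + rng.exponential(1 / 0.20, n))
event = time < 8.0                      # administrative censoring at 8 y
time = np.minimum(time, 8.0)

def piece(lo, hi):
    """Events and person-time accrued inside the interval [lo, hi)."""
    pt = np.clip(time, lo, hi) - lo
    d = np.sum(event & (time >= lo) & (time < hi))
    return d, pt.sum()

(d1, pt1), (d2, pt2) = piece(0, cut), piece(cut, np.inf)
lam1, lam2 = d1 / pt1, d2 / pt2         # piecewise MLEs: events / person-time
lam0 = (d1 + d2) / (pt1 + pt2)          # single-exponential MLE

def ll(d, pt, lam):
    return d * np.log(lam) - lam * pt   # censored-exponential log-likelihood

lrt = 2 * (ll(d1, pt1, lam1) + ll(d2, pt2, lam2) - ll(d1 + d2, pt1 + pt2, lam0))
print(f"hazard <1y: {lam1:.3f}, >=1y: {lam2:.3f}, LRT chi2(1) = {lrt:.1f}")
```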
NASA Astrophysics Data System (ADS)
Khan, Imad; Fatima, Sumreen; Malik, M. Y.; Salahuddin, T.
2018-03-01
This paper explores the theoretical study of the steady, incompressible, two-dimensional MHD boundary layer flow of an Eyring-Powell nanofluid over an inclined surface. The fluid is considered to be electrically conducting, and its viscosity is assumed to vary exponentially. The governing partial differential equations (PDEs) are reduced to ordinary differential equations (ODEs) by applying a similarity approach. The resulting ordinary differential equations are solved using the homotopy analysis method. The impact of the pertinent parameters on the velocity, concentration and temperature profiles is examined through graphs and tables. The skin friction coefficient and the Sherwood and Nusselt numbers are also presented in tabular and graphical form.
Where "Old Heads" Prevail: Inmate Hierarchy in a Men's Prison Unit.
Kreager, Derek A; Young, Jacob T N; Haynie, Dana L; Bouchard, Martin; Schaefer, David R; Zajac, Gary
2017-01-01
Research of inmate social order is a once-vibrant area that receded just as American incarceration rates climbed and the country's carceral contexts dramatically changed. This study reengages inmate society with an abductive mixed methods investigation of informal status within a contemporary men's prison unit. The authors collect narrative and social network data from 133 male inmates housed in a unit of a Pennsylvania medium-security prison. Analyses of inmate narratives suggest that unit "old heads" provide collective goods in the form of mentoring and role modeling that foster a positive and stable peer environment. This hypothesis is then tested with Exponential Random Graph Models (ERGMs) of peer nomination data. The ERGM results complement the qualitative analysis and suggest that older inmates and those who have been on the unit longer are perceived by their peers as powerful and influential. Both analytical strategies point to the maturity of aging and the acquisition of local knowledge as important for attaining informal status in the unit. In sum, this mixed methods case study extends theoretical insights of classic prison ethnographies, adds quantifiable results capable of future replication, and points to a growing population of older inmates as important for contemporary prison social organization.
A Simulation of the ECSS Help Desk with the Erlang A Model
2011-03-01
…a popular distribution is the exponential distribution, as shown in Figure 3 [figure omitted: plot of the exponential distribution, after Bourke (2001)]. Reference: Bourke, P. (2001, January). Miscellaneous Functions. Retrieved January 22, 2011, from http://local.wasp.uwa.edu.au
Method for nonlinear exponential regression analysis
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1972-01-01
Two computer programs, developed according to two general types of exponential models for conducting nonlinear exponential regression analysis, are described. A least squares procedure is used, in which the nonlinear problem is linearized by expanding in a Taylor series. The programs are written in FORTRAN 5 for the Univac 1108 computer.
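A compact modern re-creation of the linearization idea described in this abstract, fitting y = a·exp(b·x) by iterating a first-order Taylor expansion and a linear least-squares solve (Gauss-Newton). This is an illustration, not the original FORTRAN programs.

```python
# Gauss-Newton for nonlinear exponential regression: linearize y = a*exp(b*x)
# in a Taylor series around the current (a, b) and solve the linear
# least-squares problem for the update, repeating until convergence.
import numpy as np

x = np.linspace(0, 4, 25)
y = 2.5 * np.exp(-0.7 * x) + np.random.default_rng(5).normal(0, 0.02, x.size)

a, b = 1.0, -0.1                      # starting guesses
for it in range(50):
    f = a * np.exp(b * x)
    J = np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])  # df/da, df/db
    step, *_ = np.linalg.lstsq(J, y - f, rcond=None)
    a, b = a + step[0], b + step[1]
    if np.linalg.norm(step) < 1e-10:
        break

print(f"converged in {it + 1} iterations: a = {a:.4f}, b = {b:.4f}")
```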
Rapidly Mixing Gibbs Sampling for a Class of Factor Graphs Using Hierarchy Width.
De Sa, Christopher; Zhang, Ce; Olukotun, Kunle; Ré, Christopher
2015-12-01
Gibbs sampling on factor graphs is a widely used inference technique, which often produces good empirical results. Theoretical guarantees for its performance are weak: even for tree structured graphs, the mixing time of Gibbs may be exponential in the number of variables. To help understand the behavior of Gibbs sampling, we introduce a new (hyper)graph property, called hierarchy width. We show that under suitable conditions on the weights, bounded hierarchy width ensures polynomial mixing time. Our study of hierarchy width is in part motivated by a class of factor graph templates, hierarchical templates, which have bounded hierarchy width regardless of the data used to instantiate them. We demonstrate a rich application from natural language processing in which Gibbs sampling provably mixes rapidly and achieves accuracy that exceeds human volunteers.
Magnetic pattern at supergranulation scale: the void size distribution
NASA Astrophysics Data System (ADS)
Berrilli, F.; Scardigli, S.; Del Moro, D.
2014-08-01
The large-scale magnetic pattern observed in the photosphere of the quiet Sun is dominated by the magnetic network. This network, created by photospheric magnetic fields swept into convective downflows, delineates the boundaries of large-scale cells of overturning plasma and exhibits "voids" in magnetic organization. These voids include internetwork fields, which are mixed-polarity sparse magnetic fields that populate the inner part of network cells. To single out voids and to quantify their intrinsic pattern we applied a fast circle-packing-based algorithm to 511 SOHO/MDI high-resolution magnetograms acquired during the unusually long solar activity minimum between cycles 23 and 24. The computed void distribution function shows a quasi-exponential decay behavior in the range 10-60 Mm. The lack of distinct flow scales in this range corroborates the hypothesis of multi-scale motion flows at the solar surface. In addition to the quasi-exponential decay, we have found that the voids depart from a simple exponential decay at about 35 Mm.
NASA Astrophysics Data System (ADS)
Iskandar, I.
2018-03-01
The exponential distribution is the most widely used distribution in reliability analysis. It is well suited to representing lifetimes in many applications and is available in a simple statistical form. The characteristic of this distribution is a constant hazard rate. The exponential distribution is a special case of the Weibull family. In this paper we introduce the basic notions that constitute an exponential competing risks model in reliability analysis using a Bayesian approach and present the corresponding analytic methods. The cases are limited to models with independent causes of failure. A non-informative prior distribution is used in our analysis. We describe the likelihood function, followed by the posterior distribution and the point and interval estimates, the hazard function, and the reliability. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
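A hedged sketch of the computation outlined above: with independent exponential causes, d_j observed failures per cause and total time on test T, a non-informative (here Jeffreys-type) prior gives Gamma(d_j, T) posteriors for the cause-specific rates, from which the net and crude failure probabilities follow by Monte Carlo. The counts and exposure are invented.

```python
# Bayesian exponential competing risks with a Jeffreys-type prior:
# posterior lambda_j ~ Gamma(d_j, rate=T); net and crude probabilities
# at a horizon t follow from posterior draws.
import numpy as np

rng = np.random.default_rng(6)
d = np.array([14, 6, 3])      # failures from causes 1..3
T = 812.0                     # total time on test (unit-years)
t = 10.0                      # horizon for the probabilities

lam = rng.gamma(shape=d, scale=1.0 / T, size=(20000, 3))  # posterior draws
lam_tot = lam.sum(axis=1)

net = 1 - np.exp(-lam * t)                                # only cause j present
crude = (lam / lam_tot[:, None]) * (1 - np.exp(-lam_tot * t))[:, None]

for j in range(3):
    print(f"cause {j + 1}: net P = {net[:, j].mean():.3f}, "
          f"crude P = {crude[:, j].mean():.3f}")
print("reliability at t:", np.exp(-lam_tot * t).mean())
```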
Measurements of exciton diffusion by degenerate four-wave mixing in CdS1-xSex
NASA Astrophysics Data System (ADS)
Schwab, H.; Pantke, K.-H.; Hvam, J. M.; Klingshirn, C.
1992-09-01
We performed transient-grating experiments to study the diffusion of excitons in CdS1-xSex mixed crystals. The decay of the initially created exciton density grating is well described for t<=1 ns by a stretched-exponential function. For later times this decay changes over to a behavior that is well fitted by a simple exponential function. During resonant excitation of the localized states, we find the diffusion coefficient (D) to be considerably smaller than in the binary compounds CdSe and CdS. At 4.2 K, D is below our experimental resolution which is about 0.025 cm2/s. With increasing lattice temperature (Tlattice) the diffusion coefficient increases. It was therefore possible to prove, in a diffusion experiment, that at Tlattice<=5 K the excitons are localized, while the exciton-phonon interaction leads to a delocalization and thus to the onset of diffusion. It was possible to deduce the diffusion coefficient of the extended excitons as well as the energetic position of the mobility edge.
Transverse mixing of ellipsoidal particles in a rotating drum
NASA Astrophysics Data System (ADS)
He, Siyuan; Gan, Jieqing; Pinson, David; Zhou, Zongyan
2017-06-01
Rotating drums are widely used in industry for mixing, milling, coating and drying processes. In the past decades, mixing of granular materials in rotating drums has been extensively investigated, but most studies are based on spherical particles. Particle shape has an influence on the flow behaviour and thus the mixing behaviour, though the shape effect has as yet received limited study. In this work, the discrete element method (DEM) is employed to study the transverse mixing of ellipsoidal particles in a rotating drum. The effects of aspect ratio and rotating speed on mixing quality and mixing rate are investigated. The results show that the mixing index increases exponentially with time for both spheres and ellipsoids. Particles with various aspect ratios are able to reach well-mixed states after sufficient revolutions in the rolling or cascading regime. Ellipsoids show a higher mixing rate when the rotational speed is set between 25 and 40 rpm. The relationship between mixing rate and aspect ratio of ellipsoids is established, demonstrating that particles with aspect ratios of 0.5 and 2.0 achieve the highest mixing rates. Increasing the rotating speed from 15 rpm to 40 rpm does not necessarily increase the mixing speed of spheres, while a monotonic increase is observed for ellipsoids.
The eigenmode perspective of NMR spin relaxation in proteins
NASA Astrophysics Data System (ADS)
Shapiro, Yury E.; Meirovitch, Eva
2013-12-01
We developed in recent years the two-body (protein and probe) coupled-rotator slowly relaxing local structure (SRLS) approach for elucidating protein dynamics from NMR spin relaxation. So far we used as descriptors the set of physical parameters that enter the SRLS model. They include the global (protein-related) diffusion tensor, D1, the local (probe-related) diffusion tensor, D2, and the local coupling/ordering potential, u. As is common in analyses based on mesoscopic dynamic models, these parameters have been determined with data-fitting techniques. In this study, we describe structural dynamics in terms of the eigenmodes comprising the SRLS time correlation functions (TCFs) generated by using the best-fit parameters as input to the Smoluchowski equation. An eigenmode is a weighted exponential with decay constant given by an eigenvalue of the Smoluchowski operator, and weighting factor determined by the corresponding eigenvector. Obviously, both quantities depend on the SRLS parameters as determined by the SRLS model. Unlike the set of best-fit parameters, the eigenmodes represent patterns of motion of the probe-protein system. The following new information is obtained for the typical probe, the 15N-1H bond. Two eigenmodes, associated with the protein and the probe, dominate when the time scale separation is large (i.e., D2 ≫ D1), the tensorial properties are simple, and the local potential is either very strong or very weak. When the potential exceeds these limits while the remaining conditions are preserved, new eigenmodes arise. The multi-exponentiality of the TCFs is associated in this case with the restricted nature of the local motion. When the time scale separation is no longer large, the rotational degrees of freedom of the protein and the probe become statistically dependent (coupled dynamically). The multi-exponentiality of the TCFs is associated in this case with the restricted nature of both the local and the global motion. The effects of local diffusion axiality, potential strength, and extent of mode-coupling on the eigenmode setup are investigated. We detect largely global motional or largely local motional eigenmodes. In addition, we detect mixed eigenmodes associated with correlated/prograde or anti-correlated/retrograde rotations of the global (D1) and local (D2) motional modes. The eigenmode paradigm is applied to N-H bond dynamics in the β-sheet residue K19 and the α-helix residue A34 of the third immunoglobulin-binding domain of streptococcal protein G. The largest contribution to the SRLS TCFs is made by mixed anti-correlated D1 and D2 eigenmodes. The next largest contribution is made by D1-dominated eigenmodes. Eigenmodes dominated by the local motion contribute appreciably to A34 and marginally to K19. Correlated D1 and D2 eigenmodes contribute exclusively to K19 and do not contribute above 1% to A34. The differences between K19 and A34 are delineated and rationalized in terms of the best-fit SRLS parameters and mode-mixing. It may be concluded that eigenmode analysis is complementary and supplementary to data-fitting-based analysis.
Mixed effect Poisson log-linear models for clinical and epidemiological sleep hypnogram data
Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian; Punjabi, Naresh M.
2013-01-01
Bayesian Poisson log-linear multilevel models scalable to epidemiological studies are proposed to investigate population variability in sleep state transition rates. Hierarchical random effects are used to account for pairings of subjects and repeated measures within those subjects, as comparing diseased to non-diseased subjects while minimizing bias is of importance. Essentially, non-parametric piecewise constant hazards are estimated and smoothed, allowing for time-varying covariates and segment of the night comparisons. The Bayesian Poisson regression is justified through a re-derivation of a classical algebraic likelihood equivalence of Poisson regression with a log(time) offset and survival regression assuming exponentially distributed survival times. Such re-derivation allows synthesis of two methods currently used to analyze sleep transition phenomena: stratified multi-state proportional hazards models and log-linear models with GEE for transition counts. An example data set from the Sleep Heart Health Study is analyzed. Supplementary material includes the analyzed data set as well as the code for a reproducible analysis. PMID:22241689
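The "classical algebraic likelihood equivalence" invoked above can be sketched in a few lines (notation mine): for subject i with event indicator d_i, follow-up t_i and exponential hazard λ_i = exp(x_i'β),

```latex
% Exponential survival log-likelihood rewritten as a Poisson one with offset:
\log L(\beta)
  = \sum_i \left[ d_i \log \lambda_i - \lambda_i t_i \right]
  = \sum_i \left[ d_i \log \mu_i - \mu_i \right] - \sum_i d_i \log t_i ,
\qquad
\mu_i = \lambda_i t_i = \exp\!\left( x_i^\top \beta + \log t_i \right).
```

Since the last sum does not involve β, maximizing over β is identical to maximizing a Poisson log-likelihood for d_i with mean μ_i, i.e., a Poisson regression with a log(time) offset.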
Monreal, Carlos M; Chahal, Amarpreet; Rowland, Owen; Smith, Myron; Schnitzer, Morris
2014-01-01
Little is known about the fungal metabolism of nC10 and nC11 fatty acids and their conversion into lipids. A mixed batch culture of soil fungi, T. koningii and P. janthinellum, was grown on undecanoic acid (UDA), a mixture of UDA and potato dextrose broth (UDA+PDB), and PDB alone to examine their metabolic conversion during growth. We quantified seven intracellular and extracellular lipid classes using Iatroscan thin-layer chromatography with flame ionization detection (TLC-FID). Gas chromatography with flame ionization detection (GC-FID) was used to quantify 42 individual fatty acids. Per 150 mL culture, the mixed fungal culture grown on UDA+PDB produced the highest amount of intracellular (531 mg) and extracellular (14.7 mg) lipids during the exponential phase. The content of total intracellular lipids represented 25% of the total biomass-carbon, or 10% of the total biomass dry weight produced. Fatty acids made up the largest class of intracellular lipids (457 mg/150 mL culture) and they were synthesized at a rate of 2.4 mg/h during the exponential phase, and decomposed at a rate of 1.8 mg/h during the stationary phase, when UDA+PDB was the carbon source. Palmitic acid (C16:0), stearic acid (C18:0), oleic acid (C18:1), linoleic acid (C18:2) and vaccenic acid (C18:1) accounted for >80% of the total intracellular fatty acids. During exponential growth on UDA+PDB, hydrocarbons were the largest pool of all extracellular lipids (6.5 mg), and intracellularly they were synthesized at a rate of 64 μg/h. The mixed fungal species culture of T. koningii and P. janthinellum produced many lipids for potential use as industrial feedstocks or bioproducts in biorefineries.
NASA Astrophysics Data System (ADS)
Lin, C.; Accoroni, S.; Glibert, P. M.
2016-02-01
Mixotrophic grazing activity can be promoted in response to nutrient-enriched prey, and this nutritional strategy is thought to be a factor in promoting growth of some toxic microalgae under nutrient-limiting conditions for the mixotroph. However, it is unclear how the nutritional condition of the predator or the prey affects mixotrophic metabolism and, consequently, potential effects on the mixotroph that may, in turn, affect early life stages of bivalves. In laboratory experiments, we measured the grazing rate of Karlodinium veneficum on Rhodomonas salina as prey, under varied nitrogen (N): phosphorus (P) stoichiometry of both predator and prey, and we compared the nutritionally-regulated effects of K. veneficum on larvae of the eastern oyster (Crassostrea virginica). Nutritionally sufficient, N-deficient, and P-deficient K. veneficum at two growth stages (exponential and stationary) were mixed with nutritionally sufficient, N-deficient, and P-deficient R. salina, in a factorial experimental design. Regardless of its nutritional condition, K. veneficum showed significantly higher grazing rates with N-rich prey in the exponential stage and P-rich prey in the stationary stage. Maximum grazing rates of N-deficient K. veneficum on N-rich prey in the exponential stage were 20-fold larger than those of nutritionally sufficient K. veneficum on N-rich prey. Significantly increased larval mortality was observed in 2-day exposures to monocultures of P-deficient K. veneficum at both stages. When mixed with P-deficient (or N-rich) prey, the presence of K. veneficum resulted in significantly enhanced larval mortality, but this was not the case for N-deficient K. veneficum in the exponential stage. Mixotrophic feeding may not only provide K. veneficum with the nutritional flexibility needed to sustain blooms but also appears to increase its negative effects on the survival of oyster larvae.
Cournane, S; León Vintró, L; Mitchell, P I
2010-11-01
A microcosm laboratory experiment was conducted to determine the impact of biological reworking by the ragworm Nereis diversicolor on the redistribution of particle-bound radionuclides deposited at the sediment-water interface. Over the course of the 40-day experiment, as much as 35% of a (137)Cs-labelled particulate tracer deposited on the sediment surface was redistributed to depths of up to 11 cm by the polychaete. Three different reworking models were employed to model the profiles and quantify the biodiffusion and biotransport coefficients: a gallery-diffuser model, a continuous sub-surface egestion model and a biodiffusion model. Although the biodiffusion coefficients obtained for each model were quite similar, the continuous sub-surface egestion model provided the best fit to the data. The average biodiffusion coefficient, at 1.8 +/- 0.9 cm(2) y(-1), is in good agreement with the values quoted by other workers on the bioturbation effects of this polychaete species. The corresponding value for the biotransport coefficient was found to be 0.9 +/- 0.4 cm y(-1). The effects of non-local mixing were incorporated in a model to describe the temporal evolution of measured (99)Tc and (60)Co radionuclide sediment profiles in the eastern Irish Sea, influenced by radioactive waste discharged from the Sellafield reprocessing plant. Reworking conditions in the sediment column were simulated by considering an upper mixed layer, an exponentially decreasing diffusion coefficient, and appropriate biotransport coefficients to account for non-local mixing. The diffusion coefficients calculated from the (99)Tc and (60)Co cores were in the range 2-14 cm(2) y(-1), which are consistent with the values found by other workers in the same marine area, while the biotransport coefficients were similar to those obtained for a variety of macrobenthic organisms in controlled laboratories and field studies.
Mathematical Modeling of Extinction of Inhomogeneous Populations
Karev, G.P.; Kareva, I.
2016-01-01
Mathematical models of population extinction have a variety of applications in such areas as ecology, paleontology and conservation biology. Here we propose and investigate two types of sub-exponential models of population extinction. Unlike the more traditional exponential models, populations in sub-exponential models have a finite life duration. In the first model, the population is assumed to be composed of clones that are independent of each other. In the second model, we assume that the size of the population as a whole decreases according to the sub-exponential equation. We then investigate the “unobserved heterogeneity”, i.e. the underlying inhomogeneous population model, and calculate the distribution of frequencies of clones for both models. We show that the dynamics of frequencies in the first model is governed by the principle of minimum of Tsallis information loss. In the second model, the notion of “internal population time” is proposed; with respect to the internal time, the dynamics of frequencies is governed by the principle of minimum of Shannon information loss. The results of this analysis show that the principle of minimum of information loss is the underlying law for the evolution of a broad class of models of population extinction. Finally, we propose a possible application of this modeling framework to mechanisms underlying time perception. PMID:27090117
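The abstract does not reproduce the sub-exponential equation itself; one standard form with the stated finite life duration is the power-law decay below (an illustration, not necessarily the authors' exact equation):

```latex
% Sub-exponential (power-law) decay with 0 < \alpha < 1 reaches zero in
% finite time, unlike exponential decay:
\frac{dN}{dt} = -k N^{\alpha}
\;\;\Longrightarrow\;\;
N(t) = \left[ N_0^{\,1-\alpha} - (1-\alpha)\,k\,t \right]^{\frac{1}{1-\alpha}},
\qquad
N(t^{*}) = 0 \;\text{ at }\; t^{*} = \frac{N_0^{\,1-\alpha}}{(1-\alpha)\,k}.
```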
NASA Astrophysics Data System (ADS)
Zhang, Yong; Papelis, Charalambos; Sun, Pengtao; Yu, Zhongbo
2013-08-01
Particle-based models and continuum models have been developed to quantify mixing-limited bimolecular reactions for decades. Effective model parameters control reaction kinetics, but the relationship between the particle-based model parameter (such as the interaction radius R) and the continuum model parameter (i.e., the effective rate coefficient Kf) remains obscure. This study attempts to evaluate and link R and Kf for the second-order bimolecular reaction in both the bulk and the sharp-concentration-gradient (SCG) systems. First, in the bulk system, the agent-based method reveals that R remains constant for irreversible reactions and decreases nonlinearly in time for a reversible reaction, while mathematical analysis shows that Kf transitions from an exponential to a power-law function. A qualitative link between R and Kf can then be built for the irreversible reaction with equal initial reactant concentrations. Second, in the SCG system with a reaction interface, numerical experiments show that when R and Kf decline as t-1/2 (for example, to account for the reactant front expansion), the two models capture the transient power-law growth of product mass, and their effective parameters have the same functional form. Finally, revisiting laboratory experiments further shows that the best-fit factors in R and Kf are of the same order, and both models can efficiently describe the chemical kinetics observed in the SCG system. Effective model parameters used to describe reaction kinetics may therefore be linked directly, where the exact linkage may depend on the chemical and physical properties of the system.
Using Exponential Smoothing to Specify Intervention Models for Interrupted Time Series.
ERIC Educational Resources Information Center
Mandell, Marvin B.; Bretschneider, Stuart I.
1984-01-01
The authors demonstrate how exponential smoothing can play a role in the identification of the intervention component of an interrupted time-series design model that is analogous to the role that the sample autocorrelation and partial autocorrelation functions serve in the identification of the noise portion of such a model. (Author/BW)
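A minimal simple-exponential-smoothing sketch of the idea: one-step-ahead forecasts track the pre-intervention level, so a sustained post-intervention shift stands out against them. The series and the smoothing constant are invented.

```python
# Simple exponential smoothing producing one-step-ahead forecasts; a
# post-intervention level shift shows up as a run of large forecast errors.
def ses(series, alpha=0.3):
    """Simple exponential smoothing; returns one-step-ahead forecasts."""
    forecasts = [series[0]]              # initialize with the first value
    for y in series[:-1]:
        forecasts.append(alpha * y + (1 - alpha) * forecasts[-1])
    return forecasts

pre = [52, 50, 55, 53, 51, 54, 52, 53]       # before the intervention
post = [60, 61, 59, 62, 60]                  # after the intervention
fit = ses(pre + post)
for t, (y, f) in enumerate(zip(pre + post, fit)):
    flag = "  <- intervention" if t == len(pre) else ""
    print(f"t={t:2d}  y={y}  forecast={f:5.1f}{flag}")
```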
USDA-ARS?s Scientific Manuscript database
A new mechanistic growth model was developed to describe microbial growth under isothermal conditions. The new mathematical model was derived from the basic observation of bacterial growth that may include lag, exponential, and stationary phases. With this model, the lag phase duration and exponen...
Search for gamma-ray spectral modulations in Galactic pulsars
NASA Astrophysics Data System (ADS)
Majumdar, Jhilik; Calore, Francesca; Horns, Dieter
2018-04-01
Well-motivated extensions of the standard model predict ultra-light and fundamental pseudo-scalar particles (e.g., axions or axion-like particles: ALPs). Similarly to the Primakoff-effect for axions, ALPs can mix with photons and consequently be searched for in laboratory experiments and with astrophysical observations. Here, we search for energy-dependent modulations of high-energy gamma-ray spectra that are tell-tale signatures of photon-ALP mixing. To this end, we analyze the data recorded with the Fermi-LAT from Galactic pulsars selected to have a line of sight crossing spiral arms at a large pitch angle. The large-scale Galactic magnetic field traces the shape of spiral arms, such that a sizable photon-ALP conversion probability is expected for the sources considered. For the nearby Vela pulsar, the energy spectrum is well described by a smooth model spectrum (a power-law with a sub-exponential cut-off), while for the six selected Galactic pulsars, a common fit of the ALP parameters improves the goodness of fit in comparison to a smooth model spectrum with a significance of 4.6 σ. We determine the most-likely values for mass ma and coupling gaγγ to be ma = (3.6 −0.2/+0.5 (stat.) ± 0.2 (syst.)) neV and gaγγ = (2.3 −0.4/+0.3 (stat.) ± 0.4 (syst.)) × 10^−10 GeV^−1. In the error budget, we consider instrumental effects, scaling of the adopted Galactic magnetic field model (± 20%), and uncertainties on the distance of individual sources. The best-fit parameters are a factor of ≈ 3 larger than the current best limit on solar ALP generation obtained with the CAST helioscope, although known modifications of the photon-ALP mixing in the high-density solar environment could provide a plausible explanation for the apparent tension between the helioscope bound and the indication for photon-ALP mixing reported here.
Chemical Continuous Time Random Walks
NASA Astrophysics Data System (ADS)
Aquino, T.; Dentz, M.
2017-12-01
Traditional methods for modeling solute transport through heterogeneous media employ Eulerian schemes to solve for solute concentration. More recently, Lagrangian methods have removed the need for spatial discretization through the use of Monte Carlo implementations of Langevin equations for solute particle motions. While there have been recent advances in modeling chemically reactive transport with recourse to Lagrangian methods, these remain less developed than their Eulerian counterparts, and many open problems such as efficient convergence and reconstruction of the concentration field remain. We explore a different avenue and consider the question: In heterogeneous chemically reactive systems, is it possible to describe the evolution of macroscopic reactant concentrations without explicitly resolving the spatial transport? Traditional Kinetic Monte Carlo methods, such as the Gillespie algorithm, model chemical reactions as random walks in particle number space, without the introduction of spatial coordinates. The inter-reaction times are exponentially distributed under the assumption that the system is well mixed. In real systems, transport limitations lead to incomplete mixing and decreased reaction efficiency. We introduce an arbitrary inter-reaction time distribution, which may account for the impact of incomplete mixing. This process defines an inhomogeneous continuous time random walk in particle number space, from which we derive a generalized chemical Master equation and formulate a generalized Gillespie algorithm. We then determine the modified chemical rate laws for different inter-reaction time distributions. We trace Michaelis-Menten-type kinetics back to finite-mean delay times, and predict time-nonlocal macroscopic reaction kinetics as a consequence of broadly distributed delays. Non-Markovian kinetics exhibit weak ergodicity breaking and show key features of reactions under local non-equilibrium.
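A hedged sketch of the generalization described above: the classical Gillespie step draws exponential inter-reaction times (the well-mixed case), and swapping in a broad (here Pareto) waiting-time draw mimics delays induced by incomplete mixing. The bimolecular system A + B → C, the 1/rate scaling of the delay, and all parameter values are illustrative.

```python
# Gillespie-style simulation of A + B -> C with either exponential
# (well-mixed) or heavy-tailed (incompletely mixed) inter-reaction times.
import numpy as np

rng = np.random.default_rng(7)

def gillespie(nA, nB, k, t_max, delay="exp", alpha=1.5):
    t, nC, history = 0.0, 0, []
    while t < t_max and nA > 0 and nB > 0:
        rate = k * nA * nB                      # propensity of A + B -> C
        if delay == "exp":                      # classical, well-mixed case
            dt = rng.exponential(1.0 / rate)
        else:                                   # broad (Pareto) delays, mean 3/rate
            dt = (1.0 / rate) * (rng.pareto(alpha) + 1.0)
        t += dt
        nA, nB, nC = nA - 1, nB - 1, nC + 1     # one reaction fires
        history.append((t, nC))
    return history

for mode in ("exp", "pareto"):
    h = gillespie(500, 500, 1e-4, 200.0, delay=mode)
    print(f"{mode:6s}: {len(h)} reactions by t = {min(h[-1][0], 200):.0f}")
```

Broad delays slow the effective kinetics, which is the qualitative route from incomplete mixing to the time-nonlocal macroscopic rate laws mentioned in the abstract.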
Magin, Richard L.; Li, Weiguo; Velasco, M. Pilar; Trujillo, Juan; Reiter, David A.; Morgenstern, Ashley; Spencer, Richard G.
2011-01-01
We present a fractional-order extension of the Bloch equations to describe anomalous NMR relaxation phenomena (T1 and T2). The model has solutions in the form of Mittag-Leffler and stretched exponential functions that generalize conventional exponential relaxation. Such functions have been shown by others to be useful for describing dielectric and viscoelastic relaxation in complex, heterogeneous materials. Here, we apply these fractional-order T1 and T2 relaxation models to experiments performed at 9.4 and 11.7 Tesla on type I collagen gels, chondroitin sulfate mixtures, and to bovine nasal cartilage (BNC), a largely isotropic and homogeneous form of cartilage. The results show that the fractional-order analysis captures important features of NMR relaxation that are typically described by multi-exponential decay models. We find that the T2 relaxation of BNC can be described in a unique way by a single fractional-order parameter (α), in contrast to the lack of uniqueness of multi-exponential fits in the realistic setting of a finite signal-to-noise ratio. No anomalous behavior of T1 was observed in BNC. In the single-component gels, for T2 measurements, increasing the concentration of the largest components of cartilage matrix, collagen and chondroitin sulfate, results in a decrease in α, reflecting a more restricted aqueous environment. The quality of the curve fits obtained using Mittag-Leffler and stretched exponential functions are in some cases superior to those obtained using mono- and bi-exponential models. In both gels and BNC, α appears to account for microstructural complexity in the setting of an altered distribution of relaxation times. This work suggests the utility of fractional-order models to describe T2 NMR relaxation processes in biological tissues. PMID:21498095
Teaching the Verhulst Model: A Teaching Experiment in Covariational Reasoning and Exponential Growth
ERIC Educational Resources Information Center
Castillo-Garsow, Carlos
2010-01-01
Both Thompson and the duo of Confrey and Smith describe how students might be taught to build "ways of thinking" about exponential behavior by coordinating the covariation of two changing quantities, however, these authors build exponential behavior from different meanings of covariation. Confrey and Smith advocate beginning with discrete additive…
Review of "Going Exponential: Growing the Charter School Sector's Best"
ERIC Educational Resources Information Center
Garcia, David
2011-01-01
This Progressive Policy Institute report argues that charter schools should be expanded rapidly and exponentially. Citing exponential growth organizations, such as Starbucks and Apple, as well as the rapid growth of molds, viruses and cancers, the report advocates for similar growth models for charter schools. However, there is no explanation of…
McKellar, Robin C
2008-01-15
Developing accurate mathematical models to describe the pre-exponential lag phase in food-borne pathogens presents a considerable challenge to food microbiologists. While the growth rate is influenced by current environmental conditions, the lag phase is additionally affected by the history of the inoculum. A deeper understanding of the physiological changes taking place during the lag phase would improve the accuracy of models, and in earlier studies a strain of Pseudomonas fluorescens containing the Tn7-luxCDABE gene cassette regulated by the rRNA promoter rrnB P2 was used to measure the influence of starvation, growth temperature and sub-lethal heating on promoter expression and subsequent growth. The present study expands the models developed earlier to include a model which describes the change from exponential to linear increase in promoter expression with time when the exponential phase of growth commences. A two-phase linear model with Poisson weighting was used to estimate the lag (LPDLin) and the rate (RLin) for this linear increase in bioluminescence. The Spearman rank correlation coefficient (r=0.830) between the LPDLin and the growth lag phase (LPDOD) was extremely significant (P
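A hedged sketch of a two-phase linear fit of the kind described above: the signal is flat until a lag time and then rises linearly, and a Poisson-style weighting (uncertainty proportional to the square root of the counts) is passed to the fit. The function name, the data, and the weighting choice are illustrative assumptions, not the study's exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_phase(t, t_lag, base, rate):
    # Flat baseline until t_lag, then a linear increase.
    return np.where(t < t_lag, base, base + rate * (t - t_lag))

t = np.linspace(0, 20, 60)                       # hours (assumed)
rng = np.random.default_rng(13)
counts = two_phase(t, 6.0, 50.0, 40.0) + rng.normal(0, 5, t.size)

sigma = np.sqrt(np.clip(counts, 1, None))        # Poisson-style weights
popt, _ = curve_fit(two_phase, t, counts, p0=(5, 40, 30), sigma=sigma)
print("estimated lag: %.2f h, rate: %.1f per h" % (popt[0], popt[2]))
```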
Experimental evidence of chaotic mixing at pore scale in 3D porous media
NASA Astrophysics Data System (ADS)
Heyman, J.; Turuban, R.; Jimenez Martinez, J.; Lester, D. R.; Meheust, Y.; Le Borgne, T.
2017-12-01
Mixing of dissolved chemical species in porous media plays a central role in many natural and industrial processes, such as contaminant transport and degradation in soils, oxygen and nitrate delivery in river beds, clogging in geothermal systems, and CO2 sequestration. In particular, incomplete mixing at the pore scale may strongly affect the spatio-temporal distribution of reaction rates in soils and rocks, questioning the validity of diffusion-reaction models at the Darcy scale. Recent theoretical [1] and numerical [2] studies of flow in idealized porous media have suggested that fluid mixing may be chaotic at the pore scale, hence pointing to a whole new set of models for mixing and reaction in porous media. However, so far this remained to be confirmed experimentally. Here we present experimental evidence of the chaotic nature of transverse mixing at the pore scale in three-dimensional porous media. We designed a novel experimental setup allowing high resolution pore scale imaging of the structure of a tracer plume in porous media columns consisting of 7, 10 and 20 mm glass bead packings. We conjointly used refractive index matching techniques, laser induced fluorescence and a moving laser sheet to reconstruct the shape of a steady tracer plume as it gets deformed by the porous media flow. In this talk, we focus on the transverse behavior of mixing, that is, on the plane orthogonal to the main flow direction, in the limit of high Péclet numbers (diffusion is negligible). Moving away from the injection point, the plume cross-section quickly turns into complex, interlaced, lamellar structures. These structures elongate at an exponential rate, characteristic of a chaotic system, which can be quantified by an average Lyapunov exponent. We finally discuss the origin of this chaotic behavior and its most significant consequences for upscaling mixing and reactive transport in porous media. References: [1] D. R. Lester, G. Metcalfe, M. G. Trefry, Physical Review Letters, 111, 174101 (2013) [2] R. Turuban, D. R. Lester, T. Le Borgne, and Y. Méheust (2017), under review.
NASA Astrophysics Data System (ADS)
Lengline, O.; Marsan, D.; Got, J.; Pinel, V.
2007-12-01
The evolution of the seismicity at three basaltic volcanoes (Kilauea, Mauna Loa and Piton de la Fournaise) is analysed during phases of magma accumulation. We show that the VT seismicity during these time periods is characterized by an exponential increase on long time scales (years). Such an exponential acceleration can be explained by a model of seismicity forced by the replenishment of a magmatic reservoir. The increase in stress in the edifice caused by this replenishment is modeled. This stress history leads to a cumulative number of damage events, i.e. VT earthquakes, following the same exponential increase as found for the seismicity. A long-term seismicity precursor is thus detected at basaltic volcanoes. Although this precursory signal is not able to predict the onset times of future eruptions (as no diverging point is present in the model), it may help mitigate volcanic hazards.
Multiserver Queueing Model subject to Single Exponential Vacation
NASA Astrophysics Data System (ADS)
Vijayashree, K. V.; Janani, B.
2018-04-01
A multi-server queueing model subject to a single exponential vacation is considered. Arrivals join the queue according to a Poisson process and service takes place according to an exponential distribution. Whenever the system becomes empty, all the servers go on vacation and return after a fixed interval of time. The servers then start providing service if there are waiting customers; otherwise they remain idle until the next busy period begins. The vacation times are also assumed to be exponentially distributed. In this paper, the stationary and transient probabilities for the number of customers during the idle and functional states of the servers are obtained explicitly. Numerical illustrations are added to visualize the effect of various parameters.
Vadeby, Anna; Forsman, Åsa
2017-06-01
This study investigated the effect of applying two aggregated models (the Power model and the Exponential model) to individual vehicle speeds instead of mean speeds. This is of particular interest when the measure introduced affects different parts of the speed distribution differently. The aim was to examine how the estimated overall risk was affected when assuming the models are valid on an individual vehicle level. Speed data from two applications of speed measurements were used in the study: an evaluation of movable speed cameras and a national evaluation of new speed limits in Sweden. The results showed that when applied on the individual vehicle speed level compared with the aggregated level, there was essentially no difference between the two approaches for the Power model in the case of injury accidents. However, for fatalities the difference was greater, especially for roads with new cameras where those driving fastest reduced their speed the most. For the case with new speed limits, the individual approach estimated a somewhat smaller effect, reflecting that changes in the 15th percentile (P15) were somewhat larger than changes in P85 in this case. For the Exponential model there was also a clear, although small, difference between applying the model to mean speed changes and individual vehicle speed changes when speed cameras were used. This applied to both injury accidents and fatalities. There were also larger effects for the Exponential model than for the Power model, especially for injury accidents. In conclusion, applying the Power or Exponential model to individual vehicle speeds is an alternative that provides reasonable results in relation to the original Power and Exponential models, but more research is needed to clarify the shape of the individual risk curve. It is not surprising that the impact on severe traffic crashes was larger in situations where those driving fastest reduced their speed the most. Further investigations on use of the Power and/or the Exponential model at the individual vehicle level would require more data on the individual level from a range of international studies. Copyright © 2017 Elsevier Ltd. All rights reserved.
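A minimal sketch of the aggregated-versus-individual contrast described above, using the usual forms of Nilsson's Power model, (v1/v0)^p, and the Exponential model, exp(beta*(v1-v0)). The exponent, coefficient, and synthetic speed samples are assumptions chosen so that the fastest drivers slow the most, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
v_before = rng.normal(90, 10, 10_000)                          # km/h (assumed)
v_after = v_before - np.clip((v_before - 85) * 0.4, 0, None)   # fastest slow most

p = 4.0      # Power model exponent often used for fatalities (assumed here)
beta = 0.08  # Exponential model coefficient per km/h (assumed)

# Aggregated: apply the models to the change in mean speed.
rr_power_mean = (v_after.mean() / v_before.mean()) ** p
rr_exp_mean = np.exp(beta * (v_after.mean() - v_before.mean()))

# Individual: average the per-vehicle risk ratios instead.
rr_power_ind = np.mean((v_after / v_before) ** p)
rr_exp_ind = np.mean(np.exp(beta * (v_after - v_before)))

print(f"Power:       mean-based RR={rr_power_mean:.3f}, individual RR={rr_power_ind:.3f}")
print(f"Exponential: mean-based RR={rr_exp_mean:.3f}, individual RR={rr_exp_ind:.3f}")
```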
Forecasting electricity usage using univariate time series models
NASA Astrophysics Data System (ADS)
Hock-Eam, Lim; Chee-Yin, Yip
2014-12-01
Electricity is one of the important energy sources. A sufficient supply of electricity is vital to support a country's development and growth. Due to changing socio-economic characteristics, increasing competition and deregulation of the electricity supply industry, electricity demand forecasting is even more important than before. It is imperative to evaluate and compare the predictive performance of various forecasting methods. This will provide further insights into the weaknesses and strengths of each method. In the literature, there is mixed evidence on the best forecasting methods for electricity demand. This paper aims to compare the predictive performance of univariate time series models for forecasting electricity demand using monthly data on maximum electricity load in Malaysia from January 2003 to December 2013. Results reveal that the Box-Jenkins method produces the best out-of-sample predictive performance. On the other hand, the Holt-Winters exponential smoothing method is a good forecasting method for in-sample predictive performance.
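A sketch of this kind of comparison on a synthetic monthly load series, using statsmodels' Holt-Winters smoothing against a Box-Jenkins (ARIMA) fit. The series, model orders, and train/test split are assumptions for illustration; they do not reproduce the paper's data or results.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing
from statsmodels.tsa.arima.model import ARIMA

idx = pd.date_range("2003-01", periods=132, freq="MS")
t = np.arange(132)
y = pd.Series(5000 + 8 * t + 300 * np.sin(2 * np.pi * t / 12)
              + np.random.default_rng(2).normal(0, 50, 132), index=idx)

train, test = y[:-12], y[-12:]                      # hold out the last year
hw = ExponentialSmoothing(train, trend="add", seasonal="add",
                          seasonal_periods=12).fit()
arima = ARIMA(train, order=(1, 1, 1), seasonal_order=(1, 0, 0, 12)).fit()

for name, fc in [("Holt-Winters", hw.forecast(12)),
                 ("ARIMA", arima.forecast(12))]:
    rmse = np.sqrt(((fc - test) ** 2).mean())
    print(name, "out-of-sample RMSE:", round(rmse, 1))
```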
Azim, M Ekram; Kumarappah, Ananthavalli; Bhavsar, Satyendra P; Backus, Sean M; Arhonditsis, George
2011-03-15
The temporal trends of total mercury (THg) in four fish species in Lake Erie were evaluated based on 35 years of fish contaminant data. Our Bayesian statistical approach consists of three steps aiming to address different questions. First, we used the exponential and mixed-order decay models to assess the declining rates in four intensively sampled fish species, i.e., walleye (Stizostedion vitreum), yellow perch (Perca flavescens), smallmouth bass (Micropterus dolomieui), and white bass (Morone chrysops). Because the two models postulate monotonic decrease of the THg levels, we included first- and second-order random walk terms in our statistical formulations to accommodate nonmonotonic patterns in the data time series. Our analysis identified a recent increase in the THg concentrations, particularly after the mid-1990s. In the second step, we used double exponential models to quantify the relative magnitude of the THg trends depending on the type of data used (skinless-boneless fillet versus whole fish data) and the fish species examined. The observed THg concentrations were significantly higher in skinless boneless fillet than in whole fish portions, while the whole fish portions of walleye exhibited faster decline rates and slower rates of increase relative to the skinless boneless fillet data. Our analysis also shows lower decline rates and higher rates of increase in walleye relative to the other three fish species examined. The food web structural shifts induced by the invasive species (dreissenid mussels and round goby) may be associated with the recent THg trends in Lake Erie fish.
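A hedged sketch of the first step described above, reduced to its simplest form: fitting an exponential decay C(t) = C0*exp(-k*t) to a THg time series. The data are synthetic, and the paper's actual analysis is Bayesian with random-walk terms that are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

years = np.arange(1975, 2010)
t = years - years[0]
rng = np.random.default_rng(3)
thg = 0.6 * np.exp(-0.04 * t) + rng.normal(0, 0.02, t.size)  # mg/kg (assumed)

decay = lambda t, c0, k: c0 * np.exp(-k * t)
(c0, k), _ = curve_fit(decay, t, thg, p0=(0.5, 0.05))
print(f"C0={c0:.2f} mg/kg, decline rate k={k:.3f} per yr")
```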
Rapidly Mixing Gibbs Sampling for a Class of Factor Graphs Using Hierarchy Width
De Sa, Christopher; Zhang, Ce; Olukotun, Kunle; Ré, Christopher
2016-01-01
Gibbs sampling on factor graphs is a widely used inference technique, which often produces good empirical results. Theoretical guarantees for its performance are weak: even for tree structured graphs, the mixing time of Gibbs may be exponential in the number of variables. To help understand the behavior of Gibbs sampling, we introduce a new (hyper)graph property, called hierarchy width. We show that under suitable conditions on the weights, bounded hierarchy width ensures polynomial mixing time. Our study of hierarchy width is in part motivated by a class of factor graph templates, hierarchical templates, which have bounded hierarchy width—regardless of the data used to instantiate them. We demonstrate a rich application from natural language processing in which Gibbs sampling provably mixes rapidly and achieves accuracy that exceeds human volunteers. PMID:27279724
Haslinger, Robert; Pipa, Gordon; Brown, Emery
2010-10-01
One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time-rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model's spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies on assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time-rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time-rescaling theorem that analytically corrects for the effects of finite resolution. This allows us to define a rescaled time that is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting generalized linear models to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false-positive rate of the KS test and greatly increasing the reliability of model evaluation based on the time-rescaling theorem.
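A minimal sketch of the continuous-time time-rescaling test discussed above: integrate the model intensity between successive spikes to rescale the ISIs, then KS-test the rescaled intervals against the unit-rate exponential. The intensity and spike train are simulated here, and for simplicity the true intensity stands in for the fitted model.

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(4)
dt = 0.001                                                  # 1 ms bins
lam = 20 + 10 * np.sin(2 * np.pi * np.arange(0, 10, dt))    # rate in Hz
spikes = rng.random(lam.size) < lam * dt                    # Bernoulli approx.

# Rescale: integrated intensity between successive spikes should be Exp(1).
cum = np.cumsum(lam * dt)
taus = np.diff(cum[np.flatnonzero(spikes)])

print(kstest(taus, "expon"))   # high p-value -> consistent with the model
```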
Automatic selection of arterial input function using tri-exponential models
NASA Astrophysics Data System (ADS)
Yao, Jianhua; Chen, Jeremy; Castro, Marcelo; Thomasson, David
2009-02-01
Dynamic Contrast Enhanced MRI (DCE-MRI) is one method for drug and tumor assessment. Selecting a consistent arterial input function (AIF) is necessary to calculate tissue and tumor pharmacokinetic parameters in DCE-MRI. This paper presents an automatic and robust method to select the AIF. The first stage is artery detection and segmentation, where knowledge about artery structure and dynamic signal intensity temporal properties of DCE-MRI is employed. The second stage is AIF model fitting and selection. A tri-exponential model is fitted for every candidate AIF using the Levenberg-Marquardt method, and the best fitted AIF is selected. Our method has been applied in DCE-MRIs of four different body parts: breast, brain, liver and prostate. The success rates in artery segmentation for 19 cases are 89.6%+/-15.9%. The pharmacokinetic parameters computed from the automatically selected AIFs are highly correlated with those from manually determined AIFs (R2=0.946, P(T<=t)=0.09). Our imaging-based tri-exponential AIF model demonstrated significant improvement over a previously proposed bi-exponential model.
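An illustrative sketch of the model-fitting stage: a tri-exponential function fitted with the Levenberg-Marquardt method (scipy's method="lm"). The functional form, time axis, and parameter values are assumptions for demonstration, not the authors' exact AIF model.

```python
import numpy as np
from scipy.optimize import curve_fit

def tri_exp(t, a1, b1, a2, b2, a3, b3):
    return a1*np.exp(-b1*t) + a2*np.exp(-b2*t) + a3*np.exp(-b3*t)

t = np.linspace(0.1, 5, 60)                        # minutes (assumed)
y = tri_exp(t, 5, 4.0, 2, 0.8, 0.5, 0.05)
y += np.random.default_rng(5).normal(0, 0.05, t.size)

p0 = (4, 3, 1, 1, 0.3, 0.1)                        # rough initial guesses
popt, _ = curve_fit(tri_exp, t, y, p0=p0, method="lm", maxfev=10000)
print(np.round(popt, 3))
```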
Comparison of kinetic model for biogas production from corn cob
NASA Astrophysics Data System (ADS)
Shitophyta, L. M.; Maryudi
2018-04-01
Energy demand increases every day, while energy sources, especially fossil energy, deplete increasingly. One of the solutions to overcome energy depletion is to provide renewable energies such as biogas. Biogas can be generated from corn cob and food waste. In this study, biogas production was carried out by solid-state anaerobic digestion. The steps of biogas production were the preparation of feedstock, the solid-state anaerobic digestion, and the measurement of biogas volume. This study was conducted at total solids (TS) contents of 20%, 22%, and 24%. The aim of this research was to compare kinetic models of biogas production from corn cob with food waste as a co-digestion, using linear, exponential, and first-order kinetic models. The results showed that the exponential equation had a better correlation than the linear equation on the ascending limb of the biogas production curve. On the contrary, the linear equation had a better correlation than the exponential equation on the descending limb. The correlation values for the first-order kinetic model were the smallest compared to the linear and exponential models.
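A hedged sketch of comparing linear and exponential fits on the rising limb of a cumulative biogas curve via R^2. The data and functional forms below are assumptions for illustration, not values from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

day = np.arange(1, 15)
vol = 12 * (np.exp(0.18 * day) - 1) + np.random.default_rng(6).normal(0, 3, day.size)

def r2(y, yhat):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

lin = np.polyfit(day, vol, 1)
(pa, pb), _ = curve_fit(lambda t, a, b: a * np.exp(b * t), day, vol, p0=(10, 0.1))

print("linear R^2     :", round(r2(vol, np.polyval(lin, day)), 3))
print("exponential R^2:", round(r2(vol, pa * np.exp(pb * day)), 3))
```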
Black, Dolores Archuleta; Robinson, William H.; Wilcox, Ian Zachary; ...
2015-08-07
Single event effects (SEE) are a reliability concern for modern microelectronics. Bit corruptions can be caused by single event upsets (SEUs) in the storage cells or by sampling single event transients (SETs) from a logic path. Likewise, an accurate prediction of soft error susceptibility from SETs requires good models to convert collected charge into compact descriptions of the current injection process. This paper describes a simple, yet effective, method to model the current waveform resulting from a charge collection event for SET circuit simulations. The model uses two double-exponential current sources in parallel, and the results illustrate why a conventional model based on one double-exponential source can be incomplete. Furthermore, a small set of logic cells with varying input conditions, drive strength, and output loading are simulated to extract the parameters for the dual double-exponential current sources. As a result, the parameters are based upon both the node capacitance and the restoring current (i.e., drive strength) of the logic cell.
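A sketch of the dual double-exponential idea: two double-exponential pulses summed in parallel to shape a single-event current transient with a fast prompt component and a slower tail. Time constants and amplitudes are illustrative assumptions, not extracted parameters from the paper.

```python
import numpy as np

def dbl_exp(t, i0, tau_rise, tau_fall):
    # Classic double-exponential pulse; zero before t = 0.
    return np.where(t >= 0, i0 * (np.exp(-t/tau_fall) - np.exp(-t/tau_rise)), 0.0)

t = np.linspace(0, 2e-9, 2001)                    # seconds
prompt = dbl_exp(t, 1.2e-3, 5e-12, 100e-12)       # fast component (A, assumed)
tail = dbl_exp(t, 0.2e-3, 50e-12, 600e-12)        # slow component (A, assumed)
i_total = prompt + tail                           # two sources in parallel
print("peak injected current: %.2e A" % i_total.max())
```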
NASA Astrophysics Data System (ADS)
Ma, Xiao; Zheng, Wei-Fan; Jiang, Bao-Shan; Zhang, Ji-Ye
2016-10-01
With the development of traffic systems, some issues such as traffic jams become more and more serious. Efficient traffic flow theory is needed to guide the overall controlling, organizing and management of traffic systems. On the basis of the cellular automata model and the traffic flow model with look-ahead potential, a new cellular automata traffic flow model with negative exponential weighted look-ahead potential is presented in this paper. By introducing the negative exponential weighting coefficient into the look-ahead potential and endowing the potential of vehicles closer to the driver with a greater coefficient, the modeling process is more suitable for the driver’s random decision-making process which is based on the traffic environment that the driver is facing. The fundamental diagrams for different weighting parameters are obtained by using numerical simulations which show that the negative exponential weighting coefficient has an obvious effect on high density traffic flux. The complex high density non-linear traffic behavior is also reproduced by numerical simulations. Project supported by the National Natural Science Foundation of China (Grant Nos. 11572264, 11172247, 11402214, and 61373009).
[Application of exponential smoothing method in prediction and warning of epidemic mumps].
Shi, Yun-ping; Ma, Jia-qi
2010-06-01
To analyze daily data on epidemic mumps in a province from 2004 to 2008 and to set up an exponential smoothing model for prediction. Epidemic mumps in 2008 was predicted and warned against by calculating a 7-day moving summation of daily reported mumps cases during 2005-2008 (to remove the effect of weekends) and applying exponential smoothing to the data from 2005 to 2007. The performance of Holt-Winters exponential smoothing was good: warning sensitivity was 76.92%, specificity was 83.33%, and the timeliness rate was 80%. It is practicable to use the exponential smoothing method to warn against epidemic mumps.
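A hedged sketch of the preprocessing and warning logic described above: a 7-day moving summation of daily counts to damp weekend effects, a Holt-Winters fit as a baseline, and a simple exceedance threshold. The data, smoothing configuration, and threshold are assumptions, not the study's calibrated settings.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

idx = pd.date_range("2005-01-01", periods=3 * 365, freq="D")
rng = np.random.default_rng(7)
cases = pd.Series(
    rng.poisson(5 + 3 * np.sin(2 * np.pi * np.arange(idx.size) / 365)), index=idx)

weekly = cases.rolling(7).sum().dropna()          # 7-day moving summation
fit = ExponentialSmoothing(weekly, trend="add").fit()
baseline = fit.fittedvalues
alert = weekly > baseline + 2 * (weekly - baseline).std()   # assumed threshold
print("warning days flagged:", int(alert.sum()))
```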
Policy Effects in Hyperbolic vs. Exponential Models of Consumption and Retirement
Gustman, Alan L.; Steinmeier, Thomas L.
2012-01-01
This paper constructs a structural retirement model with hyperbolic preferences and uses it to estimate the effect of several potential Social Security policy changes. Estimated effects of policies are compared using two models, one with hyperbolic preferences and one with standard exponential preferences. Sophisticated hyperbolic discounters may accumulate substantial amounts of wealth for retirement. We find it is frequently difficult to distinguish empirically between models with the two types of preferences on the basis of asset accumulation paths or consumption paths around the period of retirement. Simulations suggest that, despite the much higher initial time preference rate, individuals with hyperbolic preferences may actually value a real annuity more than individuals with exponential preferences who have accumulated roughly equal amounts of assets. This appears to be especially true for individuals with relatively high time preference rates or who have low assets for whatever reason. This affects the tradeoff between current benefits and future benefits on which many of the retirement incentives of the Social Security system rest. Simulations involving increasing the early entitlement age and increasing the delayed retirement credit do not show a great deal of difference whether exponential or hyperbolic preferences are used, but simulations for eliminating the earnings test show a non-trivially greater effect when exponential preferences are used. PMID:22711946
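A minimal sketch contrasting exponential discounting, delta**t, with quasi-hyperbolic (beta-delta) discounting, beta*delta**t for t >= 1, which is a common computational stand-in for the hyperbolic preferences discussed above. Parameter values are illustrative assumptions.

```python
import numpy as np

def exponential_weights(delta, horizon):
    return delta ** np.arange(horizon)

def quasi_hyperbolic_weights(beta, delta, horizon):
    w = delta ** np.arange(horizon)
    w[1:] *= beta                      # extra short-run impatience
    return w

h = 10
print(np.round(exponential_weights(0.96, h), 3))
print(np.round(quasi_hyperbolic_weights(0.7, 0.96, h), 3))
```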
NASA Astrophysics Data System (ADS)
Ernazarov, K. K.
2017-12-01
We consider a (m + 2)-dimensional Einstein-Gauss-Bonnet (EGB) model with the cosmological Λ-term. We restrict the metrics to be diagonal ones and find, for a certain Λ = Λ(m), a class of cosmological solutions with non-exponential time dependence of two scale factors of dimensions m > 2 and 1. Any solution from this class describes an accelerated expansion of the m-dimensional subspace and tends asymptotically to an isotropic solution with exponential dependence of the scale factors.
A method for nonlinear exponential regression analysis
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1971-01-01
A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix was derived and then applied to the nominal estimate to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
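A sketch of the linearize-and-iterate scheme described above, for the decay model y = a*exp(-b*t): expand in a Taylor series around the current estimate, solve the linear least-squares correction step, update the parameters, and repeat until a predetermined criterion is satisfied. Data and starting values are assumed.

```python
import numpy as np

t = np.linspace(0, 10, 50)
rng = np.random.default_rng(8)
y = 3.0 * np.exp(-0.5 * t) + rng.normal(0, 0.02, t.size)

a, b = 1.0, 0.1                        # nominal initial estimates
for _ in range(20):
    f = a * np.exp(-b * t)
    # Jacobian of the model: columns are df/da and df/db.
    J = np.column_stack([np.exp(-b * t), -a * t * np.exp(-b * t)])
    delta, *_ = np.linalg.lstsq(J, y - f, rcond=None)   # correction step
    a, b = a + delta[0], b + delta[1]
    if np.linalg.norm(delta) < 1e-10:  # convergence criterion
        break
print(f"a={a:.4f}, b={b:.4f}")
```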
High pressure liquid chromatographic gradient mixer
Daughton, Christian G.; Sakaji, Richard H.
1985-01-01
A gradient mixer which effects the continuous mixing of any two miscible solvents without excessive decay or dispersion of the resultant isocratic effluent or of a linear or exponential gradient. The two solvents are fed under low or high pressure by means of two high performance liquid chromatographic pumps. The mixer comprises a series of ultra-low dead volume stainless steel tubes and low dead volume chambers. The two solvent streams impinge head-on at high fluxes. This initial nonhomogeneous mixture is then passed through a chamber packed with spirally-wound wires which cause turbulent mixing thereby homogenizing the mixture with minimum "band-broadening".
The "sweet science" of reducing periorbital lacerations in mixed martial arts.
Bastidas, Nicholas; Levine, Jamie P; Stile, Frank L
2012-01-01
The popularity of mixed martial arts competitions and televised events has grown exponentially since its inception, and with the growth of the sport, unique facial injury patterns have surfaced. In particular, upper eyelid and brow lacerations are common and are especially troublesome given the effect of hemorrhage from these areas on the fighter's vision and thus ability to continue. We propose that the convexity of the underlying supraorbital rim is responsible for the high frequency of lacerations in this region after blunt trauma and offer a method of reducing subsequent injury by reducing its prominence.
Abusam, A; Keesman, K J
2009-01-01
The double exponential settling model is the widely accepted model for wastewater secondary settling tanks. However, this model does not accurately estimate solids concentrations in the settler underflow stream, mainly because sludge compression and consolidation processes are not considered. In activated sludge systems, accurate estimation of the solids in the underflow stream will facilitate the calibration process and can lead to correct estimates of, in particular, kinetic parameters related to biomass growth. Using principles of compaction and consolidation, as in soil mechanics, a dynamic model of the sludge consolidation processes taking place in secondary settling tanks is developed and incorporated into the commonly used double exponential settling model. The modified double exponential model is calibrated and validated using data obtained from a full-scale wastewater treatment plant. Good agreement between predicted and measured data confirmed the validity of the modified model.
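For context, a hedged sketch of the double-exponential settling-velocity function commonly attributed to Takacs et al. (1991), which the study above extends with a consolidation term. The parameter values are typical textbook assumptions, not values calibrated in the paper.

```python
import numpy as np

def settling_velocity(x, v0=474.0, v0_max=250.0, rh=5.76e-4, rp=2.86e-3,
                      x_min=100.0):
    """Settling velocity (m/d) vs. solids concentration x (g/m^3), clipped
    between 0 and a maximum practical velocity v0_max."""
    xs = np.maximum(x - x_min, 0.0)
    vs = v0 * (np.exp(-rh * xs) - np.exp(-rp * xs))
    return np.clip(vs, 0.0, v0_max)

conc = np.array([200.0, 1000.0, 3000.0, 8000.0])
print(settling_velocity(conc))
```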
Exponential quantum spreading in a class of kicked rotor systems near high-order resonances
NASA Astrophysics Data System (ADS)
Wang, Hailong; Wang, Jiao; Guarneri, Italo; Casati, Giulio; Gong, Jiangbin
2013-11-01
Long-lasting exponential quantum spreading was recently found in a simple but very rich dynamical model, namely, an on-resonance double-kicked rotor model [J. Wang, I. Guarneri, G. Casati, and J. B. Gong, Phys. Rev. Lett. 107, 234104 (2011)]. The underlying mechanism, unrelated to the chaotic motion in the classical limit but resting on quasi-integrable motion in a pseudoclassical limit, is identified for one special case. By presenting a detailed study of the same model, this work offers a framework to explain long-lasting exponential quantum spreading under much more general conditions. In particular, we adopt the so-called “spinor” representation to treat the kicked-rotor dynamics under high-order resonance conditions and then exploit the Born-Oppenheimer approximation to understand the dynamical evolution. It is found that the existence of a flat band (or an effectively flat band) is one important feature behind why and how the exponential dynamics emerges. It is also found that a quantitative prediction of the exponential spreading rate based on an interesting and simple pseudoclassical map may be inaccurate. In addition to general interests regarding the question of how exponential behavior in quantum systems may persist for a long time scale, our results should motivate further studies toward a better understanding of high-order resonance behavior in δ-kicked quantum systems.
A decades-long fast-rise-exponential-decay flare in low-luminosity AGN NGC 7213
NASA Astrophysics Data System (ADS)
Yan, Zhen; Xie, Fu-Guo
2018-03-01
We analysed the four-decades-long X-ray light curve of the low-luminosity active galactic nucleus (LLAGN) NGC 7213 and discovered a fast-rise-exponential-decay (FRED) pattern, i.e. the X-ray luminosity increased by a factor of ≈4 within 200 d, and then decreased exponentially with an e-folding time ≈8116 d (≈22.2 yr). For the theoretical understanding of the observations, we examined three variability models proposed in the literature: the thermal-viscous disc instability model, the radiation pressure instability model, and the tidal disruption event (TDE) model. We find that a delayed tidal disruption of a main-sequence star is most favourable; both the thermal-viscous disc instability model and the radiation pressure instability model fail to explain some key observed properties, and we thus argue that they are unlikely.
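A sketch of fitting a FRED profile to a light curve: a linear rise to a peak followed by exponential decay with e-folding time tau. This simple piecewise parametrization is an assumption for illustration, not the authors' exact model; the data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def fred(t, t_peak, l0, l_peak, tau):
    rise = l0 + (l_peak - l0) * t / t_peak
    decay = l_peak * np.exp(-(t - t_peak) / tau)
    return np.where(t < t_peak, rise, decay)

t = np.linspace(0, 12000, 300)                     # days (assumed sampling)
rng = np.random.default_rng(9)
lc = fred(t, 200, 1.0, 4.0, 8116) * rng.lognormal(0, 0.05, t.size)

popt, _ = curve_fit(fred, t, lc, p0=(150, 1, 3, 5000))
print("fitted e-folding time: %.0f d" % popt[3])
```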
Bennett, Kevin M; Schmainda, Kathleen M; Bennett, Raoqiong Tong; Rowe, Daniel B; Lu, Hanbing; Hyde, James S
2003-10-01
Experience with diffusion-weighted imaging (DWI) shows that signal attenuation is consistent with a multicompartmental theory of water diffusion in the brain. The source of this so-called nonexponential behavior is a topic of debate, because the cerebral cortex contains considerable microscopic heterogeneity and is therefore difficult to model. To account for this heterogeneity and understand its implications for current models of diffusion, a stretched-exponential function was developed to describe diffusion-related signal decay as a continuous distribution of sources decaying at different rates, with no assumptions made about the number of participating sources. DWI experiments were performed using a spin-echo diffusion-weighted pulse sequence with b-values of 500-6500 s/mm(2) in six rats. Signal attenuation curves were fit to a stretched-exponential function, and 20% of the voxels were better fit to the stretched-exponential model than to a biexponential model, even though the latter model had one more adjustable parameter. Based on the calculated intravoxel heterogeneity measure, the cerebral cortex contains considerable heterogeneity in diffusion. The use of a distributed diffusion coefficient (DDC) is suggested to measure mean intravoxel diffusion rates in the presence of such heterogeneity. Copyright 2003 Wiley-Liss, Inc.
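A hedged sketch of the stretched-exponential DWI model discussed above, S/S0 = exp(-(b*DDC)^alpha), fitted over an extended b-value range to recover the distributed diffusion coefficient (DDC) and the heterogeneity index alpha. Values are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched(b, s0, ddc, alpha):
    return s0 * np.exp(-(b * ddc) ** alpha)

b = np.linspace(500, 6500, 13)                    # s/mm^2, as in the study
rng = np.random.default_rng(10)
s = stretched(b, 1.0, 0.7e-3, 0.75) + rng.normal(0, 0.005, b.size)

popt, _ = curve_fit(stretched, b, s, p0=(1.0, 1e-3, 0.9))
print("DDC=%.2e mm^2/s, alpha=%.2f" % (popt[1], popt[2]))
```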
NASA Astrophysics Data System (ADS)
Saksala, Timo
2016-10-01
This paper deals with numerical modelling of rock fracture under dynamic loading. To this end, a combined continuum damage-embedded discontinuity model is applied in finite element modelling of crack propagation in rock. In this model, the strong loading rate sensitivity of rock is captured by the rate-dependent continuum scalar damage model that controls the pre-peak nonlinear hardening part of rock behaviour. The post-peak exponential softening part of the rock behaviour is governed by the embedded displacement discontinuity model describing the mode I, mode II and mixed mode fracture of rock. Rock heterogeneity is incorporated in the present approach by random description of the rock mineral texture based on the Voronoi tessellation. The model performance is demonstrated in numerical examples where the uniaxial tension and compression tests on rock are simulated. Finally, the dynamic three-point bending test of a semicircular disc is simulated in order to show that the model correctly predicts the strain rate-dependent tensile strengths as well as the failure modes of rock in this test. Special emphasis is laid on modelling the loading rate sensitivity of the tensile strength of Laurentian granite.
Stretching of passive tracers and implications for mantle mixing
NASA Astrophysics Data System (ADS)
Conjeepuram, N.; Kellogg, L. H.
2007-12-01
Mid-ocean ridge basalts (MORB) and ocean island basalts (OIB) have fundamentally different geochemical signatures. Understanding this difference requires a fundamental knowledge of the mixing processes that led to their formation. Quantitative methods used to assess mixing include examining the distribution of passive tracers, attaching time-evolution information to simulate decay of radioactive isotopes, and, for chaotic flows, calculating the Lyapunov exponent, which characterizes whether two nearby particles diverge at an exponential rate. Although effective, these methods are indirect measures of the two fundamental processes associated with mixing, namely stretching and folding. Building on work done by Kellogg and Turcotte, we present a method to compute the stretching and thinning of a passive, ellipsoidal tracer in three orthogonal directions in isoviscous, incompressible three-dimensional flows. We also compute the Lyapunov exponents associated with the given system based on the quantitative measures of stretching and thinning. We test our method with two analytical and three numerical flow fields which exhibit Lagrangian turbulence. The ABC and STF classes of analytical flows are three- and two-parameter classes of flows, respectively, and have been well studied for fast dynamo action. Since they generate both periodic and chaotic particle paths depending either on the starting point or on the choice of the parameters, they provide a good foundation to understand mixing. The numerical flow fields are similar to the geometries used by Ferrachat and Ricard (1998) and emulate a ridge-transform system. We also compute the stable and unstable manifolds associated with the numerical flow fields to illustrate the directions of rapid and slow mixing. We find that stretching in chaotic flow fields is significantly more effective than in regular or periodic flow fields. Consequently, chaotic mixing is far more efficient than regular mixing. We also find that in the numerical flow field, there is a fundamental topological difference in the regions exhibiting slow or regular mixing for different model geometries.
A method for manufacturing superior set yogurt under reduced oxygen conditions.
Horiuchi, H; Inoue, N; Liu, E; Fukui, M; Sasaki, Y; Sasaki, T
2009-09-01
The yogurt starters Lactobacillus delbrueckii ssp. bulgaricus and Streptococcus thermophilus are well-known facultatively anaerobic bacteria that can grow in oxygenated environments. We found that they removed dissolved oxygen (DO) from a yogurt mix as the fermentation progressed and that they began to produce acid actively after the DO concentration in the yogurt mix was reduced to 0 mg/kg, suggesting that the DO retarded the production of acid. Yogurt fermentation was carried out at 43 or 37 degrees C, both after the DO reduction treatment and without prior treatment. Nitrogen gas was mixed and dispersed into the yogurt mix after inoculation with yogurt starter culture to reduce the DO concentration in the yogurt mix. The treatment that reduced the DO concentration in the yogurt mix to approximately 0 mg/kg beforehand caused the starter culture LB81 used in this study to enter the exponential growth phase earlier. Furthermore, the combination of reduced DO concentration in the yogurt mix beforehand and incubation at a lower temperature (37 degrees C) resulted in a superior set yogurt with a smooth texture and strong curd structure.
CMB constraints on β-exponential inflationary models
NASA Astrophysics Data System (ADS)
Santos, M. A.; Benetti, M.; Alcaniz, J. S.; Brito, F. A.; Silva, R.
2018-03-01
We analyze a class of generalized inflationary models proposed in ref. [1], known as β-exponential inflation. We show that this kind of potential can arise in the context of brane cosmology, where the field describing the size of the extra dimension is interpreted as the inflaton. We discuss the observational viability of this class of models in light of the latest Cosmic Microwave Background (CMB) data from the Planck Collaboration through a Bayesian analysis, and impose tight constraints on the model parameters. We find that the CMB data alone weakly prefer the minimal standard model (ΛCDM) over β-exponential inflation. However, when current local measurements of the Hubble parameter, H0, are considered, the β-exponential inflation model is moderately preferred over the ΛCDM cosmology, making the study of this class of inflationary models interesting in the context of the current H0 tension.
Firing patterns in the adaptive exponential integrate-and-fire model.
Naud, Richard; Marcille, Nicolas; Clopath, Claudia; Gerstner, Wulfram
2008-11-01
For simulations of large spiking neuron networks, an accurate, simple and versatile single-neuron modeling framework is required. Here we explore the versatility of a simple two-equation model: the adaptive exponential integrate-and-fire neuron. We show that this model generates multiple firing patterns depending on the choice of parameter values, and present a phase diagram describing the transition from one firing type to another. We give an analytical criterion to distinguish between continuous adaptation, initial bursting, regular bursting and two types of tonic spiking. Also, we report that the deterministic model is capable of producing irregular spiking when stimulated with constant current, indicating low-dimensional chaos. Lastly, the simple model is fitted to experimental recordings of cortical neurons under step-current stimulation. The results provide support for the suitability of simple models such as the adaptive exponential integrate-and-fire neuron for large network simulations.
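A compact Euler simulation of the adaptive exponential integrate-and-fire (AdEx) model described above: C dV/dt = -gL(V-EL) + gL*DeltaT*exp((V-VT)/DeltaT) - w + I and tau_w dw/dt = a(V-EL) - w, with reset V -> Vr and w -> w + b at each spike. The parameter values are typical published ones, assumed here for illustration.

```python
import numpy as np

C, gL, EL = 281.0, 30.0, -70.6          # pF, nS, mV
VT, DeltaT, Vr = -50.4, 2.0, -70.6      # mV
tau_w, a, b_jump = 144.0, 4.0, 80.5     # ms, nS, pA
I, dt, T = 700.0, 0.01, 400.0           # pA, ms, ms

V, w, spikes = EL, 0.0, []
for step in range(int(T / dt)):
    dV = (-gL*(V-EL) + gL*DeltaT*np.exp((V-VT)/DeltaT) - w + I) / C
    dw = (a*(V-EL) - w) / tau_w
    V, w = V + dt*dV, w + dt*dw
    if V > 0.0:                          # numerical spike-detection threshold
        spikes.append(step * dt)
        V, w = Vr, w + b_jump            # reset and adaptation jump
print("first spike times (ms):", np.round(spikes[:10], 1))
```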
Rathbun, R.E.; Tai, D.Y.
1988-01-01
The two-film model is often used to describe the volatilization of organic substances from water. This model assumes uniformly mixed water and air phases separated by thin films of water and air in which mass transfer is by molecular diffusion. Mass-transfer coefficients for the films, commonly called film coefficients, are related through the Henry's law constant and the model equation to the overall mass-transfer coefficient for volatilization. The films are modeled as two resistances in series, resulting in additive resistances. The two-film model and the concept of additivity of resistances were applied to experimental data for acetone and t-butyl alcohol. Overall mass-transfer coefficients for the volatilization of acetone and t-butyl alcohol from water were measured in the laboratory in a stirred constant-temperature bath. Measurements were completed for six water temperatures, each at three water mixing conditions. Wind-speed was constant at about 0.1 meter per second for all experiments. Oxygen absorption coefficients were measured simultaneously with the measurement of the acetone and t-butyl alcohol mass-transfer coefficients. Gas-film coefficients for acetone, t-butyl alcohol, and water were determined by measuring the volatilization fluxes of the pure substances over a range of temperatures. Henry's law constants were estimated from data from the literature. The combination of high resistance in the gas film for solutes with low values of the Henry's law constants has not been studied previously. Calculation of the liquid-film coefficients for acetone and t-butyl alcohol from measured overall mass-transfer and gas-film coefficients, estimated Henry's law constants, and the two-film model equation resulted in physically unrealistic, negative liquid-film coefficients for most of the experiments at the medium and high water mixing conditions. An analysis of the two-film model equation showed that when the percentage resistance in the gas film is large and the gas-film resistance approaches the overall resistance in value, the calculated liquid-film coefficient becomes extremely sensitive to errors in the Henry's law constant. The negative coefficients were attributed to this sensitivity and to errors in the estimated Henry's law constants. Liquid-film coefficients for the absorption of oxygen were correlated with the stirrer Reynolds number and the Schmidt number. Application of this correlation with the experimental conditions and a molecular-diffusion coefficient adjustment resulted in values of the liquid-film coefficients for both acetone and t-butyl alcohol within the range expected for all three mixing conditions. Comparison of Henry's law constants calculated from these film coefficients and the experimental data with the constants calculated from literature data showed that the differences were small relative to the errors reported in the literature as typical for the measurement or estimation of Henry's law constants for hydrophilic compounds such as ketones and alcohols. Temperature dependence of the mass-transfer coefficients was expressed in two forms. The first, based on thermodynamics, assumed the coefficients varied as the exponential of the reciprocal absolute temperature. The second empirical approach assumed the coefficients varied as the exponential of the absolute temperature. 
Both of these forms predicted the temperature dependence of the experimental mass-transfer coefficients with little error for most of the water temperature range likely to be found in streams and rivers. Liquid-film and gas-film coefficients for acetone and t-butyl alcohol were similar in value. However, depending on water mixing conditions, overall mass-transfer coefficients for acetone were from two to four times larger than the coefficients for t-butyl alcohol. This difference in behavior of the coefficients resulted because the Henry's law constant for acetone was about three times larger than that of
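A minimal sketch of the additive-resistance relation at the core of the two-film model above: 1/K_OL = 1/k_l + 1/(H_c*k_g), with H_c a dimensionless Henry's law constant. The values are illustrative assumptions chosen to mimic a hydrophilic solute for which the gas film dominates.

```python
def overall_coefficient(k_l, k_g, h_c):
    """Return (K_OL, percent of total resistance in the gas film)."""
    r_liq = 1.0 / k_l                  # liquid-film resistance
    r_gas = 1.0 / (h_c * k_g)          # gas-film resistance
    k_ol = 1.0 / (r_liq + r_gas)       # resistances in series are additive
    return k_ol, 100.0 * r_gas / (r_liq + r_gas)

k_ol, pct_gas = overall_coefficient(k_l=2e-5, k_g=1e-2, h_c=1e-3)  # m/s, m/s, -
print(f"K_OL = {k_ol:.2e} m/s; gas film = {pct_gas:.0f}% of total resistance")
```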
Individual and group dynamics in purchasing activity
NASA Astrophysics Data System (ADS)
Gao, Lei; Guo, Jin-Li; Fan, Chao; Liu, Xue-Jiao
2013-01-01
As a major part of the daily operation in an enterprise, purchasing frequency is in constant change. Recent approaches from human dynamics can provide some new insights into the economic behavior of companies in the supply chain. This paper captures the attributes of creation times of purchase orders to an individual vendor, as well as to all vendors, and further investigates whether they obey some kind of dynamics by applying logarithmic binning to the construction of distribution plots. It is found that the former displays a power-law distribution with approximate exponent 2.0, while the latter is fitted by a mixture distribution with both power-law and exponential characteristics. Thus, two distinct characteristics emerge in the interval-time distribution from the perspectives of individual dynamics and group dynamics. This mixing feature can be attributed to fitting deviations: they are negligible for individual dynamics, but those of different vendors accumulate and lead to an exponential factor in the group dynamics. To better describe the mechanism generating the heterogeneity of the purchase order assignment process from the company under study to all its vendors, a model driven by product life cycle is introduced, and then the analytical distribution and the simulation result are obtained, which are in good agreement with the empirical data.
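A hedged sketch of the logarithmic-binning step used above to build inter-event-time distributions: bin edges grow geometrically and counts are normalized by bin width, which stabilizes the heavy tail. The Pareto sample below is synthetic and has a density exponent near 2, echoing the exponent reported above.

```python
import numpy as np

rng = np.random.default_rng(11)
intervals = rng.pareto(1.0, 50_000) + 1.0          # synthetic heavy-tailed data

edges = np.logspace(0, np.log10(intervals.max()), 30)   # geometric bin edges
counts, _ = np.histogram(intervals, bins=edges)
density = counts / (np.diff(edges) * intervals.size)    # width-normalized
centers = np.sqrt(edges[:-1] * edges[1:])               # geometric bin centers

slope = np.polyfit(np.log10(centers[density > 0]),
                   np.log10(density[density > 0]), 1)[0]
print("estimated power-law exponent ~", round(-slope, 2))
```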
Posterior propriety for hierarchical models with log-likelihoods that have norm bounds
Michalak, Sarah E.; Morris, Carl N.
2015-07-17
Statisticians often use improper priors to express ignorance or to provide good frequency properties, requiring that posterior propriety be verified. Our paper addresses generalized linear mixed models, GLMMs, when Level I parameters have Normal distributions, with many commonly-used hyperpriors. It provides easy-to-verify sufficient posterior propriety conditions based on dimensions, matrix ranks, and exponentiated norm bounds, ENBs, for the Level I likelihood. Since many familiar likelihoods have ENBs, which is often verifiable via log-concavity and MLE finiteness, our novel use of ENBs permits unification of posterior propriety results and posterior MGF/moment results for many useful Level I distributions, including those commonly used with multilevel generalized linear models, e.g., GLMMs and hierarchical generalized linear models, HGLMs. Furthermore, those who need to verify existence of posterior distributions or of posterior MGFs/moments for a multilevel generalized linear model given a proper or improper multivariate F prior as in Section 1 should find the required results in Sections 1 and 2 and Theorem 3 (GLMMs), Theorem 4 (HGLMs), or Theorem 5 (posterior MGFs/moments).
NASA Astrophysics Data System (ADS)
Ramirez, N.; Afshari, Afshin; Norford, L.
2018-07-01
A steady-state Reynolds-averaged Navier-Stokes computational fluid dynamics (CFD) investigation of boundary-layer flow over a major portion of downtown Abu Dhabi is conducted. The results are used to derive the shear stress and characterize the logarithmic region for eight sub-domains, where the sub-domains overlap and are overlaid in the streamwise direction. They are characterized by a high frontal area index initially, which decreases significantly beyond the fifth sub-domain. The plan area index is relatively stable throughout the domain. For each sub-domain, the estimated local roughness length and displacement height derived from CFD results are compared to prevalent empirical formulations. We further validate and tune a mixing-length model proposed by Coceal and Belcher (Q J R Meteorol Soc 130:1349-1372, 2004). Finally, the in-canopy wind-speed attenuation is analysed as a function of fetch. It is shown that, while there is some room for improvement in Macdonald's empirical formulations (Boundary-Layer Meteorol 97:25-45, 2000), Coceal and Belcher's mixing model in combination with the resolution method of Di Sabatino et al. (Boundary-Layer Meteorol 127:131-151, 2008) can provide a robust estimation of the average wind speed in the logarithmic region. Within the roughness sublayer, a properly parametrized Cionco exponential model is shown to be quite accurate.
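A sketch of the Cionco exponential canopy profile referenced above, u(z) = u_H*exp(a*(z/H - 1)) inside the canopy, matched to a displaced log law above canopy height H. The attenuation coefficient and the roughness parameters are assumed urban-like values, not the tuned values from the study.

```python
import numpy as np

kappa = 0.4                                # von Karman constant
u_star, z0, d, H = 0.5, 1.2, 15.0, 25.0    # m/s, m, m, m (assumed)
a = 2.5                                    # canopy attenuation coeff. (assumed)

def wind_speed(z):
    u_H = (u_star / kappa) * np.log((H - d) / z0)   # log law at canopy top
    if z >= H:
        return (u_star / kappa) * np.log((z - d) / z0)
    return u_H * np.exp(a * (z / H - 1.0))          # Cionco profile in canopy

for z in (5, 15, 25, 50):
    print(f"z={z:3d} m  u={wind_speed(z):.2f} m/s")
```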
Coronal loop seismology using damping of standing kink oscillations by mode coupling
NASA Astrophysics Data System (ADS)
Pascoe, D. J.; Goddard, C. R.; Nisticò, G.; Anfinogentov, S.; Nakariakov, V. M.
2016-05-01
Context. Kink oscillations of solar coronal loops are frequently observed to be strongly damped. The damping can be explained by mode coupling on the condition that loops have a finite inhomogeneous layer between the higher density core and lower density background. The damping rate depends on the loop density contrast ratio and inhomogeneous layer width. Aims: The theoretical description for mode coupling of kink waves has been extended to include the initial Gaussian damping regime in addition to the exponential asymptotic state. Observation of these damping regimes would provide information about the structuring of the coronal loop and so provide a seismological tool. Methods: We consider three examples of standing kink oscillations observed by the Atmospheric Imaging Assembly (AIA) of the Solar Dynamics Observatory (SDO) for which the general damping profile (Gaussian and exponential regimes) can be fitted. Determining the Gaussian and exponential damping times allows us to perform seismological inversions for the loop density contrast ratio and the inhomogeneous layer width normalised to the loop radius. The layer width and loop minor radius are found separately by comparing the observed loop intensity profile with forward modelling based on our seismological results. Results: The seismological method which allows the density contrast ratio and inhomogeneous layer width to be simultaneously determined from the kink mode damping profile has been applied to observational data for the first time. This allows the internal and external Alfvén speeds to be calculated, and estimates for the magnetic field strength can be dramatically improved using the given plasma density. Conclusions: The kink mode damping rate can be used as a powerful diagnostic tool to determine the coronal loop density profile. This information can be used for further calculations such as the magnetic field strength or phase mixing rate.
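A hedged sketch of the general (Gaussian-then-exponential) damping profile described above: a Gaussian envelope up to a switch time t_s, continued by an exponential envelope matched in amplitude at t_s. The damping times, switch time, and kink period are illustrative assumptions.

```python
import numpy as np

def damping_envelope(t, tau_g, tau_d, t_s):
    gauss = np.exp(-t**2 / (2 * tau_g**2))
    amp_switch = np.exp(-t_s**2 / (2 * tau_g**2))    # continuity at t_s
    expo = amp_switch * np.exp(-(t - t_s) / tau_d)
    return np.where(t <= t_s, gauss, expo)

t = np.linspace(0, 30, 301)                          # minutes (assumed)
env = damping_envelope(t, tau_g=8.0, tau_d=10.0, t_s=6.0)
displacement = env * np.cos(2 * np.pi * t / 4.0)     # 4-min period (assumed)
print("envelope at t=20 min:", round(float(env[t.searchsorted(20)]), 3))
```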
Where “Old Heads” Prevail: Inmate Hierarchy in a Men’s Prison Unit*
Kreager, Derek A.; Young, Jacob T.N.; Haynie, Dana L.; Bouchard, Martin; Schaefer, David R.; Zajac, Gary
2017-01-01
Research of inmate social order is a once-vibrant area that receded just as American incarceration rates climbed and the country’s carceral contexts dramatically changed. This study reengages inmate society with an abductive mixed methods investigation of informal status within a contemporary men’s prison unit. The authors collect narrative and social network data from 133 male inmates housed in a unit of a Pennsylvania medium-security prison. Analyses of inmate narratives suggest that unit “old heads” provide collective goods in the form of mentoring and role modeling that foster a positive and stable peer environment. This hypothesis is then tested with Exponential Random Graph Models (ERGMs) of peer nomination data. The ERGM results complement the qualitative analysis and suggest that older inmates and those who have been on the unit longer are perceived by their peers as powerful and influential. Both analytical strategies point to the maturity of aging and the acquisition of local knowledge as important for attaining informal status in the unit. In sum, this mixed methods case study extends theoretical insights of classic prison ethnographies, adds quantifiable results capable of future replication, and points to a growing population of older inmates as important for contemporary prison social organization. PMID:29540904
NASA Astrophysics Data System (ADS)
Ganguly, S.; Lubetzky, E.; Martinelli, F.
2015-05-01
The East process is a one-dimensional kinetically constrained interacting particle system, introduced in the physics literature in the early 1990s to model liquid-glass transitions. Spectral gap estimates of Aldous and Diaconis in 2002 imply that its mixing time on L sites has order L. We complement that result and show cutoff with an O(√L)-window. The main ingredient is an analysis of the front of the process (its rightmost zero in the setup where zeros facilitate updates to their right). One expects the front to advance as a biased random walk, whose normal fluctuations would imply cutoff with an O(√L)-window. The law of the process behind the front plays a crucial role: Blondel showed that it converges to an invariant measure ν, on which very little is known. Here we obtain quantitative bounds on the speed of convergence to ν, finding that it is exponentially fast. We then derive that the increments of the front behave as a stationary mixing sequence of random variables, and a Stein-method based argument of Bolthausen ('82) implies a CLT for the location of the front, yielding the cutoff result. Finally, we supplement these results by a study of analogous kinetically constrained models on trees, again establishing cutoff, yet this time with an O(1)-window.
Exact simulation of integrate-and-fire models with exponential currents.
Brette, Romain
2007-10-01
Neural networks can be simulated exactly using event-driven strategies, in which the algorithm advances directly from one spike to the next spike. It applies to neuron models for which we have (1) an explicit expression for the evolution of the state variables between spikes and (2) an explicit test on the state variables that predicts whether and when a spike will be emitted. In a previous work, we proposed a method that allows exact simulation of an integrate-and-fire model with exponential conductances, with the constraint of a single synaptic time constant. In this note, we propose a method, based on polynomial root finding, that applies to integrate-and-fire models with exponential currents, with possibly many different synaptic time constants. Models can include biexponential synaptic currents and spike-triggered adaptation currents.
Yuan, Jing; Yeung, David Ka Wai; Mok, Greta S P; Bhatia, Kunwar S; Wang, Yi-Xiang J; Ahuja, Anil T; King, Ann D
2014-01-01
To investigate non-Gaussian diffusion in head and neck diffusion-weighted imaging (DWI) at 3 Tesla and to compare advanced non-Gaussian diffusion models, including diffusion kurtosis imaging (DKI), the stretched-exponential model (SEM), intravoxel incoherent motion (IVIM) and the statistical model, in patients with nasopharyngeal carcinoma (NPC). After ethics approval was granted, 16 patients with NPC were examined using DWI performed at 3T employing an extended b-value range from 0 to 1500 s/mm(2). DWI signals were fitted to the mono-exponential and non-Gaussian diffusion models for primary tumor, metastatic node, spinal cord and muscle. Non-Gaussian parameter maps were generated and compared to apparent diffusion coefficient (ADC) maps in NPC. Diffusion in NPC exhibited non-Gaussian behavior over the extended b-value range. Non-Gaussian models achieved significantly better fitting of the DWI signal than the mono-exponential model. Non-Gaussian diffusion coefficients were substantially different from the mono-exponential ADC both in magnitude and histogram distribution. Non-Gaussian diffusivity in head and neck tissues and NPC lesions can be assessed using non-Gaussian diffusion models. Non-Gaussian DWI analysis may reveal additional tissue properties beyond ADC and holds potential as a complementary tool for NPC characterization.
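A hedged sketch of one of the models compared above, the DKI signal equation ln S(b) = ln S0 - b*D + (1/6)*b^2*D^2*K, estimated here by a quadratic fit in b from which the diffusivity D and kurtosis K are recovered. Data are synthetic and the fitting shortcut is an illustrative choice, not the study's pipeline.

```python
import numpy as np

b = np.linspace(0, 1500, 7)                       # s/mm^2, matching the range
D_true, K_true, s0 = 1.1e-3, 0.9, 1.0
s = s0 * np.exp(-b * D_true + (b**2) * (D_true**2) * K_true / 6)
s *= np.random.default_rng(12).lognormal(0, 0.005, b.size)

c2, c1, c0 = np.polyfit(b, np.log(s), 2)          # quadratic in b
D_est = -c1                                        # linear term gives -D
K_est = 6 * c2 / D_est**2                          # quadratic term gives K
print(f"D = {D_est:.2e} mm^2/s, K = {K_est:.2f}")
```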
Is a matrix exponential specification suitable for the modeling of spatial correlation structures?
Strauß, Magdalena E.; Mezzetti, Maura; Leorato, Samantha
2018-01-01
This paper investigates the adequacy of the matrix exponential spatial specifications (MESS) as an alternative to the widely used spatial autoregressive models (SAR). To provide as complete a picture as possible, we extend the analysis to all the main spatial models governed by matrix exponentials comparing them with their spatial autoregressive counterparts. We propose a new implementation of Bayesian parameter estimation for the MESS model with vague prior distributions, which is shown to be precise and computationally efficient. Our implementations also account for spatially lagged regressors. We further allow for location-specific heterogeneity, which we model by including spatial splines. We conclude by comparing the performances of the different model specifications in applications to a real data set and by running simulations. Both the applications and the simulations suggest that the spatial splines are a flexible and efficient way to account for spatial heterogeneities governed by unknown mechanisms. PMID:29492375
Bayesian exponential random graph modelling of interhospital patient referral networks.
Caimo, Alberto; Pallotti, Francesca; Lomi, Alessandro
2017-08-15
Using original data that we have collected on referral relations between 110 hospitals serving a large regional community, we show how recently derived Bayesian exponential random graph models may be adopted to illuminate core empirical issues in research on relational coordination among healthcare organisations. We show how a rigorous Bayesian computation approach supports a fully probabilistic analytical framework that alleviates well-known problems in the estimation of model parameters of exponential random graph models. We also show how the main structural features of interhospital patient referral networks that prior studies have described can be reproduced with accuracy by specifying the system of local dependencies that produce - but at the same time are induced by - decentralised collaborative arrangements between hospitals. Copyright © 2017 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
de Andrade, R., Jr.; Lanfredi, A. J. C.; Ortiz, W. A.; Leite, E. R.
1997-08-01
The irreversibility line (IL) of a magnetically grain-aligned HgBa2CaCu2O6+δ (Hg-1212) sample was determined from magnetization measurements, with the magnetic field H parallel to the sample c-axis. The grain-aligned sample was made by mixing powdered polycrystalline samples with epoxy resin, cured under 94 kOe at room temperature. For fields below 10 kOe the IL is well fitted by a model of flux line lattice melting due to thermal fluctuations. For higher fields the IL behavior changes to an exponential growth of H_irr with 1/T. This change is related to a corresponding alteration in the character of the vortex fluctuations leading to the melting of the flux line lattice.
NASA Astrophysics Data System (ADS)
Feng-Hua, Zhang; Gui-De, Zhou; Kun, Ma; Wen-Juan, Ma; Wen-Yuan, Cui; Bo, Zhang
2016-07-01
Previous studies have shown that, for the three main stages of the development and evolution of asymptotic giant branch (AGB) star s-process models, the neutron exposure distribution (DNE) in the nucleosynthesis region can always be considered as an exponential function, i.e., ρAGB(τ) = C/τ0 exp(-τ/τ0), in an effective range of the neutron exposure values. However, the specific expressions of the proportionality factor C and the mean neutron exposure τ0 in the exponential distribution function for different models are not completely determined in the related literature. By dissecting the basic method used to obtain the exponential DNE, and systematically analyzing the solution procedures for neutron exposure distribution functions in different stellar models, the general formulae, as well as their auxiliary equations, for calculating C and τ0 are derived. Given the discrete neutron exposure distribution Pk, the relationships of C and τ0 with the model parameters can be determined. The result of this study effectively solves the problem of analytically calculating the DNE in the current low-mass AGB star s-process nucleosynthesis model with 13C-pocket radiative burning.
Kumar, M Praveen; Patil, Suneel G; Dheeraj, Bhandari; Reddy, Keshav; Goel, Dinker; Krishna, Gopi
2015-06-01
The difficulty in obtaining an acceptable impression increases exponentially as the number of abutments increases. Accuracy of the impression material and the use of a suitable impression technique are of utmost importance in the fabrication of a fixed partial denture. This study compared the accuracy of the matrix impression system with the conventional putty reline and multiple mix techniques for individual dies by comparing the inter-abutment distance in the casts obtained from the impressions. Three groups of 10 impressions each were made of a master die using three impression techniques (matrix impression system, putty reline technique and multiple mix technique). Typodont teeth were embedded in a maxillary frasaco model base. The left first premolar was removed to create a three-unit fixed partial denture situation, the left canine and second premolar were prepared conservatively, and hatch marks were made on the abutment teeth. The final casts obtained from the impressions were examined under a profile projector and the inter-abutment distance was calculated for all the casts and compared. The results showed that in the mesiodistal dimensions the percentage deviation from the master model in Group I was 0.1 and 0.2, in Group II was 0.9 and 0.3, and in Group III was 1.6 and 1.5, respectively. In the labio-palatal dimensions the percentage deviation from the master model in Group I was 0.01 and 0.4, in Group II was 1.9 and 1.3, and in Group III was 2.2 and 2.0, respectively. In the cervico-incisal dimensions the percentage deviation from the master model in Group I was 1.1 and 0.2, in Group II was 3.9 and 1.7, and in Group III was 1.9 and 3.0, respectively. In the inter-abutment dimension of the dies, the percentage deviation from the master model in Group I was 0.1, in Group II was 0.6, and in Group III was 1.0. The matrix impression system showed more accurate reproduction of individual dies than the putty reline and multiple mix techniques in all three directions, as well as in the inter-abutment distance.
Blomquist, Patrick; Devor, Anna; Indahl, Ulf G.; Ulbert, Istvan; Einevoll, Gaute T.; Dale, Anders M.
2009-01-01
A new method is presented for extraction of population firing-rate models for both thalamocortical and intracortical signal transfer based on stimulus-evoked data from simultaneous thalamic single-electrode and cortical recordings using linear (laminar) multielectrodes in the rat barrel system. Time-dependent population firing rates for granular (layer 4), supragranular (layer 2/3), and infragranular (layer 5) populations in a barrel column and the thalamic population in the homologous barreloid are extracted from the high-frequency portion (multi-unit activity; MUA) of the recorded extracellular signals. These extracted firing rates are in turn used to identify population firing-rate models formulated as integral equations with exponentially decaying coupling kernels, allowing for straightforward transformation to the more common firing-rate formulation in terms of differential equations. Optimal model structures and model parameters are identified by minimizing the deviation between model firing rates and the experimentally extracted population firing rates. For the thalamocortical transfer, the experimental data favor a model with fast feedforward excitation from thalamus to the layer-4 laminar population combined with a slower inhibitory process due to feedforward and/or recurrent connections and mixed linear-parabolic activation functions. The extracted firing rates of the various cortical laminar populations are found to exhibit strong temporal correlations for the present experimental paradigm, and simple feedforward population firing-rate models combined with linear or mixed linear-parabolic activation function are found to provide excellent fits to the data. The identified thalamocortical and intracortical network models are thus found to be qualitatively very different. While the thalamocortical circuit is optimally stimulated by rapid changes in the thalamic firing rate, the intracortical circuits are low-pass and respond most strongly to slowly varying inputs from the cortical layer-4 population. PMID:19325875
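The transformation the authors allude to, from an integral-equation firing-rate model with an exponentially decaying coupling kernel to a differential-equation form, can be sketched as follows: the filtered input x(t) = integral of (w/tau) exp(-(t-s)/tau) r_in(s) ds satisfies tau dx/dt = -x + w r_in(t), and the output rate is g(x). A toy Euler integration with illustrative parameters (not values fitted to the barrel-cortex data):

```python
import numpy as np

# Toy sketch: the exponential-kernel integral model rewritten as an ODE.
# x(t) = integral of (w/tau)*exp(-(t-s)/tau)*r_in(s) ds satisfies
# tau*dx/dt = -x + w*r_in(t); the output rate is r_out = g(x).
dt, tau, w = 0.1, 10.0, 1.5                      # ms, ms, dimensionless
t = np.arange(0.0, 200.0, dt)
r_in = 5.0 + 4.0 * ((t > 50) & (t < 100))        # step input (spikes/s)

g = lambda x: np.maximum(x, 0.0)                 # rectifying activation
x = np.zeros_like(t)
for i in range(len(t) - 1):
    x[i + 1] = x[i] + dt / tau * (-x[i] + w * r_in[i])   # Euler step
r_out = g(x)
print(round(r_out[-1], 2))                       # settles near w * 5.0 = 7.5
```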
NASA Astrophysics Data System (ADS)
Luan, Tian; Guo, Xueliang; Guo, Lijun; Zhang, Tianhang
2018-01-01
Air quality and visibility are strongly influenced by aerosol loading, which is driven by meteorological conditions. The quantification of their relationships is critical to understanding the physical and chemical processes and to forecasting polluted events. We investigated and quantified the relationships between PM2.5 (particulate matter with aerodynamic diameter of 2.5 µm or less) mass concentration, visibility and planetary boundary layer (PBL) height in this study, based on data obtained from four long-lasting haze events and seven fog-haze mixed events from January 2014 to March 2015 in Beijing. The statistical results show that visibility was a negative exponential function of PM2.5 mass concentration for both haze and fog-haze mixed events (with the same R² of 0.80). However, the fog-haze events caused a more obvious decrease in visibility than the haze events due to the formation of fog droplets that could induce higher light extinction. The PM2.5 concentration had an inverse linear correlation with PBL height for haze events and a negative exponential correlation for fog-haze mixed events, indicating that the PM2.5 concentration is more sensitive to PBL height in fog-haze mixed events. Visibility had a positive linear correlation with PBL height with an R² of 0.35 in haze events and a positive exponential correlation with an R² of 0.56 in fog-haze mixed events. We also investigated the physical mechanism responsible for these relationships between visibility, PM2.5 concentration and PBL height through a typical haze event and a typical fog-haze mixed event, and found that a double inversion layer formed in both typical events and played a critical role in maintaining and enhancing the long-lasting polluted events. The variations of the double inversion layers were closely associated with the processes of long-wave radiation cooling in the nighttime and short-wave solar radiation reduction in the daytime. The upper-level stable inversion layer was formed by the persistent warm and humid southwestern airflow, while the low-level inversion layer was initially produced by the surface long-wave radiation cooling in the nighttime and maintained by the reduction of surface solar radiation in the daytime. The obvious descending process of the upper-level inversion layer induced by the radiation process could be responsible for the enhancement of the low-level inversion layer and the lowering of the PBL height, as well as the high aerosol loading of these polluted events. The reduction of surface solar radiation in the daytime could be around 35 % for the haze event and 94 % for the fog-haze mixed event. Therefore, the formation and subsequent descending of the upper-level inversion layer should be an important factor in maintaining and strengthening long-lasting severe polluted events, which has not been revealed in previous publications. The interactions and feedbacks between PM2.5 concentration and PBL height, linked by the radiation process, caused a more significant and long-lasting deterioration of air quality and visibility in fog-haze mixed events. The interactions and feedbacks of all processes were particularly strong when the PM2.5 mass concentration was larger than 150-200 µg m⁻³.
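For illustration, the kind of negative exponential visibility relationship reported here can be fitted with ordinary nonlinear least squares; the functional form vis = a*exp(-b*PM2.5) and all numbers below are synthetic stand-ins, not the study's values:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the kind of fit reported: visibility = a * exp(-b * PM2.5).
# The data are synthetic and a, b are not the study's values.
rng = np.random.default_rng(1)
pm = rng.uniform(10, 400, 200)                                        # ug/m^3
vis = 25.0 * np.exp(-0.012 * pm) * rng.lognormal(0.0, 0.15, pm.size)  # km

model = lambda x, a, b: a * np.exp(-b * x)
(a, b), _ = curve_fit(model, pm, vis, p0=(20.0, 0.01))
r2 = 1.0 - np.var(vis - model(pm, a, b)) / np.var(vis)
print(f"a = {a:.1f} km, b = {b:.4f}, R^2 = {r2:.2f}")
```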
NASA Astrophysics Data System (ADS)
Wilde, M. V.; Sergeeva, N. V.
2018-05-01
An explicit asymptotic model extracting the contribution of a surface wave to the dynamic response of a viscoelastic half-space is derived. Fractional exponential Rabotnov integral operators are used to describe the material properties. The model is derived by extracting the principal part of the poles corresponding to the surface waves after applying Laplace and Fourier transforms. The simplified equations for the originals are written using power series expansions. A Padé approximation is constructed to unite the short-time and long-time models. The form of this approximation makes it possible to formulate the explicit model using a fractional exponential Rabotnov integral operator with parameters depending on the properties of the surface wave. The applicability of the derived models is studied by comparison with the exact solution of a model problem. It is revealed that the model based on the Padé approximation is highly effective for all the possible time domains.
Shift-Invariant Image Reconstruction of Speckle-Degraded Images Using Bispectrum Estimation
1990-05-01
process with the requisite negative exponential pdf. I call this model the Negative Exponential Model (NENI). The NENI flowchart is seen in Figure 6... [figure-caption residue: statistical histograms and phase panels for the method; a truth object speckled via the NENI; a histogram of speckle]
Hu, Jin; Wang, Jun
2015-06-01
In recent years, complex-valued recurrent neural networks have been developed and analysed in depth because they have good modelling performance for applications involving complex-valued elements. In implementing continuous-time dynamical systems for simulation or computational purposes, it is quite necessary to utilize a discrete-time model which is an analogue of the continuous-time system. In this paper, we analyse a discrete-time complex-valued recurrent neural network model and obtain sufficient conditions for its global exponential periodicity and exponential stability. Simulation results of several numerical examples are delineated to illustrate the theoretical results, and an application to associative memory is also given. Copyright © 2015 Elsevier Ltd. All rights reserved.
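As a hypothetical sketch of this kind of result (not the paper's specific criterion), a discrete-time complex-valued network z(t+1) = Az(t) + Bf(z(t)) + u converges exponentially to a unique equilibrium whenever the map is a contraction, e.g. when ||A||_2 + L*||B||_2 < 1 for an L-Lipschitz activation:

```python
import numpy as np

# Illustrative sketch (not the paper's criterion): the discrete-time complex-valued
# network z(t+1) = A z(t) + B act(z(t)) + u is a contraction, hence exponentially
# stable with a unique equilibrium, when ||A||_2 + L*||B||_2 < 1 for an
# L-Lipschitz activation applied to real and imaginary parts separately.
rng = np.random.default_rng(2)
n = 4
phase = lambda: np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n)))
A = 0.3 * rng.normal(size=(n, n)) * phase() / n
B = 0.3 * rng.normal(size=(n, n)) * phase() / n
u = rng.normal(size=n) + 1j * rng.normal(size=n)

act = lambda z: np.tanh(z.real) + 1j * np.tanh(z.imag)   # split activation, L = 1
gain = np.linalg.norm(A, 2) + 1.0 * np.linalg.norm(B, 2)
print("contraction gain:", round(gain, 3))               # < 1 => exponential stability

z = rng.normal(size=n) + 1j * rng.normal(size=n)
for _ in range(200):
    z = A @ z + B @ act(z) + u                           # iterate to the fixed point
print("equilibrium:", np.round(z, 3))
```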
Cao, Boqiang; Zhang, Qimin; Ye, Ming
2016-11-29
We present a mean-square exponential stability analysis for impulsive stochastic genetic regulatory networks (GRNs) with time-varying delays and reaction-diffusion driven by fractional Brownian motion (fBm). By constructing a Lyapunov functional and using linear matrix inequality for stochastic analysis we derive sufficient conditions to guarantee the exponential stability of the stochastic model of impulsive GRNs in the mean-square sense. Meanwhile, the corresponding results are obtained for the GRNs with constant time delays and standard Brownian motion. Finally, an example is presented to illustrate our results of the mean-square exponential stability analysis.
Confronting quasi-exponential inflation with WMAP seven
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pal, Barun Kumar; Pal, Supratik; Basu, B., E-mail: barunp1985@rediffmail.com, E-mail: pal@th.physik.uni-bonn.de, E-mail: banasri@isical.ac.in
2012-04-01
We confront quasi-exponential models of inflation with the WMAP seven-year dataset using the Hamilton-Jacobi formalism. With a phenomenological Hubble parameter representing quasi-exponential inflation, we develop the formalism and subject the analysis to confrontation with WMAP seven using the publicly available code CAMB. The observable parameters are found to fare extremely well with WMAP seven. We also obtain a ratio of tensor to scalar amplitudes which may be detectable by PLANCK.
NASA Astrophysics Data System (ADS)
Hayat, Tanzila; Nadeem, S.
2018-03-01
This paper examines three-dimensional Eyring-Powell fluid flow over an exponentially stretching surface with heterogeneous-homogeneous chemical reactions. A new model of heat flux suggested by Cattaneo and Christov is employed to study the properties of the relaxation time. From the present analysis we observe that there is an inverse relationship between temperature and thermal relaxation time. The temperature in the Cattaneo-Christov heat flux model is lower than in the classical Fourier model. In this paper the three-dimensional Cattaneo-Christov heat flux model over an exponentially stretching surface is calculated for the first time in the literature. For negative values of the temperature exponent, the temperature profile first intensifies to its maximum value and then gradually declines to zero, which shows the occurrence of the "Sparrow-Gregg hill" (SGH) phenomenon. Also, for higher values of the strength of the reaction parameters, the concentration profile decreases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weisiger, R.A.; Mendel, C.M.; Cavalieri, R.R.
1986-03-01
Two general models have been proposed for predicting the effects of metabolism, protein binding, and plasma flow on the removal of drugs by the liver. These models differ in the degree of plasma mixing assumed to exist within each hepatic sinusoid. The venous equilibrium model treats the sinusoid as a single well-stirred compartment, whereas the sinusoidal model effectively breaks up the sinusoid into a large number of sequentially perfused compartments which do not exchange their contents except through plasma flow. As a consequence, the sinusoidal model, but not the venous equilibrium model, predicts that the concentration of highly extracted drugs will decline as the plasma flows through the hepatic lobule. To determine which of these alternative models best describes the hepatic uptake process, we looked for evidence that concentration gradients are formed during the uptake of (125I)thyroxine by the perfused rat liver. Autoradiography of tissue slices after perfusion of the portal vein at physiologic flow rates with protein-free buffer containing (125I)thyroxine demonstrated a rapid exponential fall in grain density with distance from the portal venule, declining by half for each 8% of the mean length of the sinusoid. Reversing the direction of perfusate flow reversed the direction of the autoradiographic gradients, indicating that they primarily reflect differences in the concentration of thyroxine within the hepatic sinusoids rather than differences in the uptake capacity of portal and central hepatocytes. Analysis of the data using models in which each sinusoid was represented by different numbers of sequentially perfused compartments (1-20) indicated that at least eight compartments were necessary to account for the magnitude of the gradients seen.
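A minimal numerical sketch of the competing models (illustrative numbers, not the study's data): representing the sinusoid as N sequentially perfused compartments yields a roughly exponential fall in concentration along the flow path, while the well-stirred venous equilibrium case is the N = 1 limit:

```python
import numpy as np

# Sketch (illustrative numbers): steady-state drug concentration along a sinusoid
# modeled as N sequentially perfused compartments, each removing a fixed fraction.
# N = 1 is the well-stirred (venous equilibrium) limit.
def profile(n_compartments, total_extraction=0.9):
    # per-compartment extraction chosen so the overall extraction is fixed
    e = 1.0 - (1.0 - total_extraction) ** (1.0 / n_compartments)
    c = [1.0]                              # normalised inlet concentration
    for _ in range(n_compartments):
        c.append(c[-1] * (1.0 - e))
    return np.round(np.array(c), 3)

print(profile(1))   # one step: [1.0, 0.1]
print(profile(8))   # ~exponential decline along the flow path
```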
Bacterial Associates Modify Growth Dynamics of the Dinoflagellate Gymnodinium catenatum
Bolch, Christopher J. S.; Bejoy, Thaila A.; Green, David H.
2017-01-01
Marine phytoplankton cells grow in close association with a complex microbial associate community known to affect the growth, behavior, and physiology of the algal host. The relative scale and importance of these effects compared to other major factors governing algal cell growth remain unclear. Using algal-bacteria co-culture models based on the toxic dinoflagellate Gymnodinium catenatum, we tested the hypothesis that associate bacteria exert an independent effect on host algal cell growth. Batch co-cultures of G. catenatum were grown under identical environmental conditions with simplified bacterial communities composed of one, two, or three bacterial associates. Modification of the associate community membership and complexity induced up to four-fold changes in dinoflagellate growth rate, equivalent to the effect of a 5°C change in temperature or an almost six-fold change in light intensity (20-115 µmol photons PAR m⁻² s⁻¹). Almost three-fold changes in both stationary-phase cell concentration and death rate were also observed. Co-culture with Roseobacter sp. DG874 reduced the dinoflagellate exponential growth rate and led to a more rapid death rate compared with mixed associate community controls or co-culture with either Marinobacter sp. DG879 or Alcanivorax sp. DG881. In contrast, associate bacteria concentration was positively correlated with dinoflagellate cell concentration during the exponential growth phase, indicating growth was limited by the supply of dinoflagellate-derived carbon. Bacterial growth increased rapidly at the onset of the declining and stationary phases, due either to increasing availability of algal-derived carbon induced by nutrient stress and autolysis, or, at mid-log phase in Roseobacter co-cultures, potentially to the onset of bacterially mediated cell lysis. Co-cultures with the three bacterial associates resulted in dinoflagellate and bacterial growth dynamics very similar to the more complex mixed bacterial community controls, suggesting that three-way co-cultures are sufficient to model the interaction and growth dynamics of more complex communities. This study demonstrates that algal associate bacteria independently modify the growth of the host cell under non-limiting growth conditions and supports the concept that algal-bacterial interactions are an important structuring mechanism in phytoplankton communities. PMID:28469613
CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.
Shalizi, Cosma Rohilla; Rinaldo, Alessandro
2013-04-01
The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling, or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGM's expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.
A stochastic evolutionary model generating a mixture of exponential distributions
NASA Astrophysics Data System (ADS)
Fenner, Trevor; Levene, Mark; Loizou, George
2016-02-01
Recent interest in human dynamics has stimulated the investigation of the stochastic processes that explain human behaviour in various contexts, such as mobile phone networks and social media. In this paper, we extend the stochastic urn-based model proposed in [T. Fenner, M. Levene, G. Loizou, J. Stat. Mech. 2015, P08015 (2015)] so that it can generate mixture models, in particular, a mixture of exponential distributions. The model is designed to capture the dynamics of survival analysis, traditionally employed in clinical trials, reliability analysis in engineering, and more recently in the analysis of large data sets recording human dynamics. The mixture modelling approach, which is relatively simple and well understood, is very effective in capturing heterogeneity in data. We provide empirical evidence for the validity of the model, using a data set of popular search engine queries collected over a period of 114 months. We show that the survival function of these queries is closely matched by the exponential mixture solution for our model.
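A two-component instance of the exponential mixture generated by the model can be written down and checked by sampling; the weight and rates below are arbitrary illustrations, not values fitted to the search-query data:

```python
import numpy as np

# Sketch: survival function of a two-component exponential mixture, the form the
# urn-based model generates. Weight and rates are arbitrary illustrations.
w, lam1, lam2 = 0.7, 0.05, 0.5
t = np.linspace(0, 100, 11)
S = w * np.exp(-lam1 * t) + (1 - w) * np.exp(-lam2 * t)   # S(t) = P(T > t)
print(np.round(S, 3))

# Empirical check by sampling from the mixture:
rng = np.random.default_rng(3)
comp = rng.random(100_000) < w
samples = np.where(comp, rng.exponential(1 / lam1, comp.size),
                         rng.exponential(1 / lam2, comp.size))
print(np.round([(samples > x).mean() for x in t], 3))     # matches S(t)
```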
Verification of the exponential model of body temperature decrease after death in pigs.
Kaliszan, Michal; Hauser, Roman; Kaliszan, Roman; Wiczling, Paweł; Buczyñski, Janusz; Penkowski, Michal
2005-09-01
The authors have conducted a systematic study in pigs to verify the models of post-mortem body temperature decrease currently employed in forensic medicine. Twenty-four-hour automatic temperature recordings were performed at four body sites, starting 1.25 h after pig killing in an industrial slaughterhouse under typical environmental conditions (19.5-22.5 degrees C). The animals had been randomly selected during a regular manufacturing process. The temperature decrease time plots, drawn starting 75 min after death for the eyeball, the orbit soft tissues, the rectum and muscle tissue, were found to fit the single-exponential thermodynamic model originally proposed by H. Rainy in 1868. In view of the actual intersubject variability, the addition of a second exponential term to the model was demonstrated to be statistically insignificant. Therefore, the two-exponential model for death time estimation frequently recommended in the forensic medicine literature, even if theoretically substantiated for individual test cases, provides no advantage as regards the reliability of estimation in an actual case. The claimed improvement in the precision of time of death estimation by reconstructing an individual curve on the basis of two dead body temperature measurements taken 1 h apart, or taken continuously for a longer time (about 4 h), has also been proved incorrect. It was demonstrated that the reported increase in precision of time of death estimation due to use of a multiexponential model, with individual exponential terms to account for the cooling rate of specific body sites separately, is artifactual. The results of this study support the use of the eyeball and/or the orbit soft tissues as temperature measuring sites shortly after death. A single-exponential model applied to eyeball cooling has been shown to provide a very precise estimation of the time of death up to approximately 13 h after death. For the period thereafter, a better estimation of the time of death is obtained from temperature data collected from the muscles or the rectum.
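The single-exponential model supported by the study is Newtonian cooling, T(t) = T_env + (T0 - T_env)*exp(-kt), which inverts directly for the post-mortem interval; the constants below are illustrative, not the study's estimates:

```python
import numpy as np

# Sketch of the supported single-exponential (Newtonian) cooling model:
# T(t) = T_env + (T0 - T_env) * exp(-k t). All constants are illustrative.
T_env, T0, k = 21.0, 38.6, 0.08        # ambient (C), initial body temp (C), 1/h

def time_since_death(T_measured):
    # invert the cooling curve for the post-mortem interval in hours
    return -np.log((T_measured - T_env) / (T0 - T_env)) / k

print(f"{time_since_death(30.0):.1f} h post mortem for a 30.0 C eyeball reading")
```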
NASA Astrophysics Data System (ADS)
Kuai, Zi-Xiang; Liu, Wan-Yu; Zhu, Yue-Min
2017-11-01
The aim of this work was to investigate the effect of multiple perfusion components on the pseudo-diffusion coefficient D* in the bi-exponential intravoxel incoherent motion (IVIM) model. Simulations were first performed to examine how the presence of multiple perfusion components influences D*. Real data from the livers (n = 31), spleens (n = 31) and kidneys (n = 31) of 31 volunteers were then acquired using DWI for the in vivo study, and the number of perfusion components in these tissues was determined, together with their perfusion fraction and D*, using an adaptive multi-exponential IVIM model. Finally, the bi-exponential model was applied to the real data, and the mean, standard deviation and coefficient of variation of D*, as well as the fitting residual, were calculated over the 31 volunteers for each of the three tissues and compared between them. The results of both the simulations and the in vivo study showed that, for the bi-exponential IVIM model, both the variance of D* and the fitting residual tended to increase when the number of perfusion components was increased or when the difference between perfusion components became large. In addition, it was found that the kidney presented the fewest perfusion components among the three tissues. The present study demonstrated that multi-component perfusion is a main factor causing high variance of D*, and the bi-exponential model should be used only when the tissues under investigation have few perfusion components, for example the kidney.
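For reference, the bi-exponential IVIM signal model discussed here is S(b)/S0 = f*exp(-b*D*) + (1 - f)*exp(-b*D); the sketch below fits synthetic data with illustrative parameter values:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the bi-exponential IVIM fit: S(b)/S0 = f*exp(-b*D*) + (1-f)*exp(-b*D).
# Synthetic signal; f, D and D* values are illustrative, not the study's.
def ivim(b, f, D, Dstar):
    return f * np.exp(-b * Dstar) + (1 - f) * np.exp(-b * D)

b = np.array([0, 10, 20, 50, 100, 200, 400, 800], float)   # s/mm^2
rng = np.random.default_rng(4)
signal = ivim(b, 0.2, 1.0e-3, 2.0e-2) + rng.normal(0.0, 0.005, b.size)

p0 = (0.1, 1e-3, 1e-2)
bounds = ([0.0, 0.0, 0.0], [1.0, 5e-3, 1e-1])
(f, D, Dstar), _ = curve_fit(ivim, b, signal, p0=p0, bounds=bounds)
print(f"f = {f:.2f}, D = {D:.2e}, D* = {Dstar:.2e}")
```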
Ihlen, Espen A. F.; van Schooten, Kimberley S.; Bruijn, Sjoerd M.; Pijnappels, Mirjam; van Dieën, Jaap H.
2017-01-01
Over the last decades, various measures have been introduced to assess stability during walking. All of these measures assume that gait stability may be equated with exponential stability, where dynamic stability is quantified by a Floquet multiplier or Lyapunov exponent. These specific constructs of dynamic stability assume that the gait dynamics are time independent and without phase transitions. In this case the temporal change in distance, d(t), between neighboring trajectories in state space is assumed to be an exponential function of time. However, results from walking models and empirical studies show that the assumptions of exponential stability break down in the vicinity of phase transitions that are present in each step cycle. Here we apply a general non-exponential construct of gait stability, called fractional stability, which can define dynamic stability in the presence of phase transitions. Fractional stability employs the fractional indices, α and β, of the differential operator, which allow modeling of singularities in d(t) that cannot be captured by exponential stability. Fractional stability provided an improved fit of d(t) compared to exponential stability when applied to trunk accelerations during daily-life walking in community-dwelling older adults. Moreover, using multivariate empirical mode decomposition surrogates, we found that the singularities in d(t), which were well modeled by fractional stability, are created by phase-dependent modulation of gait. The new construct of fractional stability may represent a physiologically more valid concept of stability in the vicinity of phase transitions and may thus pave the way for a more unified concept of gait stability. PMID:28900400
Stavn, R H
1988-01-15
The role of the Lambert-Beer law in ocean optics is critically examined. The Lambert-Beer law and the three-parameter model of the submarine light field are used to construct an optical energy budget for any hydrosol. It is further applied to the analytical exponential decay coefficient of the light field and used to estimate the optical properties and effects of the dissolved/suspended component in upper ocean layers. The concepts of the empirical exponential decay coefficient (diffuse attenuation coefficient) of the light field and a constant exponential decay coefficient for molecular water are analyzed quantitatively. A constant exponential decay coefficient for water is rejected. The analytical exponential decay coefficient is used to analyze optical gradients in ocean waters.
A review of the matrix-exponential formalism in radiative transfer
NASA Astrophysics Data System (ADS)
Efremenko, Dmitry S.; Molina García, Víctor; Gimeno García, Sebastián; Doicu, Adrian
2017-07-01
This paper outlines the matrix exponential description of radiative transfer. The eigendecomposition method which serves as a basis for computing the matrix exponential and for representing the solution in a discrete ordinate setting is considered. The mathematical equivalence of the discrete ordinate method, the matrix operator method, and the matrix Riccati equations method is proved rigorously by means of the matrix exponential formalism. For optically thin layers, approximate solution methods relying on the Padé and Taylor series approximations to the matrix exponential, as well as on the matrix Riccati equations, are presented. For optically thick layers, the asymptotic theory with higher-order corrections is derived, and parameterizations of the asymptotic functions and constants for a water-cloud model with a Gamma size distribution are obtained.
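The thin-layer approximations mentioned can be illustrated generically: for a matrix of small norm, a short truncated Taylor series already tracks a Padé-based matrix exponential (scipy's expm) closely. This is a numerical sketch, not a radiative transfer code:

```python
import numpy as np
from scipy.linalg import expm

# Generic numerical sketch (not a radiative-transfer code): for an optically thin
# "layer" matrix with small norm, a short truncated Taylor series already tracks a
# Pade-based matrix exponential (scipy's expm) closely.
rng = np.random.default_rng(5)
A = 0.1 * rng.normal(size=(6, 6))      # small norm, mimicking a thin layer

def taylor_expm(A, terms):
    out = np.eye(len(A))
    term = np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k            # accumulates A**k / k!
        out = out + term
    return out

print(np.max(np.abs(taylor_expm(A, 4) - expm(A))))    # few terms: coarse
print(np.max(np.abs(taylor_expm(A, 10) - expm(A))))   # more terms: near machine precision
```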
NASA Astrophysics Data System (ADS)
Molina Garcia, Victor; Sasi, Sruthy; Efremenko, Dmitry; Doicu, Adrian; Loyola, Diego
2017-04-01
In this work, the requirements for the retrieval of cloud properties in the back-scattering region are described, and their application to the measurements taken by the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) is shown. Various radiative transfer models and their linearizations are implemented, and their advantages and issues are analyzed. As radiative transfer calculations in the back-scattering region are computationally time-consuming, several acceleration techniques are also studied. The radiative transfer models analyzed include the exact Discrete Ordinate method with Matrix Exponential (DOME), the Matrix Operator method with Matrix Exponential (MOME), and the approximate asymptotic and equivalent Lambertian cloud models. To reduce the computational cost of the line-by-line (LBL) calculations, the k-distribution method, the Principal Component Analysis (PCA) and a combination of the k-distribution method plus PCA are used. The linearized radiative transfer models for retrieval of cloud properties include the Linearized Discrete Ordinate method with Matrix Exponential (LDOME), the Linearized Matrix Operator method with Matrix Exponential (LMOME) and the Forward-Adjoint Discrete Ordinate method with Matrix Exponential (FADOME). These models were applied to the EPIC oxygen-A band absorption channel at 764 nm. It is shown that the approximate asymptotic and equivalent Lambertian cloud models give inaccurate results, so an offline processor for the retrieval of cloud properties in the back-scattering region requires the use of exact models such as DOME and MOME, which behave similarly. The combination of the k-distribution method plus PCA presents similar accuracy to the LBL calculations, but it is up to 360 times faster, and the relative errors for the computed radiances are less than 1.5% compared to the results when the exact phase function is used. Finally, the linearized models studied show similar behavior, with relative errors less than 1% for the radiance derivatives, but FADOME is 2 times faster than LDOME and 2.5 times faster than LMOME.
Concentration variance decay during magma mixing: a volcanic chronometer.
Perugini, Diego; De Campos, Cristina P; Petrelli, Maurizio; Dingwell, Donald B
2015-09-21
The mixing of magmas is a common phenomenon in explosive eruptions. Concentration variance is a useful metric of this process, and its decay (CVD) with time is an inevitable consequence of the progress of magma mixing. In order to calibrate this petrological/volcanological clock we have performed a time series of high-temperature magma mixing experiments. The results of these experiments demonstrate that compositional variance decays exponentially with time. With this calibration the CVD rate (CVD-R) becomes a new geochronometer for the time lapse from initiation of mixing to eruption. The resultant novel technique is fully independent of the typically unknown advective history of mixing - a notorious uncertainty which plagues the application of many diffusional analyses of magmatic history. Using the calibrated CVD-R technique we have obtained mingling-to-eruption times for three explosive volcanic eruptions from Campi Flegrei (Italy) in the range of tens of minutes. These in turn imply ascent velocities of 5-8 meters per second. We anticipate the routine application of the CVD-R geochronometer to the eruptive products of active volcanoes in the future, in order to constrain typical "mixing to eruption" time lapses so that monitoring activities can be targeted at relevant timescales and signals during volcanic unrest.
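The chronometer logic reduces to inverting an exponential decay: if the concentration variance obeys var(t) = var0*exp(-Rt) with an experimentally calibrated rate R, the variance preserved in the erupted product yields the mixing-to-eruption time. The numbers below are placeholders, not the calibrated CVD-R values:

```python
import numpy as np

# Sketch of the CVD clock: with exponential decay var(t) = var0 * exp(-R t) and a
# calibrated rate R, the variance measured in the erupted product dates the mixing.
# R and the variances below are placeholders, not the calibrated CVD-R values.
R = 0.05                              # decay rate per minute (hypothetical)
var0, var_erupted = 1.0, 0.15         # normalised initial and erupted variances

t = -np.log(var_erupted / var0) / R   # mixing-to-eruption time
print(f"{t:.0f} minutes")             # tens of minutes, as reported above
```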
Recurrence time statistics for finite size intervals
NASA Astrophysics Data System (ADS)
Altmann, Eduardo G.; da Silva, Elton C.; Caldas, Iberê L.
2004-12-01
We investigate the statistics of recurrences to finite size intervals for chaotic dynamical systems. We find that the typical distribution presents an exponential decay for almost all recurrence times, except for a few short times affected by a kind of memory effect. We interpret this effect as being related to the unstable periodic orbits inside the interval. Although it is restricted to a few short times, it changes the whole distribution of recurrences. We show that for systems with strong mixing properties the exponential decay converges to Poissonian statistics when the width of the interval goes to zero. However, we caution that special attention to the size of the interval is required in order to guarantee that the short-time memory effect is negligible when one is interested in numerically or experimentally calculated Poincaré recurrence time statistics.
Lai, K P K; Dolan, K D; Ng, P K W
2009-06-01
Thermal and moisture effects on grape anthocyanin degradation were investigated using solid media to simulate processing at temperatures above 100 °C. Grape pomace (anthocyanin source) mixed with wheat pastry flour (1:3, w/w dry basis) was used in both isothermal and nonisothermal experiments by heating the same mixture at 43% (db) initial moisture in steel cells in an oil bath at 80, 105, and 145 °C. To determine the effect of moisture on anthocyanin degradation, the grape pomace-wheat flour mixture was heated isothermally at 80 °C at constant moisture contents of 10%, 20%, and 43% (db). Anthocyanin degradation followed a pseudo first-order reaction with moisture. Anthocyanins degraded more rapidly with increasing temperature and moisture. The effects of temperature and moisture on the rate constant were modeled according to the Arrhenius and an exponential relationship, respectively. The nonisothermal reaction rate constant and activation energy (mean ± standard error) were k(80 °C, 43% db moisture) = 2.81 × 10⁻⁴ ± 1.1 × 10⁻⁶ s⁻¹ and ΔE = 75273 ± 197 J/(g mol), respectively. The moisture parameter for the exponential model was 4.28 (dry basis moisture content)⁻¹. One possible application of this study is as a tool to predict the loss of anthocyanins in nutraceutical products containing grape pomace. For example, if the process temperature history and moisture history in an extruded snack fortified with grape pomace are known, the percentage anthocyanin loss can be predicted.
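Using the reported constants, the suggested prediction tool amounts to integrating first-order decay with an Arrhenius temperature factor and an exponential moisture factor. The sketch below assumes a particular reference-state convention for combining the two factors, which the abstract does not fully specify:

```python
import numpy as np

# Sketch of the prediction tool suggested above: first-order anthocyanin decay with
# an Arrhenius temperature factor and an exponential moisture factor. The reference
# state (80 C, 43% db) used to combine the factors is an assumption here.
R = 8.314                     # J/(mol K)
k_ref = 2.81e-4               # 1/s at the reference state (from the abstract)
T_ref, M_ref = 353.15, 0.43   # 80 C in K; 43% dry-basis moisture
Ea, b = 75273.0, 4.28         # J/mol; (db moisture)^-1 (from the abstract)

def k(T, M):
    arrhenius = np.exp(-Ea / R * (1.0 / T - 1.0 / T_ref))
    moisture = np.exp(b * (M - M_ref))
    return k_ref * arrhenius * moisture

# Example: 5 min at 105 C and 30% db moisture, integrated in 1 s steps.
C = 1.0
for _ in range(300):
    C *= np.exp(-k(378.15, 0.30) * 1.0)   # first-order decay over dt = 1 s
print(f"predicted anthocyanin retention: {100 * C:.0f}%")
```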
Bayesian Analysis for Exponential Random Graph Models Using the Adaptive Exchange Sampler.
Jin, Ick Hoon; Yuan, Ying; Liang, Faming
2013-10-01
Exponential random graph models have been widely used in social network analysis. However, these models are extremely difficult to handle from a statistical viewpoint, because of the intractable normalizing constant and model degeneracy. In this paper, we consider a fully Bayesian analysis for exponential random graph models using the adaptive exchange sampler, which solves the intractable normalizing constant and model degeneracy issues encountered in Markov chain Monte Carlo (MCMC) simulations. The adaptive exchange sampler can be viewed as a MCMC extension of the exchange algorithm, and it generates auxiliary networks via an importance sampling procedure from an auxiliary Markov chain running in parallel. The convergence of this algorithm is established under mild conditions. The adaptive exchange sampler is illustrated using a few social networks, including the Florentine business network, molecule synthetic network, and dolphins network. The results indicate that the adaptive exchange algorithm can produce more accurate estimates than approximate exchange algorithms, while maintaining the same computational efficiency.
Fracture analysis of a central crack in a long cylindrical superconductor with exponential model
NASA Astrophysics Data System (ADS)
Zhao, Yu Feng; Xu, Chi
2018-05-01
The fracture behavior of a long cylindrical superconductor is investigated by modeling a central crack that is induced by the electromagnetic force. Based on the exponential model, the stress intensity factors (SIFs) are numerically simulated as functions of the dimensionless parameter p and the normalized crack length a/R for the zero-field-cooling (ZFC) and field-cooling (FC) processes, using the finite element method (FEM) and assuming a persistent current flow. As the applied field Ba decreases, the dependence of the SIFs on p and a/R in the ZFC process is exactly opposite to that observed in the FC process. Numerical results indicate that the exponential model exhibits trends in the SIFs different from the results obtained using the Bean and Kim models. This implies that the crack length and the trapped field have significant effects on the fracture behavior of bulk superconductors. The obtained results are useful for understanding the critical-state model of high-temperature superconductors in crack problems.
A Numerical Study of Convection in a Condensing CO2 Atmosphere under Early Mars-Like Conditions
NASA Astrophysics Data System (ADS)
Nakajima, Kensuke; Yamashita, Tatsuya; Odaka, Masatsugu; Sugiyama, Ko-ichiro; Ishiwatari, Masaki; Nishizawa, Seiya; Takahashi, Yoshiyuki O.; Hayashi, Yoshi-Yuki
2017-10-01
Cloud convection of a CO2 atmosphere in which the major constituent condenses is numerically investigated under a setup idealizing a possible warm atmosphere of early Mars, utilizing a two-dimensional cloud-resolving model forced by a fixed cooling profile as a substitute for a radiative process. The authors compare two cases with different critical saturation ratios as condensation criteria and also examine sensitivity to the number mixing ratio of condensed particles, which is given externally. When supersaturation is not necessary for condensation, the entire horizontal domain above the condensation level is continuously covered by clouds, irrespective of the number mixing ratio of condensed particles. Horizontal-mean cloud mass density decreases exponentially with height. The circulations below and above the condensation level are dominated by dry cellular convection and buoyancy waves, respectively. When 1.35 is adopted as the critical saturation ratio, clouds appear exclusively as intense, short-lived, quasi-periodic events. Clouds start just above the condensation level and develop upward, but intense updrafts exist only around the cloud top; they do not extend to the bottom of the condensation layer. The cloud layer is rapidly warmed by latent heat during the cloud events, and then the layer is slowly cooled by the specified thermal forcing, and supersaturation gradually develops, leading to the next cloud event. The periodic appearance of cloud events does not occur when the number mixing ratio of condensed particles is large.
NASA Astrophysics Data System (ADS)
Krugon, Seelam; Nagaraju, Dega
2017-05-01
This work describes and proposes a two-echelon inventory system in a supply chain in which the manufacturer offers a credit period to the retailer under exponential price-dependent demand. The model assumes that demand is an exponential function of the retailer's unit selling price. A mathematical model is formulated to determine the optimal cycle time, retailer replenishment quantity, number of shipments, and total relevant cost of the supply chain. The major objective of the paper is to incorporate trade credit from the manufacturer to the retailer with exponential price-dependent demand; the retailer would like to delay payments to the manufacturer. In the first stage, the retailer and manufacturer cost expressions are written as functions of ordering cost, carrying cost, and transportation cost. In the second stage, the manufacturer and retailer expressions are combined. A MATLAB program is written to derive the optimal cycle time, retailer replenishment quantity, number of shipments, and total relevant cost of the supply chain, and managerial insights can be drawn from the derived optimality criteria. To analyse the influence of the model parameters, a parametric analysis is also performed with the help of a numerical example. From the research findings, it is evident that the total cost of the supply chain decreases as the credit period increases under exponential price-dependent demand.
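A hypothetical numerical sketch of this kind of optimisation; the cost terms, the credit-period saving, and all parameter values are invented for illustration and are not the paper's formulation:

```python
import numpy as np

# Hypothetical sketch: total supply-chain cost per unit time versus cycle time Tc,
# with demand an exponential function of the retailer's unit selling price,
# D(p) = a*exp(-b*p). Cost terms and all values are invented for illustration.
a, b, p = 5000.0, 0.05, 40.0               # demand parameters, unit price
D = a * np.exp(-b * p)                     # annual demand at price p
A_o, h, c_t = 120.0, 2.5, 60.0             # ordering, holding, transport costs
credit_saving = 0.4                        # per-unit saving from the trade credit

def total_cost(Tc):
    # ordering + holding + transport - credit-period benefit
    return A_o / Tc + h * D * Tc / 2.0 + c_t / Tc - credit_saving * D

Tc_grid = np.linspace(0.02, 1.0, 500)
Tc_star = Tc_grid[np.argmin(total_cost(Tc_grid))]
print(f"optimal cycle time: {Tc_star:.3f} yr, Q* = {D * Tc_star:.0f} units")
```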
Formal Methods for Cryptographic Protocol Analysis: Emerging Issues and Trends
2003-01-01
signatures, which depend upon the homomorphic properties of RSA. Other algorithms and data structures, such as Chaum mixes [17], designed for... Communications Security, pages 176-185. ACM, November 2001. [17] D. Chaum. Untraceable electronic mail, return addresses and digital signatures... something like the Diffie-Hellman algorithm, which depends, as a minimum, on the commutative properties of exponentiation, or something like Chaum's blinded...
Bending and Force Recovery in Polymer Films and Microgel Formation
NASA Astrophysics Data System (ADS)
Elder, Theresa Marie
To determine the correlation between geometry and material, three model films, polydimethylsiloxane (PDMS), polystyrene (PS), and polycarbonate (PC), were singly bent and doubly bent (forming D-cones). Bends were chosen as they are fundamental to larger complex geometries such as origami and crumples. Bending was carried out between two plates while taking force and displacement measurements. Processing of the data using moment equations yielded values for the bending moduli of the studied films that were close to accepted values. Force recovery showed logarithmic trends for PDMS and stretched-exponential trends for PS and PC. In a separate experiment, a triblock copolymer of polystyrene-polyacrylic acid-polystyrene was subjected to different good/bad solvent mixtures and the resulting particle morphologies were examined. Particles formed more uniformly at high water concentration, while particles formed at high toluene concentration with agitation yielded three separate morphologies.
NASA Astrophysics Data System (ADS)
Taousser, Fatima; Defoort, Michael; Djemai, Mohamed
2016-01-01
This paper investigates the consensus problem for linear multi-agent system with fixed communication topology in the presence of intermittent communication using the time-scale theory. Since each agent can only obtain relative local information intermittently, the proposed consensus algorithm is based on a discontinuous local interaction rule. The interaction among agents happens at a disjoint set of continuous-time intervals. The closed-loop multi-agent system can be represented using mixed linear continuous-time and linear discrete-time models due to intermittent information transmissions. The time-scale theory provides a powerful tool to combine continuous-time and discrete-time cases and study the consensus protocol under a unified framework. Using this theory, some conditions are derived to achieve exponential consensus under intermittent information transmissions. Simulations are performed to validate the theoretical results.
2010-06-01
GMKPF represents a better and more flexible alternative to the Gaussian Maximum Likelihood (GML) and Exponential Maximum Likelihood (EML)... accurate results relative to GML and EML when the network delays are modeled in terms of a single non-Gaussian/non-exponential distribution or as a... to the Gaussian Maximum Likelihood (GML) and Exponential Maximum Likelihood (EML) estimators for clock offset estimation in non-Gaussian or non
Zhu, Chaoyuan; Lin, Sheng Hsien
2006-07-28
A unified semiclassical solution for general nonadiabatic tunneling between two adiabatic potential energy surfaces is established by employing the unified semiclassical solution for pure nonadiabatic transition [C. Zhu, J. Chem. Phys. 105, 4159 (1996)] with a certain symmetry transformation. This symmetry comes from a detailed analysis of the reduced scattering matrix for the Landau-Zener type of crossing as a special case of nonadiabatic transition and nonadiabatic tunneling. The traditional classification into crossing and noncrossing types of nonadiabatic transition can be quantitatively defined by the rotation angle of the adiabatic-to-diabatic transformation, and this rotation angle enters the analytical solution for general nonadiabatic tunneling. Two-state exponential potential models are employed for numerical tests, and the calculations from the present general nonadiabatic tunneling formula are demonstrated to be in very good agreement with the results from exact quantum mechanical calculations. The present general nonadiabatic tunneling formula can be incorporated with various mixed quantum-classical methods for modeling electronically nonadiabatic processes in photochemistry.
Obtaining the Grobner Initialization for the Ground Flash Fraction Retrieval Algorithm
NASA Technical Reports Server (NTRS)
Solakiewicz, R.; Attele, R.; Koshak, W.
2011-01-01
At optical wavelengths and from the vantage point of space, the multiple-scattering cloud medium obscures one's view and prevents one from easily determining which flashes strike the ground. However, recent investigations have made some progress on the (easier, but still difficult) problem of estimating the ground flash fraction in a set of N flashes observed from space. In the study by Koshak, a Bayesian inversion method was introduced for retrieving the fraction of ground flashes in a set of flashes observed by a (low earth orbiting or geostationary) satellite lightning imager. The method employed a constrained mixed exponential distribution model to describe the lightning optical measurements. To obtain the optimum model parameters, a scalar function of three variables (one of which is the ground flash fraction) was minimized by a numerical method. This method has formed the basis of a Ground Flash Fraction Retrieval Algorithm (GoFFRA) that is being tested as part of GOES-R GLM risk reduction.
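The minimisation step described, a scalar function of three variables, one of which is the ground flash fraction, can be sketched as maximum likelihood for a two-component exponential mixture on synthetic data (the measured optical quantity and all parameter values here are illustrative, not GoFFRA's):

```python
import numpy as np
from scipy.optimize import minimize

# Sketch: estimate the ground flash fraction alpha by fitting a two-component
# (mixed) exponential to synthetic optical measurements. The measured quantity
# and all parameter values are illustrative, not GoFFRA's actual setup.
rng = np.random.default_rng(6)
alpha_true, lam_g, lam_c = 0.3, 1.0 / 5.0, 1.0 / 20.0
n = 2000
is_ground = rng.random(n) < alpha_true
x = np.where(is_ground, rng.exponential(1 / lam_g, n), rng.exponential(1 / lam_c, n))

def nll(theta):
    a, lg, lc = theta
    if not (0.0 < a < 1.0 and lg > 0.0 and lc > 0.0):
        return np.inf                      # keep the search in the valid region
    pdf = a * lg * np.exp(-lg * x) + (1 - a) * lc * np.exp(-lc * x)
    return -np.sum(np.log(pdf))

res = minimize(nll, x0=(0.5, 0.1, 0.03), method="Nelder-Mead")
print("estimated ground flash fraction:", round(res.x[0], 3))
```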
On Revenue-Optimal Dynamic Auctions for Bidders with Interdependent Values
NASA Astrophysics Data System (ADS)
Constantin, Florin; Parkes, David C.
In a dynamic market, being able to update one's value based on information available to other bidders currently in the market can be critical to having profitable transactions. This is nicely captured by the model of interdependent values (IDV): a bidder's value can explicitly depend on the private information of other bidders. In this paper we present preliminary results about the revenue properties of dynamic auctions for IDV bidders. We adopt a computational approach to design single-item revenue-optimal dynamic auctions with known arrivals and departures but (private) signals that arrive online. In leveraging a characterization of truthful auctions, we present a mixed-integer programming formulation of the design problem. Although a discretization is imposed on bidder signals the solution is a mechanism applicable to continuous signals. The formulation size grows exponentially in the dependence of bidders' values on other bidders' signals. We highlight general properties of revenue-optimal dynamic auctions in a simple parametrized example and study the sensitivity of prices and revenue to model parameters.
NASA Astrophysics Data System (ADS)
Ivashchuk, V. D.; Ernazarov, K. K.
2017-01-01
A (n + 1)-dimensional gravitational model with cosmological constant and Gauss-Bonnet term is studied. The ansatz with diagonal cosmological metrics is adopted, and solutions with exponential dependence of the scale factors, a_i ~ exp(v_i t), i = 1, ..., n, are considered. The stability analysis of the solutions with non-static volume factor is presented. We show that the solutions with v_1 = v_2 = v_3 = H > 0 and small enough variation of the effective gravitational constant G are stable if a certain restriction on (v_i) is obeyed. New examples of stable exponential solutions with zero variation of G in dimensions D = 1 + m + 2 with m > 2 are presented.
NASA Astrophysics Data System (ADS)
Elmegreen, Bruce G.
2016-10-01
Exponential radial profiles are ubiquitous in spiral and dwarf irregular galaxies, but the origin of this structural form is not understood. This talk will review the observations of exponential and double-exponential disks, considering both the light and the mass profiles, and the contributions from stars and gas. Several theories for this structure will also be reviewed, including primordial collapse, bar and spiral torques, clump torques, galaxy interactions, disk viscosity and other internal processes of angular momentum exchange, and stellar scattering off of clumpy structure. The only process currently known that can account for this structure in the most theoretically difficult case is stellar scattering off disk clumps. Stellar orbit models suggest that such scattering can produce exponentials even in isolated dwarf irregulars that have no bars or spirals, little shear or viscosity, and profiles that extend too far for the classical Mestel case of primordial collapse with specific angular momentum conservation.
Effects of Preseason Training on the Sleep Characteristics of Professional Rugby League Players.
Thornton, Heidi R; Delaney, Jace A; Duthie, Grant M; Dascombe, Ben J
2018-02-01
To investigate the influence of daily and exponentially weighted moving training loads on subsequent nighttime sleep. Sleep of 14 professional rugby league athletes competing in the National Rugby League was recorded using wristwatch actigraphy. Physical demands were quantified using GPS technology, including total distance, high-speed distance, acceleration/deceleration load (SumAccDec; AU), and session rating of perceived exertion (AU). Linear mixed models determined effects of acute (daily) and subacute (3- and 7-d) exponentially weighted moving averages (EWMA) on sleep. Higher daily SumAccDec was associated with increased sleep efficiency (effect-size correlation; ES = 0.15; ±0.09) and sleep duration (ES = 0.12; ±0.09). Greater 3-d EWMA SumAccDec was associated with increased sleep efficiency (ES = 0.14; ±0.09) and an earlier bedtime (ES = 0.14; ±0.09). An increase in 7-d EWMA SumAccDec was associated with heightened sleep efficiency (ES = 0.15; ±0.09) and earlier bedtimes (ES = 0.15; ±0.09). The direction of the associations between training loads and sleep varied, but the strongest relationships showed that higher training loads increased various measures of sleep. Practitioners should be aware of the increased requirement for sleep during intensified training periods, using this information in the planning and implementation of training and individualized recovery modalities.
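The 3- and 7-day EWMA loads referred to are typically computed recursively; a common convention (which may differ in detail from the authors' implementation) uses lambda = 2/(N + 1):

```python
import numpy as np

# Sketch of an exponentially weighted moving average (EWMA) of daily load with
# lambda = 2/(N + 1), a common convention; the authors' exact implementation
# may differ. Daily SumAccDec values below are made up.
def ewma(loads, n_days):
    lam = 2.0 / (n_days + 1)
    out, prev = [], loads[0]
    for x in loads:
        prev = lam * x + (1 - lam) * prev   # today's load decays into the average
        out.append(prev)
    return np.array(out)

daily_sumaccdec = np.array([310, 0, 450, 380, 0, 520, 300, 410], float)  # AU
print(np.round(ewma(daily_sumaccdec, 3), 1))   # 3-day EWMA
print(np.round(ewma(daily_sumaccdec, 7), 1))   # 7-day EWMA
```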
NASA Astrophysics Data System (ADS)
Gray, H. J.; Tucker, G. E.; Mahan, S.
2017-12-01
Luminescence is a property of matter that can be used to obtain depositional ages from fine sand. Luminescence accumulates due to exposure to background ionizing radiation and is removed by sunlight exposure in a process known as bleaching. There is evidence to suggest that luminescence can also serve as a sediment tracer in fluvial and hillslope environments. For hillslope environments, it has been suggested that the magnitude of luminescence as a function of soil depth is related to the strength of soil mixing. Hillslope soils with a greater extent of mixing will have previously surficial sand grains moved to greater depths in the soil column. These previously surface-exposed grains will contain a lower luminescence than those which have never seen the surface. To attempt to connect luminescence profiles with the soil mixing rate, here defined as the soil vertical diffusivity, I conduct numerical modelling of particles in hillslope soils coupled with equations describing the physics of luminescence. I use recently published equations describing the trajectories of particles under both exponential and uniform soil velocity profiles and modify them to include soil diffusivity. Results from the model demonstrate a strong connection between soil diffusivity and luminescence. Both the depth profiles of luminescence and the total percentage of surface-exposed grains change drastically with the magnitude of the diffusivity. This suggests that luminescence could potentially be used to infer the magnitude of soil diffusivity. However, I test other variables, such as the soil production rate, the e-folding length of soil velocity, the background dose rate, and the soil thickness, and find that these other variables can also affect the relationship between luminescence and diffusivity. This suggests that these other variables may need to be constrained prior to any inference of soil diffusivity from luminescence measurements. Further field testing of the model in areas where the soil vertical diffusivity and other parameters are independently known will provide a test of this potential new method.
Exponential series approaches for nonparametric graphical models
NASA Astrophysics Data System (ADS)
Janofsky, Eric
Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.
Palombo, Marco; Gabrielli, Andrea; De Santis, Silvia; Capuani, Silvia
2012-03-01
In this paper, we investigate the image contrast that characterizes anomalous and non-Gaussian diffusion images obtained using the stretched exponential model. This model is based on the introduction of the stretching parameter γ, which quantifies the deviation from mono-exponential decay of the diffusion signal as a function of the b-value. To date, the biophysical substrate underpinning the contrast observed in γ maps, in other words the biophysical interpretation of the γ parameter (or of the fractional order derivative in space, the β parameter), is still not fully understood, although it has already been applied to investigate both animal models and the human brain. Due to the ability of γ maps to reflect additional microstructural information which cannot be obtained using diffusion procedures based on Gaussian diffusion, some authors propose this parameter as a measure of diffusion heterogeneity or water compartmentalization in biological tissues. Based on our recent work, we suggest here that the coupling between internal and diffusion gradients provides pseudo-superdiffusion effects which are quantified by the stretching exponential parameter γ. This means that the image contrast of Mγ maps reflects local magnetic susceptibility differences (Δχ_m), thus highlighting better than T2* contrast the interfaces between compartments characterized by different Δχ_m. Thanks to this characteristic, Mγ imaging may represent an interesting tool for developing contrast-enhanced MRI for molecular imaging. The spectroscopic and imaging experiments (performed in controlled micro-bead dispersions) reported here strongly suggest internal gradients, and as a consequence Δχ_m, to be an important factor in fully understanding the source of contrast in anomalous diffusion methods based on a stretched exponential model analysis of diffusion data obtained at varying gradient strengths g. Copyright © 2012 Elsevier Inc. All rights reserved.
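The stretched exponential signal model referred to here is commonly written S(b) = S0 exp(-(b·DDC)^γ), with γ = 1 recovering mono-exponential (Gaussian) decay. A minimal fitting sketch follows; the data and parameter values are synthetic and illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(b, s0, ddc, gamma):
    """Stretched-exponential DWI signal: S = S0 * exp(-(b*DDC)**gamma)."""
    return s0 * np.exp(-(b * ddc) ** gamma)

b = np.array([0, 200, 400, 600, 800, 1000, 1500, 2000.])     # s/mm^2
signal = stretched_exp(b, 1.0, 1.2e-3, 0.75)                 # synthetic data
signal += np.random.default_rng(1).normal(0, 0.005, b.size)  # add noise

popt, _ = curve_fit(stretched_exp, b, signal,
                    p0=[1.0, 1e-3, 0.9],
                    bounds=([0, 1e-5, 0.1], [2, 1e-2, 1.0]))
print("S0=%.3f  DDC=%.2e mm^2/s  gamma=%.2f" % tuple(popt))
```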
SIR model on a dynamical network and the endemic state of an infectious disease
NASA Astrophysics Data System (ADS)
Dottori, M.; Fabricius, G.
2015-09-01
In this work we performed a numerical study of an epidemic model that mimics the endemic state of whooping cough in the pre-vaccine era. We considered a stochastic SIR model on dynamical networks that involve local and global contacts among individuals and analysed the influence of the network properties on the characterization of the quasi-stationary state. We computed probability density functions (PDFs) for the infected fraction of individuals and found that they are well fitted by gamma functions, except for the tails of the distributions, which are q-exponential. We also computed the fluctuation power spectra of infective time series for different networks. We found that network effects can be partially absorbed by rescaling the rate of infective contacts of the model. An explicit relation between the effective transmission rate of the disease and the correlation of susceptible individuals with their infective nearest neighbours was obtained. This relation quantifies the known screening of infective individuals observed in these networks. We finally discuss the adequacy and limitations of the SIR model with homogeneous mixing and parameters taken from epidemiological data in describing the dynamic behaviour observed in the networks studied.
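As a schematic of the quasi-stationary endemic state discussed above, the sketch below simulates a homogeneously mixed stochastic SIR model with demographic turnover using Gillespie's algorithm. It omits the paper's network structure, and all rates are illustrative rather than fitted to whooping cough data.

```python
import numpy as np

rng = np.random.default_rng(2)

def gillespie_sir(n=10000, beta=0.7, gamma=1/21, mu=1/3650, i0=50, t_max=3000.0):
    """Stochastic SIR with homogeneous mixing (rates per day). Demographic
    turnover R -> S at rate mu keeps the population constant and sustains
    the endemic quasi-stationary state. Illustrative parameters only."""
    s, i = n - i0, i0
    t, times, infected = 0.0, [], []
    while t < t_max and i > 0:
        rates = np.array([beta * s * i / n,     # infection:  S -> I
                          gamma * i,            # recovery:   I -> R
                          mu * (n - s - i)])    # turnover:   R -> S
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        event = rng.choice(3, p=rates / total)
        if event == 0:
            s -= 1; i += 1
        elif event == 1:
            i -= 1
        else:
            s += 1
        times.append(t); infected.append(i / n)
    return np.array(times), np.array(infected)

times, i_frac = gillespie_sir()
print("endemic infected fraction: mean %.4f, sd %.4f" % (i_frac.mean(), i_frac.std()))
```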
Modeling sustainability in renewable energy supply chain systems
NASA Astrophysics Data System (ADS)
Xie, Fei
This dissertation aims at modeling the sustainability of renewable fuel supply chain systems against emerging challenges. In particular, the dissertation focuses on biofuel supply chain system design and develops advanced modeling frameworks, with corresponding solution methods, for tackling challenges in sustaining biofuel supply chain systems. These challenges include: (1) integrating "environmental thinking" into long-term biofuel supply chain planning; (2) adopting multimodal transportation to mitigate seasonality in biofuel supply chain operations; (3) providing strategies for hedging against uncertainty in conversion technology; and (4) developing methodologies for long-term sequential planning of the biofuel supply chain under uncertainties. All models are mixed-integer programs, which also involve multi-objective programming and two-stage/multistage stochastic programming methods. In particular, for long-term sequential planning under uncertainties, to reduce the computational challenges due to the exponential expansion of the scenario tree, I also developed an efficient ND-Max method that outperforms both CPLEX and the Nested Decomposition method. Through the result analysis of four independent studies, it is found that the proposed modeling frameworks can effectively improve economic performance, enhance environmental benefits and reduce risks due to system uncertainties for biofuel supply chain systems.
Growth and mortality of larval Myctophum affine (Myctophidae, Teleostei).
Namiki, C; Katsuragawa, M; Zani-Teixeira, M L
2015-04-01
The growth and mortality rates of Myctophum affine larvae were analysed based on samples collected during the austral summer and winter of 2002 from south-eastern Brazilian waters. The larvae ranged in size from 2·75 to 14·00 mm standard length (L_S). Daily increment counts from 82 sagittal otoliths showed that the age of M. affine ranged from 2 to 28 days. Three models were applied to estimate the growth rate: linear regression, an exponential model and the Laird-Gompertz model. The exponential model best fitted the data, and L_0 values from the exponential and Laird-Gompertz models were close to the smallest larva reported in the literature (c. 2·5 mm L_S). The average growth rate (0·33 mm day^-1) was intermediate among lanternfishes. The mortality rate (12%) during the larval period was below average compared with other marine fish species but similar to some epipelagic fishes that occur in the area. © 2015 The Fisheries Society of the British Isles.
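A minimal sketch of the model comparison: fitting exponential and Laird-Gompertz growth curves to illustrative age-length pairs (not the study's data) and comparing the sum of squared errors.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_growth(t, l0, g):
    """Exponential growth: L(t) = L0 * exp(g*t)."""
    return l0 * np.exp(g * t)

def laird_gompertz(t, l0, a, alpha):
    """Laird-Gompertz growth: L(t) = L0 * exp((a/alpha)*(1 - exp(-alpha*t)))."""
    return l0 * np.exp((a / alpha) * (1 - np.exp(-alpha * t)))

# Illustrative age (days) and standard length (mm) pairs
age = np.array([2, 5, 8, 11, 14, 18, 22, 26, 28.])
length = np.array([2.9, 3.6, 4.6, 5.7, 7.0, 8.9, 10.9, 13.1, 14.0])

for model, p0 in ((exp_growth, [2.5, 0.06]), (laird_gompertz, [2.5, 0.1, 0.05])):
    popt, _ = curve_fit(model, age, length, p0=p0, maxfev=10000)
    sse = np.sum((length - model(age, *popt)) ** 2)
    print(model.__name__, np.round(popt, 4), "SSE=%.3f" % sse)
```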
1/f oscillations in a model of moth populations oriented by diffusive pheromones
NASA Astrophysics Data System (ADS)
Barbosa, L. A.; Martins, M. L.; Lima, E. R.
2005-01-01
An individual-based model for the population dynamics of Spodoptera frugiperda in a homogeneous environment is proposed. The model involves moths feeding on plants, mating through an anemotaxis search (i.e., oriented by odor dispersed in a current of air), and dying due to resource competition or at a maximum age. As observed in the laboratory, the females release pheromones at exponentially distributed time intervals, and it is assumed that the ranges of the male flights follow a power-law distribution. Computer simulations of the model reveal the central role of the anemotaxis search in the persistence of the moth population. Such stationary populations are exponentially distributed in age, exhibit random temporal fluctuations with a 1/f spectrum, and self-organize into disordered spatial patterns with long-range correlations. In addition, the model results demonstrate that pest control through pheromone mass trapping is effective only if the amounts of pheromone released by the traps decay much more slowly than the exponential distribution for calling females.
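The two random ingredients named above are easy to reproduce; the snippet below samples exponentially distributed calling intervals and power-law flight ranges by inverse-CDF sampling, with illustrative parameter values.

```python
import numpy as np

rng = np.random.default_rng(3)

# Exponentially distributed intervals between female pheromone releases
mean_interval = 30.0                      # illustrative, in model time steps
call_gaps = rng.exponential(mean_interval, 10000)

# Power-law male flight ranges: P(l) ~ l**(-alpha) for l >= l_min
alpha, l_min = 2.0, 1.0                   # illustrative exponent and cutoff
u = rng.uniform(size=10000)
flights = l_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))  # inverse-CDF sampling

print("mean call gap:", call_gaps.mean())
print("median flight range:", np.median(flights))
```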
Performance of time-series methods in forecasting the demand for red blood cell transfusion.
Pereira, Arturo
2004-05-01
Planning future blood collection efforts must be based on adequate forecasts of transfusion demand. In this study, univariate time-series methods were investigated for their performance in forecasting the monthly demand for RBCs at one tertiary-care university hospital. Three time-series methods were investigated: autoregressive integrated moving average (ARIMA), the Holt-Winters family of exponential smoothing models, and one neural-network-based method. The time series consisted of the monthly demand for RBCs from January 1988 to December 2002 and was divided into two segments: the older one was used to fit or train the models, and the younger to test the accuracy of predictions. Performance was compared across forecasting methods by calculating goodness-of-fit statistics, the percentage of months in which forecast-based supply would have met the RBC demand (coverage rate), and the outdate rate. The RBC transfusion series was best fitted by a seasonal ARIMA(0,1,1)(0,1,1)_12 model. Over 1-year time horizons, forecasts generated by ARIMA or exponential smoothing lay within the +/-10 percent interval of the real RBC demand in 79 percent of months (62% in the case of neural networks). The coverage rates for the three methods were 89, 91, and 86 percent, respectively. Over 2-year time horizons, exponential smoothing largely outperformed the other methods. Predictions by exponential smoothing lay within the +/-10 percent interval of real values in 75 percent of the 24 forecasted months, and the coverage rate was 87 percent. Over 1-year time horizons, predictions of RBC demand generated by ARIMA or exponential smoothing are accurate enough to be of help in the planning of blood collection efforts. For longer time horizons, exponential smoothing outperforms the other forecasting methods.
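A sketch of the two best-performing approaches on synthetic monthly data, using statsmodels; the series, the hold-out split, and the +/-10 percent accuracy check are illustrative stand-ins for the hospital data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Illustrative monthly RBC demand with trend and annual seasonality
rng = np.random.default_rng(4)
idx = pd.date_range("1988-01", periods=180, freq="MS")
demand = (800 + 0.5 * np.arange(180)
          + 60 * np.sin(2 * np.pi * np.arange(180) / 12)
          + rng.normal(0, 25, 180))
y = pd.Series(demand, index=idx)
train, test = y[:-12], y[-12:]

hw = ExponentialSmoothing(train, trend="add", seasonal="add",
                          seasonal_periods=12).fit()
sarima = SARIMAX(train, order=(0, 1, 1),
                 seasonal_order=(0, 1, 1, 12)).fit(disp=False)

for name, fc in (("Holt-Winters", hw.forecast(12)),
                 ("SARIMA", sarima.forecast(12))):
    within = np.mean(np.abs(fc.values - test.values) / test.values <= 0.10)
    print(name, "within +/-10%% of actual in %.0f%% of months" % (100 * within))
```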
Nonlinear dynamic evolution and control in CCFN with mixed attachment mechanisms
NASA Astrophysics Data System (ADS)
Wang, Jianrong; Wang, Jianping; Han, Dun
2017-01-01
In recent years, wireless communication has played an increasingly important role in our lives. Cooperative communication, in which mobile stations with single antennas share resources to form a virtual MIMO antenna system, is expected to become a development trend in future wireless communication owing to its diversity gain. In this paper, a fitness-based network evolution model with mixed attachment mechanisms is devised in order to study an actual network, the cooperative communication fitness network (CCFN). Firstly, the evolution of the CCFN is described by four cases with different probabilities, and rate equations for the node degrees are presented to analyze this evolution. Secondly, the degree distribution is analyzed by solving the rate equation and by numerical simulation for four example fitness distributions: power law, uniform, exponential and Rayleigh. Finally, the robustness of the CCFN under random and intentional attack is studied by numerical simulation with the four fitness distributions, analyzing the effects on the degree distribution, average path length and average degree. The results offer insights for building CCFN systems and allocating communication resources.
The Use of Modeling Approach for Teaching Exponential Functions
NASA Astrophysics Data System (ADS)
Nunes, L. F.; Prates, D. B.; da Silva, J. M.
2017-12-01
This work presents a discussion of the teaching and learning of mathematical content related to exponential functions in a group of freshman students enrolled in the first semester of the Science and Technology Bachelor's program (STB) of the Federal University of Jequitinhonha and Mucuri Valleys (UFVJM). As a contextualization tool strongly advocated in the literature, the modelling approach was used as an educational tool to contextualize the teaching-learning process of exponential functions for these students. To this end, simple models built with the GeoGebra software were used, and Didactic Engineering was adopted as the research methodology to provide a qualitative evaluation of the investigation and its results. As a consequence of this detailed research, some interesting aspects of the teaching and learning process were observed, discussed and described.
SU-E-T-259: Particle Swarm Optimization in Radial Dose Function Fitting for a Novel Iodine-125 Seed
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, X; Duan, J; Popple, R
2014-06-01
Purpose: To determine the coefficients of bi- and tri-exponential functions for the best fit of the radial dose functions of the new iodine brachytherapy source: the Iodine-125 seed AgX-100. Methods: The particle swarm optimization (PSO) method was used to search for the coefficients of the bi- and tri-exponential functions that yield the best fit to data published for a few selected radial distances from the source. The coefficients were encoded into particles, and these particles move through the search space by following their local and global best-known positions. In each generation, particles were evaluated through their fitness function and their positions were changed through their velocities. This procedure was repeated until the convergence criterion was met or the maximum generation was reached. All best particles were found in fewer than 1,500 generations. Results: For the I-125 seed AgX-100 considered as a point source, the maximum deviation from the published data is less than 2.9% for the bi-exponential fitting function and 0.2% for the tri-exponential fitting function. For its line source, the maximum deviation is less than 1.1% for the bi-exponential fitting function and 0.08% for the tri-exponential fitting function. Conclusion: PSO is a powerful method for searching coefficients of bi- and tri-exponential fitting functions. The bi- and tri-exponential models of the Iodine-125 seed AgX-100 point and line sources obtained with PSO optimization provide accurate analytical forms of the radial dose function. The tri-exponential fitting function is more accurate than the bi-exponential function.
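A minimal global-best PSO of the kind described can be written in a few lines. The radial dose data below are synthetic stand-ins, not the published AgX-100 values, and the inertia/acceleration constants are common defaults rather than the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(5)

def bi_exp(r, c1, k1, c2, k2):
    """Bi-exponential radial dose function: g(r) = c1*exp(-k1*r) + c2*exp(-k2*r)."""
    return c1 * np.exp(-k1 * r) + c2 * np.exp(-k2 * r)

# Illustrative radial dose function data (not the published values)
r = np.array([0.5, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10.])
g = bi_exp(r, 1.3, 0.09, -0.3, 0.8)

def pso(cost, bounds, n_particles=40, n_iter=1500, w=0.7, c_p=1.5, c_g=1.5):
    """Minimal global-best PSO: particles are pulled towards their personal
    best positions and the swarm's global best position."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    p_best, p_cost = x.copy(), np.array([cost(p) for p in x])
    g_best = p_best[p_cost.argmin()]
    for _ in range(n_iter):
        r1, r2 = rng.uniform(size=(2, n_particles, len(lo)))
        v = w * v + c_p * r1 * (p_best - x) + c_g * r2 * (g_best - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        improved = c < p_cost
        p_best[improved], p_cost[improved] = x[improved], c[improved]
        g_best = p_best[p_cost.argmin()]
    return g_best, p_cost.min()

cost = lambda p: np.max(np.abs(bi_exp(r, *p) - g) / np.abs(g))  # max rel. deviation
best, dev = pso(cost, (np.array([-2, 0, -2, 0.]), np.array([2, 2, 2, 2.])))
print("coefficients:", np.round(best, 4), "max deviation: %.2e" % dev)
```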
Event-driven simulations of nonlinear integrate-and-fire neurons.
Tonnelier, Arnaud; Belmabrouk, Hana; Martinez, Dominique
2007-12-01
Event-driven strategies have been used to simulate spiking neural networks exactly. Previous work is limited to linear integrate-and-fire neurons. In this note, we extend event-driven schemes to a class of nonlinear integrate-and-fire models. Results are presented for the quadratic integrate-and-fire model with instantaneous or exponential synaptic currents. Extensions to conductance-based currents and exponential integrate-and-fire neurons are discussed.
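For orientation, the quadratic integrate-and-fire model with exponential synaptic currents mentioned here looks as follows. This sketch uses naive forward-Euler time stepping purely for illustration, whereas the note's point is precisely that such models can be simulated exactly with event-driven methods; parameter values are illustrative.

```python
import numpy as np

def qif_trace(t_max=200.0, dt=0.01, a=1.0, i_ext=0.2, tau_s=5.0,
              v_reset=-1.0, v_peak=10.0, spikes_in=(20.0, 60.0, 61.0)):
    """Quadratic integrate-and-fire neuron with an exponentially decaying
    synaptic current:
        dv/dt = a*v^2 + I_ext + s(t),   ds/dt = -s/tau_s (+1 at input spikes)
    Integrated with forward Euler; spikes are detected at v >= v_peak."""
    n = int(t_max / dt)
    v, s = v_reset, 0.0
    spikes_in = sorted(spikes_in)
    out_spikes, k = [], 0
    for i in range(n):
        t = i * dt
        if k < len(spikes_in) and t >= spikes_in[k]:
            s += 1.0; k += 1                 # instantaneous synaptic kick
        v += dt * (a * v * v + i_ext + s)
        s -= dt * s / tau_s                  # exponential synaptic decay
        if v >= v_peak:                      # spike and reset
            out_spikes.append(t)
            v = v_reset
    return out_spikes

print("output spike times:", np.round(qif_trace(), 2))
```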
A non-Gaussian option pricing model based on Kaniadakis exponential deformation
NASA Astrophysics Data System (ADS)
Moretto, Enrico; Pasquali, Sara; Trivellato, Barbara
2017-09-01
A way to make financial models effective is to let them represent the so-called "fat tails", i.e., extreme changes in stock prices that are regarded as almost impossible under the standard Gaussian distribution. In this article, the Kaniadakis deformation of the usual exponential function is used to define a random noise source in the dynamics of price processes that is capable of capturing such real market phenomena.
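The Kaniadakis deformation in question has the closed form exp_κ(x) = (sqrt(1 + κ²x²) + κx)^(1/κ), which tends to the ordinary exponential as κ → 0 and has power-law tails. A small numerical comparison follows; the κ value is illustrative.

```python
import numpy as np

def exp_kappa(x, kappa):
    """Kaniadakis kappa-exponential: (sqrt(1 + k^2 x^2) + k*x)**(1/k).
    Reduces to exp(x) as kappa -> 0; its power-law tails are what let the
    model capture fat-tailed price changes."""
    x = np.asarray(x, dtype=float)
    if kappa == 0.0:
        return np.exp(x)
    return (np.sqrt(1 + kappa**2 * x**2) + kappa * x) ** (1.0 / kappa)

x = np.linspace(-4, 4, 9)
print(np.round(np.exp(x), 4))          # ordinary exponential
print(np.round(exp_kappa(x, 0.3), 4))  # heavier tails for large |x|
```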
NASA Astrophysics Data System (ADS)
Fox, J. B.; Thayer, D. W.; Phillips, J. G.
The effect of low dose γ-irradiation on the thiamin content of ground pork was studied in the range of 0-14 kGy at 2°C and at radiation doses from 0.5 to 7 kGy at temperatures of -20, -10, 0, 10 and 20°C. The detailed study at 2°C showed that the loss of thiamin was exponential down to 0 kGy. An exponential expression was derived for the effect of radiation dose and temperature of irradiation on thiamin loss, and compared with a previously derived general linear expression. Both models were accurate depictions of the data, but the exponential expression showed a significant decrease in the rate of loss between 0 and -10°C. This is the range over which water in meat freezes, the decrease being due to the immobilization of reactive radiolytic products of water in ice crystals.
Estimation of renal allograft half-life: fact or fiction?
Azancot, M Antonieta; Cantarell, Carme; Perelló, Manel; Torres, Irina B; Serón, Daniel; Moreso, Francesc; Arias, Manuel; Campistol, Josep M; Curto, Jordi; Hernandez, Domingo; Morales, José M; Sanchez-Fructuoso, Ana; Abraira, Victor
2011-09-01
Renal allograft half-life time (t½) is the most straightforward representation of long-term graft survival. Since some statistical models overestimate this parameter, we compare different approaches to evaluate t½. Patients with a 1-year functioning graft transplanted in Spain during 1990, 1994, 1998 and 2002 were included. Exponential, Weibull, gamma, lognormal and log-logistic models censoring the last year of follow-up were evaluated. The goodness of fit of these models was evaluated according to the Cox-Snell residuals and the Akaike's information criterion (AIC) was employed to compare these models. We included 4842 patients. Real t½ in 1990 was 14.2 years. Median t½ (95% confidence interval) in 1990 and 2002 was 15.8 (14.2-17.5) versus 52.6 (35.6-69.5) according to the exponential model (P < 0.001). No differences between 1990 and 2002 were observed when t½ was estimated with the other models. In 1990 and 2002, t½ was 14.0 (13.1-15.0) versus 18.0 (13.7-22.4) according to Weibull, 15.5 (13.9-17.1) versus 19.1 (15.6-22.6) according to gamma, 14.4 (13.3-15.6) versus 18.3 (14.2-22.3) according to the log-logistic and 15.2 (13.8-16.6) versus 18.8 (15.3-22.3) according to the lognormal models. The AIC confirmed that the exponential model had the lowest goodness of fit, while the other models yielded a similar result. The exponential model overestimates t½, especially in cohorts of patients with a short follow-up, while any of the other studied models allow a better estimation even in cohorts with short follow-up.
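A sketch of the model comparison on synthetic, heavily censored graft survival data using the lifelines library (the fitter classes and the AIC_ / median_survival_time_ attributes are as documented in recent lifelines versions; the data are illustrative):

```python
import numpy as np
from lifelines import (ExponentialFitter, WeibullFitter,
                       LogNormalFitter, LogLogisticFitter)

rng = np.random.default_rng(6)

# Illustrative graft survival times (years); most observations are censored,
# mimicking a cohort with short follow-up relative to the true half-life
true_t = rng.weibull(1.3, 1000) * 18.0
follow_up = 10.0
observed = true_t <= follow_up
durations = np.minimum(true_t, follow_up)

for fitter in (ExponentialFitter(), WeibullFitter(),
               LogNormalFitter(), LogLogisticFitter()):
    fitter.fit(durations, event_observed=observed)
    print("%-18s AIC=%8.1f  median=%6.1f y"
          % (type(fitter).__name__, fitter.AIC_, fitter.median_survival_time_))
```

Under heavy censoring, the constant-hazard exponential fit tends to project an optimistic median, which mirrors the overestimation of t½ reported above.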
Punzo, Antonio; Ingrassia, Salvatore; Maruotti, Antonello
2018-04-22
A time-varying latent variable model is proposed to jointly analyze multivariate mixed-support longitudinal data. The proposal can be viewed as an extension of hidden Markov regression models with fixed covariates (HMRMFCs), which is the state of the art for modelling longitudinal data, with a special focus on the underlying clustering structure. HMRMFCs are inadequate for applications in which a clustering structure can be identified in the distribution of the covariates, as the clustering is independent of the covariates distribution. Here, hidden Markov regression models with random covariates are introduced by explicitly specifying state-specific distributions for the covariates, with the aim of improving the recovery of the clusters in the data with respect to the fixed-covariates paradigm. The class of hidden Markov regression models with random covariates is defined with a focus on the exponential family, in a generalized linear model framework. Model identifiability conditions are sketched, an expectation-maximization algorithm is outlined for parameter estimation, and various implementation and operational issues are discussed. Properties of the estimators of the regression coefficients, as well as of the hidden path parameters, are evaluated through simulation experiments and compared with those of HMRMFCs. The method is applied to physical activity data. Copyright © 2018 John Wiley & Sons, Ltd.
Modeling the Role of Dislocation Substructure During Class M and Exponential Creep. Revised
NASA Technical Reports Server (NTRS)
Raj, S. V.; Iskovitz, Ilana Seiden; Freed, A. D.
1995-01-01
The different substructures that form in the power-law and exponential creep regimes for single-phase crystalline materials under various conditions of stress, temperature and strain are reviewed. The microstructure is correlated both qualitatively and quantitatively with power-law and exponential creep, as well as with steady state and non-steady state deformation behavior. These observations suggest that creep is influenced by a complex interaction between several elements of the microstructure, such as dislocations, cells and subgrains. The stability of the creep substructure is examined in both of these creep regimes during stress and temperature change experiments. These observations are rationalized on the basis of a phenomenological model, where normal primary creep is interpreted as a series of constant-structure exponential creep rate-stress relationships. The implications of this viewpoint for the magnitude of the stress exponent and steady state behavior are discussed. A theory is developed to predict the macroscopic creep behavior of a single-phase material using quantitative microstructural data. In this technique the thermally activated deformation mechanisms proposed by dislocation physics are interlinked with a previously developed multiphase, three-dimensional dislocation substructure creep model. This procedure leads to several coupled differential equations interrelating macroscopic creep plasticity with microstructural evolution.
Kartalis, Nikolaos; Manikis, Georgios C; Loizou, Louiza; Albiin, Nils; Zöllner, Frank G; Del Chiaro, Marco; Marias, Kostas; Papanikolaou, Nikolaos
2016-01-01
To compare two Gaussian diffusion-weighted MRI (DWI) models, mono-exponential and bi-exponential, with the non-Gaussian kurtosis model in patients with pancreatic ductal adenocarcinoma. After written informed consent, 15 consecutive patients with pancreatic ductal adenocarcinoma underwent free-breathing DWI (1.5T, b-values: 0, 50, 150, 200, 300, 600 and 1000 s/mm^2). Mean values of the DWI-derived metrics ADC, D, D*, f, K and D_K were calculated from multiple regions of interest in all tumours and in non-tumorous parenchyma and compared. The area under the curve was determined for all metrics. Mean ADC and D_K showed significant differences between tumours and non-tumorous parenchyma (both P < 0.001). The area under the curve for ADC, D, D*, f, K, and D_K was 0.77, 0.52, 0.53, 0.62, 0.42, and 0.84, respectively. ADC and D_K could differentiate tumours from non-tumorous parenchyma, with the latter showing a higher diagnostic accuracy. Correction for kurtosis effects has the potential to increase the diagnostic accuracy of DWI in patients with pancreatic ductal adenocarcinoma.
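The kurtosis model referred to is usually written S(b) = S0 exp(-b·D_K + b²·D_K²·K/6), with K = 0 recovering the mono-exponential model. A fitting sketch on synthetic data at the study's b-values follows; the tissue parameters are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def kurtosis_model(b, s0, dk, k):
    """Non-Gaussian kurtosis DWI signal:
    S(b) = S0 * exp(-b*Dk + (1/6) * b**2 * Dk**2 * K)."""
    return s0 * np.exp(-b * dk + (b ** 2) * (dk ** 2) * k / 6.0)

b = np.array([0, 50, 150, 200, 300, 600, 1000.])       # s/mm^2, as in the study
s = kurtosis_model(b, 1.0, 1.4e-3, 0.9)                # synthetic tumour-like decay
s += np.random.default_rng(7).normal(0, 0.004, b.size)

popt, _ = curve_fit(kurtosis_model, b, s, p0=[1.0, 1e-3, 0.5],
                    bounds=([0, 1e-5, 0], [2, 5e-3, 3]))
print("S0=%.3f  Dk=%.2e mm^2/s  K=%.2f" % tuple(popt))
```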
NASA Astrophysics Data System (ADS)
Cao, Jinde; Wang, Yanyan
2010-05-01
In this paper, the bi-periodicity issue is discussed for Cohen-Grossberg-type (CG-type) bidirectional associative memory (BAM) neural networks (NNs) with time-varying delays and standard activation functions. It is shown that the model considered in this paper has two periodic orbits located in saturation regions and they are locally exponentially stable. Meanwhile, some conditions are derived to ensure that, in any designated region, the model has a locally exponentially stable or globally exponentially attractive periodic orbit located in it. As a special case of bi-periodicity, some results are also presented for the system with constant external inputs. Finally, four examples are given to illustrate the effectiveness of the obtained results.
NASA Astrophysics Data System (ADS)
Song, Qiankun; Cao, Jinde
2007-05-01
A bidirectional associative memory neural network model with distributed delays is considered. By constructing a new Lyapunov functional, employing homeomorphism theory, M-matrix theory and an elementary inequality (a ≥ 0, b_k ≥ 0, q_k > 0 with …, and r > 1), a sufficient condition is obtained to ensure the existence, uniqueness and global exponential stability of the equilibrium point for the model. Moreover, the exponential converging velocity index is estimated, which depends on the delay kernel functions and the system parameters. The results generalize and improve earlier publications, and remove the usual assumption that the activation functions are bounded. Two numerical examples are given to show the effectiveness of the obtained results.
Yu, Yi-Lin; Yang, Yun-Ju; Lin, Chin; Hsieh, Chih-Chuan; Li, Chiao-Zhu; Feng, Shao-Wei; Tang, Chi-Tun; Chung, Tzu-Tsao; Ma, Hsin-I; Chen, Yuan-Hao; Ju, Da-Tong; Hueng, Dueng-Yuan
2017-01-01
Tumor control rates of pituitary adenomas (PAs) receiving adjuvant CyberKnife stereotactic radiosurgery (CK SRS) are high. However, there is currently no uniform way to estimate the time course of the disease. The aim of this study was to analyze the volumetric responses of PAs after CK SRS and investigate the application of an exponential decay model in calculating an accurate time course and estimating the eventual outcome. A retrospective review of 34 patients with PAs who received adjuvant CK SRS between 2006 and 2013 was performed. Tumor volume was calculated using the planimetric method. The percent change in tumor volume and the tumor volume rate of change were compared at median 4-, 10-, 20-, and 36-month intervals. Tumor responses were classified as: progression for a >15% volume increase, regression for a >15% decrease, and stabilization for a change within ±15% of the baseline volume at the time of last follow-up. For each patient, the volumetric change versus time was fitted with an exponential model. The overall tumor control rate was 94.1% over the 36-month (range 18-87 months) follow-up period (mean volume change of -43.3%). Volume regression (mean decrease of -50.5%) was demonstrated in 27 (79%) patients, tumor stabilization (mean change of -3.7%) in 5 (15%) patients, and tumor progression (mean increase of 28.1%) in 2 (6%) patients (P = 0.001). Tumors that eventually regressed or stabilized had a temporary volume increase of 1.07% and 41.5%, respectively, at 4 months after CK SRS (P = 0.017). The tumor volume estimated using the exponential fitting equation demonstrated a high positive correlation with the actual volume calculated from magnetic resonance imaging (MRI), as tested by the Pearson correlation coefficient (0.9). Transient progression of PAs post-CK SRS was seen in 62.5% of the patients, and it was not predictive of eventual volume regression or progression. A three-point exponential model is of potential predictive value. An exponential decay model can be used to calculate the time course of tumors that are ultimately controlled.
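A sketch of such an exponential decay fit, V(t) = V_inf + (V0 - V_inf)·exp(-kt), on illustrative follow-up volumes (not patient data); the fitted plateau V_inf provides the projected eventual outcome.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, v_inf, v0, k):
    """Exponential decay towards a plateau: V(t) = V_inf + (V0 - V_inf)*exp(-k*t)."""
    return v_inf + (v0 - v_inf) * np.exp(-k * t)

# Illustrative follow-up times (months) and tumor volumes (cm^3)
t = np.array([0, 4, 10, 20, 36.])
v = np.array([2.10, 2.12, 1.65, 1.35, 1.22])

popt, _ = curve_fit(decay, t, v, p0=[1.0, 2.0, 0.1])
v_inf, v0, k = popt
print("projected final volume: %.2f cm^3 (%.0f%% regression)"
      % (v_inf, 100 * (1 - v_inf / v0)))
print("predicted volume at 24 months: %.2f cm^3" % decay(24.0, *popt))
```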
Fourier Transforms of Pulses Containing Exponential Leading and Trailing Profiles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warshaw, S I
2001-07-15
In this monograph we discuss a class of pulse shapes that have exponential rise and fall profiles, and evaluate their Fourier transforms. Such pulses can be used as models for time-varying processes that produce an initial exponential rise and end with the exponential decay of a specified physical quantity. Unipolar examples of such processes include the voltage record of an increasingly rapid charge followed by a damped discharge of a capacitor bank, and the amplitude of an electromagnetic pulse produced by a nuclear explosion. Bipolar examples include acoustic N waves that have resulted from explosions in the air and propagate for long distances in the atmosphere, and sonic booms generated by supersonic aircraft. These bipolar pulses have leading and trailing edges that appear to be exponential in character. To the author's knowledge the Fourier transforms of such pulses are not generally well known or tabulated in Fourier transform compendia, and it is the purpose of this monograph to derive and present these transforms. These Fourier transforms are related to a definite integral of a ratio of exponential functions, whose evaluation we carry out in considerable detail. From this result we derive the Fourier transforms of other related functions. In all figures showing plots of calculated curves, the actual numbers used for the function parameter values and dependent variables are arbitrary and non-dimensional, and are not identified with any particular physical phenomenon or model.
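For the simplest unipolar case, f(t) = e^(at) for t < 0 and e^(-bt) for t ≥ 0, the transform is F(ω) = 1/(a - iω) + 1/(b + iω) under the convention F(ω) = ∫ f(t)e^(-iωt) dt. The snippet below checks this closed form numerically; the rise/decay rates are illustrative.

```python
import numpy as np

a, b = 2.0, 0.5           # rise and decay rates (illustrative)

def pulse(t):
    """Unipolar pulse: exponential rise e^{a t} for t<0, decay e^{-b t} for t>=0."""
    return np.where(t < 0, np.exp(a * t), np.exp(-b * t))

def ft_analytic(w):
    """Closed-form transform F(w) = 1/(a - i*w) + 1/(b + i*w)."""
    return 1.0 / (a - 1j * w) + 1.0 / (b + 1j * w)

t = np.linspace(-40, 40, 200001)
dt = t[1] - t[0]
for w in (0.0, 0.5, 2.0):
    numeric = np.sum(pulse(t) * np.exp(-1j * w * t)) * dt  # Riemann sum
    print("w=%.1f  numeric=%s  analytic=%s"
          % (w, np.round(numeric, 6), np.round(ft_analytic(w), 6)))
```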
Division of Labor, Bet Hedging, and the Evolution of Mixed Biofilm Investment Strategies.
Lowery, Nick Vallespir; McNally, Luke; Ratcliff, William C; Brown, Sam P
2017-08-08
Bacterial cells, like many other organisms, face a tradeoff between longevity and fecundity. Planktonic cells are fast growing and fragile, while biofilm cells are often slower growing but stress resistant. Here we ask why bacterial lineages invest simultaneously in both fast- and slow-growing types. We develop a population dynamic model of lineage expansion across a patchy environment and find that mixed investment is favored across a broad range of environmental conditions, even when transmission is entirely via biofilm cells. This mixed strategy is favored because of a division of labor where exponentially dividing planktonic cells can act as an engine for the production of future biofilm cells, which grow more slowly. We use experimental evolution to test our predictions and show that phenotypic heterogeneity is persistent even under selection for purely planktonic or purely biofilm transmission. Furthermore, simulations suggest that maintenance of a biofilm subpopulation serves as a cost-effective hedge against environmental uncertainty, which is also consistent with our experimental findings. IMPORTANCE Cell types specialized for survival have been observed and described within clonal bacterial populations for decades, but why are these specialists continually produced under benign conditions when such investment comes at a high reproductive cost? Conversely, when survival becomes an imperative, does it ever benefit the population to maintain a pool of rapidly growing but vulnerable planktonic cells? Using a combination of mathematical modeling, simulations, and experiments, we find that mixed investment strategies are favored over a broad range of environmental conditions and rely on a division of labor between cell types, where reproductive specialists amplify survival specialists, which can be transmitted through the environment with a limited mortality rate. We also show that survival specialists benefit rapidly growing populations by serving as a hedge against unpredictable changes in the environment. These results help to clarify the general evolutionary and ecological forces that can generate and maintain diverse subtypes within clonal bacterial populations. Copyright © 2017 Lowery et al.
Bell, C; Paterson, D H; Kowalchuk, J M; Padilla, J; Cunningham, D A
2001-09-01
We compared estimates of the phase 2 time constant (tau) of oxygen uptake (VO2) during moderate- and heavy-intensity exercise, and of the slow component of VO2 during heavy-intensity exercise, using previously published exponential models. Estimates for tau and the slow component were different (P < 0.05) among models. For moderate-intensity exercise, a two-component exponential model, or a mono-exponential model fitted from 20 s to 3 min, were best. For heavy-intensity exercise, a three-component model fitted throughout the entire 6 min bout of exercise, or a two-component model fitted from 20 s, were best. When the time delays for the two- and three-component models were equal, the best statistical fit was obtained; however, this model produced an inappropriately low DeltaVO2/DeltaWR (WR, work rate) for the projected phase 2 steady state, and the estimate of the phase 2 tau was shortened compared with other models. The slow component was quantified as the difference between VO2 at end-exercise (6 min) and at 3 min (DeltaVO2(6-3 min); 259 ml min^-1), and also using the phase 3 amplitude terms (truncated to end-exercise) from exponential fits (409-833 ml min^-1). The onset of the slow component was identified by the phase 3 time delay parameter as being delayed approximately 2 min (vs. the arbitrary 3 min). Using this delay, DeltaVO2(6-2 min) was approximately 400 ml min^-1. Valid and consistent methods to estimate tau and the slow component in exercise are needed to advance physiological understanding.
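A sketch of the mono-exponential phase 2 fit with a time delay, VO2(t) = baseline + A·(1 - e^(-(t-TD)/tau)), applied to synthetic breath-by-breath-style data over the 20 s to 3 min window recommended above; all values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def phase2(t, baseline, amp, td, tau):
    """Mono-exponential phase 2 VO2 kinetics with a time delay:
    VO2(t) = baseline + amp*(1 - exp(-(t - td)/tau)) for t >= td."""
    rise = 1.0 - np.exp(-np.maximum(t - td, 0.0) / tau)
    return baseline + amp * rise

t = np.arange(20, 181, 5.0)                  # seconds, 20 s to 3 min window
rng = np.random.default_rng(8)
vo2 = phase2(t, 900.0, 1100.0, 15.0, 30.0) + rng.normal(0, 40, t.size)  # ml/min

popt, _ = curve_fit(phase2, t, vo2, p0=[900, 1000, 10, 25])
print("baseline=%.0f  amplitude=%.0f  TD=%.1f s  tau=%.1f s" % tuple(popt))
```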
Concentration variance decay during magma mixing: a volcanic chronometer
Perugini, Diego; De Campos, Cristina P.; Petrelli, Maurizio; Dingwell, Donald B.
2015-01-01
The mixing of magmas is a common phenomenon in explosive eruptions. Concentration variance is a useful metric of this process, and its decay (CVD) with time is an inevitable consequence of the progress of magma mixing. In order to calibrate this petrological/volcanological clock we have performed a time series of high-temperature magma mixing experiments. The results of these experiments demonstrate that compositional variance decays exponentially with time. With this calibration, the CVD rate (CVD-R) becomes a new geochronometer for the time lapse from the initiation of mixing to eruption. The resultant novel technique is fully independent of the typically unknown advective history of mixing – a notorious uncertainty which plagues the application of many diffusional analyses of magmatic history. Using the calibrated CVD-R technique we have obtained mingling-to-eruption times for three explosive volcanic eruptions from Campi Flegrei (Italy) in the range of tens of minutes. These in turn imply ascent velocities of 5-8 meters per second. We anticipate the routine application of the CVD-R geochronometer to the eruptive products of active volcanoes in order to constrain typical “mixing to eruption” time lapses, such that monitoring activities can be targeted at relevant timescales and signals during volcanic unrest. PMID:26387555
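Schematically, the CVD clock amounts to fitting an exponential variance decay, sigma2(t) = sigma2(0)·exp(-R·t), to the experimental time series and then inverting it for a natural sample; the numbers below are illustrative, not the calibrated Campi Flegrei values.

```python
import numpy as np

# Illustrative concentration-variance decay from a mixing time series
t_exp = np.array([0, 10, 20, 40, 60.])          # experiment time (min)
var = np.array([1.0, 0.55, 0.31, 0.095, 0.029]) # normalised variance

R = -np.polyfit(t_exp, np.log(var), 1)[0]       # CVD rate from log-linear fit
print("CVD-R = %.3f per min" % R)

# Invert the clock for a natural sample with measured residual variance
var_obs = 0.20
print("mixing-to-eruption time ~ %.0f min" % (-np.log(var_obs) / R))
```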
Kumar, M Praveen; Patil, Suneel G; Dheeraj, Bhandari; Reddy, Keshav; Goel, Dinker; Krishna, Gopi
2015-01-01
Background: The difficulty in obtaining an acceptable impression increases exponentially as the number of abutments increases. Accuracy of the impression material and the use of a suitable impression technique are of utmost importance in the fabrication of a fixed partial denture. This study compared the accuracy of the matrix impression system with the conventional putty reline and multiple mix techniques for individual dies by comparing the inter-abutment distance in the casts obtained from the impressions. Materials and Methods: Three groups of 10 impressions each, using three impression techniques (matrix impression system, putty reline technique and multiple mix technique), were made of a master die. Typodont teeth were embedded in a maxillary Frasaco model base. The left first premolar was removed to create a three-unit fixed partial denture situation, the left canine and second premolar were prepared conservatively, and hatch marks were made on the abutment teeth. The final casts obtained from the impressions were examined under a profile projector, and the inter-abutment distance was calculated for all the casts and compared. Results: The results from this study showed that in the mesiodistal dimensions the percentage deviation from the master model in Group I was 0.1 and 0.2, in Group II was 0.9 and 0.3, and in Group III was 1.6 and 1.5, respectively. In the labio-palatal dimensions the percentage deviation from the master model in Group I was 0.01 and 0.4, in Group II was 1.9 and 1.3, and in Group III was 2.2 and 2.0, respectively. In the cervico-incisal dimensions the percentage deviation from the master model in Group I was 1.1 and 0.2, in Group II was 3.9 and 1.7, and in Group III was 1.9 and 3.0, respectively. In the inter-abutment dimension of dies, the percentage deviation from the master model in Group I was 0.1, in Group II was 0.6, and in Group III was 1.0. Conclusion: The matrix impression system showed more accurate reproduction of individual dies when compared with the putty reline technique and the multiple mix technique in all three directions, as well as in the inter-abutment distance. PMID:26124599
Johnson, Ian R.; Thornley, John H. M.; Frantz, Jonathan M.; Bugbee, Bruce
2010-01-01
Background and Aims The distribution of photosynthetic enzymes, or nitrogen, through the canopy affects canopy photosynthesis, as well as plant quality and nitrogen demand. Most canopy photosynthesis models assume an exponential distribution of nitrogen, or protein, through the canopy, although this is rarely consistent with experimental observation. Previous optimization schemes to derive the nitrogen distribution through the canopy generally focus on the distribution of a fixed amount of total nitrogen, which fails to account for the variation in both the actual quantity of nitrogen in response to environmental conditions and the interaction of photosynthesis and respiration at similar levels of complexity. Model A model of canopy photosynthesis is presented for C3 and C4 canopies that considers a balanced approach between photosynthesis and respiration as well as plant carbon partitioning. Protein distribution is related to irradiance in the canopy by a flexible equation for which the exponential distribution is a special case. The model is designed to be simple to parameterize for crop, pasture and ecosystem studies. The amount and distribution of protein that maximizes canopy net photosynthesis is calculated. Key Results The optimum protein distribution is not exponential, but is quite linear near the top of the canopy, which is consistent with experimental observations. The overall concentration within the canopy is dependent on environmental conditions, including the distribution of direct and diffuse components of irradiance. Conclusions The widely used exponential distribution of nitrogen or protein through the canopy is generally inappropriate. The model derives the optimum distribution with characteristics that are consistent with observation, so overcoming limitations of using the exponential distribution. Although canopies may not always operate at an optimum, optimization analysis provides valuable insight into plant acclimation to environmental conditions. Protein distribution has implications for the prediction of carbon assimilation, plant quality and nitrogen demand. PMID:20861273
Rowley, Mark I.; Coolen, Anthonius C. C.; Vojnovic, Borivoj; Barber, Paul R.
2016-01-01
We present novel Bayesian methods for the analysis of exponential decay data that exploit the evidence carried by every detected decay event and enables robust extension to advanced processing. Our algorithms are presented in the context of fluorescence lifetime imaging microscopy (FLIM) and particular attention has been paid to model the time-domain system (based on time-correlated single photon counting) with unprecedented accuracy. We present estimates of decay parameters for mono- and bi-exponential systems, offering up to a factor of two improvement in accuracy compared to previous popular techniques. Results of the analysis of synthetic and experimental data are presented, and areas where the superior precision of our techniques can be exploited in Förster Resonance Energy Transfer (FRET) experiments are described. Furthermore, we demonstrate two advanced processing methods: decay model selection to choose between differing models such as mono- and bi-exponential, and the simultaneous estimation of instrument and decay parameters. PMID:27355322
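As a much-simplified illustration of per-event inference (ignoring the instrument response, background, and pile-up that the paper models carefully), a grid posterior for a mono-exponential lifetime from photon arrival times truncated to the measurement window can be computed as follows:

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic TCSPC-style data: arrival times from a mono-exponential decay
# (true lifetime 2.5 ns), truncated to a 12.5 ns measurement window
tau_true, window = 2.5, 12.5
t = rng.exponential(tau_true, 20000)
t = t[t < window][:5000]

def log_posterior(tau):
    """Log-posterior (flat prior) under a truncated exponential likelihood:
    p(t|tau) = exp(-t/tau) / (tau * (1 - exp(-window/tau)))."""
    n = t.size
    return (-t.sum() / tau - n * np.log(tau)
            - n * np.log1p(-np.exp(-window / tau)))

taus = np.linspace(1.5, 3.5, 2001)
dtau = taus[1] - taus[0]
logp = np.array([log_posterior(x) for x in taus])
post = np.exp(logp - logp.max())
post /= post.sum() * dtau                      # normalise on the grid
mean = (taus * post).sum() * dtau
sd = np.sqrt(((taus - mean) ** 2 * post).sum() * dtau)
print("posterior lifetime: %.3f +/- %.3f ns (true %.1f)" % (mean, sd, tau_true))
```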
Exponential inflation with F (R ) gravity
NASA Astrophysics Data System (ADS)
Oikonomou, V. K.
2018-03-01
In this paper, we shall consider an exponential inflationary model in the context of vacuum F(R) gravity. By using well-known reconstruction techniques, we investigate which F(R) gravity can realize the exponential inflation scenario at leading order in terms of the scalar curvature, and we calculate the slow-roll indices and the corresponding observational indices in the context of slow-roll inflation. We also provide some general formulas for the slow-roll and the corresponding observational indices in terms of the e-foldings number. In addition, for the calculation of the slow-roll and observational indices, we consider quite general formulas that do not require the assumption that all the slow-roll indices are much smaller than unity. Finally, we investigate the phenomenological viability of the model by comparing it with the latest Planck and BICEP2/Keck-Array observational data. As we demonstrate, the model is compatible with the current observational data for a wide range of the free parameters of the model.
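For reference, the slow-roll indices and leading-order observational indices mentioned here are conventionally defined as follows (these are the standard general definitions, not the paper's F(R)-specific expressions):

```latex
\epsilon_1 \equiv -\frac{\dot H}{H^2}, \qquad
\epsilon_{i+1} \equiv \frac{\dot{\epsilon}_i}{H\,\epsilon_i} \quad (i \ge 1),
\qquad
n_s \simeq 1 - 2\epsilon_1 - \epsilon_2, \qquad
r \simeq 16\,\epsilon_1 .
```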
Numerical Simulations of Thermobaric Explosions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhl, A L; Bell, J B; Beckner, V E
2007-05-04
A Model of the energy evolution in thermobaric explosions is presented. It is based on the two-phase formulation: conservation laws for the gas and particle phases along with inter-phase interaction terms. It incorporates a Combustion Model based on the mass conservation laws for fuel, air and products; source/sink terms are treated in the fast-chemistry limit appropriate for such gas dynamic fields. The Model takes into account both the afterburning of the detonation products of the booster with air, and the combustion of the fuel (Al or TNT detonation products) with air. Numerical simulations were performed for 1.5-g thermobaric explosions in five different chambers (volumes ranging from 6.6 to 40 liters and length-to-diameter ratios from 1 to 12.5). Computed pressure waveforms were very similar to measured waveforms in all cases - thereby proving that the Model correctly predicts the energy evolution in such explosions. The computed global fuel consumption μ(t) behaved as an exponential life function. Its derivative dμ/dt represents the global rate of fuel consumption. It depends on the rate of turbulent mixing, which controls the rate of energy release in thermobaric explosions.
NASA Astrophysics Data System (ADS)
Zhang, Fode; Shi, Yimin; Wang, Ruibing
2017-02-01
In the information geometry suggested by Amari (1985) and Amari et al. (1987), a parametric statistical model can be regarded as a differentiable manifold with the parameter space as a coordinate system. Noting that the q-exponential distribution plays an important role in Tsallis statistics (see Tsallis, 2009), this paper investigates the geometry of the q-exponential distribution with dependent competing risks and accelerated life testing (ALT). A copula function based on the q-exponential function, which can be considered a generalized Gumbel copula, is discussed to illustrate the structure of the dependent random variables. Employing two iterative algorithms, simulation results are given to compare the performance of the estimates and the levels of association under different hybrid progressive censoring schemes (HPCSs).
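The q-exponential underlying both the distribution and the copula is exp_q(x) = [1 + (1 - q)x]_+^(1/(1-q)), which recovers exp(x) as q → 1. A small numerical illustration follows; the λ and q values are illustrative.

```python
import numpy as np

def exp_q(x, q):
    """Tsallis q-exponential: [1 + (1-q)x]_+^{1/(1-q)}; q -> 1 gives exp(x)."""
    x = np.asarray(x, dtype=float)
    if q == 1.0:
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    out = np.zeros_like(base)
    pos = base > 0
    out[pos] = base[pos] ** (1.0 / (1.0 - q))
    return out

# q-exponential survival function S(t) = exp_q(-lam*t) vs the ordinary
# exponential: for q > 1 the tail decays as a power law
lam, q = 0.5, 1.3
t = np.linspace(0.0, 10.0, 6)
print(np.round(exp_q(-lam * t, q), 4))
print(np.round(np.exp(-lam * t), 4))
```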
Hypersurface Homogeneous Cosmological Model in Modified Theory of Gravitation
NASA Astrophysics Data System (ADS)
Katore, S. D.; Hatkar, S. P.; Baxi, R. J.
2016-12-01
We study a hypersurface homogeneous space-time in the framework of the f (R, T) theory of gravitation in the presence of a perfect fluid. Exact solutions of field equations are obtained for exponential and power law volumetric expansions. We also solve the field equations by assuming the proportionality relation between the shear scalar (σ ) and the expansion scalar (θ ). It is observed that in the exponential model, the universe approaches isotropy at large time (late universe). The investigated model is notably accelerating and expanding. The physical and geometrical properties of the investigated model are also discussed.
Performance and state-space analyses of systems using Petri nets
NASA Technical Reports Server (NTRS)
Watson, James Francis, III
1992-01-01
The goal of any modeling methodology is to develop a mathematical description of a system that is accurate in its representation and also permits analysis of structural and/or performance properties. Inherently, trade-offs exist between the level of detail in the model and the ease with which analysis can be performed. Petri nets (PNs), a highly graphical modeling methodology for Discrete Event Dynamic Systems, permit representation of shared resources, finite capacities, conflict, synchronization, concurrency, and timing between state changes. By restricting the state transition time delays to the family of exponential density functions, Markov chain analysis of performance problems is possible. One major drawback of PNs is the tendency for the state space to grow rapidly (with exponential complexity) compared to increases in the PN constructs. It is the state space, or the Markov chain obtained from it, that is needed in the solution of many problems. The theory of state-space size estimation for PNs is introduced. The problem of state-space size estimation is defined, its complexities are examined, and estimation algorithms are developed. Both top-down and bottom-up approaches are pursued, and the advantages and disadvantages of each are described. Additionally, the author's research in non-exponential transition modeling for PNs is discussed. An algorithm for approximating non-exponential transitions is developed. Since only basic PN constructs are used in the approximation, theory already developed for PNs remains applicable. Comparison with results from entropy theory shows that the transition performance is close to the theoretical optimum. Inclusion of non-exponential transition approximations improves performance results at the expense of increased state-space size. The state-space size estimation theory provides insight and algorithms for evaluating this trade-off.
Chen, Bo-Ching; Lai, Hung-Yu; Juang, Kai-Wei
2012-06-01
To better understand the ability of switchgrass (Panicum virgatum L.), a perennial grass often relegated to marginal agricultural areas with minimal inputs, to remove cadmium, chromium, and zinc by phytoextraction from contaminated sites, the relationship between plant metal content and biomass yield is expressed in different models to predict the amount of metals switchgrass can extract. These models are reliable in assessing the use of switchgrass for the phytoremediation of heavy-metal-contaminated sites. In the present study, the linear and exponential decay models are more suitable for representing the relationship between plant cadmium and dry weight. The maximum extractions of cadmium using switchgrass, as predicted by the linear and exponential decay models, approached 40 and 34 μg pot^-1, respectively. The log normal model was superior in predicting the relationship between plant chromium and dry weight. The predicted maximum extraction of chromium by switchgrass was about 56 μg pot^-1. In addition, the exponential decay and log normal models were better than the linear model in predicting the relationship between plant zinc and dry weight. The maximum extractions of zinc by switchgrass, as predicted by the exponential decay and log normal models, were about 358 and 254 μg pot^-1, respectively. To achieve the maximum removal of Cd, Cr, and Zn, one can adopt the optimal timing of harvest as plant Cd, Cr, and Zn approach 450 and 526 mg kg^-1, 266 mg kg^-1, and 3022 and 5000 mg kg^-1, respectively. Due to the well-known agronomic characteristics of cultivation and the high biomass production of switchgrass, it is practicable to use switchgrass for the phytoextraction of heavy metals in situ. Copyright © 2012 Elsevier Inc. All rights reserved.
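A sketch of how such model-based optimal harvest values arise: fit the candidate biomass models and maximize extraction = tissue concentration × biomass. The data, starting values, and resulting optima below are illustrative, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Candidate shoot-dry-weight vs tissue-metal models; total extraction
# (ug/pot) is tissue concentration (mg/kg) times dry weight (g/pot).
def linear(x, a, b):
    return a - b * x

def exp_decay(x, a, b):
    return a * np.exp(-b * x)

def log_normal(x, a, mu, s):
    return a * np.exp(-(np.log(x) - mu) ** 2 / (2 * s ** 2))

# Illustrative pot-experiment data
cd = np.array([50, 100, 200, 300, 450, 600, 800.])          # mg/kg tissue Cd
dw = np.array([0.55, 0.50, 0.42, 0.35, 0.27, 0.20, 0.13])   # g/pot dry weight

models = {"linear": (linear, [0.6, 5e-4]),
          "exp decay": (exp_decay, [0.6, 2e-3]),
          "log normal": (log_normal, [0.6, 3.0, 2.0])}
x = np.linspace(20, 1200, 600)
for name, (model, p0) in models.items():
    p, _ = curve_fit(model, cd, dw, p0=p0, maxfev=20000)
    removal = x * np.clip(model(x, *p), 0, None)   # mg/kg * g/pot = ug/pot
    i = removal.argmax()
    print("%-10s optimal tissue Cd ~%4.0f mg/kg -> max removal ~%3.0f ug/pot"
          % (name, x[i], removal[i]))
```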
Scale Dependence of Spatiotemporal Intermittence of Rain
NASA Technical Reports Server (NTRS)
Kundu, Prasun K.; Siddani, Ravi K.
2011-01-01
It is a common experience that rainfall is intermittent in space and time. This is reflected by the fact that the statistics of area- and/or time-averaged rain rate are described by a mixed distribution with a nonzero probability of the rain rate being exactly zero. In this paper we have explored the dependence of the probability of zero rain on the averaging space and time scales in large multiyear data sets based on radar and rain gauge observations. A stretched exponential formula fits the observed scale dependence of the zero-rain probability. The proposed formula makes it apparent that the space-time support of the rain field is not quite a set of measure zero, as is sometimes supposed. We also give an explanation of the observed behavior in terms of a simple probabilistic model based on the premise that the rainfall process has an intrinsic memory.
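A schematic stand-in for the fitted dependence (the paper's exact functional form and parameter values may differ): a stretched exponential in the averaging scale, fitted here to illustrative zero-rain probabilities.

```python
import numpy as np
from scipy.optimize import curve_fit

def p_zero(scale, s0, beta):
    """Stretched-exponential dependence of the zero-rain probability on the
    averaging scale: p0 = exp(-(scale/s0)**beta). Schematic form only; the
    paper's fitted formula may differ in detail."""
    return np.exp(-(scale / s0) ** beta)

# Illustrative probabilities of zero area-averaged rain vs box size (km)
scale = np.array([2, 4, 8, 16, 32, 64, 128.])
p0_obs = np.array([0.93, 0.88, 0.80, 0.68, 0.52, 0.33, 0.16])

popt, _ = curve_fit(p_zero, scale, p0_obs, p0=[100.0, 0.7])
print("scale parameter s0=%.1f km, stretching exponent beta=%.2f" % tuple(popt))
```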
Stochastic Model of Vesicular Sorting in Cellular Organelles
NASA Astrophysics Data System (ADS)
Vagne, Quentin; Sens, Pierre
2018-02-01
The proper sorting of membrane components by regulated exchange between cellular organelles is crucial to intracellular organization. This process relies on the budding and fusion of transport vesicles, and should be strongly influenced by stochastic fluctuations, considering the relatively small size of many organelles. We identify the perfect sorting of two membrane components initially mixed in a single compartment as a first passage process, and we show that the mean sorting time exhibits two distinct regimes as a function of the ratio of vesicle fusion to budding rates. Low ratio values lead to fast sorting but result in a broad size distribution of sorted compartments dominated by small entities. High ratio values result in two well-defined sorted compartments but sorting is exponentially slow. Our results suggest an optimal balance between vesicle budding and fusion for the rapid and efficient sorting of membrane components and highlight the importance of stochastic effects for the steady-state organization of intracellular compartments.
NASA Astrophysics Data System (ADS)
Tomlin, Ruben; Gomes, Susana; Pavliotis, Greg; Papageorgiou, Demetrios
2017-11-01
We consider a weakly nonlinear model for interfacial waves on three-dimensional thin films on inclined flat planes - the Kuramoto-Sivashinsky equation. The flow is driven by gravity, and the film is allowed to be overlying or hanging on the flat substrate. Blowing and suction controls are applied at the substrate surface. In this talk we explore the instability of the transverse modes for hanging arrangements, which grow exponentially without bound. The structure of the equations allows us to construct optimal transverse controls analytically to prevent this transverse growth. In this case, and in the case of an overlying film, we additionally study the influence of controlling towards non-trivial transverse states on the streamwise and mixed-mode dynamics. Finally, we solve the full optimal control problem by deriving the first-order necessary conditions for the existence of an optimal control, and solving these numerically using the forward-backward sweep method.
Single-arm phase II trial design under parametric cure models.
Wu, Jianrong
2015-01-01
The current practice of designing single-arm phase II survival trials is limited under the exponential model. Trial design under the exponential model may not be appropriate when a portion of patients are cured. There is no literature available for designing single-arm phase II trials under the parametric cure model. In this paper, a test statistic is proposed, and a sample size formula is derived for designing single-arm phase II trials under a class of parametric cure models. Extensive simulations showed that the proposed test and sample size formula perform very well under different scenarios. Copyright © 2015 John Wiley & Sons, Ltd.
Manikis, Georgios C; Marias, Kostas; Lambregts, Doenja M J; Nikiforaki, Katerina; van Heeswijk, Miriam M; Bakers, Frans C H; Beets-Tan, Regina G H; Papanikolaou, Nikolaos
2017-01-01
The purpose of this study was to compare the performance of four diffusion models, including mono- and bi-exponential, both Gaussian and non-Gaussian, in diffusion-weighted imaging of rectal cancer. Nineteen patients with rectal adenocarcinoma underwent MRI examination of the rectum before chemoradiation therapy, including a 7 b-value diffusion sequence (0, 25, 50, 100, 500, 1000 and 2000 s/mm^2) at a 1.5T scanner. Four different diffusion models, including mono- and bi-exponential Gaussian (MG and BG) and non-Gaussian (MNG and BNG), were applied to whole-tumor volumes of interest. Two different statistical criteria were recruited to assess their fitting performance: the adjusted R^2 and the Root Mean Square Error (RMSE). To decide which model better characterizes rectal cancer, model selection relied on the Akaike Information Criterion (AIC) and the F-ratio. All candidate models achieved a good fitting performance, with the two most complex models, the BG and the BNG, exhibiting the best fitting performance. However, both criteria for model selection indicated that the MG model performed better than any other model. In particular, using AIC weights and the F-ratio, the pixel-based analysis demonstrated that tumor areas were better described by the simplest MG model in an average area of 53% and 33%, respectively. Non-Gaussian behavior was exhibited in an average area of 37% according to the F-ratio, and 7% using AIC weights. However, the distributions of the pixels best fitted by each of the four models suggest that MG failed to perform better than any other model in all patients and over the overall tumor area. No single diffusion model evaluated herein could accurately describe rectal tumours. These findings can probably be explained on the basis of increased tumour heterogeneity, where areas with high vascularity could be fitted better with bi-exponential models, and areas with necrosis would mostly follow mono-exponential behavior.
NASA Astrophysics Data System (ADS)
Aubry, Eric; Weber, Sylvain; Billard, Alain; Martin, Nicolas
2014-01-01
Silicon oxynitride thin films were sputter deposited by the reactive gas pulsing process. A pure silicon target was sputtered in a mixed Ar, N2 and O2 atmosphere. Oxygen gas was periodically and solely introduced using exponential signals. In order to vary the injected O2 quantity in the deposition chamber during one pulse at constant injection time (TON), the mounting time τmou of the exponential signals was systematically changed for each deposition. Taking into account real-time measurements of the discharge voltage and of the I(O*)/I(Ar*) emission line ratio, it is shown that the oscillations of the discharge voltage during the TON and TOFF times (injection of O2 stopped) are attributable to the preferential adsorption of oxygen compared to that of nitrogen. The sputtering mode alternates from a fully nitrided mode (TOFF time) to a mixed (nitrided and oxidized) mode during the TON time. For the highest injected O2 quantities, the mixed mode tends toward a fully oxidized mode due to an increase of the trapped oxygen on the target. The oxygen (nitrogen) concentration in the SiOxNy films varies similarly (inversely) as the oxygen is trapped. Moreover, measurements of the contamination speed of the Si target surface are connected to the different behaviors of the process. At low injected O2 quantities, the nitrided mode predominates over the oxidized one during the TON time, leading to the formation of Si3N4-yOy-like films. Conversely, the mixed mode takes place at high injected O2 quantities and the oxidized mode prevails over the nitrided one, producing SiO2-xNx-like films.
Observational constraints on varying neutrino-mass cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geng, Chao-Qiang; Lee, Chung-Chi; Myrzakulov, R.
We consider generic models of quintessence and investigate the influence of massive neutrino matter with field-dependent masses on the matter power spectrum. In the case of minimally coupled neutrino matter, we examine the effect in tracker models with inverse power-law and double exponential potentials. We present detailed investigations of the scaling field with a steep exponential potential, non-minimally coupled to massive neutrino matter, and we derive constraints on field-dependent neutrino masses from the observational data.
Statistical independence of the initial conditions in chaotic mixing.
García de la Cruz, J M; Vassilicos, J C; Rossi, L
2017-11-01
Experimental evidence of the scalar convergence towards a global strange eigenmode independent of the scalar initial condition in chaotic mixing is provided. This convergence, underpinning the initial-condition-independent nature of chaotic mixing of any passive scalar, is evidenced by scalar fields with different initial conditions taking statistically similar shapes when advected by periodic unsteady flows. As the scalar patterns converge towards a global strange eigenmode, the scalar filaments, locally aligned with the direction of maximum stretching as described by Lagrangian stretching theory, stack together in an inhomogeneous pattern at distances smaller than their asymptotic minimum widths. The scalar variance decay then becomes exponential and independent of the scalar diffusivity or initial condition. In this work, mixing is achieved by advecting the scalar using a set of laminar flows with unsteady periodic topology. These flows, which resemble the tendril-whorl map, are obtained by morphing the forcing geometry in an electromagnetic free-surface 2D mixing experiment. This forcing generates a velocity field which periodically switches between two concentric hyperbolic and elliptic stagnation points. In agreement with previous literature, the velocity fields obtained produce a chaotic mixer with two regions: a central mixing area and an external extensional area. These two regions are interconnected through two pairs of fluid conduits which transfer clean and dyed fluid from the extensional area towards the mixing region and a homogenized mixture from the mixing area towards the extensional region.
Muñoz-Cuevas, Marina; Fernández, Pablo S; George, Susan; Pin, Carmen
2010-05-01
The dynamic model for the growth of a bacterial population described by Baranyi and Roberts (J. Baranyi and T. A. Roberts, Int. J. Food Microbiol. 23:277-294, 1994) was applied to model the lag period and exponential growth of Listeria monocytogenes under conditions of fluctuating temperature and water activity (a(w)) values. To model the duration of the lag phase, the dependence of the parameter h(0), which quantifies the amount of work done during the lag period, on the previous and current environmental conditions was determined experimentally. This parameter depended not only on the magnitude of the change between the previous and current environmental conditions but also on the current growth conditions. In an exponentially growing population, any change in the environment requiring a certain amount of work to adapt to the new conditions initiated a lag period that lasted until that work was finished. Observations for several scenarios in which exponential growth was halted by a sudden change in the temperature and/or a(w) were in good agreement with predictions. When a population already in a lag period was subjected to environmental fluctuations, the system was reset with a new lag phase. The work to be done during the new lag phase was estimated to be the workload due to the environmental change plus the unfinished workload from the uncompleted previous lag phase.
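In the Baranyi-Roberts framework the lag time λ and the maximum specific growth rate μmax are linked through h0 = μmax·λ, so the remaining workload can be tracked across an environmental shift, as the abstract describes. The sketch below illustrates that bookkeeping; the parameter values and the shift-induced workload are illustrative assumptions, not values from the study.

```python
import numpy as np

# Baranyi-Roberts relation: h0 = mu_max * lag, so lag = h0 / mu_max.
# Hypothetical workload bookkeeping across one environmental shift;
# all numbers below are illustrative, not taken from the study.

def lag_duration(h0, mu_max):
    """Lag time implied by workload h0 at specific growth rate mu_max."""
    return h0 / mu_max

h0_initial = 2.0                 # workload at inoculation (dimensionless)
mu_before, mu_after = 0.8, 0.4   # growth rates (1/h) before/after a shift
t_shift = 1.5                    # time of the environmental change (h)
work_from_shift = 1.0            # hypothetical work caused by the shift

lag1 = lag_duration(h0_initial, mu_before)
if t_shift < lag1:
    # unfinished workload carries over, plus new work caused by the shift
    unfinished = h0_initial - mu_before * t_shift
    h0_new = unfinished + work_from_shift
else:
    h0_new = work_from_shift     # only the shift-induced work remains

print(f"remaining lag after shift: {lag_duration(h0_new, mu_after):.2f} h")
```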
Zheng, Lai; Ismail, Karim
2017-05-01
Traffic conflict indicators measure the temporal and spatial proximity of conflict-involved road users. These indicators can reflect the severity of traffic conflicts to a reliable extent. Instead of using the indicator value directly as a severity index, many link functions have been developed to map the conflict indicator to a severity index. However, little information is available about the choice of a particular link function. To guard against link misspecification or subjectivity, a generalized exponential link function was developed. The severity index generated by this link was introduced into a parametric safety continuum model which objectively models the centre and tail regions. An empirical method, together with a full Bayesian estimation method, was adopted to estimate the model parameters. The safety implication of the return level was calculated based on the model parameters. The proposed approach was applied to conflict and crash data collected from 21 segments of three freeways located in Guangdong province, China. Pearson's correlation test between return levels and observed crashes showed that a θ value of 1.2 was the best choice of the generalization parameter for the current data set. This provides statistical support for using the generalized exponential link function. With the determined generalized exponential link function, the visualization of the parametric safety continuum was found to be a gyroscope-shaped hierarchy. Copyright © 2017 Elsevier Ltd. All rights reserved.
Scalar field and time varying cosmological constant in f(R,T) gravity for Bianchi type-I universe
NASA Astrophysics Data System (ADS)
Singh, G. P.; Bishi, Binaya K.; Sahoo, P. K.
2016-04-01
In this article, we analyse the behaviour of the scalar field and the cosmological constant in $f(R,T)$ theory of gravity. Here, we consider the simplest form of $f(R,T)$, i.e. $f(R,T)=R+2f(T)$, where $R$ is the Ricci scalar and $T$ is the trace of the energy momentum tensor, and explore the spatially homogeneous and anisotropic Locally Rotationally Symmetric (LRS) Bianchi type-I cosmological model. It is assumed that the Universe is filled with two non-interacting matter sources, namely a scalar field (normal or phantom) with a scalar potential and the matter contribution due to the $f(R,T)$ action. We discuss two cosmological models, corresponding to power-law and exponential laws of volume expansion, along with constant and exponential scalar potentials as sub-models. The power-law models are compatible with both normal (quintessence) and phantom scalar fields, whereas the exponential volume expansion models are compatible only with a normal (quintessence) scalar field. The values of the cosmological constant in our models are in agreement with observational results. Finally, we discuss some physical and kinematical properties of both models.
NASA Astrophysics Data System (ADS)
Pandolfi, Marco; Alastuey, Andrés; Pérez, Noemi; Reche, Cristina; Castro, Iria; Shatalov, Victor; Querol, Xavier
2016-09-01
In this work, for the first time, data from two twin stations (Barcelona, urban background, and Montseny, regional background), located in the northeast (NE) of Spain, were used to study the trends of the concentrations of different chemical species in PM10 and PM2.5, along with the trends of the PM10 source contributions from the positive matrix factorization (PMF) model. Eleven years of chemical data (2004-2014) were used for this study. Trends of both species concentrations and source contributions were studied using the Mann-Kendall test for linear trends and a new approach based on a multi-exponential fit of the data. Although the different PM fractions (PM2.5, PM10) showed linear decreasing trends at both stations, the contributions of specific sources of pollutants and of their chemical tracers showed exponential decreasing trends. The different types of trends observed reflected the different effectiveness and/or time of implementation of the measures taken to reduce the concentrations of atmospheric pollutants. Moreover, the trends of the contributions of specific sources, such as those related to industrial activities and to primary energy consumption, mirrored the effect of the financial crisis in Spain from 2008. The sources that showed statistically significant downward trends at both Barcelona (BCN) and Montseny (MSY) during 2004-2014 were secondary sulfate, secondary nitrate, and the V-Ni-bearing source. The contributions from these sources decreased exponentially during the considered period, indicating that the observed reductions were not gradual and consistent over time. Rather, the trends were less steep at the end of the period compared to the beginning, likely indicating the attainment of a lower limit. Moreover, statistically significant decreasing trends were observed for the contributions to PM from the industrial/traffic source at MSY (mixed metallurgy and road traffic) and from the industrial (mainly metallurgy) source at BCN. These sources were clearly linked with anthropogenic activities, and the observed decreasing trends confirmed the effectiveness of pollution control measures implemented at European or regional/local levels. Conversely, at the regional level, the contributions from sources mostly linked with natural processes, such as aged marine and aged organics, did not show statistically significant trends. The trends observed for the PM10 source contributions closely mirrored the trends observed for the chemical tracers of these pollutant sources.
NASA Astrophysics Data System (ADS)
Andrianov, A. A.; Cannata, F.; Kamenshchik, A. Yu.
2012-11-01
We show that the simple extension of the method of obtaining the general exact solution for the cosmological model with the exponential scalar-field potential to the case when dust is present fails, and we discuss the reasons for this puzzling phenomenon.
Looking for Connections between Linear and Exponential Functions
ERIC Educational Resources Information Center
Lo, Jane-Jane; Kratky, James L.
2012-01-01
Students frequently have difficulty determining whether a given real-life situation is best modeled as a linear relationship or as an exponential relationship. One root of such difficulty is the lack of deep understanding of the very concept of "rate of change." The authors will provide a lesson that allows students to reveal their misconceptions…
A Parametric Model for Barred Equilibrium Beach Profiles
2014-05-10
to shallow water. Bodge (1992) and Komar and McDougal (1994) suggested an exponential form as a preferred solution that exhibited finite slope at the...applications. J. Coast. Res. 7, 53–84. Komar, P.D., McDougal, W.G., 1994. The analysis of beach profiles and nearshore processes using the exponential beach
Progress report for a research program in theoretical high energy physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feldman, D.; Fried, H.M.; Jevicki, A.
This year's research has dealt with: superstrings in the early universe; invisible axion emissions from SN1987A; the quartic interaction in Witten's superstring field theory; W-boson associated multiplicity and the dual parton model; cosmic strings and galaxy formation; cosmic strings and baryogenesis; quark flavor mixing; p-p̄ scattering at TeV energies; random surfaces; ordered exponentials and differential equations; initial value and back-reaction problems in quantum field theory; string field theory and Weyl invariance; the renormalization group and string field theory; the evolution of scalar fields in an inflationary universe, with and without the effects of gravitational perturbations; cosmic string catalysis of skyrmion decay; inflation and cosmic strings from dynamical symmetry breaking; the physics of flavor mixing; string-inspired cosmology; strings at high energy densities and complex temperatures; the problem of non-locality in string theory; string statistical mechanics; large-scale structures with cosmic strings and neutrinos; the delta expansion for stochastic quantization; high-energy neutrino flux from ordinary cosmic strings; a physical picture of loop bremsstrahlung; cylindrically-symmetric solutions of four-dimensional sigma models; large-scale structure with hot dark matter and cosmic strings; the unitarization of the odderon; string thermodynamics and conservation laws; the dependence of inflationary-universe models on initial conditions; the delta expansion and local gauge invariance; particle physics and galaxy formation; chaotic inflation with metric and matter perturbations; grand-unified theories, galaxy formation, and large-scale structure; neutrino clustering in cosmic-string-induced wakes; and infrared approximations to nonlinear differential equations. 17 refs.
Linear prediction and single-channel recording.
Carter, A A; Oswald, R E
1995-08-01
The measurement of individual single-channel events arising from the gating of ion channels provides a detailed data set from which the kinetic mechanism of a channel can be deduced. In many cases, the pattern of dwells in the open and closed states is very complex, and the kinetic mechanism and parameters are not easily determined. Assuming a Markov model for channel kinetics, the probability density function for open and closed time dwells should consist of a sum of decaying exponentials. One method of approaching the kinetic analysis of such a system is to determine the number of exponentials and the corresponding parameters which comprise the open and closed dwell time distributions. These can then be compared to the relaxations predicted from the kinetic model to determine, where possible, the kinetic constants. We report here the use of a linear technique, linear prediction/singular value decomposition, to determine the number of exponentials and the exponential parameters. Using simulated distributions and comparing with standard maximum-likelihood analysis, the singular value decomposition techniques provide advantages in some situations and are a useful adjunct to other single-channel analysis techniques.
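As a rough illustration of the linear prediction/singular value decomposition idea: a sum of K decaying exponentials sampled on a uniform grid satisfies a K-term linear prediction relation, so a Hankel matrix built from the samples has numerical rank K, and counting the significant singular values estimates the number of exponential components. This is a minimal sketch on synthetic data with an arbitrary rank threshold, not the authors' full single-channel procedure.

```python
import numpy as np

# Two-component dwell-time density sampled uniformly in time.
t = np.arange(200) * 0.05
y = 0.7 * np.exp(-t / 0.3) + 0.3 * np.exp(-t / 2.0)
y += 1e-6 * np.random.default_rng(0).standard_normal(t.size)  # tiny noise

m = 50                                    # Hankel window length
H = np.array([y[i:i + m] for i in range(y.size - m)])
s = np.linalg.svd(H, compute_uv=False)
rank = int(np.sum(s / s[0] > 1e-3))       # the threshold is a judgment call
print("estimated number of exponentials:", rank)   # -> 2
```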
Local perturbations perturb—exponentially-locally
NASA Astrophysics Data System (ADS)
De Roeck, W.; Schütz, M.
2015-06-01
We elaborate on the principle that for gapped quantum spin systems with local interaction, "local perturbations [in the Hamiltonian] perturb locally [the groundstate]." This principle was established by Bachmann et al. [Commun. Math. Phys. 309, 835-871 (2012)], relying on the "spectral flow technique" or "quasi-adiabatic continuation" [M. B. Hastings, Phys. Rev. B 69, 104431 (2004)] to obtain locality estimates with sub-exponential decay in the distance to the spatial support of the perturbation. We use ideas of Hamza et al. [J. Math. Phys. 50, 095213 (2009)] to obtain similarly a transformation between gapped eigenvectors and their perturbations that is local with exponential decay. This allows us to improve locality bounds on the effect of perturbations on the low lying states in certain gapped models with a unique "bulk ground state" or "topological quantum order." We also give some estimates on the exponential decay of correlations in models with impurities, where some relevant correlations decay faster than one would naively infer from the global gap of the system, as one also expects in disordered systems with a localized groundstate.
Quantifying Uncertainties in N2O Emission Due to N Fertilizer Application in Cultivated Areas
Philibert, Aurore; Loyce, Chantal; Makowski, David
2012-01-01
Nitrous oxide (N2O) is a greenhouse gas with a global warming potential approximately 298 times greater than that of CO2. In 2006, the Intergovernmental Panel on Climate Change (IPCC) estimated N2O emission due to synthetic and organic nitrogen (N) fertilization at 1% of applied N. We investigated the uncertainty on this estimated value, by fitting 13 different models to a published dataset including 985 N2O measurements. These models were characterized by (i) the presence or absence of the explanatory variable “applied N”, (ii) the function relating N2O emission to applied N (exponential or linear function), (iii) fixed or random background (i.e. in the absence of N application) N2O emission and (iv) fixed or random applied N effect. We calculated ranges of uncertainty on N2O emissions from a subset of these models, and compared them with the uncertainty ranges currently used in the IPCC-Tier 1 method. The exponential models outperformed the linear models, and models including one or two random effects outperformed those including fixed effects only. The use of an exponential function rather than a linear function has an important practical consequence: the emission factor is not constant and increases as a function of applied N. Emission factors estimated using the exponential function were lower than 1% when the amount of N applied was below 160 kg N ha−1. Our uncertainty analysis shows that the uncertainty range currently used by the IPCC-Tier 1 method could be reduced. PMID:23226430
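The practical consequence noted above, an emission factor that grows with applied N under an exponential model, is easy to see numerically. In this sketch the coefficients a and b are hypothetical placeholders, not the values fitted in the study.

```python
import numpy as np

# Under an exponential emission model E(N) = exp(a + b*N), the fertilizer-
# induced emission factor EF(N) = (E(N) - E(0)) / N grows with applied N,
# unlike the constant EF of a linear model. Coefficients are illustrative.

a, b = 0.0, 0.007                  # hypothetical values, not the fitted ones
E = lambda N: np.exp(a + b * N)    # N2O emission (arbitrary units)

for N in (50, 160, 300):           # applied N (kg N/ha)
    ef = (E(N) - E(0)) / N
    print(f"N = {N:3d} kg/ha  ->  EF = {100 * ef:.2f}%")
```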
The size distribution of Pacific Seamounts
NASA Astrophysics Data System (ADS)
Smith, Deborah K.; Jordan, Thomas H.
1987-11-01
An analysis of wide-beam, Sea Beam and map-count data in the eastern and southern Pacific confirms the hypothesis that the average number of "ordinary" seamounts with summit heights h ≥ H can be approximated by the exponential frequency-size distribution v(H) = v0 e^(-βH). The exponential model, characterized by the single scale parameter β^(-1), is found to be superior to a power-law (self-similar) model. The exponential model provides a good first-order description of the summit-height distribution over a very broad spectrum of seamount sizes, from small cones (h < 300 m) to tall composite volcanoes (h > 3500 m). The distribution parameters obtained from 157,000 km of wide-beam profiles in the eastern and southern Pacific Ocean are v0 = (5.4 ± 0.65) × 10^(-9) m^(-2) and β = (3.5 ± 0.21) × 10^(-3) m^(-1), yielding an average of 5400 ± 650 seamounts per million square kilometers, of which 170 ± 17 are greater than one kilometer in height. The exponential distribution provides a reference for investigating the populations of not-so-ordinary seamounts, such as those on hotspot swells and near fracture zones, and seamounts in other ocean basins. If we assume that volcano height is determined by a hydraulic head proportional to the source depth of the magma column, then our observations imply an approximately exponential distribution of source depths. For reasonable values of magma and crustal densities, a volcano with the characteristic height β^(-1) = 285 m has an apparent source depth on the order of the crustal thickness.
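Since summit heights above a completeness cutoff are exponentially distributed under this model, β can be estimated by maximum likelihood as the reciprocal of the mean excess height. A minimal sketch on synthetic data (the cutoff and sample size are assumptions):

```python
import numpy as np

# For v(H) = v0 * exp(-beta * H), heights above a cutoff h_min are
# exponentially distributed, and the MLE of beta is 1 / (mean(h) - h_min).

rng = np.random.default_rng(1)
h_min = 300.0                       # assumed completeness cutoff (m)
beta_true = 3.5e-3                  # 1/m, the paper's fitted value
h = h_min + rng.exponential(1.0 / beta_true, size=5000)

beta_hat = 1.0 / (h.mean() - h_min)
print(f"beta_hat = {beta_hat:.2e} per metre")   # ~3.5e-3
```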
McGee, Monnie; Chen, Zhongxue
2006-01-01
There are many methods of correcting microarray data for non-biological sources of error. Authors routinely supply software or code so that interested analysts can implement their methods. Even with a thorough reading of associated references, it is not always clear how requisite parts of the method are calculated in the software packages. However, it is important to have an understanding of such details, as this understanding is necessary for proper use of the output, or for implementing extensions to the model. In this paper, the calculation of parameter estimates used in Robust Multichip Average (RMA), a popular preprocessing algorithm for Affymetrix GeneChip brand microarrays, is elucidated. The background correction method for RMA assumes that the perfect match (PM) intensities observed result from a convolution of the true signal, assumed to be exponentially distributed, and a background noise component, assumed to have a normal distribution. A conditional expectation is calculated to estimate signal. Estimates of the mean and variance of the normal distribution and the rate parameter of the exponential distribution are needed to calculate this expectation. Simulation studies show that the current estimates are flawed; therefore, new ones are suggested. We examine the performance of preprocessing under the exponential-normal convolution model using several different methods to estimate the parameters.
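For reference, the conditional expectation under the normal + exponential convolution model has a closed form; the sketch below implements it in the form used by normexp-style corrections, with illustrative parameter values rather than RMA's estimates.

```python
import numpy as np
from scipy.stats import norm

# Model: X = S + B with signal S ~ Exp(mean alpha) and background
# B ~ N(mu, sigma^2). The background-corrected signal is E[S | X = x]:
#   mu_sx = x - mu - sigma^2 / alpha
#   E[S|X=x] = mu_sx + sigma * phi(mu_sx/sigma) / Phi(mu_sx/sigma)

def normexp_signal(x, mu, sigma, alpha):
    mu_sx = x - mu - sigma**2 / alpha
    z = mu_sx / sigma
    return mu_sx + sigma * norm.pdf(z) / norm.cdf(z)

x = np.array([80.0, 120.0, 500.0])   # observed PM intensities (illustrative)
print(normexp_signal(x, mu=100.0, sigma=15.0, alpha=200.0))  # all positive
```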
Exponential gain of randomness certified by quantum contextuality
NASA Astrophysics Data System (ADS)
Um, Mark; Zhang, Junhua; Wang, Ye; Wang, Pengfei; Kim, Kihwan
2017-04-01
We demonstrate a protocol for exponential gain of randomness certified by quantum contextuality in a trapped-ion system. Genuine randomness can be produced by quantum principles and certified by quantum inequalities. Recently, randomness expansion protocols based on Bell-test inequalities and the Kochen-Specker (KS) theorem have been demonstrated. These schemes have been theoretically developed to expand randomness exponentially and to amplify randomness from a weak initial random seed. Here, we report experimental evidence of such exponential expansion of randomness. In the experiment, we use three states of a 138Ba+ ion, comprising a ground state and two quadrupole states. In the 138Ba+ ion system, there is no detection loophole, and we apply a method to rule out certain hidden variable models that obey a kind of extended noncontextuality.
NASA Astrophysics Data System (ADS)
Harman, C. J.
2015-12-01
Surface water hydrologic models are increasingly used to analyze the transport of solutes, such as nitrate, through the landscape. However, many of these models cannot adequately capture the effect of groundwater flow paths, which can have long travel times and accumulate legacy contaminants, releasing them to streams over decades. If these long lag times are not accounted for, the short-term efficacy of management activities to reduce nitrogen loads may be overestimated. Models that adopt a simple 'well-mixed' assumption, leading to an exponential transit time distribution at steady state, cannot adequately capture the broadly skewed nature of groundwater transit times in typical watersheds. Here I will demonstrate how StorAge Selection functions can be used to capture the long lag times of groundwater in a subwatershed-based hydrologic model framework typical of models like SWAT, HSPF, HBV, PRMS and others. These functions can be selected and calibrated to reproduce historical data where available, but can also be fitted to the results of a steady-state groundwater transport model like MODFLOW/MODPATH, allowing those results to directly inform the parameterization of an unsteady surface water model. The long tails of the transit time distribution predicted by the groundwater model can then be completely captured by the surface water model. Examples of this application in the Chesapeake Bay watersheds and elsewhere will be given.
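For steady-state intuition, routing an input through a transit-time distribution (TTD) is a convolution, and comparing an exponential ("well-mixed") TTD with an equal-mean, heavier-tailed gamma TTD shows how the long tail prolongs legacy release. A minimal sketch with assumed numbers (a 30-year input pulse and a 10-year mean transit time); the study's SAS-function routing is time-varying and richer than this.

```python
import numpy as np
from scipy.stats import expon, gamma

# C_out(t) = integral of C_in(t - tau) * p(tau) dtau, evaluated numerically.
dt = 0.1
tau = np.arange(0, 200, dt)                # years
c_in = np.where(tau < 30, 1.0, 0.0)        # 30-yr pulse of input (e.g., N)

p_exp = expon.pdf(tau, scale=10.0)         # mean transit time 10 yr
p_gam = gamma.pdf(tau, a=0.5, scale=20.0)  # same mean, heavier tail

c_exp = np.convolve(c_in, p_exp)[:tau.size] * dt
c_gam = np.convolve(c_in, p_gam)[:tau.size] * dt
i = 1000                                   # t = 100 yr, 70 yr after the pulse
print(f"exp TTD: {c_exp[i]:.4f}, gamma TTD: {c_gam[i]:.4f}")
# the gamma tail keeps releasing legacy solute long after the exponential
```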
NASA Astrophysics Data System (ADS)
Mascaro, Giuseppe
2018-04-01
This study uses daily rainfall records of a dense network of 240 gauges in central Arizona to gain insights into (i) the variability of the seasonal distributions of rainfall extremes; (ii) how the seasonal distributions affect the shape of the annual distribution; and (iii) the presence of spatial patterns and orographic controls on these distributions. For this aim, recent methodological advancements in peak-over-threshold analysis and application of the Generalized Pareto Distribution (GPD) were used to assess the suitability of the GPD hypothesis and improve the estimation of its parameters, while limiting the effect of short sample sizes. The distribution of daily rainfall extremes was found to be heavy-tailed (i.e., GPD shape parameter ξ > 0) during the summer season, dominated by convective monsoonal thunderstorms. The exponential distribution (a special case of the GPD with ξ = 0) was instead shown to be appropriate for modeling wintertime daily rainfall extremes, mainly caused by cold fronts transported by westerly flow. The annual distribution exhibited a mixed behavior, with lighter upper tails than those found in summer. A hybrid model mixing the two seasonal distributions was demonstrated to be capable of reproducing the annual distribution. Organized spatial patterns, mainly controlled by elevation, were observed for the GPD scale parameter, while ξ did not show any clear control of location or orography. The quantiles returned by the GPD were found to be very similar to those provided by the National Oceanic and Atmospheric Administration (NOAA) Atlas 14, which used the Generalized Extreme Value (GEV) distribution. Results of this work are useful to improve statistical modeling of daily rainfall extremes at high spatial resolution and provide diagnostic tools for assessing the ability of climate models to simulate extreme events.
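A minimal peaks-over-threshold sketch of the approach: fit a GPD to excesses over a high threshold and inspect the shape parameter ξ (ξ > 0 indicates a heavy, summer-like tail; ξ = 0 reduces to the exponential, winter-like case). The data, threshold choice, and quantile level are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(2)
rain = rng.gamma(0.4, 8.0, size=20000)      # stand-in daily totals (mm)
u = np.quantile(rain[rain > 0], 0.95)       # threshold choice matters
excess = rain[rain > u] - u

xi, loc, scale = genpareto.fit(excess, floc=0.0)
q = u + genpareto.ppf(0.99, xi, 0.0, scale) # high quantile of exceedances
print(f"xi = {xi:.3f}, scale = {scale:.2f}, 99% exceedance quantile = {q:.1f} mm")
```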
Solving the small-scale structure puzzles with dissipative dark matter
NASA Astrophysics Data System (ADS)
Foot, Robert; Vagnozzi, Sunny
2016-07-01
Small-scale structure is studied in the context of dissipative dark matter, arising for instance in models with a hidden unbroken Abelian sector, so that dark matter couples to a massless dark photon. The dark sector interacts with ordinary matter via gravity and photon-dark photon kinetic mixing. Mirror dark matter is a theoretically constrained special case where all parameters are fixed except for the kinetic mixing strength, ε. In these models, the dark matter halo around spiral and irregular galaxies takes the form of a dissipative plasma which evolves in response to various heating and cooling processes. It has been argued previously that such dynamics can account for the inferred cored density profiles of galaxies and other related structural features. Here we focus on the apparent deficit of nearby small galaxies (the "missing satellite problem"), which these dissipative models have the potential to address through small-scale power suppression by acoustic and diffusion damping. Using a variant of the extended Press-Schechter formalism, we evaluate the halo mass function for the special case of mirror dark matter. Considering a simplified model where M_baryons ∝ M_halo, we relate the halo mass function to more directly observable quantities, and find that for ε ≈ 2 × 10^(-10) such a simplified description is compatible with the measured galaxy luminosity and velocity functions. On scales M_halo ≲ 10^8 M_⊙, diffusion damping exponentially suppresses the halo mass function, suggesting a nonprimordial origin for dwarf spheroidal satellite galaxies, which we speculate were formed via a top-down fragmentation process as the result of nonlinear dissipative collapse of larger density perturbations. This could explain the planar orientation of satellite galaxies around Andromeda and the Milky Way.
NASA Astrophysics Data System (ADS)
Liang, L. L.; Arcus, V. L.; Heskel, M.; O'Sullivan, O. S.; Weerasinghe, L. K.; Creek, D.; Egerton, J. J. G.; Tjoelker, M. G.; Atkin, O. K.; Schipper, L. A.
2017-12-01
Temperature is a crucial factor in determining the rates of ecosystem processes such as leaf respiration (R), the flux of plant-respired carbon dioxide (CO2) from leaves to the atmosphere. Generally, respiration rate increases exponentially with temperature, as modelled by the Arrhenius equation, but a recent study (Heskel et al., 2016) showed a universally convergent temperature response of R using an empirical exponential/polynomial model whereby the exponent in the Arrhenius model is replaced by a quadratic function of temperature. The exponential/polynomial model has been used elsewhere to describe shoot respiration and plant respiration. What are the principles that underlie these empirical observations? Here, we demonstrate that macromolecular rate theory (MMRT), based on transition state theory for chemical kinetics, is equivalent to the exponential/polynomial model. We re-analyse the data from Heskel et al. (2016) using MMRT to show this equivalence and thus provide an explanation, based on thermodynamics, for the convergent temperature response of R. Using statistical tools, we also show the equivalent explanatory power of MMRT compared to the exponential/polynomial model, and the superiority of both of these models over the Arrhenius function. Three meaningful parameters emerge from MMRT analysis: the temperature at which the rate of respiration is maximum (the so-called optimum temperature, Topt), the temperature at which the respiration rate is most sensitive to changes in temperature (the inflection temperature, Tinf) and the overall curvature of the log(rate) versus temperature plot (the so-called change in heat capacity for the system, ΔC_P‡). The latter term originates from the change in heat capacity between an enzyme-substrate complex and an enzyme transition-state complex in enzyme-catalysed metabolic reactions. From MMRT, we find the average Topt and Tinf of R are 67.0 ± 1.2 °C and 41.4 ± 0.7 °C across global sites. The average curvature (the average ΔC_P‡, which is negative) is -1.2 ± 0.1 kJ mol^(-1) K^(-1). MMRT extends classic transition state theory to enzyme-catalysed reactions and scales up to more complex processes, including micro-organism growth rates and ecosystem processes.
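For reference, MMRT is commonly written as the following transition-state-theory rate law with a temperature-independent ΔC_P‡ (a standard form in the MMRT literature; symbols as above, with T0 a reference temperature):

```latex
\ln k(T) \;=\; \ln\frac{k_{B}T}{h}
\;-\; \frac{\Delta H^{\ddagger}_{T_{0}} + \Delta C_{P}^{\ddagger}\,(T - T_{0})}{R\,T}
\;+\; \frac{\Delta S^{\ddagger}_{T_{0}} + \Delta C_{P}^{\ddagger}\,\ln(T/T_{0})}{R}
```

Because ΔC_P‡ < 0, ln k(T) is concave in T, which is what produces the optimum temperature Topt and the inflection temperature Tinf quoted above.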
Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv
2012-12-11
Illumina BeadArray technology includes non-specific negative control features that allow a precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio, which leads to an important loss of information by generating negative values, a background correction method modeling the observed intensities as the sum of an exponentially distributed signal and normally distributed noise has been developed. Nevertheless, Wang and Ye (2012) present a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggest that a gamma distribution would better model the signal density. Hence, the normal-exponential model may not be appropriate for Illumina data, and background corrections derived from this model may lead to incorrect estimates. We propose a more flexible model based on a gamma-distributed signal and normally distributed background noise, and develop the associated background correction, implemented in the R package NormalGamma. Our model proves to be markedly more accurate for Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a better fit of the observed intensities. On the other hand, the comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarities, reinforcing the validity of the normal-gamma model. The performance of the background corrections based on the normal-gamma and normal-exponential models is compared on two dilution data sets, through testing procedures representing various experimental designs. Surprisingly, we observe that implementing a more accurate parametrisation in the model-based background correction does not increase the sensitivity. These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement in terms of bias, but at the cost of a loss in precision. This paper addresses the lack of fit of the usual normal-exponential model by proposing a more flexible parametrisation of the signal distribution, together with the associated background correction. This new model proves to be considerably more accurate for Illumina microarrays, but the improvement in terms of modeling does not lead to a higher sensitivity in differential analysis. Nevertheless, this more realistic model makes way for future investigations, in particular to examine the characteristics of pre-processing strategies.
Zhao, Kaihong
2018-12-01
In this paper, we study the n-species impulsive Gilpin-Ayala competition model with discrete and distributed time delays. The existence of a positive periodic solution is proved by employing the fixed point theorem on cones. By constructing an appropriate Lyapunov functional, we also obtain the global exponential stability of the positive periodic solution of this system. As an application, an interesting example is provided to illustrate the validity of our main results.
A mechanical model of bacteriophage DNA ejection
NASA Astrophysics Data System (ADS)
Arun, Rahul; Ghosal, Sandip
2017-08-01
Single-molecule experiments on bacteriophages show an exponential scaling for the dependence of mobility on the length of DNA within the capsid. It has been suggested that this could be due to the "capstan mechanism": the exponential amplification of friction forces that results when a rope is wound around a cylinder, as in a ship's capstan. Here we describe a desktop experiment that illustrates the effect. Though our model phage is a million times larger, it exhibits the same scaling observed in single-molecule experiments.
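For reference, the classical capstan (Euler belt) relation behind this interpretation is

```latex
T_{\mathrm{hold}} \;=\; T_{\mathrm{load}}\, e^{\mu\theta}
```

so the friction force grows exponentially with the wrap angle θ and friction coefficient μ; if the wrapped length of packaged DNA plays the role of θ, an exponential dependence of mobility on packed length follows.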
A new approach to the extraction of single exponential diode model parameters
NASA Astrophysics Data System (ADS)
Ortiz-Conde, Adelmo; García-Sánchez, Francisco J.
2018-06-01
A new integration method is presented for the extraction of the parameters of a single-exponential diode model with series resistance from measured forward I-V characteristics. The extraction is performed using auxiliary functions based on the integration of the data, which make it possible to isolate the effects of each of the model parameters. A differentiation method is also presented for data with low levels of experimental noise. Measured and simulated data are used to verify the applicability of both proposed methods. Physical insight into the validity of the model is also obtained from the proposed graphical determination of the parameters.
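As a point of comparison (this is not the paper's integration method), the single-exponential diode model with series resistance, I = Is(exp((V - I·Rs)/(n·Vt)) - 1), can be written explicitly with the Lambert W function and fitted by ordinary least squares. All numbers below are synthetic.

```python
import numpy as np
from scipy.special import lambertw
from scipy.optimize import curve_fit

VT = 0.02585  # thermal voltage at ~300 K (V)

def diode_current(V, Is, n, Rs):
    # Explicit solution of I = Is*(exp((V - I*Rs)/(n*VT)) - 1) via Lambert W.
    a = Is * Rs / (n * VT)
    w = lambertw(a * np.exp((V + Is * Rs) / (n * VT))).real
    return (n * VT / Rs) * w - Is

V = np.linspace(0.1, 0.8, 50)
I_meas = diode_current(V, 1e-9, 1.8, 5.0)            # synthetic "data"
I_meas *= 1 + 0.01 * np.random.default_rng(3).standard_normal(V.size)

popt, _ = curve_fit(diode_current, V, I_meas, p0=(1e-8, 2.0, 1.0),
                    bounds=([1e-12, 1.0, 0.1], [1e-6, 3.0, 50.0]))
print("Is, n, Rs =", popt)
```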
ERIC Educational Resources Information Center
Casstevens, Thomas W.; And Others
This document consists of five units, all of which deal with applications of mathematics to American politics. The first three involve applications of calculus; the last two deal with applications of algebra. The first module is geared to teach a student how to: 1) compute estimates of the value of the parameters in negative exponential models; and draw…
NASA Technical Reports Server (NTRS)
Giver, Lawrence P.; Benner, D. C.; Tomasko, M. G.; Fink, U.; Kerola, D.
1990-01-01
Transmission measurements made on near-infrared laboratory methane spectra have previously been fit using a Malkmus band model. The laboratory spectra were obtained in three groups at temperatures averaging 112, 188, and 295 K; band model fitting was done separately for each temperature group. These band model parameters cannot be used directly in scattering atmosphere model computations, so an exponential sum model is being developed which includes pressure and temperature fitting parameters. The goal is to obtain model parameters by least square fits at 10/cm intervals from 3800 to 9100/cm. These results will be useful in the interpretation of current planetary spectra and also NIMS spectra of Jupiter anticipated from the Galileo mission.
Saucedo-Reyes, Daniela; Carrillo-Salazar, José A; Román-Padilla, Lizbeth; Saucedo-Veloz, Crescenciano; Reyes-Santamaría, María I; Ramírez-Gilly, Mariana; Tecante, Alberto
2018-03-01
High hydrostatic pressure inactivation kinetics of Escherichia coli ATCC 25922 and Salmonella enterica subsp. enterica serovar Typhimurium ATCC 14028 (S. Typhimurium) in a low-acid mamey pulp were obtained at four pressure levels (300, 350, 400, and 450 MPa), different exposure times (0-8 min), and a temperature of 25 ± 2 °C. Survival curves showed deviations from linearity in the form of a tail (upward concavity). The primary models tested were the Weibull model, the modified Gompertz equation, and the biphasic model. The Weibull model gave the best goodness of fit (adjusted R^2 > 0.956, root mean square error < 0.290) and the lowest Akaike information criterion value. Exponential-logistic and exponential-decay models, and Bigelow-type and empirical models, were tested as alternative secondary models for the b'(P) and n(P) parameters, respectively. The process validation considered two- and one-step nonlinear regressions for predicting the survival fraction; both regression types provided an adequate goodness of fit, and the one-step nonlinear regression clearly reduced fitting errors. The best candidate model according to Akaike information theory, with better accuracy and more reliable predictions, was the Weibull model integrated with the exponential-logistic and exponential-decay secondary models as functions of time and pressure (two-step procedure) or incorporated as one equation (one-step procedure). Both mathematical expressions were used to determine the t_d parameter, where the desired reductions (taking d = 5 (t_5) as the criterion of a 5-log10 reduction, 5D) in both microorganisms are attainable at 400 MPa for 5.487 ± 0.488 or 5.950 ± 0.329 min, respectively, for the one- or two-step nonlinear procedure.
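A minimal sketch of the primary-model step under the common log10 Weibull form log10(N/N0) = -b·t^n, where tailing (upward concavity) corresponds to n < 1; the data points and parameter values are invented for illustration, and the secondary pressure dependence b'(P), n(P) is omitted.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_log10(t, b, n):
    # log10 survival fraction under the Weibull model
    return -b * t**n

t = np.array([0.5, 1, 2, 3, 4, 6, 8])                        # minutes
logS = np.array([-0.9, -1.4, -2.1, -2.6, -3.0, -3.6, -4.1])  # made-up data

(b, n), _ = curve_fit(weibull_log10, t, logS, p0=(1.0, 0.7))
t5 = (5.0 / b) ** (1.0 / n)       # time to a 5-log10 (5D) reduction
print(f"b = {b:.2f}, n = {n:.2f}, t(5D) = {t5:.1f} min")
```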
Optimized open-flow mixing: insights from microbubble streaming
NASA Astrophysics Data System (ADS)
Rallabandi, Bhargav; Wang, Cheng; Guo, Lin; Hilgenfeldt, Sascha
2015-11-01
Microbubble streaming has been developed into a robust and powerful flow actuation technique in microfluidics. Here, we study it as a paradigmatic system for microfluidic mixing under a continuous throughput of fluid (open-flow mixing), providing a systematic optimization of the device parameters in this practically important situation. Focusing on two-dimensional advective stirring (neglecting diffusion), we show through numerical simulation and analytical theory that mixing in steady streaming vortices becomes ineffective beyond a characteristic time scale, necessitating the introduction of unsteadiness. By duty cycling the streaming, such unsteadiness is introduced in a controlled fashion, leading to exponential refinement of the advection structures. The rate of refinement is then optimized for particular parameters of the time modulation, i.e. a particular combination of times for which the streaming is turned "on" and "off". The optimized protocol can be understood theoretically using the properties of the streaming vortices and the throughput Poiseuille flow. We can thus infer simple design principles for practical open-flow micromixing applications, consistent with experiments.
Exponential Stellar Disks in Low Surface Brightness Galaxies: A Critical Test of Viscous Evolution
NASA Astrophysics Data System (ADS)
Bell, Eric F.
2002-12-01
Viscous redistribution of mass in Milky Way-type galactic disks is an appealing way of generating an exponential stellar profile over many scale lengths, almost independent of initial conditions, requiring only that the viscous timescale and star formation timescale are approximately equal. However, galaxies with solid-body rotation curves cannot undergo viscous evolution. Low surface brightness (LSB) galaxies have exponential surface brightness profiles, yet have slowly rising, nearly solid-body rotation curves. Because of this, viscous evolution may be inefficient in LSB galaxies: the exponential profiles, instead, would give important insight into initial conditions for galaxy disk formation. Using star formation laws from the literature and tuning the efficiency of viscous processes to reproduce an exponential stellar profile in Milky Way-type galaxies, I test the role of viscous evolution in LSB galaxies. Under the conservative and not unreasonable condition that LSB galaxies are gravitationally unstable for at least a part of their lives, I find that it is impossible to rule out a significant role for viscous evolution. This type of model still offers an attractive way of producing exponential disks, even in LSB galaxies with slowly rising rotation curves.
NASA Astrophysics Data System (ADS)
Wang, Meng; Lu, Baohong
2017-04-01
Nitrate is essential for the growth and survival of plants, animals and humans. However, excess nitrate in drinking water is regarded as a health hazard, as it is linked to infant methemoglobinemia and esophageal cancer. Revealing nitrate characteristics and identifying its sources are fundamental for making effective water management strategies, but nitrate sources in multi-tributary watersheds with mixed land cover remain unclear. It is difficult to determine the predominant NO3- sources using conventional water quality monitoring techniques. In our study, based on 20 surface water sampling sites monitored for more than two years, from April 2012 to December 2014, water chemistry and dual isotopic approaches (δ15N-NO3- and δ18O-NO3-) were integrated for the first time to evaluate nitrate characteristics and sources in the Huashan watershed, Jianghuai hilly region, East China. The results demonstrated that nitrate content in surface water was relatively low in the downstream (<10 mg/L), but spatial heterogeneities were remarkable among different sub-watersheds. Extremely high nitrate was observed at the source of the river in one of the sub-watersheds, and it exhibited an exponential decline along the stream due to dilution, absorption by aquatic plants, and high forest cover. Although nitrate declined dramatically along the stream, denitrification was not found in surface water, based on analysis of the δ15N-NO3- and δ18O-NO3- relationship. Proportional contributions of five potential nitrate sources (i.e., precipitation; manure and sewage; soil nitrogen; nitrate fertilizer; nitrate derived from ammonia fertilizer and rainfall) were estimated using a Bayesian isotope mixing model. Model results indicated that nitrate sources varied significantly among different rainfall conditions, land use types, and anthropogenic activities. In summary, coupling dual isotopes of nitrate (δ15N-NO3- and δ18O-NO3- simultaneously) with a Bayesian isotope mixing model offers a useful and practical way to qualitatively analyze nitrate sources and transformations, as well as to quantitatively estimate the contributions of potential nitrate sources in surface water. With the assessment of nitrate sources and characteristics, effective management strategies can be implemented to reduce N export and improve water quality in this region.
Exponential Speedup of Quantum Annealing by Inhomogeneous Driving of the Transverse Field
NASA Astrophysics Data System (ADS)
Susa, Yuki; Yamashiro, Yu; Yamamoto, Masayuki; Nishimori, Hidetoshi
2018-02-01
We show, for quantum annealing, that a certain type of inhomogeneous driving of the transverse field erases first-order quantum phase transitions in the p-body interacting mean-field-type model with and without longitudinal random field. Since a first-order phase transition poses a serious difficulty for quantum annealing (adiabatic quantum computing) due to the exponentially small energy gap, the removal of first-order transitions means an exponential speedup of the annealing process. The present method may serve as a simple protocol for the performance enhancement of quantum annealing, complementary to non-stoquastic Hamiltonians.
Observational constraints on tachyonic chameleon dark energy model
NASA Astrophysics Data System (ADS)
Banijamali, A.; Bellucci, S.; Fazlpour, B.; Solbi, M.
2018-03-01
It has recently been shown that the tachyonic chameleon model of dark energy, in which a tachyon scalar field is non-minimally coupled to matter, admits a stable scaling attractor solution that could give rise to the late-time accelerated expansion of the universe and hence alleviate the coincidence problem. In the present work, we use data from Type Ia supernovae (SNe Ia) and baryon acoustic oscillations to place constraints on the model parameters. In our analysis, we consider both exponential and non-exponential forms for the non-minimal coupling function and the tachyonic potential, and show that the scenario is compatible with observations.
Cosmological models with a hybrid scale factor in an extended gravity theory
NASA Astrophysics Data System (ADS)
Mishra, B.; Tripathy, S. K.; Tarai, Sankarsan
2018-03-01
A general formalism to investigate Bianchi type VI_h universes is developed in an extended theory of gravity. A minimally coupled geometry and matter field is considered, with a rescaled function f(R,T) substituted in place of the Ricci scalar R in the geometrical action. Dynamical aspects of the models are discussed using a hybrid scale factor (HSF) that behaves as a power law at an early epoch and as an exponential at a late epoch. The power-law and exponential behaviours appear as two extreme cases of the present model.
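A hybrid scale factor of the kind described is commonly parameterized as a product of a power law and an exponential; the exact form used in the paper may differ, but a representative choice is

```latex
a(t) \;=\; a_{0}\, t^{\alpha}\, e^{\beta t},
\qquad
H(t) \;\equiv\; \frac{\dot a}{a} \;=\; \frac{\alpha}{t} + \beta
```

so that the power-law term dominates for βt ≪ 1 and the exponential term dominates at late times.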
Yu, Yi-Lin; Yang, Yun-Ju; Lin, Chin; Hsieh, Chih-Chuan; Li, Chiao-Zhu; Feng, Shao-Wei; Tang, Chi-Tun; Chung, Tzu-Tsao; Ma, Hsin-I; Chen, Yuan-Hao; Ju, Da-Tong; Hueng, Dueng-Yuan
2017-01-01
Tumor control rates of pituitary adenomas (PAs) receiving adjuvant CyberKnife stereotactic radiosurgery (CK SRS) are high. However, there is currently no uniform way to estimate the time course of the disease. The aim of this study was to analyze the volumetric responses of PAs after CK SRS and investigate the application of an exponential decay model in calculating an accurate time course and estimating the eventual outcome. A retrospective review of 34 patients with PAs who received adjuvant CK SRS between 2006 and 2013 was performed. Tumor volume was calculated using the planimetric method. The percent change in tumor volume and the rate of volume change were compared at median intervals of 4, 10, 20, and 36 months. Tumor responses were classified as progression for a >15% volume increase, regression for a >15% decrease, and stabilization for a change within ±15% of the baseline volume at the time of last follow-up. For each patient, the volumetric change versus time was fitted with an exponential model. The overall tumor control rate was 94.1% over the 36-month (range 18-87 months) follow-up period (mean volume change of -43.3%). Volume regression (mean decrease of -50.5%) was demonstrated in 27 (79%) patients, tumor stabilization (mean change of -3.7%) in 5 (15%) patients, and tumor progression (mean increase of 28.1%) in 2 (6%) patients (P = 0.001). Tumors that eventually regressed or stabilized had a temporary volume increase of 1.07% and 41.5%, respectively, at 4 months after CK SRS (P = 0.017). The tumor volume estimated using the exponential fitting equation demonstrated a high positive correlation with the actual volume calculated by magnetic resonance imaging (MRI), as tested by the Pearson correlation coefficient (0.9). Transient progression of PAs post-CK SRS was seen in 62.5% of the patients and was not predictive of eventual volume regression or progression. A three-point exponential model is of potential predictive value according to the relative distribution. An exponential decay model can be used to calculate the time course of tumors that are ultimately controlled. PMID:28121913
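A hedged sketch of the kind of exponential decay fit described, assuming the plateau form V(t) = V∞ + (V0 - V∞)·e^(-kt); the follow-up times and volumes below are invented, not patient data.

```python
import numpy as np
from scipy.optimize import curve_fit

def vol(t, v_inf, v0, k):
    # exponential decay toward a plateau volume v_inf
    return v_inf + (v0 - v_inf) * np.exp(-k * t)

t = np.array([0.0, 4.0, 10.0, 20.0, 36.0])    # months after SRS
v = np.array([3.2, 3.0, 2.4, 1.9, 1.6])       # cm^3 (made up)

(v_inf, v0, k), _ = curve_fit(vol, t, v, p0=(1.5, 3.2, 0.1))
print(f"plateau volume = {v_inf:.2f} cm^3, rate k = {k:.3f} /month")
```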
NASA Astrophysics Data System (ADS)
Mert, Bayram Ali; Dag, Ahmet
2017-12-01
In this study, a practical and educational geostatistical program (JeoStat) was first developed, and an example analysis of the porosity parameter distribution, using oilfield data, is then presented. With this program, two- or three-dimensional variogram analysis can be performed using normal, log-normal or indicator-transformed data. In these analyses, JeoStat offers seven commonly used theoretical variogram models (Spherical, Gaussian, Exponential, Linear, Generalized Linear, Hole Effect and Paddington Mix) to the user. These theoretical models can be easily and quickly fitted to experimental variograms using a mouse. JeoStat uses the ordinary kriging interpolation technique for the computation of point or block estimates, and cross-validation tests for validation of the fitted theoretical model. All the results obtained by the analysis, as well as all graphics such as histograms, variograms and kriging estimation maps, can be saved to the hard drive, including digitised graphics and maps. The numerical values of any point in a map can be monitored using the mouse and text boxes. The program is available to students, researchers, consultants and corporations of any size free of charge. The JeoStat software package and source codes are available at: http://www.jeostat.com/JeoStat_2017.0.rar.
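For illustration, two of the listed theoretical variogram models written as plain functions of lag h, using a nugget c0, partial sill c, and range a. The factor 3 in the exponential model is the usual practical-range convention; JeoStat's internal conventions may differ.

```python
import numpy as np

def spherical(h, c0, c, a):
    # rises as a cubic polynomial up to the range a, then levels at the sill
    h = np.asarray(h, dtype=float)
    g = c0 + c * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h < a, g, c0 + c)

def exponential(h, c0, c, a):
    # approaches the sill asymptotically; a is the practical range
    h = np.asarray(h, dtype=float)
    return c0 + c * (1.0 - np.exp(-3.0 * h / a))

print(spherical([0.5, 1.0, 2.0], c0=0.1, c=0.9, a=1.0))
print(exponential([0.5, 1.0, 2.0], c0=0.1, c=0.9, a=1.0))
```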
AN EMPIRICAL FORMULA FOR THE DISTRIBUTION FUNCTION OF A THIN EXPONENTIAL DISC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Sanjib; Bland-Hawthorn, Joss
2013-08-20
An empirical formula for a Shu distribution function that reproduces a thin disc with exponential surface density to good accuracy is presented. The formula has two free parameters that specify the functional form of the velocity dispersion. Conventionally, this requires the use of an iterative algorithm to produce the correct solution, which is computationally taxing for applications like Markov Chain Monte Carlo model fitting. The formula has been shown to work for flat, rising, and falling rotation curves. Application of this methodology to one of the Dehnen distribution functions is also shown. Finally, an extension of this formula to reproduce velocity dispersion profiles that are an exponential function of radius is also presented. Our empirical formula should greatly aid the efficient comparison of disc models with large stellar surveys or N-body simulations.
Locality of the Thomas-Fermi-von Weizsäcker Equations
NASA Astrophysics Data System (ADS)
Nazar, F. Q.; Ortner, C.
2017-06-01
We establish a pointwise stability estimate for the Thomas-Fermi-von Weizsäcker (TFW) model, which demonstrates that a local perturbation of a nuclear arrangement results also in a local response in the electron density and electrostatic potential. The proof adapts the arguments for existence and uniqueness of solutions to the TFW equations in the thermodynamic limit by Catto et al. (The mathematical theory of thermodynamic limits: Thomas-Fermi type models. Oxford mathematical monographs. The Clarendon Press, Oxford University Press, New York, 1998). To demonstrate the utility of this combined locality and stability result we derive several consequences, including an exponential convergence rate for the thermodynamic limit, partition of total energy into exponentially localised site energies (and consequently, exponential locality of forces), and generalised and strengthened results on the charge neutrality of local defects.
A demographic study of the exponential distribution applied to uneven-aged forests
Jeffrey H. Gove
2016-01-01
A demographic approach based on a size-structured version of the McKendrick-Von Foerster equation is used to demonstrate a theoretical link between the population size distribution and the underlying vital rates (recruitment, mortality and diameter growth) for the population of individuals whose diameter distribution is negative exponential. This model supports the...
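The link can be sketched directly: for constant diameter growth g and mortality m, the steady-state size-structured (McKendrick-von Foerster) equation yields a negative exponential diameter distribution,

```latex
\frac{d}{dD}\bigl[g\,n(D)\bigr] \;=\; -\,m\,n(D)
\quad\Longrightarrow\quad
n(D) \;=\; n_{0}\,e^{-(m/g)\,D} \qquad (g,\ m\ \text{constant})
```

so the negative exponential arises when the mortality-to-growth ratio m/g is constant, and de Liocourt's quotient between successive diameter classes of width w is q = e^{(m/g)w}.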
Exponential Potential versus Dark Matter
1993-10-15
A two-parameter exponential potential explains the anomalous kinematics of galaxies and galaxy clusters without need for the myriad ad hoc dark matter models currently in vogue. It also explains much about the scales and structures of galaxies and galaxy clusters while being quite negligible on the scale of the solar system. Keywords: Galaxy, Dark matter, Galaxy cluster, Gravitation, Quantum gravity.
NASA Astrophysics Data System (ADS)
Pradas, Marc; Pumir, Alain; Huber, Greg; Wilkinson, Michael
2017-07-01
Chaos is widely understood as being a consequence of sensitive dependence upon initial conditions. This is the result of an instability in phase space, which separates trajectories exponentially. Here, we demonstrate that this criterion should be refined. Despite their overall intrinsic instability, trajectories may be very strongly convergent in phase space over extremely long periods, as revealed by our investigation of a simple chaotic system (a realistic model for small bodies in a turbulent flow). We establish that this strong convergence is a multi-faceted phenomenon, in which the clustering is intense, widespread and balanced by lacunarity of other regions. Power laws, indicative of scale-free features, characterize the distribution of particles in the system. We use large-deviation and extreme-value statistics to explain the effect. Our results show that the interpretation of the 'butterfly effect' needs to be carefully qualified. We argue that the combination of mixing and clustering processes makes our specific model relevant to understanding the evolution of simple organisms. Lastly, this notion of convergent chaos, which implies the existence of conditions for which uncertainties are unexpectedly small, may also be relevant to the valuation of insurance and futures contracts.
Referee Networks and Their Spectral Properties
NASA Astrophysics Data System (ADS)
Slanina, F.; Zhang, Y.-Ch.
2005-09-01
The bipartite graph connecting products and the reviewers of those products is studied empirically in the case of amazon.com. We find that the network has a power-law degree distribution on the side of reviewers, while on the side of products the distribution is better fitted by a stretched exponential. The spectrum of the normalised adjacency matrix shows a power-law tail in the density of states. Establishing the community structures by finding localised eigenstates is not straightforward, as the localised and delocalised states are mixed throughout the whole support of the spectrum.
Water resources in the next millennium
NASA Astrophysics Data System (ADS)
Wood, Warren
As pressures from an exponentially increasing population and economic expectations rise against a finite water resource, how do we address management? This was the main focus of the Dubai International Conference on Water Resources and Integrated Management in the Third Millennium in Dubai, United Arab Emirates, 2-6 February 2002. The invited forum attracted an eclectic mix of international thinkers from five continents. Presentations and discussions on hydrology policy/property rights, and management strategies focused mainly on problems of water supply, irrigation, and/or ecosystems.
NASA Astrophysics Data System (ADS)
Starn, J. J.; Belitz, K.; Carlson, C.
2017-12-01
Groundwater residence-time distributions (RTDs) are critical for assessing the susceptibility of water resources to contamination. The novel approach for estimating regional RTDs was first to simulate groundwater flow using existing regional digital data sets in 13 intermediate size watersheds (each an average of 7,000 square kilometers) that are representative of a wide range of glacial systems. RTDs were simulated with particle tracking. We refer to these models as "general models" because they are based on regional, as opposed to site-specific, digital data. Parametric RTDs were created from particle RTDs by fitting 1- and 2-component Weibull, gamma, and inverse Gaussian distributions, thus reducing a large number of particle travel times to 3 to 7 parameters (shape, location, and scale for each component plus a mixing fraction) for each modeled area. The scale parameter of these distributions is related to the mean exponential age; the shape parameter controls departure from the ideal exponential distribution and is partly a function of interaction with bedrock and with drainage density. Given the flexible shape and mathematical similarity of these distributions, any of them is potentially a good fit to particle RTDs. The 1-component gamma distribution provided a good fit to basin-wide particle RTDs. RTDs at monitoring wells and streams often have more complicated shapes than basin-wide RTDs, caused in part by heterogeneity in the model, and generally require 2-component distributions. A machine learning model was trained on the RTD parameters using features derived from regionally available watershed characteristics such as recharge rate, material thickness, and stream density. RTDs appeared to vary systematically across the landscape in relation to watershed features. This relation was used to produce maps of useful metrics with respect to risk-based thresholds, such as the time to first exceedance, time to maximum concentration, time above the threshold (exposure time), and the time until last exceedance; thus, the parameters of groundwater residence time are measures of the intrinsic susceptibility of groundwater to contamination.
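A minimal sketch of the parametric-reduction step for a single-component distribution: fit a gamma law to particle ages, where the scale tracks the mean age and the shape measures departure from the exponential case (shape = 1). The ages below are synthetic; the study also used Weibull and inverse Gaussian forms and two-component mixtures.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(4)
ages = rng.gamma(shape=0.6, scale=50.0, size=10000)   # years, illustrative

shape, loc, scale = gamma.fit(ages, floc=0.0)         # pin location at 0
mean_age = shape * scale
print(f"shape = {shape:.2f} (exponential if ~1), mean age = {mean_age:.0f} yr")
```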
Order and anarchy hand in hand in 5D SO(10)
NASA Astrophysics Data System (ADS)
Vicino, D.
2015-07-01
A mechanism to generate flavour hierarchy via 5D wave-function localization is revisited in the context of SO(10) grand unified theory. In an extra dimension compactified on an orbifold, fermions (living in the same 16 representation of SO(10)) acquire exponential zero-mode profiles, localized around one of the branes. The breaking of SO(10) down to SU(5) × U(1)x provides the key parameter that distinguishes the profiles of the different SU(5) components inside the same 16 representation. Utilizing a suitable set of scalar fields, a predictive model for fermion masses and mixing is constructed and shown to be viable with the current data through a detailed numerical analysis. The scalar field content of the model is also suitable to solve the doublet-triplet splitting problem through the missing partner mechanism. All the Yukawa couplings in the model are anarchical and of order unity, while the hierarchies among different fermions result only from zero-mode profiles. The naturalness of anarchical Yukawa couplings is studied, showing a preference for a normal-ordered neutrino spectrum; predictions for various observables in the lepton sector are also derived.
Rigby, Robert A; Stasinopoulos, D Mikis
2004-10-15
The Box-Cox power exponential (BCPE) distribution, developed in this paper, provides a model for a dependent variable Y exhibiting both skewness and kurtosis (leptokurtosis or platykurtosis). The distribution is defined by a power transformation Y^ν having a shifted and scaled (truncated) standard power exponential distribution with parameter τ. The distribution has four parameters and is denoted BCPE(μ, σ, ν, τ). The parameters μ, σ, ν and τ may be interpreted as relating to location (median), scale (approximate coefficient of variation), skewness (transformation to symmetry) and kurtosis (power exponential parameter), respectively. Smooth centile curves are obtained by modelling each of the four parameters of the distribution as a smooth non-parametric function of an explanatory variable. A Fisher scoring algorithm is used to fit the non-parametric model by maximizing a penalized likelihood. The first and expected second and cross derivatives of the likelihood with respect to μ, σ, ν and τ, required for the algorithm, are provided. The centiles of the BCPE distribution are easy to calculate, so it is highly suited to centile estimation. This application of the BCPE distribution to smooth centile estimation generalizes the LMS method of centile estimation to data exhibiting kurtosis (as well as skewness) different from that of a normal distribution and is named here the LMSP method of centile estimation. The LMSP method of centile estimation is applied to modelling the body mass index of Dutch males against age. Copyright 2004 John Wiley & Sons, Ltd.
Regionalizing nonparametric models of precipitation amounts on different temporal scales
NASA Astrophysics Data System (ADS)
Mosthaf, Tobias; Bárdossy, András
2017-05-01
Parametric distribution functions are commonly used to model precipitation amounts corresponding to different durations. The precipitation amounts themselves are crucial for stochastic rainfall generators and weather generators. Nonparametric kernel density estimates (KDEs) offer a more flexible way to model precipitation amounts but, as their name indicates, these models do not have parameters that can be easily regionalized to run rainfall generators at ungauged as well as gauged locations. To overcome this deficiency, we present a new interpolation scheme for nonparametric models and evaluate it for temporal resolutions ranging from hourly to monthly. During the evaluation, the nonparametric methods are compared to commonly used parametric models like the two-parameter gamma and the mixed-exponential distribution. As water volume is an essential quantity for applications like flood modeling, a Lorenz-curve-based criterion is also introduced. To add value to the estimation at sub-daily resolutions, we incorporated the plentiful daily measurements into the interpolation scheme and evaluated this idea. The study region is the federal state of Baden-Württemberg in the southwest of Germany, with more than 500 rain gauges. The validation results show that the newly proposed nonparametric interpolation scheme provides reasonable results and that the incorporation of daily values in the regionalization of sub-daily models is very beneficial.
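The parametric-versus-nonparametric comparison above can be illustrated with a short hedged sketch (synthetic wet-day amounts, invented parameter values, not the paper's data or code), contrasting a two-parameter gamma fit with a Gaussian KDE:

```python
# Sketch: gamma fit vs. kernel density estimate for wet-day precipitation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic wet-day amounts (mm); a mixed-exponential stand-in for real data.
amounts = np.where(rng.random(2000) < 0.7,
                   rng.exponential(2.0, 2000),
                   rng.exponential(12.0, 2000))

a, loc, scale = stats.gamma.fit(amounts, floc=0.0)  # parametric model
kde = stats.gaussian_kde(amounts)                   # nonparametric model

# Compare upper-tail quantiles, where the two approaches typically differ most.
# (The Gaussian kernel can leak slightly below zero; fine for illustration.)
q = 0.99
kde_sample = kde.resample(20_000).ravel()
print("empirical 99th percentile:", np.quantile(amounts, q))
print("gamma     99th percentile:", stats.gamma.ppf(q, a, loc=0.0, scale=scale))
print("KDE       99th percentile:", np.quantile(kde_sample, q))
```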
Human population and atmospheric carbon dioxide growth dynamics: Diagnostics for the future
NASA Astrophysics Data System (ADS)
Hüsler, A. D.; Sornette, D.
2014-10-01
We analyze the growth rates of human population and of atmospheric carbon dioxide by comparing the relative merits of two benchmark models, the exponential law and the finite-time-singular (FTS) power law. The latter results from positive feedbacks, either direct or mediated by other dynamical variables, as shown in our presentation of a simple endogenous macroeconomic dynamical growth model describing the growth dynamics of coupled processes involving human population (labor in economic terms), capital and technology (proxied by CO2 emissions). Human population, in the context of our energy-intensive economies, constitutes arguably the most important underlying driving variable of the content of carbon dioxide in the atmosphere. Using some of the best databases available, we perform empirical analyses confirming that the human population on Earth grew super-exponentially until the mid-1960s, followed by decelerated sub-exponential growth, with a tendency to plateau at just exponential growth in the last decade with an average growth rate of 1.0% per year. In contrast, we find that the content of carbon dioxide in the atmosphere continued to accelerate super-exponentially until 1990, with a transition to progressive deceleration since then, with an average growth rate of approximately 2% per year in the last decade. To return to CO2 atmospheric contents equal to or smaller than the level of 1990, as has been the broadly advertised goal of international treaties since 1990, requires herculean changes: from a dynamical point of view, the approximately exponential growth must turn not only to negative acceleration but also to negative velocity to reverse the trend.
Zhang, Rui; Heins, David; Sanders, Mary; Guo, Beibei; Hogstrom, Kenneth
2018-05-10
The purpose of this study was to assess the potential benefits and limitations of a mixed beam therapy, which combined bolus electron conformal therapy (BECT) with intensity modulated photon radiotherapy (IMRT) and volumetric modulated photon arc therapy (VMAT), for left-sided post-mastectomy breast cancer patients. Mixed beam treatment plans were produced for nine post-mastectomy radiotherapy (PMRT) patients previously treated at our clinic with VMAT alone. The mixed beam plans consisted of 40 Gy to the chest wall area using BECT, 40 Gy to the supraclavicular area using parallel opposed IMRT, and 10 Gy to the total planning target volume (PTV) by optimizing VMAT on top of the BECT+IMRT dose distribution. The treatment plans were created in a commercial treatment planning system (TPS), and all plans were evaluated based on PTV coverage, dose homogeneity index (DHI), conformity index (CI), dose to organs at risk (OARs), normal tissue complication probability (NTCP), and secondary cancer complication probability (SCCP). The standard VMAT-alone planning technique was used as the reference for comparison. Both techniques produced clinically acceptable PMRT plans but with a few significant differences: VMAT showed significantly better CI (0.70 vs. 0.53, p < 0.001) and DHI (0.12 vs. 0.20, p < 0.001) than mixed beam therapy. For normal tissues, mixed beam therapy showed better OAR sparing and significantly reduced NTCP for cardiac mortality (0.23% vs. 0.80%, p = 0.01) and SCCP for the contralateral breast (1.7% vs. 3.1% based on the linear model, and 1.2% vs. 1.9% based on the linear-exponential model, p < 0.001 in both cases), but showed significantly higher mean (50.8 Gy vs. 49.3 Gy, p < 0.001) and maximum skin doses (59.7 Gy vs. 53.3 Gy, p < 0.001) compared with VMAT. Patients with more tissue between the distal PTV surface and the lung (minimum distance approximately > 0.5 cm, and volume of tissue between the distal PTV surface and the heart or lung approximately > 250 cm³) may benefit the most from mixed beam therapy. This work has demonstrated that mixed beam therapy (BECT+IMRT : VMAT = 4 : 1) produces clinically acceptable plans having reduced OAR doses and risks of side effects compared with VMAT. Even though VMAT alone produces more homogeneous and conformal dose distributions, mixed beam therapy remains a viable option for treating post-mastectomy patients, possibly leading to reduced normal tissue complications. This article is protected by copyright. All rights reserved.
The impact of accelerating faster than exponential population growth on genetic variation.
Reppell, Mark; Boehnke, Michael; Zöllner, Sebastian
2014-03-01
Current human sequencing projects observe an abundance of extremely rare genetic variation, suggesting recent acceleration of population growth. To better understand the impact of such accelerating growth on the quantity and nature of genetic variation, we present a new class of models capable of incorporating faster than exponential growth in a coalescent framework. Our work shows that such accelerated growth affects only the population size in the recent past and thus large samples are required to detect the models' effects on patterns of variation. When we compare models with fixed initial growth rate, models with accelerating growth achieve very large current population sizes and large samples from these populations contain more variation than samples from populations with constant growth. This increase is driven almost entirely by an increase in singleton variation. Moreover, linkage disequilibrium decays faster in populations with accelerating growth. When we instead condition on current population size, models with accelerating growth result in less overall variation and slower linkage disequilibrium decay compared to models with exponential growth. We also find that pairwise linkage disequilibrium of very rare variants contains information about growth rates in the recent past. Finally, we demonstrate that models of accelerating growth may substantially change estimates of present-day effective population sizes and growth times.
Tosun, İsmail
2012-01-01
The adsorption isotherm, the adsorption kinetics, and the thermodynamic parameters of ammonium removal from aqueous solution by clinoptilolite were investigated in this study. Experimental data obtained from batch equilibrium tests were analyzed by four two-parameter (Freundlich, Langmuir, Tempkin and Dubinin-Radushkevich (D-R)) and four three-parameter (Redlich-Peterson (R-P), Sips, Toth and Khan) isotherm models. The D-R and R-P isotherms were the models that best fitted the experimental data, over the other two- and three-parameter models applied. The adsorption energy (E) from the D-R isotherm was found to be approximately 7 kJ/mol for the ammonium-clinoptilolite system, indicating that ammonium is adsorbed on clinoptilolite by physisorption. Kinetic parameters were determined by analyzing the nth-order kinetic model, the modified second-order model and the double exponential model, and each model resulted in a coefficient of determination (R²) above 0.989 with an average relative error lower than 5%. The double exponential model (DEM) showed that the adsorption process develops in two stages, a rapid phase and a slow phase. Changes in standard free energy (∆G°), enthalpy (∆H°) and entropy (∆S°) of the ammonium-clinoptilolite system were estimated by using the thermodynamic equilibrium coefficients. PMID:22690177
Sorption isotherm characteristics of aonla flakes.
Alam, Md Shafiq; Singh, Amarjit
2011-06-01
The equilibrium moisture content was determined for un-osmosed and osmosed (salt osmosed and sugar osmosed) aonla flakes using the static method at temperatures of 25, 40, 50, 60 and 70 °C over a range of relative humidities from 20 to 90%. The sorption capacity of aonla decreased with an increase in temperature at constant water activity. The sorption isotherms exhibited hysteresis, in which the equilibrium moisture content at a given equilibrium relative humidity was higher for the desorption curve than for adsorption. The hysteresis effect was more pronounced for un-osmosed and salt osmosed samples than for sugar osmosed samples. Five models, namely the modified Chung-Pfost, modified Halsey, modified Henderson, modified exponential and Guggenheim-Anderson-de Boer (GAB) models, were evaluated to determine the best fit to the experimental data. For both the adsorption and desorption processes, the equilibrium moisture content of un-osmosed and osmosed aonla samples can be predicted well by the GAB model as well as the modified exponential model. Moreover, the modified exponential model was found to be the best for describing the sorption behaviour of un-osmosed and salt osmosed samples, while the GAB model was best for sugar osmosed aonla samples.
NASA Astrophysics Data System (ADS)
Carrel, M.; Morales, V. L.; Dentz, M.; Derlon, N.; Morgenroth, E.; Holzner, M.
2018-03-01
Biofilms are ubiquitous bacterial communities that grow in various porous media including soils, trickling, and sand filters. In these environments, they play a central role in services ranging from degradation of pollutants to water purification. Biofilms dynamically change the pore structure of the medium through selective clogging of pores, a process known as bioclogging. This affects how solutes are transported and spread through the porous matrix, but the temporal changes to transport behavior during bioclogging are not well understood. To address this uncertainty, we experimentally study the hydrodynamic changes of a transparent 3-D porous medium as it experiences progressive bioclogging. Statistical analyses of the system's hydrodynamics at four time points of bioclogging (0, 24, 36, and 48 h in the exponential growth phase) reveal exponential increases in both average and variance of the flow velocity, as well as its correlation length. Measurements for spreading, as mean-squared displacements, are found to be non-Fickian and more intensely superdiffusive with progressive bioclogging, indicating the formation of preferential flow pathways and stagnation zones. A gamma distribution describes well the Lagrangian velocity distributions and provides parameters that quantify changes to the flow, which evolves from a parallel pore arrangement under unclogged conditions, toward a more serial arrangement with increasing clogging. Exponentially evolving hydrodynamic metrics agree with an exponential bacterial growth phase and are used to parameterize a correlated continuous time random walk model with a stochastic velocity relaxation. The model accurately reproduces transport observations and can be used to resolve transport behavior at intermediate time points within the exponential growth phase considered.
φ meson production in Au+Au and p+p collisions at √s_NN = 200 GeV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, J.; Adler, C.; Aggarwal, M.M.
2004-06-01
We report the STAR measurement of φ meson production in Au+Au and p+p collisions at √s_NN = 200 GeV. Using the event-mixing technique, the φ spectra and yields are obtained at midrapidity for five centrality bins in Au+Au collisions and for non-singly-diffractive p+p collisions. It is found that the φ transverse momentum distributions from Au+Au collisions are better fitted with a single-exponential while the p+p spectrum is better described by a double-exponential distribution. The measured nuclear modification factors indicate that φ production in central Au+Au collisions is suppressed relative to peripheral collisions when scaled by the number of binary collisions. The [...] versus centrality and the constant φ/K⁻ ratio versus beam species, centrality, and collision energy rule out kaon coalescence as the dominant mechanism for φ production.
Wang, Dongshu; Huang, Lihong
2014-03-01
In this paper, we investigate the periodic dynamical behaviors of a class of general Cohen-Grossberg neural networks with discontinuous right-hand sides and time-varying and distributed delays. By means of retarded differential inclusions theory and the fixed point theorem of multi-valued maps, the existence of periodic solutions for the neural networks is obtained. After that, we derive some sufficient conditions for the global exponential stability and convergence of the neural networks, in terms of nonsmooth analysis theory with a generalized Lyapunov approach. Our results remain valid without assuming the boundedness (or the growth condition) and monotonicity of the discontinuous neuron activation functions. Moreover, our results extend previous works not only on neural networks with discrete time-varying and distributed delays and continuous or even Lipschitz continuous activations, but also on such networks with discontinuous activations. We give some numerical examples to show the applicability and effectiveness of our main results. Copyright © 2013 Elsevier Ltd. All rights reserved.
NMR study on the network structure of a mixed gel of kappa and iota carrageenans.
Hu, Bingjie; Du, Lei; Matsukawa, Shingo
2016-10-05
The temperature dependencies of the ¹H T2 and diffusion coefficient (D) in a mixed solution of kappa-carrageenan and iota-carrageenan were measured by NMR. Rheological and NMR measurements suggested an exponential formation of rigid aggregates of kappa-carrageenan and a gradual formation of fine aggregates of iota-carrageenan during the two-step increase of G'. The results also suggested that longer carrageenan chains are preferentially involved in aggregation, resulting in a decrease in the average Mw of solute carrageenans. The results of diffusion measurements for poly(ethylene oxide) (PEO) suggested that kappa-carrageenan formed thick aggregates, which decreased the hindrance to PEO diffusion by decreasing the solute kappa-carrageenan concentration in the voids of the aggregated chains, whereas iota-carrageenan formed fine aggregates, which decreased the solute iota-carrageenan concentration less. The D of PEO in a mixed solution of kappa-carrageenan and iota-carrageenan suggested two possibilities for the microscopic network structure: an interpenetrating network structure, or micro-phase separation. Copyright © 2016. Published by Elsevier Ltd.
Yang, Shiju; Li, Chuandong; Huang, Tingwen
2016-03-01
The problem of exponential stabilization and synchronization for a fuzzy model of memristive neural networks (MNNs) is investigated by using periodically intermittent control in this paper. Based on the knowledge of memristors and recurrent neural networks, the model of MNNs is formulated. Some novel and useful stabilization criteria and synchronization conditions are then derived by using Lyapunov functional and differential inequality techniques. It is worth noting that the methods used in this paper also apply to fuzzy models of complex networks and general neural networks. Numerical simulations are provided to verify the effectiveness of the theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.
Analysis of Dibenzothiophene Desulfurization in a Recombinant Pseudomonas putida Strain▿
Calzada, Javier; Zamarro, María T.; Alcón, Almudena; Santos, Victoria E.; Díaz, Eduardo; García, José L.; Garcia-Ochoa, Felix
2009-01-01
Biodesulfurization was monitored in a recombinant Pseudomonas putida CECT5279 strain. DszB desulfinase activity reached a sharp maximum at the early exponential phase, but it rapidly decreased at later growth phases. A model two-step resting-cell process combining sequentially P. putida cells from the late and early exponential growth phases was designed to significantly increase biodesulfurization. PMID:19047400
Erik A. Lilleskov
2017-01-01
Fungal respiration contributes substantially to ecosystem respiration, yet its field temperature response is poorly characterized. I hypothesized that at diurnal time scales, temperature-respiration relationships would be better described by unimodal than exponential models, and at longer time scales both Q10 and mass-specific respiration at 10 °...
NASA Astrophysics Data System (ADS)
Wen, Zhang; Zhan, Hongbin; Wang, Quanrong; Liang, Xing; Ma, Teng; Chen, Chen
2017-05-01
Actual field pumping tests often involve variable pumping rates, which cannot be handled by the classical constant-rate or constant-head test models and often require a convolution process to interpret the test data. In this study, we propose a semi-analytical model of an exponentially decreasing pumping rate that starts at a certain (higher) rate and eventually stabilizes at a certain (lower) rate, for cases with or without wellbore storage. A striking new feature of the pumping test with an exponentially decayed rate is that the drawdowns decrease over a certain period of time during the intermediate pumping stage, which has never been seen in constant-rate or constant-head pumping tests. It was found that the drawdown-time curve associated with an exponentially decayed pumping rate function is bounded by the two asymptotic curves of constant-rate tests with rates equal to the starting and stabilizing rates, respectively. The wellbore storage must be considered for a pumping test without an observation well (single-well test). Based on these characteristics of the time-drawdown curve, we developed a new method to estimate the aquifer parameters by using a genetic algorithm.
Porto, Markus; Roman, H Eduardo
2002-04-01
We consider autoregressive conditional heteroskedasticity (ARCH) processes in which the variance σ²(y) depends linearly on the absolute value of the random variable y, as σ²(y) = a + b|y|. While for the standard model, where σ²(y) = a + by², the corresponding probability distribution function (PDF) P(y) decays as a power law for |y| → ∞, in the linear case it decays exponentially as P(y) ~ exp(−α|y|), with α = 2/b. We extend these results to the more general case σ²(y) = a + b|y|^q, with 0 < q < 2. We find stretched exponential decay for 1 < q < 2 and stretched Gaussian behavior for 0 < q < 1. As an application, we consider the case q = 1 as our starting scheme for modeling the PDF of daily (logarithmic) variations in the Dow Jones stock market index. When the history of the ARCH process is taken into account, the resulting PDF becomes a stretched exponential even for q = 1, with a stretched exponent β = 2/3, in much better agreement with the empirical data.
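A hedged numerical sketch of the linear-variance case above (parameter values are illustrative, not the paper's) simulates the process and roughly checks the predicted exponential tail P(y) ~ exp(−2|y|/b):

```python
# Sketch: ARCH process with linear variance feedback sigma^2(y) = a + b|y|.
import numpy as np

rng = np.random.default_rng(2)
a, b, n = 1.0, 1.0, 100_000
y = np.zeros(n)
for t in range(1, n):
    sigma2 = a + b * abs(y[t - 1])        # linear (not quadratic) feedback
    y[t] = np.sqrt(sigma2) * rng.standard_normal()

# Rough tail check against the predicted decay rate alpha = 2/b.
hist, edges = np.histogram(np.abs(y), bins=60, density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
mask = (hist > 0) & (centers > 1.0)       # fit only beyond the bulk
slope = np.polyfit(centers[mask], np.log(hist[mask]), 1)[0]
print(f"fitted tail slope {slope:.2f} vs predicted -2/b = {-2.0 / b:.2f}")
```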
Statistical modeling of storm-level Kp occurrences
Remick, K.J.; Love, J.J.
2006-01-01
We consider the statistical modeling of the occurrence in time of large Kp magnetic storms as a Poisson process, testing whether or not relatively rare, large Kp events can be considered to arise from a stochastic, sequential, and memoryless process. For a Poisson process, the wait times between successive events are statistically distributed with an exponential density function. Fitting an exponential function to the durations between successive large Kp events forms the basis of our analysis. Defining these wait times by calculating the differences between times when Kp exceeds a certain value, such as Kp ≥ 5, we find the wait-time distribution is not exponential. Because large storms often have several periods with large Kp values, their occurrence in time is not memoryless; short-duration wait times are not independent of each other and are often clumped together in time. If we remove same-storm large Kp occurrences, the resulting wait times are very nearly exponentially distributed and the storm arrival process can be characterized as Poisson. Fittings are performed on wait-time data for Kp ≥ 5, 6, 7, and 8. The mean wait times between storms exceeding these Kp thresholds are 7.12, 16.55, 42.22, and 121.40 days, respectively.
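The exponential wait-time test described above can be sketched in a few lines (synthetic event times, not the Kp record; the 7.12-day mean is taken from the abstract purely as a plausible scale):

```python
# Sketch: test whether inter-event times are exponential (Poisson arrivals).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
event_times = np.cumsum(rng.exponential(7.12, size=500))  # days, synthetic
waits = np.diff(event_times)

loc, scale = stats.expon.fit(waits, floc=0.0)             # MLE: scale = mean wait
ks = stats.kstest(waits, "expon", args=(0.0, scale))      # goodness-of-fit check
print(f"mean wait = {waits.mean():.2f} d, KS p-value = {ks.pvalue:.3f}")
```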
NASA Technical Reports Server (NTRS)
Lindner, Bernhard Lee; Ackerman, Thomas P.; Pollack, James B.
1990-01-01
CO2 comprises 95% of the composition of the Martian atmosphere. However, the Martian atmosphere also has a high aerosol content; dust particle sizes vary from less than 0.2 to greater than 3.0 microns. CO2 is an active absorber and emitter at near-IR and IR wavelengths; the near-IR absorption bands of CO2 provide significant heating of the atmosphere, and the 15-micron band provides rapid cooling. Including both CO2 and aerosol radiative transfer simultaneously in a model is difficult: aerosol radiative transfer requires a multiple scattering code, while CO2 radiative transfer must deal with complex wavelength structure. As an alternative to the pure-atmosphere treatment in most models, which causes inaccuracies, a treatment was developed called the exponential sum or k-distribution approximation. The chief advantage of the exponential sum approach is that the integration over k-space of f(k) can be computed more quickly than the integration of k_ν over frequency. The exponential sum approach is superior to the photon path distribution and emissivity techniques for dusty conditions. This study was the first application of the exponential sum approach to Martian conditions.
Aston, Elizabeth; Channon, Alastair; Day, Charles; Knight, Christopher G.
2013-01-01
Understanding the effect of population size on the key parameters of evolution is particularly important for populations nearing extinction. There are evolutionary pressures to evolve sequences that are both fit and robust. At high mutation rates, individuals with greater mutational robustness can outcompete those with higher fitness. This is survival-of-the-flattest, and has been observed in digital organisms, theoretically, in simulated RNA evolution, and in RNA viruses. We introduce an algorithmic method capable of determining the relationship between population size, the critical mutation rate at which individuals with greater robustness to mutation are favoured over individuals with greater fitness, and the error threshold. Verification for this method is provided against analytical models for the error threshold. We show that the critical mutation rate for increasing haploid population sizes can be approximated by an exponential function, with much lower mutation rates tolerated by small populations. This is in contrast to previous studies which identified that critical mutation rate was independent of population size. The algorithm is extended to diploid populations in a system modelled on the biological process of meiosis. The results confirm that the relationship remains exponential, but show that both the critical mutation rate and error threshold are lower for diploids, rather than higher as might have been expected. Analyzing the transition from critical mutation rate to error threshold provides an improved definition of critical mutation rate. Natural populations with their numbers in decline can be expected to lose genetic material in line with the exponential model, accelerating and potentially irreversibly advancing their decline, and this could potentially affect extinction, recovery and population management strategy. The effect of population size is particularly strong in small populations with 100 individuals or less; the exponential model has significant potential in aiding population management to prevent local (and global) extinction events. PMID:24386200
Kennedy, Kristen M.; Rodrigue, Karen M.; Lindenberger, Ulman; Raz, Naftali
2010-01-01
The effects of advanced age and cognitive resources on the course of skill acquisition are unclear, and discrepancies among studies may reflect limitations of data analytic approaches. We applied a multilevel negative exponential model to skill acquisition data from 80 trials (four 20-trial blocks) of a pursuit rotor task administered to healthy adults (19–80 years old). The analyses conducted at the single-trial level indicated that the negative exponential function described performance well. Learning parameters correlated with measures of task-relevant cognitive resources on all blocks except the last and with age on all blocks after the second. Thus, age differences in motor skill acquisition may evolve in 2 phases: In the first, age differences are collinear with individual differences in task-relevant cognitive resources; in the second, age differences orthogonal to these resources emerge. PMID:20047985
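The negative exponential learning curve used above can be illustrated with a single-curve fit; this hedged sketch uses synthetic data and an ordinary nonlinear least-squares fit rather than the study's multilevel (per-subject) model:

```python
# Sketch: fit y = asymptote - amplitude * exp(-rate * trial) to learning data.
import numpy as np
from scipy.optimize import curve_fit

def neg_exp(trial, asymptote, amplitude, rate):
    return asymptote - amplitude * np.exp(-rate * trial)

rng = np.random.default_rng(4)
trials = np.arange(1, 81)                        # 80 pursuit-rotor trials
y = neg_exp(trials, 0.8, 0.6, 0.05) + rng.normal(0.0, 0.03, trials.size)

popt, _ = curve_fit(neg_exp, trials, y, p0=(0.7, 0.5, 0.1))
print("asymptote, amplitude, rate =", np.round(popt, 3))
```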
Using phenomenological models for forecasting the 2015 Ebola challenge.
Pell, Bruce; Kuang, Yang; Viboud, Cecile; Chowell, Gerardo
2018-03-01
The rising number of novel pathogens threatening the human population has motivated the application of mathematical modeling for forecasting the trajectory and size of epidemics. We summarize the real-time forecasting results of the logistic equation during the 2015 Ebola challenge, which focused on predicting synthetic data derived from a detailed individual-based model of Ebola transmission dynamics and control. We also carry out a post-challenge comparison of two simple phenomenological models. In particular, we systematically compare the logistic growth model and a recently introduced generalized Richards model (GRM) that captures a range of early epidemic growth profiles, from sub-exponential to exponential growth. Specifically, we assess the performance of each model for estimating the reproduction number, generating short-term forecasts of the epidemic trajectory, and predicting the final epidemic size. During the challenge the logistic equation consistently underestimated the final epidemic size, peak timing and the number of cases at peak timing, with average mean absolute percentage errors (MAPE) of 0.49, 0.36 and 0.40, respectively. Post-challenge, the GRM, which has the flexibility to reproduce a range of epidemic growth profiles from early sub-exponential to exponential growth dynamics, outperformed the logistic growth model in ascertaining the final epidemic size as more incidence data became available, while the logistic model underestimated the final epidemic size even with an increasing amount of data from the evolving epidemic. Incidence forecasts provided by the generalized Richards model performed better across all scenarios and time points than the logistic growth model, with mean RMS decreasing from 78.00 (logistic) to 60.80 (GRM). Both models provided reasonable predictions of the effective reproduction number, but the GRM slightly outperformed the logistic growth model with a MAPE of 0.08 compared to 0.10, averaged across all scenarios and time points. Our findings further support the inclusion of transmission models with flexible early epidemic growth profiles in the forecasting toolkit. Such models are particularly useful for quickly evaluating a developing infectious disease outbreak using only the case incidence time series from its early phase. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
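The GRM named above is commonly written as dC/dt = r C^p [1 − (C/K)^α], where p in (0, 1] interpolates between sub-exponential and exponential early growth. A minimal hedged sketch (illustrative parameter values, not values fitted in the paper) integrates it numerically:

```python
# Sketch: generalized Richards model (GRM) epidemic trajectory.
import numpy as np
from scipy.integrate import solve_ivp

def grm(t, c, r, p, k, alpha):
    # dC/dt = r * C**p * (1 - (C/K)**alpha); p < 1 gives sub-exponential growth
    return r * c**p * (1.0 - (c / k) ** alpha)

r, p, k, alpha = 0.5, 0.8, 10_000.0, 1.0
sol = solve_ivp(grm, (0.0, 120.0), [5.0], args=(r, p, k, alpha),
                dense_output=True)

for day in (30, 60, 120):
    print(f"day {day:3d}: cumulative cases ~ {sol.sol(day)[0]:,.0f}")
```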
NASA Astrophysics Data System (ADS)
Jamaluddin, Fadhilah; Rahim, Rahela Abdul
2015-12-01
Markov chains have been used since 1913 to study the flow of data over consecutive years and for forecasting. The key ingredient of a Markov chain model is an accurate transition probability matrix (TPM). However, obtaining a suitable TPM is hard, especially in long-term modeling, owing to the unavailability of data. This paper aims to enhance the classical Markov chain by introducing an exponential smoothing technique for developing the appropriate TPM.
NASA Technical Reports Server (NTRS)
Cogley, A. C.; Borucki, W. J.
1976-01-01
When incorporating formulations of instantaneous solar heating or photolytic rates as functions of altitude and sun angle into long range forecasting models, it may be desirable to replace the time integrals by daily average rates that are simple functions of latitude and season. This replacement is accomplished by approximating the integral over the solar day by a pure exponential. This gives a daily average rate as a multiplication factor times the instantaneous rate evaluated at an appropriate sun angle. The accuracy of the exponential approximation is investigated by a sample calculation using an instantaneous ozone heating formulation available in the literature.
Count distribution for mixture of two exponentials as renewal process duration with applications
NASA Astrophysics Data System (ADS)
Low, Yeh Ching; Ong, Seng Huat
2016-06-01
A count distribution is presented by considering a renewal process in which the distribution of the duration is a finite mixture of exponential distributions. This distribution is able to model overdispersion, a feature often found in observed count data. The computation of the probabilities and the renewal function (expected number of renewals) is examined. Parameter estimation by the method of maximum likelihood is considered, with applications of the count distribution to real frequency count data exhibiting overdispersion. It is shown that the mixture-of-exponentials count distribution fits overdispersed data better than the Poisson process and serves as an alternative to the gamma count distribution.
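The overdispersion that motivates this count model can be demonstrated with a small hedged simulation (invented mixture weights and rates, not the paper's estimation procedure): counts from a renewal process with two-component mixed-exponential durations show variance exceeding the mean, unlike a Poisson process.

```python
# Sketch: renewal counts with hyperexponential (mixed exponential) durations.
import numpy as np

rng = np.random.default_rng(5)

def renewal_count(t_end, weight, rate1, rate2):
    """Number of renewals in [0, t_end) with mixed-exponential durations."""
    t, count = 0.0, 0
    while True:
        rate = rate1 if rng.random() < weight else rate2
        t += rng.exponential(1.0 / rate)
        if t >= t_end:
            return count
        count += 1

counts = np.array([renewal_count(10.0, 0.3, 0.2, 5.0) for _ in range(5000)])
print(f"mean = {counts.mean():.2f}, variance = {counts.var():.2f}")  # var > mean
```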
Khan, Junaid Ahmad; Mustafa, M.; Hayat, T.; Sheikholeslami, M.; Alsaedi, A.
2015-01-01
This work deals with the three-dimensional flow of nanofluid over a bi-directional exponentially stretching sheet. The effects of Brownian motion and thermophoretic diffusion of nanoparticles are considered in the mathematical model. The temperature and nanoparticle volume fraction at the sheet are also distributed exponentially. Local similarity solutions are obtained by an implicit finite difference scheme known as Keller-box method. The results are compared with the existing studies in some limiting cases and found in good agreement. The results reveal the existence of interesting Sparrow-Gregg-type hills for temperature distribution corresponding to some range of parametric values. PMID:25785857
Déjardin, P
2013-08-30
The flow conditions in normal mode asymmetric flow field-flow fractionation are determined to approach the high retention limit with the requirement d≪l≪w, where d is the particle diameter, l the characteristic length of the sample exponential distribution and w the channel height. The optimal entrance velocity is determined from the solute characteristics, the channel geometry (exponential to rectangular) and the membrane properties, according to a model providing the velocity fields all over the cell length. In addition, a method is proposed for in situ determination of the channel height. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kamimura, Atsushi; Kaneko, Kunihiko
2018-03-01
Explanation of exponential growth in self-reproduction is an important step toward elucidation of the origins of life because optimization of the growth potential across rounds of selection is necessary for Darwinian evolution. To produce another copy with approximately the same composition, the exponential growth rates for all components have to be equal. How such balanced growth is achieved, however, is not a trivial question, because this kind of growth requires orchestrated replication of the components in stochastic and nonlinear catalytic reactions. By considering a mutually catalyzing reaction in two- and three-dimensional lattices, as represented by a cellular automaton model, we show that self-reproduction with exponential growth is possible only when the replication and degradation of one molecular species is much slower than those of the others, i.e., when there is a minority molecule. Here, the synergetic effect of molecular discreteness and crowding is necessary to produce the exponential growth. Otherwise, the growth curves show superexponential growth because of nonlinearity of the catalytic reactions or subexponential growth due to replication inhibition by overcrowding of molecules. Our study emphasizes that the minority molecular species in a catalytic reaction network is necessary for exponential growth at the primitive stage of life.
Prediction of Unsteady Aerodynamic Coefficients at High Angles of Attack
NASA Technical Reports Server (NTRS)
Pamadi, Bandu N.; Murphy, Patrick C.; Klein, Vladislav; Brandon, Jay M.
2001-01-01
The nonlinear indicial response method is used to model the unsteady aerodynamic coefficients in the low speed longitudinal oscillatory wind tunnel test data of the 0.1 scale model of the F-16XL aircraft. Exponential functions are used to approximate the deficiency function in the indicial response. Using one set of oscillatory wind tunnel data and parameter identification method, the unknown parameters in the exponential functions are estimated. The genetic algorithm is used as a least square minimizing algorithm. The assumed model structures and parameter estimates are validated by comparing the predictions with other sets of available oscillatory wind tunnel test data.
NASA Astrophysics Data System (ADS)
Grobbelaar-Van Dalsen, Marié
2015-08-01
This article is a continuation of our earlier work in Grobbelaar-Van Dalsen (Z Angew Math Phys 63:1047-1065, 2012) on the polynomial stabilization of a linear model for the magnetoelastic interactions in a two-dimensional electrically conducting Mindlin-Timoshenko plate. We introduce nonlinear damping that is effective only in a small portion of the interior of the plate. It turns out that the model is uniformly exponentially stable when the function that represents the locally distributed damping behaves linearly near the origin. However, the use of Mindlin-Timoshenko plate theory in the model enforces a restriction on the region occupied by the plate.
NASA Technical Reports Server (NTRS)
1971-01-01
A study of techniques for the prediction of crime in the City of Los Angeles was conducted. Alternative approaches to crime prediction (causal, quasicausal, associative, extrapolative, and pattern-recognition models) are discussed, as is the environment within which predictions were desired for the immediate application. The decision was made to use time series (extrapolative) models to produce the desired predictions. The characteristics of the data and the procedure used to choose equations for the extrapolations are discussed. The usefulness of different functional forms (constant, quadratic, and exponential forms) and of different parameter estimation techniques (multiple regression and multiple exponential smoothing) are compared, and the quality of the resultant predictions is assessed.
Estimating piecewise exponential frailty model with changing prior for baseline hazard function
NASA Astrophysics Data System (ADS)
Thamrin, Sri Astuti; Lawi, Armin
2016-02-01
Piecewise exponential models provide a very flexible framework for modelling univariate survival data. They can be used to estimate the effects of different covariates on survival. Although in a strict sense it is a parametric model, a piecewise exponential hazard can approximate any shape of parametric baseline hazard. In the parametric baseline hazard, the hazard function for each individual may depend on a set of risk factors or explanatory variables. However, it usually does not explain all such variables that are known or measurable, and these variables are interesting to consider. This unknown and unobservable risk factor of the hazard function is often termed the individual's heterogeneity, or frailty. This paper analyses the effects of unobserved population heterogeneity on patients' survival times. The issue of model choice through variable selection is also considered. A sensitivity analysis is conducted to assess the influence of the prior for each parameter. We used the Markov chain Monte Carlo method to compute the Bayesian estimator on kidney infection data. The results show that sex and frailty are substantially associated with survival in this study and that the models are quite sensitive to the choice of two different priors.
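The core of the piecewise exponential model above is a hazard that is constant on each interval, with the survival function obtained by accumulating hazard across intervals. A minimal hedged sketch (invented cut points and rates; no frailty term or MCMC, which the paper adds on top):

```python
# Sketch: survival under a piecewise-constant (piecewise exponential) hazard.
import numpy as np

cuts = np.array([0.0, 1.0, 3.0, 6.0])        # interval boundaries (years)
rates = np.array([0.30, 0.15, 0.08, 0.05])   # hazard rate on each interval

def survival(t):
    """S(t) = exp(-cumulative hazard), accumulating rate * time per interval."""
    edges = np.append(cuts, np.inf)
    widths = np.clip(t - cuts, 0.0, np.diff(edges))  # time spent in each interval
    return np.exp(-np.sum(rates * widths))

for t in (0.5, 2.0, 5.0, 10.0):
    print(f"S({t:4.1f}) = {survival(t):.3f}")
```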
NASA Astrophysics Data System (ADS)
Yao, Weiping; Yang, Chaohui; Jing, Jiliang
2018-05-01
From the viewpoint of holography, we study the behavior of the entanglement entropy in the insulator/superconductor transition with exponential nonlinear electrodynamics (ENE). We find that the entanglement entropy is a good probe of the properties of the holographic phase transition. Both in the half space and in the belt space, the non-monotonic behavior of the entanglement entropy in the superconducting phase versus the chemical potential is generic in this model. Furthermore, the behavior of the entanglement entropy for the strip geometry shows that the confinement/deconfinement phase transition appears in both the insulator and superconductor phases, and the critical width of the confinement/deconfinement phase transition depends on the chemical potential and the exponential coupling term. More interestingly, the behavior of the entanglement entropy in the corresponding insulator phases is independent of the exponential coupling factor but depends on the width of the subsystem A.
Quick-Mixing Studies Under Reacting Conditions
NASA Technical Reports Server (NTRS)
Leong, May Y.; Samuelsen, G. S.
1996-01-01
The low-NO(x) emitting potential of rich-burn/quick-mix/lean-burn (RQL) combustion makes it an attractive option for engines of future stratospheric aircraft. Because NO(x) formation is exponentially dependent on temperature, the success of the RQL combustor depends on minimizing the formation of high-temperature stoichiometric pockets in the quick-mixing section. An experiment was designed and built, and tests were performed, to characterize the reaction and mixing properties of jets issuing from round orifices into a hot, fuel-rich crossflow confined in a cylindrical duct. The reactor operates on propane and presents a uniform, non-swirling mixture to the mixing modules. Modules consisting of round orifice configurations of 8, 9, 10, 12, 14, and 18 holes were evaluated at a momentum-flux ratio of 57 and a jet-to-mainstream mass-flow ratio of 2.5. Temperatures and concentrations of O2, CO2, CO, HC, and NO(x) were obtained upstream, downstream, and within the orifice plane to determine jet penetration as well as reaction processes. Jet penetration was a function of the number of orifices and affected the mixing in the reacting system. Of the six configurations tested, the 14-hole module produced jet penetration close to the module half-radius and yielded the best mixing and most complete combustion at a plane one duct diameter from the orifice leading edge. The results reveal that substantial reaction and heat release occur in the jet mixing zone when the entering effluent is hot and rich, and that the experiment as designed will serve to satisfactorily explore jet mixing behavior under realistic reacting conditions in future studies.
Mazaheri, Davood; Shojaosadati, Seyed Abbas; Zamir, Seyed Morteza; Mousavi, Seyyed Mohammad
2018-04-21
In this work, mathematical modeling of ethanol production in solid-state fermentation (SSF) was carried out based on the variation in the dry weight of the solid medium. This method was previously used for mathematical modeling of enzyme production; however, the model must be modified to predict the production of a volatile compound like ethanol. The experimental results of bioethanol production from a mixture of carob pods and wheat bran by Zymomonas mobilis in SSF were used for model validation. Exponential and logistic kinetic models were used for modeling the growth of the microorganism. In both cases, the model predictions matched the experimental results well during the exponential growth phase, indicating the good ability of the solid-medium weight variation method to model volatile product formation in solid-state fermentation. In addition, the logistic model gave better predictions.
Exponential integration algorithms applied to viscoplasticity
NASA Technical Reports Server (NTRS)
Freed, Alan D.; Walker, Kevin P.
1991-01-01
Four linear exponential integration algorithms (two implicit, one explicit, and one predictor/corrector) are applied to a viscoplastic model to assess their capabilities. Viscoplasticity comprises a system of coupled, nonlinear, stiff, first-order ordinary differential equations, which are a challenge to integrate by any means. Two of the algorithms (the predictor/corrector and one of the implicit schemes) give outstanding results, even for very large time steps.
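The idea behind exponential integrators of the kind assessed above can be illustrated on a stiff linear test problem (this is a generic first-order exponential-Euler sketch, not the paper's viscoplastic algorithms): the linear part is advanced exactly via exp(−λh), which is what allows large stable steps.

```python
# Sketch: exponential-Euler step for the stiff test problem y' = -lam*y + f(t).
import numpy as np

lam, h, t_end = 50.0, 0.1, 2.0     # stiff rate and a step with lam*h = 5
f = lambda t: np.sin(t)            # forcing term

t, y = 0.0, 1.0
while t < t_end - 1e-12:
    # y_{n+1} = e^{-lam*h} y_n + (1 - e^{-lam*h})/lam * f(t_n)
    e = np.exp(-lam * h)
    y = e * y + (1.0 - e) / lam * f(t)
    t += h

print(f"exponential Euler at t = {t_end}: y = {y:.6f}")
# An explicit Euler step with the same h (lam*h = 5 > 2) would be unstable.
```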
NASA Astrophysics Data System (ADS)
Chowdhary, Girish; Mühlegg, Maximilian; Johnson, Eric
2014-08-01
In model reference adaptive control (MRAC) the modelling uncertainty is often assumed to be parameterised with time-invariant unknown ideal parameters. The convergence of parameters of the adaptive element to these ideal parameters is beneficial, as it guarantees exponential stability, and makes an online learned model of the system available. Most MRAC methods, however, require persistent excitation of the states to guarantee that the adaptive parameters converge to the ideal values. Enforcing PE may be resource intensive and often infeasible in practice. This paper presents theoretical analysis and illustrative examples of an adaptive control method that leverages the increasing ability to record and process data online by using specifically selected and online recorded data concurrently with instantaneous data for adaptation. It is shown that when the system uncertainty can be modelled as a combination of known nonlinear bases, simultaneous exponential tracking and parameter error convergence can be guaranteed if the system states are exciting over finite intervals such that rich data can be recorded online; PE is not required. Furthermore, the rate of convergence is directly proportional to the minimum singular value of the matrix containing online recorded data. Consequently, an online algorithm to record and forget data is presented and its effects on the resulting switched closed-loop dynamics are analysed. It is also shown that when radial basis function neural networks (NNs) are used as adaptive elements, the method guarantees exponential convergence of the NN parameters to a compact neighbourhood of their ideal values without requiring PE. Flight test results on a fixed-wing unmanned aerial vehicle demonstrate the effectiveness of the method.
A rational approach to improving productivity in recombinant Pichia pastoris fermentation.
d'Anjou, M C; Daugulis, A J
2001-01-05
A Mut(S) Pichia pastoris strain that had been genetically modified to produce and secrete sea raven antifreeze protein was used as a model system to demonstrate the implementation of a rational, model-based approach to improve process productivity. A set of glycerol/methanol mixed-feed continuous stirred-tank reactor (CSTR) experiments was performed at the 5-L scale to characterize the relationship between the specific growth rate and the cell yield on methanol, the specific methanol consumption rate, the specific recombinant protein formation rate, and the productivity based on secreted protein levels. The range of dilution rates studied was 0.01 to 0.10 h⁻¹, and the residual methanol concentration was kept constant at approximately 2 g/L (below the inhibitory level). With the assumption that the cell yield on glycerol was constant, the cell yield on methanol increased from approximately 0.5 to 1.5 over the range studied. A maximum specific methanol consumption rate of 20 mg/(g·h) was achieved at a dilution rate of 0.06 h⁻¹. The specific product formation rate and the volumetric productivity based on product continued to increase over the range of dilution rates studied, and the maximum values were 0.06 mg/(g·h) and 1.7 mg/(L·h), respectively. Therefore, no evidence of repression by glycerol was observed over this range, and operating at the highest dilution rate studied maximized productivity. Fed-batch mass balance equations, based on Monod-type kinetics and parameters derived from data collected during the CSTR work, were then used to predict cell growth and recombinant protein production and to develop an exponential feeding strategy using two carbon sources. Two exponential fed-batch fermentations were conducted according to the predicted feeding strategy at specific growth rates of 0.03 h⁻¹ and 0.07 h⁻¹ to verify the accuracy of the model. Cell growth was accurately predicted in both fed-batch runs; however, the model underestimated the recombinant product concentration. The overall volumetric productivity of both runs was approximately 2.2 mg/(L·h), representing a tenfold increase in productivity compared with a heuristic feeding strategy. Copyright 2001 John Wiley & Sons, Inc.
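The exponential feeding idea above follows from a simple substrate mass balance: holding the specific growth rate at μ requires a feed rate proportional to exp(μt). A hedged sketch with generic symbols (X0, V0, Yxs, S_feed are invented illustrative values, not the paper's, and maintenance consumption is neglected):

```python
# Sketch: exponential fed-batch feed-rate schedule from a mass balance.
import numpy as np

mu = 0.07          # target specific growth rate (1/h)
X0, V0 = 5.0, 2.0  # initial biomass concentration (g/L) and volume (L)
Yxs = 0.5          # biomass yield on substrate (g/g)
S_feed = 500.0     # substrate concentration in the feed (g/L)

def feed_rate(t_h):
    """Feed rate (L/h) that sustains growth at mu: F(t) ~ exp(mu*t)."""
    return (mu * X0 * V0 / (Yxs * S_feed)) * np.exp(mu * t_h)

for t in (0, 12, 24, 48):
    print(f"t = {t:2d} h  F = {feed_rate(t) * 1000:7.2f} mL/h")
```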
Adaptive exponential integrate-and-fire model as an effective description of neuronal activity.
Brette, Romain; Gerstner, Wulfram
2005-11-01
We introduce a two-dimensional integrate-and-fire model that combines an exponential spike mechanism with an adaptation equation, based on recent theoretical findings. We describe a systematic method to estimate its parameters with simple electrophysiological protocols (current-clamp injection of pulses and ramps) and apply it to a detailed conductance-based model of a regular spiking neuron. Our simple model predicts correctly the timing of 96% of the spikes (+/-2 ms) of the detailed model in response to injection of noisy synaptic conductances. The model is especially reliable in high-conductance states, typical of cortical activity in vivo, in which intrinsic conductances were found to have a reduced role in shaping spike trains. These results are promising because this simple model has enough expressive power to reproduce qualitatively several electrophysiological classes described in vitro.
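The AdEx model described above consists of a voltage equation with an exponential spike term and a linear adaptation current. A compact forward-Euler sketch follows; the parameter values are typical published AdEx numbers, not the values fitted in the paper, and a plain Euler step is used only for illustration:

```python
# Sketch: adaptive exponential integrate-and-fire (AdEx) neuron, forward Euler.
import numpy as np

C, gL, EL = 281.0, 30.0, -70.6             # pF, nS, mV
VT, DeltaT = -50.4, 2.0                    # spike threshold and slope factor (mV)
a, tau_w, b, Vr = 4.0, 144.0, 80.5, -70.6  # adaptation: nS, ms, pA, reset mV

dt, T, I = 0.1, 500.0, 800.0               # step (ms), duration (ms), input (pA)
V, w, spikes = EL, 0.0, []
for step in range(int(T / dt)):
    dV = (-gL * (V - EL) + gL * DeltaT * np.exp((V - VT) / DeltaT) - w + I) / C
    dw = (a * (V - EL) - w) / tau_w
    V += dt * dV
    w += dt * dw
    if V > 0.0:                            # spike: reset V, jump adaptation by b
        spikes.append(step * dt)
        V, w = Vr, w + b

print(f"{len(spikes)} spikes; first few at {np.round(spikes[:5], 1)} ms")
```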
Fluid dynamics of the shock wave reactor
NASA Astrophysics Data System (ADS)
Masse, Robert Kenneth
2000-10-01
High commercial incentives have driven conventional olefin production technologies to near their material limits, leaving the possibility of further efficiency improvements only in the development of entirely new techniques. One strategy known as the Shock Wave Reactor, which employs gas dynamic processes to circumvent limitations of conventional reactors, has been demonstrated effective at the University of Washington. Preheated hydrocarbon feedstock and a high enthalpy carrier gas (steam) are supersonically mixed at a temperature below that required for thermal cracking. Temperature recovery is then effected via shock recompression to initiate pyrolysis. The evolution to proof-of-concept and analysis of experiments employing ethane and propane feedstocks are presented. The Shock Wave Reactor's high enthalpy steam and ethane flows severely limit diagnostic capability in the proof-of-concept experiment. Thus, a preliminary blow down supersonic air tunnel of similar geometry has been constructed to investigate recompression stability and (especially) rapid supersonic mixing necessary for successful operation of the Shock Wave Reactor. The mixing capabilities of blade nozzle arrays are therefore studied in the air experiment and compared with analytical models. Mixing is visualized through Schlieren imaging and direct photography of condensation in carbon dioxide injection, and interpretation of visual data is supported by pressure measurement and flow sampling. The influence of convective Mach number is addressed. Additionally, thermal behavior of a blade nozzle array is analyzed for comparison to data obtained in the course of succeeding proof-of-concept experiments. Proof-of-concept is naturally succeeded by interest in industrial adaptation of the Shock Wave Reactor, particularly with regard to issues involving the scaling and refinement of the shock recompression. Hence, an additional, variable geometry air tunnel has been constructed to study the parameter dependence of shock recompression in ducts. Distinct variation of the flow Reynolds and Mach numbers and section height allow unique mapping of each of these parameter dependencies. Agreement with a new one-dimensional model is demonstrated, predicting an exponential pressure profile characterized by two key parameters, the maximum pressure recovery and a characteristic length scale. Transition from one to two-dimensional dependence of the length parameter is observed as the duct aspect ratio varies significantly from unity.
Generalization of the event-based Carnevale-Hines integration scheme for integrate-and-fire models.
van Elburg, Ronald A J; van Ooyen, Arjen
2009-07-01
An event-based integration scheme for an integrate-and-fire neuron model with exponentially decaying excitatory synaptic currents and double exponential inhibitory synaptic currents has been introduced by Carnevale and Hines. However, the integration scheme imposes nonphysiological constraints on the time constants of the synaptic currents, which hamper its general applicability. This letter addresses this problem in two ways. First, we provide physical arguments demonstrating why these constraints on the time constants can be relaxed. Second, we give a formal proof showing which constraints can be abolished. As part of our formal proof, we introduce the generalized Carnevale-Hines lemma, a new tool for comparing double exponentials as they naturally occur in many cascaded decay systems, including receptor-neurotransmitter dissociation followed by channel closing. Through repeated application of the generalized lemma, we lift most of the original constraints on the time constants. Thus, we show that the Carnevale-Hines integration scheme for the integrate-and-fire model can be employed for simulating a much wider range of neuron and synapse types than was previously thought.
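For orientation, a small sketch of the double-exponential time course at issue, normalized so that its peak is 1; the normalization convention and the equal-time-constant (alpha function) limit are standard textbook choices, not necessarily those of the Carnevale-Hines scheme:

```python
import numpy as np

def double_exp(t, tau_rise, tau_decay):
    """Normalized double-exponential time course, peak scaled to 1.
    Assumes tau_rise < tau_decay; the tau_rise -> tau_decay limit is the
    alpha function (t/tau) * exp(1 - t/tau)."""
    if np.isclose(tau_rise, tau_decay):
        return (t / tau_decay) * np.exp(1.0 - t / tau_decay)
    # time of the peak, used to normalize the amplitude to 1
    tpeak = (tau_rise * tau_decay / (tau_decay - tau_rise)
             * np.log(tau_decay / tau_rise))
    norm = np.exp(-tpeak / tau_decay) - np.exp(-tpeak / tau_rise)
    return (np.exp(-t / tau_decay) - np.exp(-t / tau_rise)) / norm

t = np.linspace(0.0, 50.0, 501)          # ms
g = double_exp(t, tau_rise=0.5, tau_decay=5.0)
```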
Exponentiated power Lindley distribution.
Ashour, Samir K; Eltehiwy, Mahmoud A
2015-11-01
A new generalization of the Lindley distribution was recently proposed by Ghitany et al. [1], called the power Lindley distribution. Another generalization of the Lindley distribution was introduced by Nadarajah et al. [2], named the generalized Lindley distribution. This paper proposes a further generalization of the Lindley distribution which contains the two as special cases. We refer to this new generalization as the exponentiated power Lindley distribution. The new distribution is important since it contains as special sub-models some widely known distributions in addition to the above two, such as the Lindley distribution among many others. It also provides more flexibility for analyzing complex real data sets. We study some statistical properties of the new distribution. We discuss maximum likelihood estimation of the distribution parameters. Least squares estimation is also used to evaluate the parameters. Three algorithms are proposed for generating random data from the proposed distribution. An application of the model to a real data set is analyzed using the new distribution, which shows that the exponentiated power Lindley distribution can be used quite effectively in analyzing real lifetime data.
Voter model with non-Poissonian interevent intervals
NASA Astrophysics Data System (ADS)
Takaguchi, Taro; Masuda, Naoki
2011-09-01
Recent analysis of social communications among humans has revealed that the interval between interactions for a pair of individuals and for an individual often follows a long-tail distribution. We investigate the effect of such a non-Poissonian nature of human behavior on dynamics of opinion formation. We use a variant of the voter model and numerically compare the time to consensus of all the voters with different distributions of interevent intervals and different networks. Compared with the exponential distribution of interevent intervals (i.e., the standard voter model), the power-law distribution of interevent intervals slows down consensus on the ring. This is because of the memory effect; in the power-law case, the expected time until the next update event on a link is large if the link has not had an update event for a long time. On the complete graph, the consensus time in the power-law case is close to that in the exponential case. Regular graphs bridge these two results such that the slowing down of the consensus in the power-law case as compared to the exponential case is less pronounced as the degree increases.
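A toy event-driven sketch of this comparison, hedged as a simplified variant in which each node (rather than each link) carries its own renewal process; `paretovariate` supplies the power-law interevent intervals:

```python
import heapq
import random

def consensus_time(N=50, power_law=True, alpha=2.5, seed=0):
    """Voter model on a ring with renewal update times per node.
    Simplified node-update variant for illustration only; interevent
    intervals are Pareto (power-law tail) or exponential."""
    rng = random.Random(seed)
    draw = ((lambda: rng.paretovariate(alpha - 1.0)) if power_law
            else (lambda: rng.expovariate(1.0)))
    opinions = [rng.randint(0, 1) for _ in range(N)]
    events = [(draw(), i) for i in range(N)]
    heapq.heapify(events)
    t = 0.0
    while len(set(opinions)) > 1:
        t, i = heapq.heappop(events)          # next node to update
        j = (i + rng.choice((-1, 1))) % N     # random ring neighbour
        opinions[i] = opinions[j]             # adopt neighbour's opinion
        heapq.heappush(events, (t + draw(), i))
    return t

print(consensus_time(power_law=True), consensus_time(power_law=False))
```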
A Fast Segmentation Algorithm for C-V Model Based on Exponential Image Sequence Generation
NASA Astrophysics Data System (ADS)
Hu, J.; Lu, L.; Xu, J.; Zhang, J.
2017-09-01
For island coastline segmentation, a fast segmentation algorithm for the C-V model based on exponential image sequence generation is proposed in this paper. An exponential multi-scale C-V model with level set inheritance and boundary inheritance is developed. The main research contributions are as follows: 1) the problems of "holes" and "gaps" in the extracted coastline are solved through small-scale shrinkage, low-pass filtering and area sorting of regions; 2) the initial values of the SDF (Signed Distance Function) and the level set are given by Otsu segmentation, based on the difference in SAR reflection between land and sea, so that they lie close to the coastline; 3) the computational complexity of the continuous transition between different scales is reduced by SDF and level set inheritance. Experimental results show that the method accelerates the formation of the initial level set and shortens the time needed to extract the coastline, while removing non-coastline bodies and improving the identification precision of the main coastline, automating the process of coastline segmentation.
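A rough sketch of the coarse-to-fine, level-set-inheritance idea using scikit-image's Chan-Vese implementation; the scale factors, the smoothing weight mu, and the use of `skimage.segmentation.chan_vese` itself are illustrative assumptions rather than the authors' code:

```python
import numpy as np
from skimage.segmentation import chan_vese
from skimage.transform import rescale, resize

def multiscale_chan_vese(image, n_scales=3):
    """Coarse-to-fine C-V segmentation: run Chan-Vese on a downsampled
    image, then upsample the resulting level set to initialize the next,
    finer scale (the 'level set inheritance' idea). `image` is assumed to
    be a 2D float array; parameters are illustrative."""
    phi, seg = None, None
    for s in range(n_scales - 1, -1, -1):
        small = rescale(image, 1.0 / 2 ** s, anti_aliasing=True)
        init = 'checkerboard' if phi is None else resize(phi, small.shape)
        seg, phi, _ = chan_vese(small, mu=0.25, init_level_set=init,
                                extended_output=True)
    return seg, phi
```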
Kinetic and Stochastic Models of 1D yeast "prions"
NASA Astrophysics Data System (ADS)
Kunes, Kay
2005-03-01
Mammalian prion proteins (PrP) are of public health interest because of mad cow and chronic wasting diseases. Yeasts have proteins that can undergo similar reconformation and aggregation processes to PrP; yeast "prions" are simpler to experimentally study and model. Recent in vitro studies of the SUP35 protein (1) showed long aggregates and pure exponential growth of the misfolded form. To explain these data, we have extended a previous model of aggregation kinetics along with our own stochastic approach (2). Both models assume reconformation only upon aggregation, and include aggregate fissioning and an initial nucleation barrier. We find for sufficiently small nucleation rates or seeding by small dimer concentrations that we can achieve the requisite exponential growth and long aggregates.
Pendulum Mass Affects the Measurement of Articular Friction Coefficient
Akelman, Matthew R.; Teeple, Erin; Machan, Jason T.; Crisco, Joseph J.; Jay, Gregory D.; Fleming, Braden C.
2012-01-01
Friction measurements of articular cartilage are important to determine the relative tribologic contributions made by synovial fluid or cartilage, and to assess the efficacy of therapies for preventing the development of post-traumatic osteoarthritis. Stanton’s equation is the most frequently used formula for estimating the whole joint friction coefficient (μ) of an articular pendulum, and assumes pendulum energy loss through a mass-independent mechanism. This study examines if articular pendulum energy loss is indeed mass independent, and compares Stanton’s model to an alternative model, which incorporates viscous damping, for calculating μ. Ten loads (25-100% body weight) were applied in a random order to an articular pendulum using the knees of adult male Hartley guinea pigs (n = 4) as the fulcrum. Motion of the decaying pendulum was recorded and μ was estimated using two models: Stanton’s equation, and an exponential decay function incorporating a viscous damping coefficient. μ estimates decreased as mass increased for both models. Exponential decay model fit error values were 82% less than the Stanton model. These results indicate that μ decreases with increasing mass, and that an exponential decay model provides a better fit for articular pendulum data at all mass values. In conclusion, inter-study comparisons of articular pendulum μ values should not be made without recognizing the loads used, as μ values are mass dependent.
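To make the model comparison concrete, a sketch contrasting a linear (Coulomb-friction-style, Stanton-like) amplitude decrement with an exponential (viscous-damping) decay on synthetic peak-amplitude data; the data are invented for illustration and the geometric factors that convert either fit into μ are omitted:

```python
import numpy as np
from scipy.optimize import curve_fit

# Peak amplitudes (rad) of a decaying articular pendulum vs cycle number;
# synthetic illustrative data, not the study's measurements.
cycles = np.arange(20.0)
amp = 0.20 * np.exp(-0.12 * cycles) \
      + np.random.default_rng(1).normal(0.0, 2e-3, cycles.size)

# Coulomb-friction-style model: linear amplitude decay per cycle
# (a Stanton-like decrement; converting the slope to mu is omitted).
lin = np.polyfit(cycles, amp, 1)
sse_lin = np.sum((np.polyval(lin, cycles) - amp) ** 2)

# Viscous-damping model: exponential amplitude decay.
def exp_decay(n, a0, k):
    return a0 * np.exp(-k * n)

p, _ = curve_fit(exp_decay, cycles, amp, p0=(0.2, 0.1))
sse_exp = np.sum((exp_decay(cycles, *p) - amp) ** 2)
print(f"linear SSE {sse_lin:.2e} vs exponential SSE {sse_exp:.2e}")
```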
The multiple complex exponential model and its application to EEG analysis
NASA Astrophysics Data System (ADS)
Chen, Dao-Mu; Petzold, J.
The paper presents a novel approach to the analysis of the EEG signal, which is based on a multiple complex exponential (MCE) model. Parameters of the model are estimated using a nonharmonic Fourier expansion algorithm. The central idea of the algorithm is outlined, and the results, estimated on the basis of simulated data, are presented and compared with those obtained by the conventional methods of signal analysis. Preliminary work on various application possibilities of the MCE model in EEG data analysis is described. It is shown that the parameters of the MCE model reflect the essential information contained in an EEG segment. These parameters characterize the EEG signal in a more objective way because they are closer to the recent supposition of the nonlinear character of the brain's dynamic behavior.
A global renewable mix with proven technologies and common materials
NASA Astrophysics Data System (ADS)
Ballabrera, J.; Garcia-Olivares, A.; Garcia-Ladona, E.; Turiel, A.
2012-04-01
A global alternative mix to fossil fuels is proposed, based on proven renewable energy technologies that do not use scarce materials. Taking into account the availability of materials, the resulting mix consists of a combination of onshore and offshore wind turbines, concentrating solar power stations, hydroelectricity and wave power devices attached to the offshore turbines. Solar photovoltaic power could contribute to the mix if its dependence on scarce materials is solved. Material requirements are studied for the generation, power transport and for some future transport systems. The order of magnitude of copper, aluminium, neodymium, lithium, nickel, zinc and platinum that might be required for the proposed solution is obtained and compared with available reserves. While the proposed global alternative to fossil fuels seems technically feasible, lithium, nickel and platinum could become limiting materials for future vehicle fleets if no global recycling system were implemented and rechargeable zinc-air batteries could not be developed. As much as 60% of the current copper reserves would have to be employed in the implementation of the proposed solution. Altogether, the availability of materials may become a long-term physical constraint, preventing the continuation of the usual exponential growth of energy consumption.
Hayat, Tasawar; Ashraf, Muhammad Bilal; Alsulami, Hamed H.; Alhuthali, Muhammad Shahab
2014-01-01
The objective of the present research is to examine the thermal radiation effect in three-dimensional mixed convection flow of viscoelastic fluid. The boundary layer analysis has been discussed for flow by an exponentially stretching surface with convective conditions. The resulting partial differential equations are reduced into a system of nonlinear ordinary differential equations using appropriate transformations. The series solutions are developed through a modern technique known as the homotopy analysis method. The convergent expressions of velocity components and temperature are derived. The solutions obtained depend on seven parameters: the viscoelastic parameter, mixed convection parameter, ratio parameter, temperature exponent, Prandtl number, Biot number and radiation parameter. A systematic study is performed to analyze the impacts of these influential parameters on the velocity and temperature, the skin friction coefficients and the local Nusselt number. It is observed that the mixed convection parameter plays opposite roles in the momentum and thermal boundary layers. The thermal boundary layer is found to decrease when the ratio parameter, Prandtl number and temperature exponent are increased. The local Nusselt number is an increasing function of the viscoelastic parameter and Biot number. The radiation parameter has the opposite effect on the Nusselt number compared with the viscoelastic parameter.
The mixed reality of things: emerging challenges for human-information interaction
NASA Astrophysics Data System (ADS)
Spicer, Ryan P.; Russell, Stephen M.; Rosenberg, Evan Suma
2017-05-01
Virtual and mixed reality technology has advanced tremendously over the past several years. This nascent medium has the potential to transform how people communicate over distance, train for unfamiliar tasks, operate in challenging environments, and how they visualize, interact, and make decisions based on complex data. At the same time, the marketplace has experienced a proliferation of network-connected devices and generalized sensors that are becoming increasingly accessible and ubiquitous. As the "Internet of Things" expands to encompass a predicted 50 billion connected devices by 2020, the volume and complexity of information generated in pervasive and virtualized environments will continue to grow exponentially. The convergence of these trends demands a theoretically grounded research agenda that can address emerging challenges for human-information interaction (HII). Virtual and mixed reality environments can provide controlled settings where HII phenomena can be observed and measured, new theories developed, and novel algorithms and interaction techniques evaluated. In this paper, we describe the intersection of pervasive computing with virtual and mixed reality, identify current research gaps and opportunities to advance the fundamental understanding of HII, and discuss implications for the design and development of cyber-human systems for both military and civilian use.
Ertas, Gokhan; Onaygil, Can; Akin, Yasin; Kaya, Handan; Aribal, Erkin
2016-12-01
To investigate the accuracy of diffusion coefficients and diffusion coefficient ratios of breast lesions and of glandular breast tissue from mono- and stretched-exponential models for quantitative diagnosis in diffusion-weighted magnetic resonance imaging (MRI). We analyzed 170 pathologically confirmed lesions (85 benign and 85 malignant) imaged using a 3.0T MR scanner. Small regions of interest (ROIs) focusing on the highest signal intensity for lesions and also for glandular tissue of the contralateral breast were obtained. The apparent diffusion coefficient (ADC) and distributed diffusion coefficient (DDC) were estimated by performing nonlinear fittings using mono- and stretched-exponential models, respectively. Coefficient ratios were calculated by dividing the lesion coefficient by the glandular tissue coefficient. The stretched-exponential model provides significantly better fits than the monoexponential model (P < 0.001): 65% of the better fits for glandular tissue and 71% of the better fits for lesions. High correlation was found in diffusion coefficients (0.99-0.81) and coefficient ratios (0.94) between the models. The highest diagnostic accuracy was found for the DDC ratio (area under the curve [AUC] = 0.93) when compared with lesion DDC, ADC ratio, and lesion ADC (AUC = 0.91, 0.90, 0.90), but with no statistically significant difference (P > 0.05). At optimal thresholds, the DDC ratio achieves 93% sensitivity, 80% specificity, and 87% overall diagnostic accuracy, while the ADC ratio leads to 89% sensitivity, 78% specificity, and 83% overall diagnostic accuracy. The stretched-exponential model fits better with signal intensity measurements from both lesion and glandular tissue ROIs. Although the DDC ratio estimated using the model shows a higher diagnostic accuracy than the ADC ratio, lesion DDC, and ADC, the difference is not statistically significant.
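A sketch of the two competing fits on a synthetic diffusion signal; the b-values, noise level, and the glandular-tissue DDC used for the ratio are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

# b-values (s/mm^2) and a synthetic lesion signal, for illustration only
# (true DDC 1.0e-3 mm^2/s, stretching exponent alpha 0.8).
b = np.array([0, 50, 100, 200, 400, 600, 800, 1000, 1500, 2000.0])

def mono(b, s0, adc):                        # monoexponential model
    return s0 * np.exp(-b * adc)

def stretched(b, s0, ddc, alpha):            # stretched-exponential model
    return s0 * np.exp(-(b * ddc) ** alpha)

rng = np.random.default_rng(0)
sig = stretched(b, 1.0, 1.0e-3, 0.8) + rng.normal(0, 0.005, b.size)

p_mono, _ = curve_fit(mono, b, sig, p0=(1.0, 1.0e-3))
p_str, _ = curve_fit(stretched, b, sig, p0=(1.0, 1.0e-3, 0.9),
                     bounds=([0.0, 0.0, 0.1], [2.0, 1.0e-2, 1.0]))

# A "DDC ratio" in the spirit of the study: lesion DDC divided by an
# assumed glandular-tissue DDC (the 1.5e-3 value is hypothetical).
print(p_mono[1], p_str[1], p_str[1] / 1.5e-3)
```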
Vazquez-Leal, Hector; Benhammouda, Brahim; Filobello-Nino, Uriel Antonio; Sarmiento-Reyes, Arturo; Jimenez-Fernandez, Victor Manuel; Marin-Hernandez, Antonio; Herrera-May, Agustin Leobardo; Diaz-Sanchez, Alejandro; Huerta-Chua, Jesus
2014-01-01
In this article, we propose the application of a modified Taylor series method (MTSM) for the approximation of nonlinear problems described on finite intervals. The issue of the Taylor series method with mixed boundary conditions is circumvented using shooting constants and extra derivatives of the problem. In order to show the benefits of this proposal, three different kinds of problems are solved: a three-point boundary value problem (BVP) of third order with a hyperbolic sine nonlinearity, a two-point BVP for a second-order nonlinear differential equation with an exponential nonlinearity, and a two-point BVP for a third-order nonlinear differential equation with a radical nonlinearity. The results show that the MTSM is capable of generating easily computable and highly accurate approximations for nonlinear equations.
Accounting for inherent variability of growth in microbial risk assessment.
Marks, H M; Coleman, M E
2005-04-15
Risk assessments of pathogens need to account for the growth of small numbers of cells under varying conditions. In order to determine the possible risks that occur when there are small numbers of cells, stochastic models of growth are needed that capture the distribution of the number of cells over replicate trials of the same scenario or environmental conditions. This paper provides a simple stochastic growth model, accounting only for inherent cell-growth variability and assuming constant growth kinetic parameters, for an initial small number of cells assumed to be transforming from a stationary to an exponential phase. Two basic sets of microbial assumptions are considered: serial, where cells are assumed to transform through a lag phase before entering the exponential phase of growth; and parallel, where lag and exponential phases are assumed to develop in parallel. The model is based on first determining the distribution of the time when growth commences, and then modelling the conditional distribution of the number of cells. For the latter distribution, it is found that a Weibull distribution provides a simple approximation to the conditional distribution of the relative growth, so that the model developed in this paper can be easily implemented in risk assessments using commercial software packages.
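A small Monte Carlo sketch of the serial assumption (random lag, then exponential growth), which reproduces the kind of replicate-to-replicate spread the paper's distributions describe; the lag distribution and rates are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def replicate_growth(n0=5, t_end=5.0, mean_lag=1.0, rate=1.0):
    """'Serial' assumption: each founder cell passes through a random lag
    phase, then grows as a pure-birth (Yule) process with division rate
    `rate` per cell. Returns the total count at t_end for one replicate.
    All rates and times are illustrative assumptions."""
    total = 0
    for _ in range(n0):
        t = rng.exponential(mean_lag)            # random lag duration
        n = 1
        while True:                              # exponential growth phase
            t += rng.exponential(1.0 / (rate * n))
            if t > t_end:
                break
            n += 1
        total += n
    return total

counts = np.array([replicate_growth() for _ in range(1000)])
# The spread across replicates is what a deterministic model misses.
print(counts.mean(), counts.std())
```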
Infinite-disorder critical points of models with stretched exponential interactions
NASA Astrophysics Data System (ADS)
Juhász, Róbert
2014-09-01
We show that an interaction decaying as a stretched exponential function of distance, J(l) ~ e^(-c l^a), is able to alter the universality class of short-range systems having an infinite-disorder critical point. To do so, we study the low-energy properties of the random transverse-field Ising chain with the above form of interaction by a strong-disorder renormalization group (SDRG) approach. We find that the critical behavior of the model is controlled by infinite-disorder fixed points different from those of the short-range model if 0 < a < 1/2. In this range, the critical exponents calculated analytically by a simplified SDRG scheme are found to vary with a, while, for a > 1/2, the model belongs to the same universality class as its short-range variant. The entanglement entropy of a block of size L increases logarithmically with L at the critical point but, unlike the short-range model, the prefactor is dependent on disorder in the range 0 < a < 1/2. Numerical results obtained by an improved SDRG scheme are found to be in agreement with the analytical predictions. The same fixed points are expected to describe the critical behavior of, among others, the random contact process with stretched exponentially decaying activation rates.
Global exponential stability for switched memristive neural networks with time-varying delays.
Xin, Youming; Li, Yuxia; Cheng, Zunshui; Huang, Xia
2016-08-01
This paper considers the problem of exponential stability for switched memristive neural networks (MNNs) with time-varying delays. Different from most of the existing papers, we model a memristor as a continuous system, and view switched MNNs as switched neural networks with uncertain time-varying parameters. Based on average dwell time technique, mode-dependent average dwell time technique and multiple Lyapunov-Krasovskii functional approach, two conditions are derived to design the switching signal and guarantee the exponential stability of the considered neural networks, which are delay-dependent and formulated by linear matrix inequalities (LMIs). Finally, the effectiveness of the theoretical results is demonstrated by two numerical examples.
Water, Energy, and Biogeochemical Model (WEBMOD), user’s manual, version 1
Webb, Richard M.T.; Parkhurst, David L.
2017-02-08
The Water, Energy, and Biogeochemical Model (WEBMOD) uses the framework of the U.S. Geological Survey (USGS) Modular Modeling System to simulate fluxes of water and solutes through watersheds. WEBMOD divides watersheds into model response units (MRU) where fluxes and reactions are simulated for the following eight hillslope reservoir types: canopy; snowpack; ponding on impervious surfaces; O-horizon; two reservoirs in the unsaturated zone, which represent preferential flow and matrix flow; and two reservoirs in the saturated zone, which also represent preferential flow and matrix flow. The reservoir representing ponding on impervious surfaces, currently not functional (2016), will be implemented once the model is applied to urban areas. MRUs discharge to one or more stream reservoirs that flow to the outlet of the watershed. Hydrologic fluxes in the watershed are simulated by modules derived from the USGS Precipitation Runoff Modeling System; the National Weather Service Hydro-17 snow model; and a topography-driven hydrologic model (TOPMODEL). Modifications to the standard TOPMODEL include the addition of heterogeneous vertical infiltration rates; irrigation; lateral and vertical preferential flows through the unsaturated zone; pipe flow draining the saturated zone; gains and losses to regional aquifer systems; and the option to simulate baseflow discharge by using an exponential, parabolic, or linear decrease in transmissivity. PHREEQC, an aqueous geochemical model, is incorporated to simulate chemical reactions as waters evaporate, mix, and react within the various reservoirs of the model. The reactions that can be specified for a reservoir include equilibrium reactions among water; minerals; surfaces; exchangers; and kinetic reactions such as kinetic mineral dissolution or precipitation, biologically mediated reactions, and radioactive decay. WEBMOD also simulates variations in the concentrations of the stable isotopes deuterium and oxygen-18 as a result of varying inputs, mixing, and evaporation. This manual describes the WEBMOD input and output files, along with the algorithms and procedures used to simulate the hydrology and water quality in a watershed. Examples are presented that demonstrate hydrologic processes, weathering reactions, and isotopic evolution in an alpine watershed and the effect of irrigation on water flows and salinity in an intensively farmed agricultural area.
Zhang, Liping; Zheng, Yanling; Wang, Kai; Zhang, Xueliang; Zheng, Yujian
2014-06-01
In this paper, by using a particle swarm optimization algorithm to solve the optimal parameter estimation problem, an improved Nash nonlinear grey Bernoulli model termed PSO-NNGBM(1,1) is proposed. To test the forecasting performance, the optimized model is applied for forecasting the incidence of hepatitis B in Xinjiang, China. Four models, traditional GM(1,1), grey Verhulst model (GVM), original nonlinear grey Bernoulli model (NGBM(1,1)) and Holt-Winters exponential smoothing method, are also established for comparison with the proposed model under the criteria of mean absolute percentage error and root mean square percent error. The prediction results show that the optimized NNGBM(1,1) model is more accurate and performs better than the traditional GM(1,1), GVM, NGBM(1,1) and Holt-Winters exponential smoothing method.
Placement of temperature probe in bovine vagina for continuous measurement of core-body temperature.
Lee, C N; Gebremedhin, K G; Parkhurst, A; Hillman, P E
2015-09-01
There has been increasing interest in measuring core-body temperature in cattle using internal probes. This study examined the placement of a HOBO water temperature probe with an anchor, referred to as the "sensor pack" (Hillman et al. Appl Eng Agric ASAE 25(2):291-296, 2009), in the vagina of multiparous Holstein cows under grazing conditions. Two types of anchors were used: (a) long "fingers" (4.5-6 cm), and (b) short "fingers" (3.5 cm). The long-finger anchors stayed in one position, while the short-finger anchors did not remain stable in one position (they rotated) within the vaginal canal and in some cases came out. Vaginal temperatures were recorded every minute and the data collected were then analyzed using exponential mixed model regression for non-linear data. The results showed that the core-body temperatures for the short-finger anchors were lower than for the long-finger anchors. This implies that the placement of the temperature sensor within the vaginal cavity may affect the data collected.
NASA Astrophysics Data System (ADS)
Susanti, Ana; Suhartono; Jati Setyadi, Hario; Taruk, Medi; Haviluddin; Pamilih Widagdo, Putut
2018-03-01
Money currency availability in Bank Indonesia can be examined through the inflow and outflow of currency. The objective of this research is to forecast the inflow and outflow of currency in each Representative Office (RO) of BI in East Java by using a hybrid of exponential smoothing based on the state space approach and a calendar variation model. The hybrid model is expected to generate more accurate forecasts. Two studies are discussed in this research. The first examines the hybrid model on simulated data containing trend, seasonal and calendar variation patterns. The second applies the hybrid model to forecasting the inflow and outflow of currency in each RO of BI in East Java. The first study indicates that the exponential smoothing model cannot capture the calendar variation pattern, yielding RMSE values ten times the standard deviation of the errors. The second indicates that the hybrid model captures the trend, seasonal and calendar variation patterns, yielding RMSE values approaching the standard deviation of the errors. In the applied study, the hybrid model gives more accurate forecasts for five variables: the inflow of currency in Surabaya, Malang and Jember, and the outflow of currency in Surabaya and Kediri. For the other three variables, the outflow of currency in Malang and Jember and the inflow of currency in Kediri, the time series regression model yields better results.
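A simplified two-step sketch of the hybrid idea on synthetic monthly data: regress out a moving calendar-variation dummy, then apply exponential smoothing to the remainder; the series, the moving-holiday months, and the use of statsmodels' `ExponentialSmoothing` are assumptions for illustration:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Illustrative monthly series with trend, seasonality and a "calendar
# variation" spike whose month moves from year to year (synthetic data).
rng = np.random.default_rng(7)
n = 96
t = np.arange(n)
holiday_month = (np.array([8, 7, 7, 6, 5, 5, 4, 3])   # assumed moving holiday
                 + 12 * np.arange(8))
y = 100 + 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, n)
y[holiday_month] += 25.0

# Step 1: remove the calendar effect with a dummy regression.
d = np.zeros(n)
d[holiday_month] = 1.0
X = np.column_stack([np.ones(n), d])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - d * beta[1]                      # series without calendar spike

# Step 2: exponential smoothing on the remainder, then recombine.
fit = ExponentialSmoothing(pd.Series(resid), trend='add', seasonal='add',
                           seasonal_periods=12).fit()
fitted = fit.fittedvalues + d * beta[1]      # hybrid fitted values
print(np.sqrt(np.mean((fitted - y) ** 2)))   # in-sample RMSE
```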
Exponentially growing tearing modes in Rijnhuizen Tokamak Project plasmas.
Salzedas, F; Schüller, F C; Oomens, A A M
2002-02-18
The local measurement of the island width w around the resonant surface allowed a direct test of the extended Rutherford model [P. H. Rutherford, PPPL Report-2277 (1985)], describing the evolution of radiation-induced tearing modes prior to disruptions of tokamak plasmas. It is found that this model accounts very well for the observed exponential growth and supports radiation losses as being the main driving mechanism. The model implies that the effective perpendicular electron heat conductivity in the island is smaller than the global one. Comparison of the local measurements of w with the perturbed magnetic field B showed that w ∝ B^(1/2) was valid for widths up to 18% of the minor radius.
NASA Astrophysics Data System (ADS)
Adame, J.; Warzel, S.
2015-11-01
In this note, we use ideas of Farhi et al. [Int. J. Quantum. Inf. 6, 503 (2008) and Quantum Inf. Comput. 11, 840 (2011)] who link a lower bound on the run time of their quantum adiabatic search algorithm to an upper bound on the energy gap above the ground-state of the generators of this algorithm. We apply these ideas to the quantum random energy model (QREM). Our main result is a simple proof of the conjectured exponential vanishing of the energy gap of the QREM.
Amthor, Stephan; Lambert, Christoph
2006-01-26
A series of [2.2]paracyclophane-bridged bis-triarylamine mixed-valence (MV) radical cations were analyzed by a generalized Mulliken-Hush (GMH) three-level model which takes two transitions into account: the intervalence charge transfer (IV-CT) band which is assigned to an optically induced hole transfer (HT) from one triarylamine unit to the second one and a second band associated with a triarylamine radical cation to bridge (in particular, the [2.2]paracyclophane bridge) hole transfer. From the GMH analysis, we conclude that the [2.2]paracyclophane moiety is not the limiting factor which governs the intramolecular charge transfer. AM1-CISD calculations reveal that both through-bond as well as through-space interactions of the [2.2]paracyclophane bridge play an important role for hole transfer processes. These electronic interactions are of course smaller than direct pi-conjugation, but from the order of magnitude of the couplings of the [2.2]paracyclophane MV species, we assume that this bridge is able to mediate significant through-space and through-bond interactions and that the cyclophane bridge acts more like an unsaturated spacer rather than a saturated one. From the exponential dependence of the electronic coupling V between the two triarylamine localized states on the distance r between the two redox centers, we infer that the hole transfer occurs via a superexchange mechanism. Our analysis reveals that even significantly longer pi-conjugated bridges should still mediate significant electronic interactions because the decay constant beta of a series of pi-conjugated MV species is small.
Disentangling the f(R)-duality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Broy, Benedict J.; Pedro, Francisco G.; Westphal, Alexander
2015-03-16
Motivated by UV realisations of Starobinsky-like inflation models, we study generic exponential plateau-like potentials to understand whether an exact f(R)-formulation may still be obtained when the asymptotic shift-symmetry of the potential is broken for larger field values. Potentials which break the shift symmetry with rising exponentials at large field values only allow for corresponding f(R)-descriptions with a leading order term R^n with 1
Exponentially Stabilizing Robot Control Laws
NASA Technical Reports Server (NTRS)
Wen, John T.; Bayard, David S.
1990-01-01
New class of exponentially stabilizing laws for joint-level control of robotic manipulators introduced. In case of set-point control, approach offers simplicity of proportional/derivative control architecture. In case of tracking control, approach provides several important alternatives to computed-torque method, with regard to computational requirements and convergence. New control laws modified in simple fashion to obtain asymptotically stable adaptive control when robot model and/or payload mass properties unknown.
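A sketch of the set-point case only: proportional/derivative feedback plus gravity compensation, with an invented two-joint gravity model; this shows the PD architecture referred to above, not the paper's specific exponentially stabilizing laws:

```python
import numpy as np

def pd_setpoint_control(q, qdot, q_des, Kp, Kd, gravity):
    """Joint-level PD set-point law: proportional/derivative feedback plus
    gravity compensation. `gravity(q)` is an assumed user-supplied model
    of the gravity torque vector, hypothetical here."""
    return Kp @ (q_des - q) - Kd @ qdot + gravity(q)

# Illustrative two-joint example with a trivial gravity model.
Kp = np.diag([50.0, 30.0])
Kd = np.diag([8.0, 5.0])
tau = pd_setpoint_control(q=np.array([0.1, -0.2]),
                          qdot=np.array([0.0, 0.0]),
                          q_des=np.array([0.5, 0.3]),
                          Kp=Kp, Kd=Kd,
                          gravity=lambda q: np.array([2.0, 0.5]) * np.cos(q))
print(tau)
```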
Testing predictions of the quantum landscape multiverse 2: the exponential inflationary potential
NASA Astrophysics Data System (ADS)
Di Valentino, Eleonora; Mersini-Houghton, Laura
2017-03-01
The 2015 Planck data release tightened the region of the allowed inflationary models. Inflationary models with convex potentials have now been ruled out since they produce a large tensor to scalar ratio. Meanwhile the same data offers interesting hints on possible deviations from the standard picture of CMB perturbations. Here we revisit the predictions of the theory of the origin of the universe from the landscape multiverse for the case of exponential inflation, for two reasons: firstly to check the status of the anomalies associated with this theory, in the light of the recent Planck data; secondly, to search for a counterexample whereby new physics modifications may bring convex inflationary potentials, thought to have been ruled out, back into the region of potentials allowed by data. Using the exponential inflation as an example of convex potentials, we find that the answer to both tests is positive: modifications to the perturbation spectrum and to the Newtonian potential of the universe originating from the quantum entanglement, bring the exponential potential, back within the allowed region of current data; and, the series of anomalies previously predicted in this theory, is still in good agreement with current data. Hence our finding for this convex potential comes at the price of allowing for additional thermal relic particles, equivalently dark radiation, in the early universe.
NASA Astrophysics Data System (ADS)
Straub, K. M.; Ganti, V. K.; Paola, C.; Foufoula-Georgiou, E.
2010-12-01
Stratigraphy preserved in alluvial basins houses the most complete record of information necessary to reconstruct past environmental conditions. Indeed, the character of the sedimentary record is inextricably related to the surface processes that formed it. In this presentation we explore how the signals of surface processes are recorded in stratigraphy through the use of physical and numerical experiments. We focus on linking surface processes to stratigraphy in 1D by quantifying the probability distributions of processes that govern the evolution of depositional systems to the probability distribution of preserved bed thicknesses. In this study we define a bed as a package of sediment bounded above and below by erosional surfaces. In a companion presentation we document heavy-tailed statistics of erosion and deposition from high-resolution temporal elevation data recorded during a controlled physical experiment. However, the heavy tails in the magnitudes of erosional and depositional events are not preserved in the experimental stratigraphy. Similar to many bed thickness distributions reported in field studies we find that an exponential distribution adequately describes the thicknesses of beds preserved in our experiment. We explore the generation of exponential bed thickness distributions from heavy-tailed surface statistics using 1D numerical models. These models indicate that when the full distribution of elevation fluctuations (both erosional and depositional events) is symmetrical, the resulting distribution of bed thicknesses is exponential in form. Finally, we illustrate that a predictable relationship exists between the coefficient of variation of surface elevation fluctuations and the scale-parameter of the resulting exponential distribution of bed thicknesses.
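A sketch of such a 1D numerical model: symmetric heavy-tailed elevation fluctuations with slow net aggradation, a stratigraphic column truncated by each erosion event, and bed thicknesses read off between erosional surfaces; all distributions and values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Surface elevation fluctuations: symmetric, heavy-tailed steps (Student t)
# plus slow net aggradation. All values are illustrative assumptions.
steps = 0.05 + rng.standard_t(df=1.5, size=100_000)

column = []      # preserved deposit slices as (thickness, bed_id)
bed = 0
for dz in steps:
    if dz > 0:                      # deposition adds a slice to the column
        column.append((dz, bed))
    else:                           # erosion truncates the column from the top
        left = -dz
        while column and left > 0.0:
            thick, b = column.pop()
            if thick > left:        # partially eroded slice survives
                column.append((thick - left, b))
                left = 0.0
            else:
                left -= thick
        bed += 1                    # erosional surface bounds a new bed

# Bed thickness = total preserved deposit between erosional surfaces.
ids = np.array([b for _, b in column])
th = np.array([t for t, _ in column])
beds = np.bincount(ids, weights=th)
beds = beds[beds > 0.0]
# For symmetric fluctuations the preserved thicknesses come out close to
# exponential, for which mean and standard deviation coincide.
print(beds.mean(), beds.std())
```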
Probing Gamma-ray Emission of Geminga and Vela with Non-stationary Models
NASA Astrophysics Data System (ADS)
Chai, Yating; Cheng, Kwong-Sang; Takata, Jumpei
2016-06-01
It is generally believed that the high energy emissions from isolated pulsars are emitted by relativistic electrons/positrons accelerated in outer magnetospheric accelerators (outergaps) via a curvature radiation mechanism, which has a simple exponential cut-off spectrum. However, many gamma-ray pulsars detected by the Fermi LAT (Large Area Telescope) cannot be fitted by a simple exponential cut-off spectrum; instead a sub-exponential cut-off is more appropriate. It is proposed that realistic outergaps are non-stationary, and that the observed spectrum is a superposition of different stationary states that are controlled by the currents injected from the inner and outer boundaries. The Vela and Geminga pulsars have the largest fluxes among all targets observed, which allows us to carry out very detailed phase-resolved spectral analysis. We have divided the Vela and Geminga pulsars into 19 (the off pulse of Vela was not included) and 33 phase bins, respectively. We find that most phase-resolved spectra still cannot be fitted by a simple exponential spectrum: in fact, a sub-exponential spectrum is necessary. We conclude that non-stationary states exist even down to the very fine phase bins.
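For reference, the spectral forms being compared, with a quick fit recovering the sub-exponential index from synthetic data; the parameter values and noise model are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def cutoff_power_law(E, K, gamma, E_c, b):
    """dN/dE = K * E^-gamma * exp(-(E/E_c)^b); b = 1 is the simple
    exponential cutoff, b < 1 the sub-exponential form."""
    return K * E ** (-gamma) * np.exp(-(E / E_c) ** b)

E = np.logspace(-1, 1.5, 40)                   # GeV, illustrative grid
rng = np.random.default_rng(4)
flux = cutoff_power_law(E, 1.0, 1.5, 3.0, 0.6) * (1 + rng.normal(0, 0.03, E.size))

# Fit in log space so the cutoff region carries weight despite small fluxes.
log_model = lambda E, K, g, Ec, b: np.log(cutoff_power_law(E, K, g, Ec, b))
p, _ = curve_fit(log_model, E, np.log(flux), p0=(1.0, 1.4, 2.0, 1.0))
print("fitted b =", p[3])      # a value well below 1 is sub-exponential
```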
Modeling the reversible, diffusive sink effect in response to transient contaminant sources.
Zhao, D; Little, J C; Hodgson, A T
2002-09-01
A physically based diffusion model is used to evaluate the sink effect of diffusion-controlled indoor materials and to predict the transient contaminant concentration in indoor air in response to several time-varying contaminant sources. For simplicity, it is assumed the predominant indoor material is a homogeneous slab, initially free of contaminant, and the air within the room is well mixed. The model enables transient volatile organic compound (VOC) concentrations to be predicted based on the material/air partition coefficient (K) and the material-phase diffusion coefficient (D) of the sink. Model predictions are made for three scenarios, each mimicking a realistic situation in a building. Styrene, phenol, and naphthalene are used as representative VOCs. A styrene butadiene rubber (SBR) backed carpet, vinyl flooring (VF), and a polyurethane foam (PUF) carpet cushion are considered as typical indoor sinks. In scenarios involving a sinusoidal VOC input and a double exponential decaying input, the model predicts the sink has a modest impact for SBR/styrene, but the effect increases for VF/phenol and PUF/naphthalene. In contrast, for an episodic chemical spill, SBR is predicted to reduce the peak styrene concentration considerably. A parametric study reveals that for systems involving a large equilibrium constant (K), the kinetic constant (D) will govern the shape of the resulting gas-phase concentration profile. On the other hand, for systems with a relaxed mass transfer resistance, K will dominate the profile.
NASA Technical Reports Server (NTRS)
Sackmann, I.-Juliana; Boothroyd, Arnold I.
2001-01-01
The relatively warm temperatures required on early Earth and Mars have been difficult to account for with warming from greenhouse gases. A slightly more massive young Sun would be brighter than predicted by the standard solar model, simultaneously resolving this problem for both Earth and Mars. We computed high-precision solar models with seven initial masses, from Mi = 1.01 to 1.07 solar mass - the latter being the maximum permitted if the early Earth is not to lose its water via a moist greenhouse effect. The relatively modest early mass loss that is required remains consistent with observational limits on mass loss from young stars and with estimates of the past solar wind obtained from lunar rocks. We considered three types of mass loss rates: (1) a reasonable choice of a simple exponential decline, (2) an extreme step-function case that gives the maximum effect consistent with observations, and (3) the radical case of a linear decline which is inconsistent with the solar wind mass loss estimates from lunar rocks. Our computations demonstrated that mass loss leaves a fingerprint on the Sun's internal structure large enough to be detectable with helioseismic observations. All of our mass-losing solar models were consistent with the helioseismic observations; in fact, our preferred mass-losing cases were in marginally better agreement with the helioseismology than the standard solar model was, although this difference was smaller than the effects of other uncertainties in the input physics and in the solar composition. Mass loss has only a relatively minor effect on the predicted lithium depletion; the major portion of the solar lithium depletion must still be due to rotational mixing. Thus the modest mass loss cases considered here cannot be ruled out by observed lithium depletions. For the three mass loss types considered, the preferred initial masses were 1.07 solar mass for the exponential case and 1.04 solar mass for the step-function and linear cases; all of these provided high enough solar fluxes at Mars 3.8 Gyr ago to be consistent with the existence of liquid water. For a more massive early Sun, the planets would have had to be closer to the young Sun in order to end up in their present orbits; the orbital radii of the planets would vary inversely with the solar mass. Both of these effects contribute to the fact that the early solar flux at the planets would have been considerably higher than that of the standard solar model at that time. In fact, the 1.07 solar mass exponential case has a flux at birth 5% higher than the present solar flux, while the radical 1.04 solar mass linear case has a nearly constant flux over the first 3 Gyr only about 10% lower than at present. The early solar evolution would be in the opposite direction in the H-R diagram to that of the standard Sun.
Anomalous T2 relaxation in normal and degraded cartilage.
Reiter, David A; Magin, Richard L; Li, Weiguo; Trujillo, Juan J; Pilar Velasco, M; Spencer, Richard G
2016-09-01
To compare the ordinary monoexponential model with three anomalous relaxation models (the stretched Mittag-Leffler, stretched exponential, and biexponential functions) using both simulated and experimental cartilage relaxation data. Monte Carlo simulations were used to examine both the ability to identify a given model under high signal-to-noise ratio (SNR) conditions and the accuracy and precision of parameter estimates under more modest SNR as would be encountered clinically. Experimental transverse relaxation data were analyzed from normal and enzymatically degraded cartilage samples under high SNR and rapid echo sampling to compare each model. Both simulation and experimental results showed improvement in signal representation with the anomalous relaxation models. The stretched exponential model consistently showed the lowest mean squared error in experimental data and closely represents the signal decay over multiple decades of the decay time (e.g., 1-10 ms, 10-100 ms, and >100 ms). The stretched exponential parameter αse showed an inverse correlation with biochemically derived cartilage proteoglycan content. Experimental results obtained at high field suggest potential application of αse as a measure of matrix integrity. Simulations reflecting more clinical imaging conditions indicate the ability to robustly estimate αse and distinguish between normal and degraded tissue, highlighting its potential as a biomarker for human studies.
Modeling the degradation kinetics of ascorbic acid.
Peleg, Micha; Normand, Mark D; Dixon, William R; Goulette, Timothy R
2018-06-13
Most published reports on ascorbic acid (AA) degradation during food storage and heat preservation suggest that it follows first-order kinetics. Deviations from this pattern include Weibullian decay and exponential drop approaching finite nonzero retention. Almost invariably, the degradation rate constant's temperature-dependence followed the Arrhenius equation, and hence the simpler exponential model too. A formula and freely downloadable interactive Wolfram Demonstration to convert the Arrhenius model's energy of activation, E_a, to the exponential model's c parameter, or vice versa, are provided. The AA's isothermal and non-isothermal degradation can be simulated with freely downloadable interactive Wolfram Demonstrations in which the model's parameters can be entered and modified by moving sliders on the screen. Where the degradation is known a priori to follow first- or other fixed-order kinetics, one can use the endpoints method, and in principle the successive points method too, to estimate the reaction's kinetic parameters from considerably fewer AA concentration determinations than in the traditional manner. Freeware to do the calculations by either method has recently been made available on the Internet. Once obtained in this way, the kinetic parameters can be used to reconstruct the entire degradation curves and predict those at different temperature profiles, isothermal or dynamic. Comparison of the predicted concentration ratios with experimental ones offers a way to validate or refute the kinetic model and the assumptions on which it is based.
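A worked sketch of one standard slope-matching conversion between the two temperature models (obtained by equating d(ln k)/dT of both models at the reference temperature); the paper's exact formula may differ:

```python
import numpy as np

R = 8.314                      # J mol^-1 K^-1

def c_from_Ea(Ea, T_ref):
    """Convert the Arrhenius model k = A*exp(-Ea/(R*T)) to the exponential
    model k = k_ref*exp(c*(T - T_ref)) by matching d(ln k)/dT at T_ref (K),
    which gives c = Ea/(R*T_ref^2). One common conversion; an assumption
    here, not necessarily the paper's formula."""
    return Ea / (R * T_ref ** 2)

def Ea_from_c(c, T_ref):
    return c * R * T_ref ** 2

# Example: Ea = 80 kJ/mol near 90 C (363.15 K) gives c ~ 0.073 1/K,
# i.e. the rate roughly doubles for every ~9.5 K rise near T_ref.
c = c_from_Ea(80e3, 363.15)
print(c, np.log(2) / c)
```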
A model for predicting thermal properties of asphalt mixtures from their constituents
NASA Astrophysics Data System (ADS)
Keller, Merlin; Roche, Alexis; Lavielle, Marc
Numerous theoretical and experimental approaches have been developed to predict the effective thermal conductivity of composite materials such as polymers, foams, epoxies, soils and concrete. None of these models has been applied to asphalt concrete. This study attempts to develop a model to predict the thermal conductivity of asphalt concrete from its constituents, which would benefit the asphalt industry by reducing costs and saving time on laboratory testing: laboratory testing would no longer be required if a pavement mix with the desired thermal properties could be created at the design stage by selecting the correct constituents. This thesis investigated six existing predictive models for applicability to asphalt mixtures, and four standard mathematical techniques were used to develop a regression model to predict the effective thermal conductivity. The effective thermal conductivities of 81 asphalt specimens were used as the response variables, and the thermal conductivities and volume fractions of their constituents were used as the predictors. The statistical analyses showed that the measured thermal conductivities of the mixtures are affected by the bitumen and aggregate content, but not by the air content. By contrast, the predicted data for some of the investigated models are highly sensitive to air voids, but not to bitumen and/or aggregate content. Additionally, the comparison of the experimental with analytical data showed that none of the existing models gave satisfactory results; on the other hand, two regression models (Exponential 1* and Linear 3*) are promising for asphalt concrete.
Comparative Analyses of Creep Models of a Solid Propellant
NASA Astrophysics Data System (ADS)
Zhang, J. B.; Lu, B. J.; Gong, S. F.; Zhao, S. P.
2018-05-01
Creep experiments on solid propellant samples under five different stresses were carried out at 293.15 K and 323.15 K. In order to express the creep properties of this solid propellant, five viscoelastic models are considered: the three-parameter solid, three-parameter fluid, four-parameter solid, four-parameter fluid and exponential models. Model parameters were obtained for each stress by nonlinear least-squares fitting, and the fitted models were used to analyze the creep properties. The study shows that the four-parameter solid model best expresses the creep behavior of the propellant samples. However, the three-parameter solid and exponential models cannot reproduce the initial value of the creep process very well, while the modified four-parameter models are found to agree well with the acceleration characteristics of the creep process.
Zhou, Jingwen; Xu, Zhenghong; Chen, Shouwen
2013-04-01
The abiotic degradation of thuringiensin in aqueous solution under different conditions, with a pH range of 5.0-9.0 and a temperature range of 10-40°C, was systematically investigated by an exponential decay model and a radial basis function (RBF) neural network model, respectively. The half-lives of thuringiensin calculated by the exponential decay model ranged from 2.72 d to 16.19 d under the different conditions mentioned above. Furthermore, an RBF model with an accuracy of 0.1 and a SPREAD value of 5 was employed to model the degradation processes. The results showed that the model could simulate and predict the degradation processes well. Both the half-lives and the prediction data showed that thuringiensin is an easily degradable antibiotic, which could be an important factor in the evaluation of its safety.
Sodium 22+ washout from cultured rat cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kino, M.; Nakamura, A.; Hopp, L.
1986-10-01
The washout of Na⁺ isotopes from tissues and cells is quite complex and not well defined. To further gain insight into this process, we have studied ²²Na⁺ washout from cultured Wistar rat skin fibroblasts and vascular smooth muscle cells (VSMCs). In these preparations, ²²Na⁺ washout is described by a general three-exponential function. The exponential factor of the fastest component (k1) and the initial exchange rate constant (kie) of cultured fibroblasts decrease in magnitude in response to incubation in K⁺-deficient medium or in the presence of ouabain and increase in magnitude when the cells are incubated in a Ca²⁺-deficient medium. As the magnitude of the kie declines (in the presence of ouabain) to the level of the exponential factor of the middle component (k2), ²²Na⁺ washout is adequately described by a two-exponential function. When the kie is further diminished (in the presence of both ouabain and phloretin) to the range of the exponential factor of the slowest component (k3), the washout of ²²Na⁺ is apparently monoexponential. Calculations of the cellular Na⁺ concentrations, based on the ²²Na⁺ activity in the cells at the initiation of the washout experiments and the medium specific activity, agree with atomic absorption spectrometry measurements of the cellular concentration of this ion. Thus, all three components of ²²Na⁺ washout from cultured rat cells are of cellular origin. Using the exponential parameters, compartmental analyses of two models (in parallel and in series) with three cellular Na⁺ pools were performed. The results indicate that, independent of the model chosen, the relative size of the largest Na⁺ pool is 92-93% in fibroblasts and approximately 96% in VSMCs. This pool is most likely to represent the cytosol.
Time prediction of failure of a type of lamp by using a general composite hazard rate model
NASA Astrophysics Data System (ADS)
Riaman; Lesmana, E.; Subartini, B.; Supian, S.
2018-03-01
This paper discusses basic survival model estimation to obtain the predicted mean failure time of lamps. The estimation is for a parametric model, the general composite hazard rate model. The random failure time is modeled with the exponential distribution, which has a constant hazard function, as the basis. As an example, we discuss the estimation of a survival model with a composite hazard function built on an exponential base. The model is estimated through its parameters, via the construction of the survival function and the empirical cumulative distribution function. The resulting model is then used to predict the mean failure time for this type of lamp. The data are grouped into several intervals with the mean failure value in each interval, and the mean failure time of the model is calculated on each interval; the p-value obtained from the test result is 0.3296.
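A minimal sketch of the exponential base model underlying the approach: constant hazard, survival function S(t) = exp(-λt), and a mean failure time estimated by maximum likelihood; the data are synthetic placeholders:

```python
import numpy as np

# Failure times of lamps (hours): illustrative synthetic data.
rng = np.random.default_rng(5)
t = rng.exponential(scale=1200.0, size=60)

# For an exponential base model the hazard is constant, h(t) = lambda,
# the survival function is S(t) = exp(-lambda * t), and the maximum
# likelihood estimate of the mean failure time is the sample mean.
lam_hat = 1.0 / t.mean()
print("estimated hazard:", lam_hat)
print("predicted mean failure time:", 1.0 / lam_hat)

# Empirical survival curve, for comparison with the fitted exponential.
ts = np.sort(t)
S_emp = 1.0 - np.arange(1, ts.size + 1) / ts.size
S_fit = np.exp(-lam_hat * ts)
```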
Statistics of Advective Stretching in Three-dimensional Incompressible Flows
NASA Astrophysics Data System (ADS)
Subramanian, Natarajan; Kellogg, Louise H.; Turcotte, Donald L.
2009-09-01
We present a method to quantify kinematic stretching in incompressible, unsteady, isoviscous, three-dimensional flows. We extend the method of Kellogg and Turcotte (J. Geophys. Res. 95:421-432, 1990) to compute the axial stretching/thinning experienced by infinitesimal ellipsoidal strain markers in arbitrary three-dimensional incompressible flows and discuss the differences between our method and the computation of the Finite Time Lyapunov Exponent (FTLE). We use the cellular flow model developed in Solomon and Mezic (Nature 425:376-380, 2003) to study the statistics of stretching in a three-dimensional unsteady cellular flow. We find that the probability density function of the logarithm of normalised cumulative stretching (log S) for a globally chaotic flow, with spatially heterogeneous stretching behavior, is not Gaussian and that the coefficient of variation of the Gaussian distribution does not decrease with time as t^(-1/2). However, it is observed that stretching becomes exponential, log S ~ t, and the probability density function of log S becomes Gaussian when the time dependence of the flow and its three-dimensionality are increased to make the stretching behaviour of the flow more spatially uniform. We term these behaviors weak and strong chaotic mixing, respectively. We find that for strongly chaotic mixing, the coefficient of variation of the Gaussian distribution decreases with time as t^(-1/2). This behavior is consistent with a random multiplicative stretching process.
Markis, Flora; Baudez, Jean-Christophe; Parthasarathy, Rajarathinam; Slatter, Paul; Eshtiaghi, Nicky
2016-09-01
Predicting the flow behaviour, most notably the apparent viscosity and yield stress, of sludge mixtures inside the anaerobic digester is essential because it helps optimize the mixing system in digesters. This paper investigates the rheology of sludge mixtures as a function of digested sludge volume fraction. Sludge mixtures exhibited non-Newtonian, shear-thinning, yield-stress behaviour. The apparent viscosity and yield stress of sludge mixtures prepared at the same total solids concentration were influenced by the interactions within the digested sludge and increased with the volume fraction of digested sludge, as highlighted using the shear compliance and shear modulus of the sludge mixtures. However, when a thickened primary-secondary sludge mixture was mixed with dilute digested sludge, the apparent viscosity and yield stress decreased with increasing volume fraction of digested sludge. This was caused by the dilution effect, which reduced the hydrodynamic and non-hydrodynamic interactions when dilute digested sludge was added. Correlations were developed to predict the apparent viscosity and yield stress of the mixtures as a function of the digested sludge volume fraction and the total solids concentration of the mixtures. The parameters of the correlations can be estimated using the pH of the sludge. The shear and complex moduli were also modelled and followed an exponential relationship with increasing digested sludge volume fraction. Copyright © 2016 Elsevier Ltd. All rights reserved.
Evidence for a scale-limited low-frequency earthquake source process
NASA Astrophysics Data System (ADS)
Chestler, S. R.; Creager, K. C.
2017-04-01
We calculate the seismic moments for 34,264 low-frequency earthquakes (LFEs) beneath the Olympic Peninsula, Washington. LFE moments range from 1.4 × 10^10 to 1.9 × 10^12 N m (Mw = 0.7-2.1). While regular earthquakes follow a power-law moment-frequency distribution with a b value near 1 (the number of events increases by a factor of 10 for each unit increase in Mw), we find that for large LFEs the b value is 6, while for small LFEs it is <1. The magnitude-frequency distribution for all LFEs is best fit by an exponential distribution with a mean seismic moment (characteristic moment) of 2.0 × 10^11 N m. The moment-frequency distributions for each of the 43 LFE families, or spots on the plate interface where LFEs repeat, can also be fit by exponential distributions. An exponential moment-frequency distribution implies a scale-limited source process. We consider two end-member models where LFE moment is limited by (1) the amount of slip or (2) slip area. We favor the area-limited model. Based on the observed exponential distribution of LFE moment and geodetically observed total slip, we estimate that the total area that slips within an LFE family has a diameter of 300 m. Assuming an area-limited model, we estimate the slips, subpatch diameters, stress drops, and slip rates for LFEs during episodic tremor and slip events. We allow for LFEs to rupture smaller subpatches within the LFE family patch. Models with 1-10 subpatches produce slips of 0.1-1 mm, subpatch diameters of 80-275 m, and stress drops of 30-1000 kPa. While one subpatch is often assumed, we believe 3-10 subpatches are more likely.
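For an exponential moment-frequency distribution the characteristic moment is simply the sample mean, and the contrast with a power law is easiest to see in the survival function. A sketch with synthetic moments standing in for the catalog:

    import numpy as np

    rng = np.random.default_rng(3)
    moments = rng.exponential(scale=2.0e11, size=34264)   # N m, synthetic stand-in for the LFE catalog

    m0 = moments.mean()                                   # MLE of the characteristic moment
    m = np.sort(moments)
    ccdf = 1.0 - np.arange(1, m.size + 1) / m.size

    # Exponential model: log(ccdf) is linear in m, with slope -1/m0;
    # a Gutenberg-Richter power law would instead be linear in log(m).
    pred = -m / m0        # compare against np.log(ccdf)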
Water diffusion in silicate glasses: the effect of glass structure
NASA Astrophysics Data System (ADS)
Kuroda, M.; Tachibana, S.
2016-12-01
Water diffusion in silicate melts (glasses) is one of the main controlling factors of magmatism in a volcanic system. Water diffusivity in silicate glasses depends on its own concentration, but the mechanism causing this dependence is not yet fully understood. In order to construct a general model for water diffusion in various silicate glasses, we performed water diffusion experiments in silica glass and proposed a new water diffusion model [Kuroda et al., 2015]. In the model, water diffusivity is controlled by the concentration of both the main diffusing species (i.e. molecular water) and the diffusion pathways, which are determined by the concentrations of hydroxyl groups and network-modifier cations. The model explains well the water diffusivity in various silicate glasses, from silica glass to basalt glass. However, the pre-exponential factors of water diffusivity in various glasses show five orders of magnitude of variation, although the pre-exponential factor should ideally represent the jump frequency and jump distance of molecular water and show a much smaller variation. Here, we attribute the large variation of pre-exponential factors to a glass-structure dependence of the activation energy for molecular water diffusion. It is known that the activation energy depends on the water concentration [Nowak and Behrens, 1997]. The concentration of hydroxyls, which cut the Si-O-Si network in the glass structure, increases with water concentration, lowering the activation energy for water diffusion, probably due to a more fragmented structure. Network-modifier cations are likely to play the same role as water. Taking the effect of glass structure into account, we find that the variation of pre-exponential factors of water diffusivity in silicate glasses can be much smaller than five orders of magnitude, implying that the diffusion of molecular water in silicate glasses is controlled by the same atomic process.
Surface-water radon-222 distribution along the west-central Florida shelf
Smith, C.G.; Robbins, L.L.
2012-01-01
In February 2009 and August 2009, the spatial distribution of radon-222 in surface water was mapped along the west-central Florida shelf as a collaboration between the Response of Florida Shelf Ecosystems to Climate Change project and a U.S. Geological Survey Mendenhall Research Fellowship project. This report summarizes the surface distribution of radon-222 from the two cruises and evaluates potential physical controls on radon-222 fluxes. Radon-222 is an inert gas produced overwhelmingly in sediment and has a short half-life of 3.8 days; activities in surface water ranged between 30 and 170 becquerels per cubic meter. Overall, radon-222 activities were enriched in nearshore surface waters relative to offshore waters. Dilution in offshore waters is expected to be the cause of the low offshore activities. While thermal stratification of the water column during the August survey may explain higher radon-222 activities relative to the February survey, radon-222 activity and integrated surface-water inventories decreased exponentially from the shoreline during both cruises. By estimating radon-222 evasion by wind from nearby buoy data and accounting for internal production from dissolved radium-226, its radiogenic long-lived parent, a simple one-dimensional model was implemented to determine the roles that offshore mixing, benthic influx, and decay have on the distribution of excess radon-222 inventories along the west Florida shelf. For multiple statistically based boundary-condition scenarios (first quartile, median, third quartile, and maximum radon-222 inshore of 5 kilometers), the cross-shelf mixing rates and average nearshore submarine groundwater discharge (SGD) rates varied from 10^0.38 to 10^-3.4 square kilometers per day and 0.00 to 1.70 centimeters per day, respectively. This dataset and modeling provide the first attempt to assess cross-shelf mixing and SGD on such a large spatial scale. Such estimates help scale up SGD rates that are often made at 1- to 10-meter resolution to a coarser but more regionally applicable scale of 1- to 10-kilometer resolution. More stringent analyses and model evaluation are required, but the results and analyses presented in this report provide the foundation for a more rigorous statistical assessment.
Mathematical modeling of drying of pretreated and untreated pumpkin.
Tunde-Akintunde, T Y; Ogunlakin, G O
2013-08-01
In this study, the drying characteristics of pretreated and untreated pumpkin were examined in a hot-air dryer at air temperatures in the range 40-80 °C and a constant air velocity of 1.5 m/s. The drying was observed to occur in the falling-rate period, so liquid diffusion is the main mechanism of moisture movement from the internal regions to the product surface. The experimental drying data for the pumpkin fruits were fitted with the Exponential, General exponential, Logarithmic, Page, Midilli-Kucuk and Parabolic models, and the statistical validity of the tested models was determined by non-linear regression analysis. The Parabolic model had the highest R² and the lowest χ² and RMSE values, indicating that the Parabolic model is the most appropriate to describe the dehydration behavior of pumpkin.
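The model-selection procedure amounts to non-linear least squares plus the three statistics named above. A hedged sketch using the Page model, MR = exp(-k t^n), on invented data; the study's measurements are not reproduced:

    import numpy as np
    from scipy.optimize import curve_fit

    def page(t, k, n):
        # Page thin-layer drying model: moisture ratio MR = exp(-k * t**n)
        return np.exp(-k * t**n)

    rng = np.random.default_rng(4)
    t = np.linspace(0.1, 10.0, 30)                        # drying time, h (synthetic)
    mr = page(t, 0.25, 1.1) + rng.normal(0.0, 0.01, t.size)

    popt, _ = curve_fit(page, t, mr, p0=[0.1, 1.0])
    resid = mr - page(t, *popt)
    ss_res = (resid**2).sum()
    r2 = 1.0 - ss_res / ((mr - mr.mean())**2).sum()       # coefficient of determination
    rmse = np.sqrt(ss_res / t.size)
    chi2 = ss_res / (t.size - len(popt))                  # reduced chi-square, as used in drying studies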
Cosmological models constructed by van der Waals fluid approximation and volumetric expansion
NASA Astrophysics Data System (ADS)
Samanta, G. C.; Myrzakulov, R.
The universe is modeled with a van der Waals fluid approximation, where the van der Waals equation of state contains a single parameter ω_v. Analytical solutions to Einstein's field equations are obtained by assuming that the mean scale factor of the metric follows volumetric exponential and power-law expansions. The model describes a rapid expansion in which the acceleration grows exponentially and the van der Waals fluid behaves like an inflationary fluid in the initial epoch of the universe. The model also shows that, as time progresses, the acceleration remains positive but decreases to zero, and the van der Waals fluid approximation behaves like the present accelerated phase of the universe. Finally, it is observed that the model contains a type-III future singularity for the volumetric power-law expansion.
Ghatage, Dhairyasheel; Chatterji, Apratim
2013-10-01
We introduce a method to obtain steady-state uniaxial exponential-stretching flow of a fluid (akin to extensional flow) in the incompressible limit, which enables us to study the response of suspended macromolecules to the flow by computer simulations. The flow field is defined by v(x) = εx, where v(x) is the velocity of the fluid and ε is the stretch flow gradient. To eliminate the effect of confining boundaries, we produce the flow in a channel of uniform square cross section with periodic boundary conditions in the directions perpendicular to the flow, while simultaneously maintaining uniform fluid density along the length of the tube. In experiments, a perfect elongational flow is obtained only along the axis of symmetry in a four-roll geometry or a filament-stretching rheometer. We can reproduce flow conditions very similar to extensional flow near the axis of symmetry by exponential-stretching flow; we do this by adding the right amounts of fluid along the length of the flow in our simulations. The fluid particles added along the length of the tube are the same fluid particles which exit the channel due to the flow; thus mass conservation is maintained in our model by default. We also suggest a scheme for a possible realization of exponential-stretching flow in experiments. To establish our method as a useful tool to study various soft-matter systems in extensional flow, we embed (i) spherical colloids with excluded-volume interactions (modeled by the Weeks-Chandler potential) as well as (ii) a bead-spring model of star polymers in the fluid, study their responses to the exponential-stretching flow, and show that the responses of macromolecules in the two flows are very similar. We demonstrate that the variation of the number density of the suspended colloids along the direction of flow agrees with our expectations. We also conclude from our study of the deformation of star polymers with different numbers of arms f that the critical flow gradient ε_c at which the star undergoes the coil-to-stretch transition is independent of f for f = 2, 5, 10, and 20.
Evidence of the Exponential Decay Emission in the Swift Gamma-ray Bursts
NASA Technical Reports Server (NTRS)
Sakamoto, T.; Sato, G.; Hill, J.E.; Krimm, H.A.; Yamazaki, R.; Takami, K.; Swindell, S.; Osborne, J.P.
2007-01-01
We present a systematic study of the steep decay emission of gamma-ray bursts (GRBs) observed by the Swift X-Ray Telescope (XRT). In contrast to the analyses in the recent literature, instead of extrapolating the Burst Alert Telescope (BAT) data down into the XRT energy range, we extrapolated the XRT data up to the BAT energy range, 15-25 keV, to produce a composite BAT and XRT light curve. Based on our composite light curve fitting, we have confirmed the existence of an exponential decay component which smoothly connects the BAT prompt data to the XRT steep decay for several GRBs. We also find that the XRT steep decay of some of the bursts can be well fitted by a combination of a power law with an exponential decay model. We suggest that this exponential component may be the emission from an external shock and a sign of the deceleration of the outflow during the prompt phase.
Fast and accurate fitting and filtering of noisy exponentials in Legendre space.
Bao, Guobin; Schild, Detlev
2014-01-01
The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the mean squares sum of the differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on the average, more precise compared to least-squares-fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares-fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic for conventional lowpass filters.
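A simplified sketch of the filtering idea (not the authors' closed-form Legendre-domain parameter retrieval): expand the noisy record in Legendre polynomials, keep only low orders, and fit the exponentials to the reconstruction. All constants below are illustrative.

    import numpy as np
    from numpy.polynomial import legendre as leg
    from scipy.optimize import curve_fit

    t = np.linspace(0.0, 1.0, 500)
    x = 2.0*t - 1.0                              # map time onto [-1, 1], the Legendre domain
    rng = np.random.default_rng(5)
    y = 1.8*np.exp(-t/0.12) + 0.7*np.exp(-t/0.45) + rng.normal(0.0, 0.05, t.size)

    coef = leg.legfit(x, y, deg=12)              # low-dimensional Legendre representation
    y_filt = leg.legval(x, coef)                 # reconstruction: noise removal without phase shift

    biexp = lambda t, a1, t1, a2, t2: a1*np.exp(-t/t1) + a2*np.exp(-t/t2)
    popt, _ = curve_fit(biexp, t, y_filt, p0=[1.0, 0.1, 1.0, 0.5])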
Krogh-cylinder and infinite-domain models for washout of an inert diffusible solute from tissue.
Secomb, Timothy W
2015-01-01
Models based on the Krogh-cylinder concept are developed to analyze the washout from tissue by blood flow of an inert diffusible solute that permeates blood vessel walls. During the late phase of washout, the outflowing solute concentration decays exponentially with time. This washout decay rate is predicted for a range of conditions. A single capillary is assumed to lie on the axis of a cylindrical tissue region. In the classic "Krogh-cylinder" approach, a no-flux boundary condition is applied on the outside of the cylinder. An alternative "infinite-domain" approach is proposed that allows for solute exchange across the boundary, but with zero net exchange. Both models are analyzed, using finite-element and analytical methods. The washout decay rate depends on blood flow rate, tissue diffusivity and vessel permeability of solute, and assumed boundary conditions. At low blood flow rates, the washout rate can exceed the value for a single well-mixed compartment. The infinite-domain approach predicts slower washout decay rates than the Krogh-cylinder approach. The infinite-domain approach overcomes a significant limitation of the Krogh-cylinder approach, while retaining its simplicity. It provides a basis for developing methods to deduce transport properties of inert solutes from observations of washout decay rates. © 2014 John Wiley & Sons Ltd.
Tests of the Grobner Basis Solution for Lightning Ground Flash Fraction Retrieval
NASA Technical Reports Server (NTRS)
Koshak, William; Solakiewicz, Richard; Attele, Rohan
2011-01-01
Satellite lightning imagers such as the NASA Tropical Rainfall Measuring Mission Lightning Imaging Sensor (TRMM/LIS) and the future GOES-R Geostationary Lightning Mapper (GLM) are designed to detect total lightning (ground flashes + cloud flashes). However, there is a desire to discriminate ground flashes from cloud flashes from the vantage point of space since this would enhance the overall information content of the satellite lightning data and likely improve its operational and scientific applications (e.g., in severe weather warning, lightning nitrogen oxides studies, and global electric circuit analyses). A Bayesian inversion method was previously introduced for retrieving the fraction of ground flashes in a set of flashes observed from a satellite lightning imager. The method employed a constrained mixed exponential distribution model to describe the lightning optical measurements. To obtain the optimum model parameters (one of which is the ground flash fraction), a scalar function was minimized by a numerical method. In order to improve this optimization, a Grobner basis solution was introduced to obtain analytic representations of the model parameters that serve as a refined initialization scheme to the numerical optimization. In this study, we test the efficacy of the Grobner basis initialization using actual lightning imager measurements and ground flash truth derived from the national lightning network.
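The unconstrained version of such a model is a mixture of exponentials, whose parameters (including the mixing fraction that plays the role of the ground flash fraction) can be found by expectation-maximization. A generic two-component sketch; the paper's constraints and Grobner basis initialization are not reproduced:

    import numpy as np

    def em_exp_mixture(x, iters=200):
        # EM for p(x) = w/m1 * exp(-x/m1) + (1-w)/m2 * exp(-x/m2), x >= 0
        w, m1, m2 = 0.5, 0.5*x.mean(), 2.0*x.mean()   # crude initialization
        for _ in range(iters):
            p1 = w * np.exp(-x/m1) / m1
            p2 = (1.0 - w) * np.exp(-x/m2) / m2
            r = p1 / (p1 + p2)                        # E-step: responsibilities
            w = r.mean()                              # M-step: weighted updates
            m1 = (r*x).sum() / r.sum()
            m2 = ((1.0 - r)*x).sum() / (1.0 - r).sum()
        return w, m1, m2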
Quantum Loop Expansion to High Orders, Extended Borel Summation, and Comparison with Exact Results
NASA Astrophysics Data System (ADS)
Noreen, Amna; Olaussen, Kåre
2013-07-01
We compare predictions of the quantum loop expansion to (essentially) infinite orders with (essentially) exact results in a simple quantum mechanical model. We find that there are exponentially small corrections to the loop expansion, which cannot be explained by any obvious “instanton”-type corrections. It is not the mathematical occurrence of exponential corrections but their seeming lack of any physical origin which we find surprising and puzzling.
Non-exponential kinetics of unfolding under a constant force.
Bell, Samuel; Terentjev, Eugene M
2016-11-14
We examine the population dynamics of naturally folded globular polymers, with a super-hydrophobic "core" inserted at a prescribed point in the polymer chain, unfolding under an application of external force, as in AFM force-clamp spectroscopy. This acts as a crude model for a large class of folded biomolecules with hydrophobic or hydrogen-bonded cores. We find that the introduction of super-hydrophobic units leads to a stochastic variation in the unfolding rate, even when the positions of the added monomers are fixed. This leads to the average non-exponential population dynamics, which is consistent with a variety of experimental data and does not require any intrinsic quenched disorder that was traditionally thought to be at the origin of non-exponential relaxation laws.
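The mechanism is generic: averaging exponential decays over a distribution of rates yields non-exponential population dynamics. A toy demonstration with an assumed log-normal rate spread:

    import numpy as np

    rng = np.random.default_rng(6)
    rates = rng.lognormal(mean=0.0, sigma=1.0, size=10000)   # stochastic unfolding rates (assumed spread)
    t = np.linspace(0.0, 10.0, 200)

    survival = np.exp(-np.outer(t, rates)).mean(axis=1)      # folded fraction averaged over the ensemble
    # np.log(survival) is visibly non-linear in t, unlike a single-rate exponential decay.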
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haines, Brian M., E-mail: bmhaines@lanl.gov
2015-08-15
In this paper, we perform a series of high-resolution 3D simulations of an OMEGA-type inertial confinement fusion (ICF) capsule implosion with varying levels of initial long-wavelength asymmetries in order to establish the physical energy loss mechanism for observed yield degradation due to long-wavelength asymmetries in symcap (gas-filled capsule) implosions. These simulations demonstrate that, as the magnitude of the initial asymmetries is increased, shell kinetic energy is increasingly retained in the shell instead of being converted to fuel internal energy. This is caused by the displacement of fuel mass away from, and shell material into, the center of the implosion due to complex vortical flows seeded by the long-wavelength asymmetries. These flows are not fully turbulent, but demonstrate mode coupling through non-linear instability development during shell stagnation and late-time shock interactions with the shell interface. We quantify this effect by defining a separation lengthscale between the fuel mass and internal energy and show that this is correlated with yield degradation. The yield degradation shows an exponential sensitivity to the RMS magnitude of the long-wavelength asymmetries. This strong dependence may explain the lack of repeatability frequently observed in OMEGA ICF experiments. In contrast to previously reported mechanisms for yield degradation due to turbulent instability growth, yield degradation is not correlated with mixing between shell and fuel material. Indeed, an integrated measure of mixing decreases with increasing initial asymmetry magnitude due to delayed shock interactions caused by growth of the long-wavelength asymmetries without a corresponding delay in disassembly.
Kim, Sangdan; Han, Suhee
2010-01-01
Most of the literature on designing urban non-point-source management systems assumes that precipitation event depths follow the one-parameter exponential probability density function, in order to reduce the mathematical complexity of the derivation process. However, how the rainfall is represented is the most important factor in analyzing stormwater; a better mathematical expression for the probability distribution of rainfall depths is therefore suggested in this study. In addition, the rainfall-runoff calculation procedure required for deriving a stormwater-capture curve is modified to use the U.S. Natural Resources Conservation Service (NRCS; Washington, D.C.) runoff curve number method, both to account for the nonlinearity of the rainfall-runoff relation and to obtain a more verifiable and representative design curve when applying it to urban drainage areas with complicated land-use characteristics, such as occur in Korea. The stormwater-capture curve developed from rainfall data in Busan, Korea, confirms that the methodology suggested in this study provides a better solution than the pre-existing one.
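The runoff step substituted for the linear rainfall-runoff relation is the standard NRCS curve number formula; a minimal sketch (depths in inches, with the usual initial-abstraction ratio of 0.2):

    def scs_runoff(P, CN):
        # NRCS runoff curve number method; P and the result are depths in inches
        S = 1000.0/CN - 10.0          # potential maximum retention
        Ia = 0.2 * S                  # initial abstraction (standard assumption)
        if P <= Ia:
            return 0.0
        return (P - Ia)**2 / (P - Ia + S)

    # e.g. scs_runoff(3.0, 85) gives the runoff depth for a 3-inch event over CN-85 land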
Asquith, William H.
2014-01-01
The implementation characteristics of two method of L-moments (MLM) algorithms for parameter estimation of the 4-parameter Asymmetric Exponential Power (AEP4) distribution are studied using the R environment for statistical computing. The objective is to validate the algorithms for general application of the AEP4 using R. An algorithm was introduced in the original study of the L-moments for the AEP4. A second or alternative algorithm is shown to have a larger L-moment-parameter domain than the original. The alternative algorithm is shown to provide reliable parameter production and recovery of L-moments from fitted parameters. A proposal is made for AEP4 implementation in conjunction with the 4-parameter Kappa distribution to create a mixed-distribution framework encompassing the joint L-skew and L-kurtosis domains. The example application provides a demonstration of pertinent algorithms with L-moment statistics and two 4-parameter distributions (AEP4 and the Generalized Lambda) for MLM fitting to a modestly asymmetric and heavy-tailed dataset using R.
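The study itself is carried out in R; purely for illustration, the first step of any method-of-L-moments fit, computing unbiased sample L-moments from data via probability-weighted moments, looks like this in Python:

    import numpy as np

    def sample_lmoments(data):
        # Unbiased sample L-moments l1..l4 from probability-weighted moments b0..b3
        x = np.sort(np.asarray(data, dtype=float))
        n = x.size
        j = np.arange(1, n + 1)
        b = [x.mean()]
        for r in (1, 2, 3):
            w = np.ones(n)
            for k in range(r):
                w *= (j - 1.0 - k) / (n - 1.0 - k)    # zero weight for the first r order statistics
            b.append((w * x).mean())
        b0, b1, b2, b3 = b
        return (b0,
                2*b1 - b0,
                6*b2 - 6*b1 + b0,
                20*b3 - 30*b2 + 12*b1 - b0)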
Anosov C-systems and random number generators
NASA Astrophysics Data System (ADS)
Savvidy, G. K.
2016-08-01
We further develop our previous proposal to use hyperbolic Anosov C-systems to generate pseudorandom numbers and to use them for efficient Monte Carlo calculations in high energy particle physics. All trajectories of hyperbolic dynamical systems are exponentially unstable, and C-systems therefore have mixing of all orders, a countable Lebesgue spectrum, and a positive Kolmogorov entropy. These exceptional ergodic properties follow from the C-condition introduced by Anosov. This condition defines a rich class of dynamical systems forming an open set in the space of all dynamical systems. An important property of C-systems is that they have a countable set of everywhere dense periodic trajectories and their density increases exponentially with entropy. Of special interest are the C-systems defined on higher-dimensional tori. Such C-systems are excellent candidates for generating pseudorandom numbers that can be used in Monte Carlo calculations. An efficient algorithm was recently constructed that allows generating long C-system trajectories very rapidly. These trajectories have good statistical properties and can be used for calculations in quantum chromodynamics and in high energy particle physics.
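The simplest member of this class is Arnold's cat map, a hyperbolic automorphism of the two-torus; a floating-point sketch conveys the idea, though production generators of this type iterate much larger integer matrices with modular arithmetic to avoid round-off:

    def cat_map_stream(n, x=0.1234, y=0.9876):
        # Arnold's cat map on the unit torus: uniformly hyperbolic (Anosov),
        # so all trajectories are exponentially unstable
        out = []
        for _ in range(n):
            x, y = (2.0*x + y) % 1.0, (x + y) % 1.0
            out.append(x)
        return out

    stream = cat_map_stream(5)    # pseudorandom numbers in [0, 1)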
Global exponential stability of octonion-valued neural networks with leakage delay and mixed delays.
Popa, Călin-Adrian
2018-06-08
This paper discusses octonion-valued neural networks (OVNNs) with leakage delay, time-varying delays, and distributed delays, for which the states, weights, and activation functions belong to the normed division algebra of octonions. The octonion algebra is a nonassociative and noncommutative generalization of the complex and quaternion algebras, but does not belong to the category of Clifford algebras, which are associative. In order to avoid the nonassociativity of the octonion algebra and also the noncommutativity of the quaternion algebra, the Cayley-Dickson construction is used to decompose the OVNNs into 4 complex-valued systems. By using appropriate Lyapunov-Krasovskii functionals, with double and triple integral terms, the free weighting matrix method, and simple and double integral Jensen inequalities, delay-dependent criteria are established for the exponential stability of the considered OVNNs. The criteria are given in terms of complex-valued linear matrix inequalities, for two types of Lipschitz conditions which are assumed to be satisfied by the octonion-valued activation functions. Finally, two numerical examples illustrate the feasibility, effectiveness, and correctness of the theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Koontz, Steve; Atwell, William; Reddell, Brandon; Rojdev, Kristina
2010-01-01
Analysis of both satellite and surface neutron monitor data demonstrates that the widely utilized Exponential model of solar particle event (SPE) proton kinetic energy spectra can seriously underestimate SPE proton flux, especially at the highest kinetic energies. The more recently developed Band model produces better agreement with neutron monitor data for ground level events (GLEs) and is believed to be considerably more accurate at high kinetic energies. Here, we report the results of modeling and simulation studies in which the radiation transport code FLUKA (FLUktuierende KAskade) is used to determine the changes in total ionizing dose (TID) and single-event environments (SEE) behind aluminum, polyethylene, carbon, and titanium shielding masses when the assumed form (i.e., Band or Exponential) of the SPE kinetic energy spectrum is changed. The FLUKA simulations are fully three-dimensional, with an isotropic particle flux incident on a concentric spherical-shell shielding mass and detector structure. The effects are reported for both energetic primary protons penetrating the shield mass and secondary particle showers caused by energetic primary protons colliding with shielding mass nuclei. Our results, in agreement with previous studies, show that use of the Exponential form of the event spectrum can lead to significant underestimates of the dose and single-event environments behind shielding.
Arima model and exponential smoothing method: A comparison
NASA Astrophysics Data System (ADS)
Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri
2013-04-01
This study compares the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making predictions. The comparison focuses on the ability of both methods to make forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, data on the price of crude palm oil (RM/tonne), the exchange rate of the Ringgit Malaysia (RM) against the Great Britain Pound (GBP), and the price of SMR 20 rubber (cents/kg), comprising three different time series, are used in the comparison. The forecasting accuracy of each model is then measured by examining the prediction errors, using the Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model can produce a better prediction for long-term forecasting with limited data sources, but cannot produce a better prediction for a time series with a narrow range from one point to the next, as in the exchange-rate series. Conversely, the Exponential Smoothing Method produces better forecasts for the exchange rate, whose time series has a narrow range from one point to the next, but cannot produce a better prediction for a longer forecasting period.
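Of the two methods, exponential smoothing is simple enough to state in a few lines; a sketch of simple exponential smoothing together with the three error measures used in the comparison (data and alpha invented):

    import numpy as np

    def ses_forecasts(y, alpha):
        # Simple exponential smoothing: one-step-ahead forecasts
        f = [y[0]]
        for obs in y[:-1]:
            f.append(alpha*obs + (1.0 - alpha)*f[-1])
        return np.array(f)

    y = np.array([10.2, 10.8, 10.5, 11.1, 11.6, 11.4, 12.0])   # illustrative series
    f = ses_forecasts(y, alpha=0.3)
    e = y - f
    mse = (e**2).mean()
    mape = np.abs(e / y).mean() * 100.0
    mad = np.abs(e).mean()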
The use of models by ecologist and environmental managers, to inform environmental management and decision-making, has grown exponentially in the past 50 years. Due to logistical, economical and theoretical benefits, model users are frequently transferring preexisting models to n...
Deng, Jie; Fishbein, Mark H; Rigsby, Cynthia K; Zhang, Gang; Schoeneman, Samantha E; Donaldson, James S
2014-11-01
Non-alcoholic fatty liver disease (NAFLD) is the most common cause of chronic liver disease in children. The gold standard for diagnosis is liver biopsy. MRI is a non-invasive imaging method to provide quantitative measurement of hepatic fat content. The methodology is particularly appealing for the pediatric population because of its rapidity and radiation-free imaging techniques. To develop a multi-point Dixon MRI method with multi-interference models (multi-fat-peak modeling and bi-exponential T2* correction) for accurate hepatic fat fraction (FF) and T2* measurements in pediatric patients with NAFLD. A phantom study was first performed to validate the accuracy of the MRI fat fraction measurement by comparing it with the chemical fat composition of the ex-vivo pork liver-fat homogenate. The most accurate model determined from the phantom study was used for fat fraction and T2* measurements in 52 children and young adults referred from the pediatric hepatology clinic with suspected or identified NAFLD. Separate T2* values of water (T2*W) and fat (T2*F) components derived from the bi-exponential fitting were evaluated and plotted as a function of fat fraction. In ten patients undergoing liver biopsy, we compared histological analysis of liver fat fraction with MRI fat fraction. In the phantom study the 6-point Dixon with 5-fat-peak, bi-exponential T2* modeling demonstrated the best precision and accuracy in fat fraction measurements compared with other methods. This model was further calibrated with chemical fat fraction and applied in patients, where similar patterns were observed as in the phantom study that conventional 2-point and 3-point Dixon methods underestimated fat fraction compared to the calibrated 6-point 5-fat-peak bi-exponential model (P < 0.0001). With increasing fat fraction, T2*W (27.9 ± 3.5 ms) decreased, whereas T2*F (20.3 ± 5.5 ms) increased; and T2*W and T2*F became increasingly more similar when fat fraction was higher than 15-20%. Histological fat fraction measurements in ten patients were highly correlated with calibrated MRI fat fraction measurements (Pearson correlation coefficient r = 0.90 with P = 0.0004). Liver MRI using multi-point Dixon with multi-fat-peak and bi-exponential T2* modeling provided accurate fat quantification in children and young adults with non-alcoholic fatty liver disease and may be used to screen at-risk or affected individuals and to monitor disease progress noninvasively.
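The signal model behind such fitting can be sketched as follows; the fat-peak shifts and amplitudes below are illustrative placeholders rather than the study's calibrated values, and a magnitude model is used for brevity. The T2* values echo the magnitudes reported above.

    import numpy as np

    # Illustrative multi-peak fat spectrum (shifts in ppm, relative amplitudes; assumed values)
    shifts_ppm = np.array([-3.80, -3.40, -2.60, -1.94, -0.39])
    rel_amp = np.array([0.09, 0.70, 0.12, 0.04, 0.05])

    def dixon_signal(te, W, F, t2s_w, t2s_f, b0=1.5):
        # Multi-echo signal: water plus multi-peak fat, with separate (bi-exponential) T2* decays
        f_hz = shifts_ppm * 42.58e6 * b0 * 1e-6          # chemical shifts in Hz at field strength b0 (tesla)
        fat_phase = (rel_amp * np.exp(2j*np.pi*np.outer(te, f_hz))).sum(axis=1)
        s = W*np.exp(-te/t2s_w) + F*np.exp(-te/t2s_f)*fat_phase
        return np.abs(s)                                  # magnitude images, as acquired

    te = np.arange(1, 7) * 2.3e-3                         # six echo times, s (illustrative)
    signal = dixon_signal(te, W=0.8, F=0.2, t2s_w=0.028, t2s_f=0.020)
    ff = 0.2 / (0.8 + 0.2)                                # fat fraction F/(W+F)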
NASA Astrophysics Data System (ADS)
Rossi, Stefano; Morgavi, Daniele; Vetere, Francesco; Petrelli, Maurizio; Perugini, Diego
2017-04-01
Keywords: magma mixing, chaotic dynamics, time series experiments. Magma mixing is a petrologic phenomenon recognized as a potential trigger of highly explosive eruptions, and its evidence is commonly observable in natural rocks. Here we attempted to replicate the dynamic conditions of mixing by performing a set of chaotic mixing experiments between shoshonitic and rhyolitic magmas from Vulcano island. Vulcano is the southernmost island of the Aeolian Archipelago (Aeolian Islands, Italy); it is built entirely of volcanic rocks with variable degrees of evolution, ranging from basalt to rhyolite (e.g. Keller 1980; Ellam et al. 1988; De Astis 1995; De Astis et al. 2013), and its magmatic activity dates back to about 120 ky. The last eruption occurred in 1888-1890. The chaotic mixing experiments were performed using the new ChaOtic Magma Mixing Apparatus (COMMA), held at the Department of Physics and Geology, University of Perugia. This new experimental device allows one to track the evolution of the mixing process and the associated modulation of chemical composition between different magmas. Experiments were performed at 1200°C and atmospheric pressure with a viscosity ratio higher than three orders of magnitude. The experimental protocol was chosen to ensure the occurrence of chaotic dynamics in the system, and the run duration was progressively increased (e.g. 10.5 h, 21 h, 42 h). The products of each experiment are crystal-free glasses in which the variation of major elements was investigated along different profiles using an electron microprobe (EMPA) at the Institut für Mineralogie, Leibniz Universität Hannover (Germany). The efficiency of the mixing process is estimated by calculating the decrease of concentration variance in time, and it is shown that the variance of major elements decays exponentially. Our results confirm and quantify how different chemical elements homogenize in the melt at differing rates. The mixing structures generated during the experiments are topologically identical to those observed in natural mixed volcanic rocks.
Ouyang, Wenjun; Subotnik, Joseph E
2017-05-07
Using the Anderson-Holstein model, we investigate charge transfer dynamics between a molecule and a metal surface for two extreme cases. (i) With a large barrier, we show that the dynamics follow a single exponential decay as expected; (ii) without any barrier, we show that the dynamics are more complicated. On the one hand, if the metal-molecule coupling is small, single exponential dynamics persist. On the other hand, when the coupling between the metal and the molecule is large, the dynamics follow a biexponential decay. We analyze the dynamics using the Smoluchowski equation, develop a simple model, and explore the consequences of biexponential dynamics for a hypothetical cyclic voltammetry experiment.
On the performance of exponential integrators for problems in magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Einkemmer, Lukas; Tokman, Mayya; Loffeld, John
2017-02-01
Exponential integrators have been introduced as an efficient alternative to explicit and implicit methods for integrating large stiff systems of differential equations. Over the past decades these methods have been studied theoretically and their performance was evaluated using a range of test problems. While the results of these investigations showed that exponential integrators can provide significant computational savings, the research on validating this hypothesis for large scale systems and understanding what classes of problems can particularly benefit from the use of the new techniques is in its initial stages. Resistive magnetohydrodynamic (MHD) modeling is widely used in studying large scale behavior of laboratory and astrophysical plasmas. In many problems numerical solution of MHD equations is a challenging task due to the temporal stiffness of this system in the parameter regimes of interest. In this paper we evaluate the performance of exponential integrators on large MHD problems and compare them to a state-of-the-art implicit time integrator. Both the variable and constant time step exponential methods of EPIRK-type are used to simulate magnetic reconnection and the Kelvin-Helmholtz instability in plasma. Performance of these methods, which are part of the EPIC software package, is compared to the variable time step variable order BDF scheme included in the CVODE (part of SUNDIALS) library. We study performance of the methods on parallel architectures and with respect to magnitudes of important parameters such as Reynolds, Lundquist, and Prandtl numbers. We find that the exponential integrators provide superior or equal performance in most circumstances and conclude that further development of exponential methods for MHD problems is warranted and can lead to significant computational advantages for large scale stiff systems of differential equations such as MHD.
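For readers unfamiliar with the class of methods being benchmarked, the lowest-order scheme, exponential Euler for u' = Au + g(u), can be sketched in a few lines (dense linear algebra only; the EPIRK methods in the paper use Krylov approximations for large systems):

    import numpy as np
    from scipy.linalg import expm, solve

    def exponential_euler(A, g, u0, h, n_steps):
        # u_{n+1} = exp(hA) u_n + h * phi1(hA) g(u_n),  with phi1(z) = (e^z - 1)/z
        E = expm(h*A)
        P = solve(h*A, E - np.eye(A.shape[0]))     # phi1(hA); assumes A is nonsingular
        u = np.array(u0, dtype=float)
        out = [u.copy()]
        for _ in range(n_steps):
            u = E @ u + h * (P @ g(u))
            out.append(u.copy())
        return np.array(out)

    # e.g. a stiff linear part plus a mild nonlinearity:
    A = np.array([[-100.0, 1.0], [0.0, -2.0]])
    traj = exponential_euler(A, lambda u: np.array([0.0, np.sin(u[0])]), [1.0, 1.0], h=0.05, n_steps=200)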
NASA Astrophysics Data System (ADS)
Li, Zhe; Xiao, Yan; Yang, Jixiang; Li, Chao; Gao, Xia; Guo, Jinsong
2017-11-01
Turbulent mixing, in particular on small scales, affects the growth of microalgae by changing diffusive sublayers and regulating the nutrient fluxes of cells. We tested the nutrient-flux hypothesis by evaluating the cellular stoichiometry and phosphorus storage of microalgae under different turbulent mixing conditions. Aphanizomenon flos-aquae was cultivated in stirred batch reactors with turbulent dissipation rates ranging from 0.00151 m²/s³ to 0.05058 m²/s³, the latter being the highest range observed in natural aquatic systems. Samples were taken in the exponential growth phase and compared with samples taken when the reactor was completely stagnant. The results indicate that, within a certain range, turbulent mixing stimulates the growth of A. flos-aquae; an inhibitory effect on growth rate was observed at the higher range. Photosynthetic activity, in terms of the maximum effective quantum yield of PSII (the ratio F_v/F_m) and cellular chlorophyll a, did not change significantly in response to turbulence. However, the Chl a/C mass ratio and the C/N molar ratio showed a unimodal response along the gradient of turbulent mixing, similar to the growth rate. Moreover, we found that increases in turbulent mixing might stimulate respiration rates, which might lead to the use of polyphosphate for the synthesis of cellular constituents. More research is required to test the hypothesis that turbulent mixing changes the diffusive sublayer, regulating the nutrient flux of cells.
NASA Astrophysics Data System (ADS)
Schneider, Markus P. A.
This dissertation contributes to two areas in economics: the understanding of the distribution of earned income and the Bayesian analysis of distributional data. Recently, physicists claimed that the distribution of earned income is exponential (see Yakovenko, 2009). The first chapter explores the perspective that the economy is a statistical mechanical system and considers critically the implication for labor market outcomes. The robustness of the empirical results that led to the physicists' claims, the significance of the exponential distribution in statistical mechanics, and the case for a conservation law in economics are discussed. The conclusion reached is that the physicists' conception of the economy is too narrow even within their chosen framework, but that their overall approach is insightful. The dual labor market theory of segmented labor markets is invoked to understand why the observed distribution may be a mixture of distributional components, corresponding to the different generating mechanisms described in Reich et al. (1973). The application of informational entropy in chapter II connects this work to Bayesian analysis and maximum entropy econometrics. The analysis follows E. T. Jaynes's treatment of Wolf's dice data, but is applied to the distribution of earned income based on CPS data. The results are calibrated to account for rounded survey responses using a simple simulation, and respond to the graphical analyses by physicists. The results indicate that neither the income distribution of all respondents nor that of the subpopulation used by physicists appears to be exponential. The empirics do support the claim that a mixture with exponential and log-normal distributional components fits the data. In the final chapter, a log-linear model is used to fit the exponential to the earned income distribution. Separating the CPS data by gender and marital status reveals that the exponential is only an appropriate model for a limited number of subpopulations, namely the never married and women. The estimated parameter for never-married men's incomes is significantly different from the parameter estimated for never-married women, implying that either the combined distribution is not exponential or that the individual distributions are not exponential. However, it substantiates the existence of a persistent gender income gap among the never-married. References: Reich, M., D. M. Gordon, and R. C. Edwards (1973). A Theory of Labor Market Segmentation. Quarterly Journal of Economics 63, 359-365. Yakovenko, V. M. (2009). Econophysics, Statistical Mechanics Approach to. In R. A. Meyers (Ed.), Encyclopedia of Complexity and System Science. Springer.
Effective equilibrium picture in the xy model with exponentially correlated noise
NASA Astrophysics Data System (ADS)
Paoluzzi, Matteo; Marconi, Umberto Marini Bettolo; Maggi, Claudio
2018-02-01
We study the effect of exponentially correlated noise on the xy model in the limit of small correlation time, discussing the order-disorder transition in the mean field and the topological transition in two dimensions. We map the steady states of the nonequilibrium dynamics into an effective equilibrium theory. In the mean field, the critical temperature increases with the noise correlation time τ, indicating that memory effects promote ordering. This finding is confirmed by numerical simulations. The topological transition temperature in two dimensions remains untouched. However, finite-size effects induce a crossover in the vortices proliferation that is confirmed by numerical simulations.
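The noise in question is the Ornstein-Uhlenbeck process; its exact update rule makes a sketch short (parameters illustrative):

    import numpy as np

    def ou_noise(n, tau, dt=0.01, sigma=1.0, seed=7):
        # Stationary Gaussian noise with <eta(t) eta(t')> = sigma^2 * exp(-|t - t'|/tau)
        rng = np.random.default_rng(seed)
        a = np.exp(-dt/tau)
        eta = np.empty(n)
        eta[0] = rng.normal(0.0, sigma)
        for i in range(1, n):
            eta[i] = a*eta[i-1] + sigma*np.sqrt(1.0 - a*a)*rng.normal()
        return eta

    # tau -> 0 recovers white noise; increasing tau strengthens the memory effects discussed above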
Predicting the apparent viscosity and yield stress of digested and secondary sludge mixtures.
Eshtiaghi, Nicky; Markis, Flora; Zain, Dwen; Mai, Kiet Hung
2016-05-15
The legal banning of conventional sludge disposal methods such as landfill has led to a global movement towards sustainable sludge management strategies. Reusing sludge for energy production (biogas) through anaerobic digestion can provide a sustainable solution. However, for the optimum performance of digesters with minimal energy input, operating conditions must be regulated in accordance with the rheological characteristics of the sludge. Because it cannot be assumed that only secondary sludge enters the anaerobic digesters, the impact of variations in the solids concentration and volume fraction of each sludge type must be investigated to understand how the apparent viscosity and yield stress of the secondary and digested sludge mixture inside the digester change. In this study, five different total solids concentrations of secondary and digested sludge were mixed at digested sludge volume fractions ranging from 0 to 1. It was found that if secondary sludge was mixed with digested sludge at the same total solids concentration, the apparent viscosity and yield stress of the mixture increased exponentially with increasing volume fraction of digested sludge. However, if secondary sludge was added to digested sludge with a different solids concentration, the apparent viscosity and yield stress of the resulting mixed sludge were controlled by the more concentrated sludge regardless of its type. Semi-empirical correlations were proposed to predict the apparent viscosity and yield stress of the mixed digested and secondary sludge. A master curve was also developed to predict the flow behaviour of sludge mixtures regardless of the total solids concentration and volume fraction of each sludge type, within the studied solids concentration range of 1.4 to 7% TS. This model can be used for digester optimization and design by predicting the rheology of the sludge mixture inside the digester. Copyright © 2016 Elsevier Ltd. All rights reserved.
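The exponential dependence reported here suggests a simple fitting recipe: regress the logarithm of the measured yield stress (or apparent viscosity) on the digested sludge volume fraction. A sketch with invented data, not the study's measurements:

    import numpy as np

    phi = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])        # digested sludge volume fraction
    tau_y = np.array([2.1, 3.0, 4.5, 6.4, 9.6, 14.0])     # yield stress, Pa (illustrative)

    b, ln_tau0 = np.polyfit(phi, np.log(tau_y), 1)        # ln(tau_y) = ln(tau0) + b*phi
    tau0 = np.exp(ln_tau0)
    predict = lambda phi: tau0 * np.exp(b * phi)          # exponential correlation in phi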
Proportional Feedback Control of Energy Intake During Obesity Pharmacotherapy.
Hall, Kevin D; Sanghvi, Arjun; Göbel, Britta
2017-12-01
Obesity pharmacotherapies result in an exponential time course for energy intake whereby large early decreases dissipate over time. This pattern of declining drug efficacy to decrease energy intake results in a weight loss plateau within approximately 1 year. This study aimed to elucidate the physiology underlying the exponential decay of drug effects on energy intake. Placebo-subtracted energy intake time courses were examined during long-term obesity pharmacotherapy trials for 14 different drugs or drug combinations within the theoretical framework of a proportional feedback control system regulating human body weight. Assuming each obesity drug had a relatively constant effect on average energy intake and did not affect other model parameters, our model correctly predicted that long-term placebo-subtracted energy intake was linearly related to early reductions in energy intake according to a prespecified equation with no free parameters. The simple model explained about 70% of the variance between drug studies with respect to the long-term effects on energy intake, although a significant proportional bias was evident. The exponential decay over time of obesity pharmacotherapies to suppress energy intake can be interpreted as a relatively constant effect of each drug superimposed on a physiological feedback control system regulating body weight. © 2017 The Obesity Society.
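The control-system interpretation can be made concrete with a short simulation: a constant drug effect on appetite opposed by proportional feedback from lost weight produces exactly an exponential decay of the placebo-subtracted intake. The parameter values below are assumptions of this sketch, not the paper's estimates:

    rho = 7700.0      # kcal per kg of body-weight change (rule-of-thumb energy density)
    k_fb = 95.0       # feedback gain: extra intake per kg of weight lost, kcal/day (assumed)
    drug = 500.0      # constant drug effect on appetite, kcal/day (illustrative)

    x, dEI = 0.0, []                      # x = weight change from baseline, kg
    for day in range(730):
        intake = -drug - k_fb * x         # placebo-subtracted energy intake
        x += intake / rho                 # one-day Euler step of weight change
        dEI.append(intake)
    # dEI decays exponentially with time constant rho/k_fb (about 80 days here),
    # so weight plateaus within roughly a year.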
Prize to a Faculty Member for Research in an Undergraduate: Chaotic mixing and front propagation
NASA Astrophysics Data System (ADS)
Solomon, Tom
2014-03-01
We present results from a series of experiments, all done with undergraduate students, on chaotic fluid mixing and the effects of fluid flows on the behavior of reaction systems. Simple, well-ordered laminar fluid flows can give rise to fluid mixing with complexity far beyond that of the underlying flow, with tracers that separate exponentially in time and invariant manifolds that act as barriers to transport. Recently, we have studied how fluid mixing affects the propagation of reaction fronts in a flow. This is an issue with applications to a wide range of systems including microfluidic chemical reactors, blooms of phytoplankton in the oceans, and the spreading of a disease in a moving population. To analyze and predict the behavior of the fronts, we generalize tools developed to describe passive mixing. In particular, the concept of an invariant manifold is expanded to account for reactive burning. "Burning invariant manifolds" (BIMs) are predicted and measured experimentally as structures in the flow that act as one-way barriers that block the motion of reaction fronts. We test these ideas experimentally in three fluid flows: (a) a chain of alternating vortices; (b) an extended, spatially random pattern of vortices; and (c) a time-independent, three-dimensional, nested vortex flow. The reaction fronts are produced chemically with variations of the well-known Belousov-Zhabotinsky reaction. Supported by Research Corporation and the National Science Foundation.
The Mass-dependent Star Formation Histories of Disk Galaxies: Infall Model Versus Observations
NASA Astrophysics Data System (ADS)
Chang, R. X.; Hou, J. L.; Shen, S. Y.; Shu, C. G.
2010-10-01
We introduce a simple model to explore the star formation histories of disk galaxies. We assume that the disk originates and grows by continuous gas infall. The gas infall rate is parameterized by a Gaussian formula with one free parameter: the infall-peak time t_p. The Kennicutt star formation law is adopted to describe how much cold gas turns into stars, and the gas outflow process is also considered in our model. We find that, at a given galactic stellar mass M_*, a model adopting a late infall-peak time t_p results in blue colors, low metallicity, a high specific star formation rate (SFR), and a high gas fraction, while the gas outflow rate mainly influences the gas-phase metallicity and the star formation efficiency mainly influences the gas fraction. Motivated by the local observed scaling relations, we "construct" a mass-dependent model by assuming that low-mass galaxies have a later infall-peak time t_p and a larger gas outflow rate than massive systems. It is shown that this model agrees not only with the local observations, but also with the observed correlations between the specific SFR (SFR/M_*) and the galactic stellar mass M_* at intermediate redshifts z < 1. A comparison between the Gaussian-infall model and the exponential-infall model is also presented: the exponential-infall model predicts a higher SFR at early stages and a lower SFR later than Gaussian infall. Our results suggest that the Gaussian infall rate may be more reasonable than the exponential infall rate in describing the gas cooling process, especially for low-mass systems.
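The contrast drawn in the last sentences is easy to visualize: normalize both infall laws to the same accreted mass, feed each into a Kennicutt-type star formation law, and compare the SFR histories. All numbers below are illustrative, not the paper's calibration:

    import numpy as np

    t = np.linspace(0.0, 13.5, 2000)                      # Gyr
    dt = t[1] - t[0]
    tp, width, tau = 4.0, 2.0, 4.0                        # infall-peak time, Gaussian width, e-folding time

    f_gauss = np.exp(-(t - tp)**2 / (2.0*width**2))
    f_exp = np.exp(-t / tau)
    f_gauss /= np.trapz(f_gauss, t)                       # same total accreted mass for both laws
    f_exp /= np.trapz(f_exp, t)

    def sfr_history(infall, eps=0.25):
        g, sfr = 0.0, []
        for fi in infall:
            s = eps * g**1.4                              # Kennicutt law, arbitrary surface-density units
            g = max(g + (fi - s)*dt, 0.0)
            sfr.append(s)
        return np.array(sfr)

    # sfr_history(f_exp) rises earlier and falls lower at late times than sfr_history(f_gauss)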
NASA Astrophysics Data System (ADS)
Al Mashwood, Abdullah; Predoi-Cross, Adriana; Devi, V. Malathy; Rozario, Hoimonti; Billinghurst, Brant
2018-06-01
Pure CO2 spectra recorded at room temperature and different pressures (0.2-140 Torr) have been analyzed with the help of a fitting routine that takes into account asymmetries arising in the spectral lines due to pressure-induced effects such as line mixing. The fitting procedure used in this study allows one to adjust the ro-vibrational constants for the band rather than fitting individual line parameters. These constrained parameters greatly reduce the measurement uncertainties and allow us to observe the behavior of the weak lines corresponding to high J quantum numbers. We have also calculated line mixing parameters using approximations based on the exponential dependence on the energy difference between the ground and upper vibrational states involved in the ro-vibrational band transitions. The calculated results show good agreement with the experimentally determined parameters.
An adaptive front tracking technique for three-dimensional transient flows
NASA Astrophysics Data System (ADS)
Galaktionov, O. S.; Anderson, P. D.; Peters, G. W. M.; van de Vosse, F. N.
2000-01-01
An adaptive technique, based on both surface stretching and surface curvature analysis, for tracking strongly deforming fluid volumes in three-dimensional flows is presented. The efficiency and accuracy of the technique are demonstrated for two- and three-dimensional flow simulations. For the two-dimensional test example, the results are compared with results obtained using a different tracking approach based on the advection of a passive scalar. Although both techniques find roughly the same structures, the resolution of the front tracking technique is much higher. In the three-dimensional test example, a spherical blob is tracked in a chaotic mixing flow. For this problem, the accuracy of the adaptive tracking is demonstrated by the volume conservation of the advected blob. Adaptive front tracking is suitable for simulation of the initial stages of fluid mixing, where the interfacial area can grow exponentially with time. The efficiency of the algorithm benefits significantly from parallelization of the code.
Exponential Modelling for Mutual-Cohering of Subband Radar Data
NASA Astrophysics Data System (ADS)
Siart, U.; Tejero, S.; Detlefsen, J.
2005-05-01
Increasing resolution and accuracy is an important issue in almost any type of radar sensor application. However, both resolution and accuracy are strongly tied to the available signal bandwidth and energy. Nowadays, several sensors operating in different frequency bands often become available on a sensor platform. It is an attractive goal to exploit the potential of advanced signal modelling and optimization procedures by making proper use of information stemming from different frequency bands at the RF signal level. An important prerequisite for optimal use of signal energy is coherence between all contributing sensors. Coherent multi-sensor platforms are very expensive and are thus not generally available. This paper presents an approach for accurately estimating object radar responses using subband measurements at different RF frequencies. An exponential model approach allows one to compensate for the lack of mutual coherence between independently operating sensors. Mutual coherence is recovered from the a priori information that both sensors have common scattering centers in view. Minimizing the total squared deviation between the measured data and a full-range exponential signal model leads to more accurate pole angles and pole magnitudes than single-band optimization. The model parameters (range and magnitude of point scatterers) after this full-range optimization are also more accurate than the parameters obtained from a commonly used super-resolution procedure (root-MUSIC) applied to the non-coherent subband data.
Henry, S M; Grbić-Galić, D
1991-01-01
Trichloroethylene (TCE)-transforming aquifer methanotrophs were evaluated for the influence of TCE oxidation toxicity and the effect of reductant availability on TCE transformation rates during methane starvation. TCE oxidation at relatively low (6 mg liter⁻¹) TCE concentrations significantly reduced subsequent methane utilization in the mixed and pure cultures tested and reduced the number of viable cells in the pure culture Methylomonas sp. strain MM2 by an order of magnitude. Perchloroethylene, tested at the same concentration, had no effect on the cultures. Neither the TCE itself nor the aqueous intermediates were responsible for the toxic effect, and it is suggested that TCE oxidation toxicity may have resulted from reactive intermediates that attacked cellular macromolecules. During starvation, all methanotrophs tested exhibited a decline in TCE transformation rates, and this decline followed exponential decay. Formate, provided as an exogenous electron donor, increased TCE transformation rates in Methylomonas sp. strain MM2, but not in mixed culture MM1 or the unidentified isolate CSC-1. Mixed culture MM2 did not transform TCE after 15 h of starvation, but mixed cultures MM1 and MM3 did. The methanotrophs in mixed cultures MM1 and MM3, and the unidentified isolate CSC-1 that was isolated from mixed culture MM1, contained lipid inclusions, whereas the methanotrophs of mixed culture MM2 and Methylomonas sp. strain MM2 did not. It is proposed that lipid storage granules serve as an endogenous source of electrons for TCE oxidation during methane starvation.
On the nature of dissipative Timoshenko systems in light of the second spectrum of frequency
NASA Astrophysics Data System (ADS)
Almeida Júnior, D. S.; Ramos, A. J. A.
2017-12-01
In the present work, we prove that there exists a relation between a physical inconsistency known as the second spectrum of frequency (or non-physical spectrum) and the exponential decay of a dissipative Timoshenko system in which the damping mechanism acts on the angle of rotation. The so-called second spectrum is addressed in the stabilization scenario and, in particular, we show that the second spectrum of the classical Timoshenko model can be truncated by taking a damping mechanism. We also show that dissipative Timoshenko-type systems which are free of the second spectrum [based on important physical and historical observations made by Elishakoff (Advances in mathematical modeling and experimental methods for materials and structures, solid mechanics and its applications, Springer, Berlin, pp 249-254, 2010), Elishakoff et al. (ASME Am Soc Mech Eng Appl Mech Rev 67(6):1-11, 2015) and Elishakoff et al. (Int J Solids Struct 109:143-151, 2017)] are exponentially stable for all values of the coefficients of the system. In this direction, we provide physical explanations of why weakly dissipative Timoshenko systems decay exponentially under equality between the velocities of wave propagation, as proved in pioneering works by Soufyane (C R Acad Sci 328(8):731-734, 1999) and by Muñoz Rivera and Racke (Discrete Contin Dyn Syst B 9:1625-1639, 2003). Therefore, the second spectrum of the classical Timoshenko beam model plays an important role in explaining some results on exponential decay, and our investigations suggest paying attention to the eventual consequences of this spectrum in the stabilization setting for dissipative Timoshenko-type systems.
Speranza, B; Bevilacqua, A; Mastromatteo, M; Sinigaglia, M; Corbo, M R
2010-08-01
The objective of the current study was to examine the interactions between Pseudomonas putida and Escherichia coli O157:H7 in coculture studies on fish-burgers packed in air and under different modified atmospheres (30:40:30 O2:CO2:N2, 5:95 O2:CO2 and 50:50 O2:CO2) throughout storage at 8 degrees C. The lag-exponential model was applied to describe microbial growth. To give a quantitative measure of the microbial interactions, two simple parameters were developed: the combined interaction index (CII) and the partial interaction index (PII). Under air, the interaction was significant (P < 0.05) only within the exponential growth phase (CII, 1.72), whereas under the modified atmospheres the interactions were highly significant (P < 0.001) and occurred in both the exponential and the stationary phase (CII ranged from 0.33 to 1.18). PII values for E. coli O157:H7 were lower than those calculated for Ps. putida. The interactions occurring in the system affected both the E. coli O157:H7 and pseudomonad subpopulations, and the packaging atmosphere proved to be a key element. The article provides useful information on the interactions between E. coli O157:H7 and Ps. putida on fish-burgers. The proposed indices successfully describe the competitive growth of both micro-organisms, also giving a quantitative measure of a qualitative phenomenon.
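A lag-exponential growth curve of the kind fitted here keeps the count at its initial level through the lag phase and then grows exponentially (linearly in log10 units) up to the stationary level. The Python sketch below makes this concrete with illustrative parameter values of our own choosing; the CII and PII definitions are not reproduced, since the abstract does not give them.

```python
# Sketch of a lag-exponential growth curve: log10 counts stay at logn0 during
# the lag phase, then increase linearly (exponential growth) and are capped
# at the stationary-phase level lognmax. Parameter values are illustrative.
import numpy as np

def lag_exponential(t, logn0, lognmax, rate, lag):
    """log10 CFU/g versus time: flat lag, linear growth after, capped top."""
    logn = logn0 + rate * np.clip(t - lag, 0.0, None)
    return np.minimum(logn, lognmax)

t = np.linspace(0.0, 12.0, 121)   # storage time (days)
curve = lag_exponential(t, logn0=3.0, lognmax=9.0, rate=1.2, lag=2.0)
print(curve[::20])                # lag plateau, growth ramp, stationary cap
```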
NASA Astrophysics Data System (ADS)
Brown, J. S.; Shaheen, S. E.
2018-04-01
Disorder in organic semiconductors has made it challenging to achieve performance gains; this is a result of the many competing and often nuanced mechanisms affecting charge transport. In this article, we attempt to illuminate one of these mechanisms in the hope of aiding experimentalists in exceeding current performance thresholds. Using a heuristic exponential function, energetic correlation has been added to the Gaussian disorder model (GDM). The new model is grounded in the concept that energetic correlations can arise in materials without strong dipoles or dopants, for instance as a result of an incomplete crystal formation process. The proposed correlation is used to explain the exponential tail states often observed in these materials; it also better captures the field dependence of the carrier mobility, commonly known as the Poole-Frenkel dependence, compared to the GDM. Investigation of simulated current transients shows that exponential tail states do not necessitate Montroll and Scher fits, which take the form of two distinct power-law curves sharing a common constant in their exponents and appear as straight lines when the current transient is plotted on a log-log scale. Such fits have typically been found appropriate for amorphous silicon and other disordered materials that display exponential tail states. Furthermore, we observe that the proposed correlation function leads to domains of energetically similar sites separated by boundaries where the site energies deviate stochastically. These boundary sites are found to be the source of the extended exponential tail states and are responsible for high charge visitation frequency, which may be associated with the molecular turnover number and ultimately the material stability.
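One simple way to realize exponential energetic correlation of the kind described here is to smooth white Gaussian site energies with an exp(-r/l) kernel on a lattice, as the Python sketch below does via FFT convolution. The kernel form and parameter values are our assumptions for illustration, not the paper's exact construction.

```python
# Sketch: adding exponential energetic correlation to GDM site energies.
# White Gaussian energies are smoothed with an exp(-r/l) kernel so that
# nearby sites become energetically similar. Kernel and parameters are
# illustrative assumptions, not the paper's exact construction.
import numpy as np

rng = np.random.default_rng(1)
n, sigma, corr_len = 128, 0.075, 3.0  # lattice size, disorder (eV), corr. length

white = rng.standard_normal((n, n))
x = np.arange(n)
d = np.minimum(x, n - x)              # periodic 1-D distances
r = np.hypot(*np.meshgrid(d, d))      # periodic radial distance on the lattice
kernel = np.exp(-r / corr_len)        # exponential correlation kernel

# FFT convolution, then rescale so the site-energy std equals sigma again.
energies = np.fft.ifft2(np.fft.fft2(white) * np.fft.fft2(kernel)).real
energies *= sigma / energies.std()
print("std (eV):", energies.std(),
      "neighbour corr:",
      np.corrcoef(energies[:, :-1].ravel(), energies[:, 1:].ravel())[0, 1])
```

The smoothed landscape shows exactly the qualitative feature the abstract describes: plateaus of energetically similar sites separated by boundary regions where the energies jump, which such a model would identify as the origin of the extended tail states.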