A demographic study of the exponential distribution applied to uneven-aged forests
Jeffrey H. Gove
2016-01-01
A demographic approach based on a size-structured version of the McKendrick-Von Foerster equation is used to demonstrate a theoretical link between the population size distribution and the underlying vital rates (recruitment, mortality and diameter growth) for the population of individuals whose diameter distribution is negative exponential. This model supports the...
Stretched exponential distributions in nature and economy: "fat tails" with characteristic scales
NASA Astrophysics Data System (ADS)
Laherrère, J.; Sornette, D.
1998-04-01
To account quantitatively for many reported "natural" fat tail distributions in Nature and Economy, we propose the stretched exponential family as a complement to the often-used power law distributions. It has many advantages, among them economy: only two adjustable parameters, each with a clear physical interpretation. Furthermore, it derives from a simple and generic mechanism in terms of multiplicative processes. We show that stretched exponentials describe very well the distributions of radio and light emissions from galaxies, of US GOM OCS oilfield reserve sizes, of World, US and French agglomeration sizes, of country population sizes, of daily Forex US-Mark and Franc-Mark price variations, of Vostok (near the south pole) temperature variations over the last 400,000 years, of the Raup-Sepkoski kill curve, and of citations of the most cited physicists in the world. We also discuss its potential for the distribution of earthquake sizes and fault displacements. We suggest physical interpretations of the parameters and provide a short toolkit of the statistical properties of the stretched exponentials. We also provide a comparison with other distributions, such as the shifted linear fractal, the log-normal, and the recently introduced parabolic fractal distributions.
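A minimal sketch of the two-parameter fit the abstract describes: estimating the stretched-exponential scale x0 and exponent c from an empirical survival function (Python with NumPy/SciPy). The Weibull sample and starting values are synthetic illustrations, not the paper's datasets.

```python
import numpy as np
from scipy.optimize import curve_fit

# Stretched-exponential survival function: P(X > x) = exp(-(x / x0)**c).
def stretched_sf(x, x0, c):
    return np.exp(-((x / x0) ** c))

rng = np.random.default_rng(0)
# Synthetic "sizes": Weibull draws stand in for an empirical sample
# (true x0 = 2.0, true c = 0.7, both illustrative values).
data = 2.0 * rng.weibull(0.7, size=5000)

x = np.sort(data)
sf_emp = 1.0 - np.arange(1, x.size + 1) / (x.size + 1.0)  # empirical survival
params, _ = curve_fit(stretched_sf, x, sf_emp,
                      p0=[1.0, 1.0], bounds=(1e-6, np.inf))
print("fitted x0 = %.3f, c = %.3f" % tuple(params))
```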
The size distribution of Pacific Seamounts
NASA Astrophysics Data System (ADS)
Smith, Deborah K.; Jordan, Thomas H.
1987-11-01
An analysis of wide-beam, Sea Beam and map-count data in the eastern and southern Pacific confirms the hypothesis that the average number of "ordinary" seamounts with summit heights h ≥ H can be approximated by the exponential frequency-size distribution ν(H) = ν₀ exp(−βH). The exponential model, characterized by the single scale parameter β⁻¹, is found to be superior to a power-law (self-similar) model. The exponential model provides a good first-order description of the summit-height distribution over a very broad spectrum of seamount sizes, from small cones (h < 300 m) to tall composite volcanoes (h > 3500 m). The distribution parameters obtained from 157,000 km of wide-beam profiles in the eastern and southern Pacific Ocean are ν₀ = (5.4 ± 0.65) × 10⁻⁹ m⁻² and β = (3.5 ± 0.21) × 10⁻³ m⁻¹, yielding an average of 5400 ± 650 seamounts per million square kilometers, of which 170 ± 17 are greater than one kilometer in height. The exponential distribution provides a reference for investigating the populations of not-so-ordinary seamounts, such as those on hotspot swells and near fracture zones, and seamounts in other ocean basins. If we assume that volcano height is determined by a hydraulic head proportional to the source depth of the magma column, then our observations imply an approximately exponential distribution of source depths. For reasonable values of magma and crustal densities, a volcano with the characteristic height β⁻¹ = 285 m has an apparent source depth on the order of the crustal thickness.
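The fitted model is easy to evaluate directly. A minimal sketch (Python/NumPy) reproducing the quoted counts from the abstract's parameter values; the helper function name is ours.

```python
import numpy as np

# nu(H) = nu0 * exp(-beta * H): expected areal density of seamounts with
# summit height >= H. Parameter values are those quoted in the abstract.
nu0, beta = 5.4e-9, 3.5e-3        # m^-2 and m^-1
area = 1e12                       # one million km^2, expressed in m^2

def count_above(H):
    """Expected number of seamounts taller than H metres per 10^6 km^2."""
    return nu0 * np.exp(-beta * H) * area

print(count_above(0.0))      # ~5400 seamounts of any size
print(count_above(1000.0))   # ~160-170 seamounts taller than one kilometre
```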
Statistical analyses support power law distributions found in neuronal avalanches.
Klaus, Andreas; Yu, Shan; Plenz, Dietmar
2011-01-01
The size distribution of neuronal avalanches in cortical networks has been reported to follow a power law distribution with exponent close to -1.5, which is a reflection of long-range spatial correlations in spontaneous neuronal activity. However, identifying power law scaling in empirical data can be difficult and sometimes controversial. In the present study, we tested the power law hypothesis for neuronal avalanches by using more stringent statistical analyses. In particular, we performed the following steps: (i) analysis of finite-size scaling to identify scale-free dynamics in neuronal avalanches, (ii) model parameter estimation to determine the specific exponent of the power law, and (iii) comparison of the power law to alternative model distributions. Consistent with critical state dynamics, avalanche size distributions exhibited robust scaling behavior in which the maximum avalanche size was limited only by the spatial extent of sampling ("finite size" effect). This scale-free dynamics suggests the power law as a model for the distribution of avalanche sizes. Using both the Kolmogorov-Smirnov statistic and a maximum likelihood approach, we found the slope to be close to -1.5, which is in line with previous reports. Finally, the power law model for neuronal avalanches was compared to the exponential and to various heavy-tail distributions based on the Kolmogorov-Smirnov distance and by using a log-likelihood ratio test. Both the power law distribution without and with exponential cut-off provided significantly better fits to the cluster size distributions in neuronal avalanches than the exponential, the lognormal and the gamma distribution. In summary, our findings strongly support the power law scaling in neuronal avalanches, providing further evidence for critical state dynamics in superficial layers of cortex.
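A sketch of two of the estimation steps the abstract names, on synthetic data: the continuous power-law maximum-likelihood estimator and the Kolmogorov-Smirnov distance (Python/NumPy). The value of xmin and the sample are illustrative, not the avalanche data.

```python
import numpy as np

def powerlaw_mle_alpha(x, xmin):
    """Continuous power-law MLE: p(x) ~ x**(-alpha) for x >= xmin."""
    x = x[x >= xmin]
    return 1.0 + x.size / np.log(x / xmin).sum()

def ks_distance_powerlaw(x, xmin, alpha):
    """K-S distance between the empirical CDF and the fitted power law."""
    x = np.sort(x[x >= xmin])
    cdf_emp = np.arange(1, x.size + 1) / x.size
    cdf_fit = 1.0 - (x / xmin) ** (1.0 - alpha)
    return np.abs(cdf_emp - cdf_fit).max()

rng = np.random.default_rng(1)
# Inverse-CDF sample of p(x) ~ x^-1.5 for x >= 1: x = (1-u)^(-1/(alpha-1)).
x = (1.0 - rng.random(10000)) ** -2.0
alpha = powerlaw_mle_alpha(x, xmin=1.0)
print(alpha, ks_distance_powerlaw(x, 1.0, alpha))   # alpha close to 1.5
```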
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eliazar, Iddo, E-mail: eliazar@post.tau.ac.il
Rank distributions are collections of positive sizes ordered either increasingly or decreasingly. Many decreasing rank distributions, formed by the collective collaboration of human actions, follow an inverse power-law relation between ranks and sizes. This remarkable empirical fact is termed Zipf’s law, and one of its quintessential manifestations is the demography of human settlements — which exhibits a harmonic relation between ranks and sizes. In this paper we present a comprehensive statistical-physics analysis of rank distributions, establish that power-law and exponential rank distributions stand out as optimal in various entropy-based senses, and unveil the special role of the harmonic relation between ranks and sizes. Our results extend the contemporary entropy-maximization view of Zipf’s law to a broader, panoramic, Gibbsian perspective of increasing and decreasing power-law and exponential rank distributions — of which Zipf’s law is one out of four pillars.
Universality in stochastic exponential growth.
Iyer-Biswas, Srividya; Crooks, Gavin E; Scherer, Norbert F; Dinner, Aaron R
2014-07-11
Recent imaging data for single bacterial cells reveal that their mean sizes grow exponentially in time and that their size distributions collapse to a single curve when rescaled by their means. An analogous result holds for the division-time distributions. A model is needed to delineate the minimal requirements for these scaling behaviors. We formulate a microscopic theory of stochastic exponential growth as a Master Equation that accounts for these observations, in contrast to existing quantitative models of stochastic exponential growth (e.g., the Black-Scholes equation or geometric Brownian motion). Our model, the stochastic Hinshelwood cycle (SHC), is an autocatalytic reaction cycle in which each molecular species catalyzes the production of the next. By finding exact analytical solutions to the SHC and the corresponding first passage time problem, we uncover universal signatures of fluctuations in exponential growth and division. The model makes minimal assumptions, and we describe how more complex reaction networks can reduce to such a cycle. We thus expect similar scalings to be discovered in stochastic processes resulting in exponential growth that appear in diverse contexts such as cosmology, finance, technology, and population growth.
NASA Astrophysics Data System (ADS)
Nerantzaki, Sofia; Papalexiou, Simon Michael
2017-04-01
Identifying the distribution tail of a geophysical variable precisely is difficult, or even impossible. First, the tail is the part of the distribution for which the least empirical information is available; second, a universally accepted definition of a tail does not and cannot exist; and third, a tail may change over time due to long-term changes. Unfortunately, the tail is the most important part of the distribution, as it dictates estimates of exceedance probabilities and return periods. Fortunately, based on their tail behavior, probability distributions can be broadly categorized into two major families: sub-exponential (heavy-tailed) and hyper-exponential (light-tailed). This study aims to update the Mean Excess Function (MEF), providing a useful tool for assessing which type of tail better describes empirical data. The MEF is based on the mean value of a variable over a threshold and yields a zero-slope regression line when applied to the exponential distribution. Here, we construct slope confidence intervals for the exponential distribution as functions of sample size. Validation of the method using Monte Carlo techniques on four theoretical distributions covering the major tail cases (Pareto type II, log-normal, Weibull and Gamma) revealed that it performs well, especially for large samples. Finally, the method is used to investigate the behavior of daily rainfall extremes; thousands of rainfall records from all over the world, with sample sizes over 100 years, were examined, revealing that heavy-tailed distributions describe rainfall extremes more accurately.
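A minimal illustration of the MEF diagnostic described above (Python/NumPy, synthetic data): for an exponential sample the mean excess is flat in the threshold, so the fitted slope is near zero. The thresholds and sample size below are illustrative choices.

```python
import numpy as np

def mean_excess(x, thresholds):
    """Mean excess e(u) = E[X - u | X > u]; flat in u for an exponential tail."""
    return np.array([x[x > u].mean() - u for u in thresholds])

rng = np.random.default_rng(2)
x = rng.exponential(scale=10.0, size=20000)      # light (exponential) tail
u = np.quantile(x, np.linspace(0.5, 0.99, 30))   # thresholds in the tail
e = mean_excess(x, u)
slope = np.polyfit(u, e, 1)[0]
print(slope)   # near 0 for exponential; a positive slope signals a heavy tail
```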
NASA Astrophysics Data System (ADS)
Yamada, Yuhei; Yamazaki, Yoshihiro
2018-04-01
This study considered a stochastic model for cluster growth in a Markov process with a cluster size dependent additive noise. According to this model, the probability distribution of the cluster size transiently becomes an exponential or a log-normal distribution depending on the initial condition of the growth. In this letter, a master equation is obtained for this model, and derivation of the distributions is discussed.
Min and Max Exponential Extreme Interval Values and Statistics
ERIC Educational Resources Information Center
Jance, Marsha; Thomopoulos, Nick
2009-01-01
The extreme interval values and statistics (expected value, median, mode, standard deviation, and coefficient of variation) for the smallest (min) and largest (max) values of exponentially distributed variables with parameter λ = 1 are examined for different observation (sample) sizes. An extreme interval value g_a is defined as a…
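For Exp(λ = 1), the min and max statistics have standard closed forms: E[min of n] = 1/n and E[max of n] = Hₙ, the n-th harmonic number. A quick simulation check (Python/NumPy; the sample counts are arbitrary):

```python
import numpy as np

n = 10
rng = np.random.default_rng(3)
samples = rng.exponential(1.0, size=(100000, n))

# Theory for Exp(lambda = 1): E[min] = 1/n, E[max] = H_n = 1 + 1/2 + ... + 1/n.
print(samples.min(axis=1).mean(), 1.0 / n)
print(samples.max(axis=1).mean(), sum(1.0 / k for k in range(1, n + 1)))
```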
The Lunar Rock Size Frequency Distribution from Diviner Infrared Measurements
NASA Astrophysics Data System (ADS)
Elder, C. M.; Hayne, P. O.; Piqueux, S.; Bandfield, J.; Williams, J. P.; Ghent, R. R.; Paige, D. A.
2016-12-01
Knowledge of the rock size frequency distribution on a planetary body is important for understanding its geologic history and for selecting landing sites. The rock size frequency distribution can be estimated by counting rocks in high resolution images, but most bodies in the solar system have limited areas with adequate coverage. We propose an alternative method to derive and map rock size frequency distributions using multispectral thermal infrared data acquired at multiple times during the night. We demonstrate this new technique for the Moon using data from the Lunar Reconnaissance Orbiter (LRO) Diviner radiometer in conjunction with three dimensional thermal modeling, leveraging the differential cooling rates of different rock sizes. We assume an exponential rock size frequency distribution, which has been shown to yield a good fit to rock populations in various locations on the Moon, Mars, and Earth [2, 3] and solve for the best radiance fits as a function of local time and wavelength. This method presents several advantages: 1) unlike other thermally derived rock abundance techniques, it is sensitive to rocks smaller than the diurnal skin depth; 2) it does not result in apparent decrease in rock abundance at night; and 3) it can be validated using images taken at the lunar surface. This method yields both the fraction of the surface covered in rocks of all sizes and the exponential factor, which defines the rate of drop-off in the exponential function at large rock sizes. We will present maps of both these parameters for the Moon, and provide a geological interpretation. In particular, this method reveals rocks in the lunar highlands that are smaller than previous thermal methods could detect. [1] Bandfield J. L. et al. (2011) JGR, 116, E00H02. [2] Golombek and Rapp (1997) JGR, 102, E2, 4117-4129. [3] Cintala, M.J. and K.M. McBride (1995) NASA Technical Memorandum 104804.
2017-01-01
Cell size distribution is highly reproducible, whereas the size of individual cells often varies greatly within a tissue. This is obvious in a population of Arabidopsis thaliana leaf epidermal cells, which ranged from 1,000 to 10,000 μm² in size. Endoreduplication is a specialized cell cycle in which nuclear genome size (ploidy) is doubled in the absence of cell division. Although epidermal cells require endoreduplication to enhance cellular expansion, the issue of whether this mechanism is sufficient for explaining cell size distribution remains unclear due to a lack of quantitative understanding linking the occurrence of endoreduplication with cell size diversity. Here, we addressed this question by quantitatively summarizing ploidy profile and cell size distribution using a simple theoretical framework. We first found that endoreduplication dynamics is a Poisson process through cellular maturation. This finding allowed us to construct a mathematical model to predict the time evolution of a ploidy profile with a single rate constant for endoreduplication occurrence in a given time. We reproduced experimentally measured ploidy profile in both wild-type leaf tissue and endoreduplication-related mutants with this analytical solution, further demonstrating the probabilistic property of endoreduplication. We next extended the mathematical model by incorporating the element that cell size is determined according to ploidy level to examine cell size distribution. This analysis revealed that cell size is exponentially enlarged 1.5 times every endoreduplication round. Because this theoretical simulation successfully recapitulated experimentally observed cell size distributions, we concluded that Poissonian endoreduplication dynamics and exponential size-boosting are the sources of the broad cell size distribution in epidermal tissue. More generally, this study contributes to a quantitative understanding whereby stochastic dynamics generate steady-state biological heterogeneity. PMID:28926847
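A sketch of the abstract's two ingredients, Poisson-distributed endoreduplication rounds and a 1.5× size boost per round (Python/SciPy). The rate, maturation time, and base area below are illustrative stand-ins, not the paper's fitted values.

```python
import numpy as np
from scipy.stats import poisson

# The number of endoreduplication rounds k per cell is modeled as
# Poisson(lam * t); each round multiplies cell size by ~1.5.
lam, t, s0 = 0.8, 3.0, 1000.0    # rate, maturation time, base area (um^2)
k = np.arange(0, 8)
ploidy = 2 * 2**k                # 2C, 4C, 8C, ... after k rounds
p_k = poisson.pmf(k, lam * t)    # predicted ploidy profile
size = s0 * 1.5**k               # exponential size boost per round
print(dict(zip(ploidy, np.round(p_k, 3))))
print(size)                      # spans ~1,000 to ~17,000 um^2, as observed
```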
Exponential blocking-temperature distribution in ferritin extracted from magnetization measurements
NASA Astrophysics Data System (ADS)
Lee, T. H.; Choi, K.-Y.; Kim, G.-H.; Suh, B. J.; Jang, Z. H.
2014-11-01
We developed a direct method to extract the zero-field zero-temperature anisotropy energy barrier distribution of magnetic particles in the form of a blocking-temperature distribution. The key idea is to modify measurement procedures slightly to make nonequilibrium magnetization calculations (including the time evolution of magnetization) easier. We applied this method to the biomagnetic molecule ferritin and successfully reproduced field-cool magnetization by using the extracted distribution. We find that the resulting distribution is more like an exponential type and that the distribution cannot be correlated simply to the widely known log-normal particle-size distribution. The method also allows us to determine the values of the zero-temperature coercivity and Bloch coefficient, which are in good agreement with those determined from other techniques.
Recurrence time statistics for finite size intervals
NASA Astrophysics Data System (ADS)
Altmann, Eduardo G.; da Silva, Elton C.; Caldas, Iberê L.
2004-12-01
We investigate the statistics of recurrences to finite size intervals for chaotic dynamical systems. We find that the typical distribution presents an exponential decay for almost all recurrence times except for a few short times affected by a kind of memory effect. We interpret this effect as being related to the unstable periodic orbits inside the interval. Although it is restricted to a few short times, it changes the whole distribution of recurrences. We show that for systems with strong mixing properties the exponential decay converges to Poissonian statistics when the width of the interval goes to zero. However, we caution that special attention to the size of the interval is required in order to guarantee that the short time memory effect is negligible when one is interested in numerically or experimentally calculated Poincaré recurrence time statistics.
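A quick numerical illustration of interval-recurrence statistics (Python/NumPy), using the chaotic logistic map as a stand-in system; the interval and iteration count are arbitrary choices, not taken from the paper.

```python
import numpy as np

# Recurrence times to a finite interval for the logistic map x -> 4x(1-x);
# the bulk of the return-time distribution decays ~exponentially.
x, times, last = 0.3, [], 0
lo, hi = 0.30, 0.31                      # finite-size target interval
for t in range(1, 1_000_000):
    x = 4.0 * x * (1.0 - x)
    if lo <= x <= hi:
        times.append(t - last)
        last = t
times = np.array(times[1:])              # drop the initial entry time
print(times.mean(), np.quantile(times, [0.5, 0.9, 0.99]))
```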
Droplet size and velocity distributions for spray modelling
NASA Astrophysics Data System (ADS)
Jones, D. P.; Watkins, A. P.
2012-01-01
Methods for constructing droplet size distributions and droplet velocity profiles are examined as a basis for the Eulerian spray model proposed in Beck and Watkins (2002,2003) [5,6]. Within the spray model, both distributions must be calculated at every control volume at every time-step where the spray is present and valid distributions must be guaranteed. Results show that the Maximum Entropy formalism combined with the Gamma distribution satisfy these conditions for the droplet size distributions. Approximating the droplet velocity profile is shown to be considerably more difficult due to the fact that it does not have compact support. An exponential model with a constrained exponent offers plausible profiles.
The Italian primary school-size distribution and the city-size: a complex nexus
NASA Astrophysics Data System (ADS)
Belmonte, Alessandro; di Clemente, Riccardo; Buldyrev, Sergey V.
2014-06-01
We characterize the statistical law according to which Italian primary school-size distributes. We find that the school-size can be approximated by a log-normal distribution, with a fat lower tail that collects a large number of very small schools. The upper tail of the school-size distribution decreases exponentially and the growth rates are distributed with a Laplace PDF. These distributions are similar to those observed for firms and are consistent with a Bose-Einstein preferential attachment process. The body of the distribution features a bimodal shape suggesting some source of heterogeneity in the school organization that we uncover by an in-depth analysis of the relation between school-size and city-size. We propose a novel cluster methodology and a new spatial interaction approach among schools which outline the variety of policies implemented in Italy. Different regional policies are also discussed, shedding light on the relation between policy and geographical features.
Preferential attachment and growth dynamics in complex systems
NASA Astrophysics Data System (ADS)
Yamasaki, Kazuko; Matia, Kaushik; Buldyrev, Sergey V.; Fu, Dongfeng; Pammolli, Fabio; Riccaboni, Massimo; Stanley, H. Eugene
2006-09-01
Complex systems can be characterized by classes of equivalency of their elements defined according to system specific rules. We propose a generalized preferential attachment model to describe the class size distribution. The model postulates preferential growth of the existing classes and the steady influx of new classes. According to the model, the distribution changes from a pure exponential form for zero influx of new classes to a power law with an exponential cut-off form when the influx of new classes is substantial. Predictions of the model are tested through the analysis of a unique industrial database, which covers both elementary units (products) and classes (markets, firms) in a given industry (pharmaceuticals), covering the entire size distribution. The model’s predictions are in good agreement with the data. The paper sheds light on the emergence of the exponent τ≈2 observed as a universal feature of many biological, social and economic problems.
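A minimal simulation sketch of the generalized preferential attachment process described above (Python/NumPy): with probability `influx` a new class of size one appears, otherwise an existing class grows in proportion to its size. The step count and influx probability are illustrative.

```python
import numpy as np

def simulate_classes(steps, influx, seed=0):
    """Preferential growth of existing classes plus a steady influx of new
    classes; small influx gives ~exponential class sizes, larger influx a
    power law with exponential cutoff."""
    rng = np.random.default_rng(seed)
    sizes, total = [1], 1
    for _ in range(steps):
        if rng.random() < influx:
            sizes.append(1)                      # a new class is born
        else:
            # grow an existing class chosen proportionally to its size
            i = rng.choice(len(sizes), p=np.array(sizes) / total)
            sizes[i] += 1
        total += 1
    return np.array(sizes)

print(np.sort(simulate_classes(20000, 0.01))[-10:])   # the largest classes
```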
1994-01-01
Limulus ventral photoreceptors generate highly variable responses to the absorption of single photons. We have obtained data on the size distribution of these responses, derived the distribution predicted from simple transduction cascade models and compared the theory and data. In the simplest of models, the active state of the visual pigment (defined by its ability to activate G protein) is turned off in a single reaction. The output of such a cascade is predicted to be highly variable, largely because of stochastic variation in the number of G proteins activated. The exact distribution predicted is exponential, but we find that an exponential does not adequately account for the data. The data agree much better with the predictions of a cascade model in which the active state of the visual pigment is turned off by a multi-step process. PMID:8057085
Wang, Bo; Anthony, Stephen M; Bae, Sung Chul; Granick, Steve
2009-09-08
We describe experiments using single-particle tracking in which mean-square displacement is simply proportional to time (Fickian), yet the distribution of displacement probability is not Gaussian as should be expected of a classical random walk but, instead, is decidedly exponential for large displacements, the decay length of the exponential being proportional to the square root of time. The first example is when colloidal beads diffuse along linear phospholipid bilayer tubes whose radius is the same as that of the beads. The second is when beads diffuse through entangled F-actin networks, bead radius being less than one-fifth of the actin network mesh size. We explore the relevance to dynamic heterogeneity in trajectory space, which has been extensively discussed regarding glassy systems. Data for the second system might suggest activated diffusion between pores in the entangled F-actin networks, in the same spirit as activated diffusion and exponential tails observed in glassy systems. But the first system shows exceptionally rapid diffusion, nearly as rapid as for identical colloids in free suspension, yet still displaying an exponential probability distribution as in the second system. Thus, although the exponential tail is reminiscent of glassy systems, in fact, these dynamics are exceptionally rapid. We also compare with particle trajectories that are at first subdiffusive but Fickian at the longest measurement times, finding that displacement probability distributions fall onto the same master curve in both regimes. The need is emphasized for experiments, theory, and computer simulation to allow definitive interpretation of this simple and clean exponential probability distribution.
Yakubu, Mahadi Lawan; Yusop, Zulkifli; Yusof, Fadhilah
2014-01-01
This paper presents modelled raindrop size parameters for the Skudai region of Johor Bahru, western Malaysia. Presently, there is no model to forecast the characteristics of the drop size distribution (DSD) in Malaysia, and this has an underpinning implication for wet weather pollution predictions. The climate of Skudai exhibits local variability on a regional scale. This study established five different parametric expressions describing the rain rate of Skudai; these models are idiosyncratic to the climate of the region. Sophisticated equipment that converts sound to a relevant raindrop diameter is often too expensive, and its cost sometimes overrides its attractiveness. In this study, a physical low-cost method was used to record the DSD of the study area. The Kaplan-Meier method was used to test the aptness of the data to exponential and lognormal distributions, which were subsequently used to formulate the parameterisation of the distributions. This research rejects the concept of the exclusive occurrence of convective storms in tropical regions and presents new insight into their concurrent appearance. PMID:25126597
Abrahamyan, Lusine; Li, Chuan Silvia; Beyene, Joseph; Willan, Andrew R; Feldman, Brian M
2011-03-01
The study evaluated the power of the randomized placebo-phase design (RPPD)-a new design of randomized clinical trials (RCTs), compared with the traditional parallel groups design, assuming various response time distributions. In the RPPD, at some point, all subjects receive the experimental therapy, and the exposure to placebo is for only a short fixed period of time. For the study, an object-oriented simulation program was written in R. The power of the simulated trials was evaluated using six scenarios, where the treatment response times followed the exponential, Weibull, or lognormal distributions. The median response time was assumed to be 355 days for the placebo and 42 days for the experimental drug. Based on the simulation results, the sample size requirements to achieve the same level of power were different under different response time to treatment distributions. The scenario where the response times followed the exponential distribution had the highest sample size requirement. In most scenarios, the parallel groups RCT had higher power compared with the RPPD. The sample size requirement varies depending on the underlying hazard distribution. The RPPD requires more subjects to achieve a similar power to the parallel groups design.
Jiang, Wei; Mahnken, Jonathan D; He, Jianghua; Mayo, Matthew S
2016-11-01
For two-arm randomized phase II clinical trials, previous literature proposed an optimal design that minimizes the total sample sizes subject to multiple constraints on the standard errors of the estimated event rates and their difference. The original design is limited to trials with dichotomous endpoints. This paper extends the original approach to be applicable to phase II clinical trials with endpoints from the exponential dispersion family distributions. The proposed optimal design minimizes the total sample sizes needed to provide estimates of population means of both arms and their difference with pre-specified precision. Its applications on data from specific distribution families are discussed under multiple design considerations.
NASA Astrophysics Data System (ADS)
Alves, S. G.; Martins, M. L.
2010-09-01
Aggregation of animal cells in culture comprises a series of motility, collision and adhesion processes of basic relevance for tissue engineering, bioseparations, oncology research and in vitro drug testing. In the present paper, a cluster-cluster aggregation model with stochastic particle replication and chemotactically driven motility is investigated as a model for the growth of animal cells in culture. The focus is on the scaling laws governing the aggregation kinetics. Our simulations reveal that in the absence of chemotaxy the mean cluster size and the total number of clusters scale in time as stretched exponentials dependent on the particle replication rate. Also, the dynamical cluster size distribution functions are represented by a scaling relation in which the scaling function involves a stretched exponential of the time. The introduction of chemoattraction among the particles leads to distribution functions decaying as power laws with exponents that decrease in time. The fractal dimensions and size distributions of the simulated clusters are qualitatively discussed in terms of those determined experimentally for several normal and tumoral cell lines growing in culture. It is shown that particle replication and chemotaxy account for the simplest cluster size distributions of cellular aggregates observed in culture.
Size and DNA distributions of electrophoretically separated cultured human kidney cells
NASA Technical Reports Server (NTRS)
Kunze, M. E.; Plank, L. D.; Todd, P. W.
1985-01-01
Electrophoretic purification of cultured cells according to function presumes that the size or cycle phase of a cell is not an overriding determinant of its electrophoretic velocity in an electrophoretic separator. The size distributions and DNA distributions of fractions of cells purified by density gradient electrophoresis were determined. No systematic dependence of electrophoretic migration upward in a density gradient column upon either size or DNA content was found. It was found that human leukemia cell populations, which are more uniform in function and found in all phases of the cell cycle during exponential growth, separated on a vertical density gradient electrophoresis column according to their size, which is shown to be strictly cell cycle dependent.
Difference in Dwarf Galaxy Surface Brightness Profiles as a Function of Environment
NASA Astrophysics Data System (ADS)
Lee, Youngdae; Park, Hong Soo; Kim, Sang Chul; Moon, Dae-Sik; Lee, Jae-Joon; Kim, Dong-Jin; Cha, Sang-Mok
2018-05-01
We investigate surface brightness profiles (SBPs) of dwarf galaxies in field, group, and cluster environments. With deep BV I images from the Korea Microlensing Telescope Network Supernova Program, SBPs of 38 dwarfs in the NGC 2784 group are fitted by a single-exponential or double-exponential model. We find that 53% of the dwarfs are fitted with single-exponential profiles (“Type I”), while 47% of the dwarfs show double-exponential profiles; 37% of all dwarfs have smaller sizes for the outer part than the inner part (“Type II”), while 10% have a larger outer than inner part (“Type III”). We compare these results with those in the field and in the Virgo cluster, where the SBP types of 102 field dwarfs are compiled from a previous study and the SBP types of 375 cluster dwarfs are measured using SDSS r-band images. As a result, the distributions of SBP types are different in the three environments. Common SBP types for the field, the NGC 2784 group, and the Virgo cluster are Type II, Type I and II, and Type I and III profiles, respectively. After comparing the sizes of dwarfs in different environments, we suggest that since the sizes of some dwarfs are changed due to environmental effects, SBP types are capable of being transformed and the distributions of SBP types in the three environments are different. We discuss possible environmental mechanisms for the transformation of SBP types. Based on data collected at KMTNet Telescopes and SDSS.
A review of the matrix-exponential formalism in radiative transfer
NASA Astrophysics Data System (ADS)
Efremenko, Dmitry S.; Molina García, Víctor; Gimeno García, Sebastián; Doicu, Adrian
2017-07-01
This paper outlines the matrix exponential description of radiative transfer. The eigendecomposition method which serves as a basis for computing the matrix exponential and for representing the solution in a discrete ordinate setting is considered. The mathematical equivalence of the discrete ordinate method, the matrix operator method, and the matrix Riccati equations method is proved rigorously by means of the matrix exponential formalism. For optically thin layers, approximate solution methods relying on the Padé and Taylor series approximations to the matrix exponential, as well as on the matrix Riccati equations, are presented. For optically thick layers, the asymptotic theory with higher-order corrections is derived, and parameterizations of the asymptotic functions and constants for a water-cloud model with a Gamma size distribution are obtained.
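As a small numerical aside on the methods the review covers, a sketch contrasting two standard ways to evaluate a matrix exponential (Python/SciPy): scipy's `expm`, which uses a scaling-and-squaring Padé algorithm, versus explicit eigendecomposition. The random matrix is a stand-in for a layer matrix, not one from the paper.

```python
import numpy as np
from scipy.linalg import expm, eig

# exp(A) two ways: Pade-based expm vs. eigendecomposition
# A = V diag(w) V^-1, so exp(A) = V diag(exp(w)) V^-1.
rng = np.random.default_rng(9)
A = rng.normal(size=(6, 6))              # stand-in for a layer matrix

w, V = eig(A)
E_eig = (V @ np.diag(np.exp(w)) @ np.linalg.inv(V)).real
print(np.max(np.abs(expm(A) - E_eig)))   # agree to ~machine precision, generically
```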
Phenomenology of stochastic exponential growth
NASA Astrophysics Data System (ADS)
Pirjol, Dan; Jafarpour, Farshid; Iyer-Biswas, Srividya
2017-06-01
Stochastic exponential growth is observed in a variety of contexts, including molecular autocatalysis, nuclear fission, population growth, inflation of the universe, viral social media posts, and financial markets. Yet literature on modeling the phenomenology of these stochastic dynamics has predominantly focused on one model, geometric Brownian motion (GBM), which can be described as the solution of a Langevin equation with linear drift and linear multiplicative noise. Using recent experimental results on stochastic exponential growth of individual bacterial cell sizes, we motivate the need for a more general class of phenomenological models of stochastic exponential growth, which are consistent with the observation that the mean-rescaled distributions are approximately stationary at long times. We show that this behavior is not consistent with GBM, instead it is consistent with power-law multiplicative noise with positive fractional powers. Therefore, we consider this general class of phenomenological models for stochastic exponential growth, provide analytical solutions, and identify the important dimensionless combination of model parameters, which determines the shape of the mean-rescaled distribution. We also provide a prescription for robustly inferring model parameters from experimentally observed stochastic growth trajectories.
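A sketch of the model class the abstract motivates: Euler-Maruyama integration of a Langevin equation with linear drift and power-law multiplicative noise, dx = μx dt + σx^α dW (Python/NumPy). All parameter values are illustrative; α = 1 recovers GBM, and 0 < α < 1 gives the fractional powers discussed above.

```python
import numpy as np

def growth_paths(mu, sigma, alpha, x0=1.0, dt=1e-3, steps=5000, n=2000, seed=4):
    """Euler-Maruyama for dx = mu*x dt + sigma*x**alpha dW over n paths."""
    rng = np.random.default_rng(seed)
    x = np.full(n, x0)
    for _ in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=n)
        x = np.maximum(x + mu * x * dt + sigma * x**alpha * dW, 1e-12)
    return x

x = growth_paths(mu=1.0, sigma=0.3, alpha=0.5)
# Shape statistic of the mean-rescaled distribution, which the abstract
# argues should become stationary at long times for this noise class.
print((x / x.mean()).std())
```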
NASA Astrophysics Data System (ADS)
Soriano-Hernández, P.; del Castillo-Mussot, M.; Campirán-Chávez, I.; Montemayor-Aldrete, J. A.
2017-04-01
Forbes Magazine published its list of the two thousand leading publicly-traded companies in the world (G-2000), based on four independent metrics: sales or revenues, profits, assets and market value. Each of these wealth metrics yields particular information on the corporate size or wealth of each firm. The G-2000 cumulative probability wealth distribution per employee (per capita) for all four metrics exhibits a two-class structure: quasi-exponential in the lower part and a Pareto power-law in the higher part. These two-class per capita distributions are qualitatively similar to income and wealth distributions in many countries of the world, but the fraction of firms per employee within the high-class Pareto zone is about 49% in sales per employee, and 33% after averaging over the four metrics, whereas in countries the fraction of rich agents in the Pareto zone is less than 10%. The quasi-exponential zone can be fitted by Gamma or log-normal distributions. On the other hand, Forbes classifies the G-2000 firms into 82 different industries or economic activities. Within each industry, the wealth distribution per employee also follows a two-class structure, but when the aggregate wealth of firms in each industry for the four metrics is divided by the total number of employees in that industry, the 82 points of the aggregate wealth distribution by industry per employee can be well fitted by quasi-exponential curves for the four metrics.
NASA Technical Reports Server (NTRS)
Golombeck, M.; Rapp, D.
1996-01-01
The size-frequency distributions of rocks at the Viking landing sites and at a variety of rocky locations on Earth that formed by a number of geologic processes all have the general shape of simple exponential curves; these have been combined with remote sensing data and models of rock abundance to predict the frequency of boulders potentially hazardous to future Mars landers and rovers.
Distribution Functions of Sizes and Fluxes Determined from Supra-Arcade Downflows
NASA Technical Reports Server (NTRS)
McKenzie, D.; Savage, S.
2011-01-01
The frequency distributions of sizes and fluxes of supra-arcade downflows (SADs) provide information about the process of their creation. For example, a fractal creation process may be expected to yield a power-law distribution of sizes and/or fluxes. We examine 120 cross-sectional areas and magnetic flux estimates found by Savage & McKenzie for SADs, and find that (1) the areas are consistent with a log-normal distribution and (2) the fluxes are consistent with both a log-normal and an exponential distribution. Neither set of measurements is compatible with a power-law distribution or a normal distribution. As a demonstration of the applicability of these findings to improved understanding of reconnection, we consider a simple SAD growth scenario with minimal assumptions, capable of producing a log-normal distribution.
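A minimal version of the kind of model comparison reported above (Python/SciPy, synthetic data): fit candidate distributions by maximum likelihood and compare their log-likelihoods. The sample below is a stand-in for the 120 SAD measurements, not the actual data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
areas = rng.lognormal(mean=2.0, sigma=0.6, size=120)   # 120 SAD-like values

# Fit each candidate with location fixed at zero and compare log-likelihoods;
# the true (log-normal) generator should win on this synthetic sample.
for name, dist in [("lognormal", stats.lognorm), ("exponential", stats.expon)]:
    params = dist.fit(areas, floc=0.0)
    ll = dist.logpdf(areas, *params).sum()
    print(name, round(ll, 1))
```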
The social architecture of capitalism
NASA Astrophysics Data System (ADS)
Wright, Ian
2005-02-01
A dynamic model of the social relations between workers and capitalists is introduced. The model self-organises into a dynamic equilibrium with statistical properties that are in close qualitative and in many cases quantitative agreement with a broad range of known empirical distributions of developed capitalism, including the power-law firm size distribution, the Laplace firm and GDP growth distribution, the lognormal firm demises distribution, the exponential recession duration distribution, the lognormal-Pareto income distribution, and the gamma-like firm rate-of-profit distribution. Normally these distributions are studied in isolation, but this model unifies and connects them within a single causal framework. The model also generates business cycle phenomena, including fluctuating wage and profit shares in national income about values consistent with empirical studies. The generation of an approximately lognormal-Pareto income distribution and an exponential-Pareto wealth distribution demonstrates that the power-law regime of the income distribution can be explained by an additive process on a power-law network that models the social relation between employers and employees organised in firms, rather than a multiplicative process that models returns to investment in financial markets. A testable consequence of the model is the conjecture that the rate-of-profit distribution is consistent with a parameter-mix of a ratio of normal variates with means and variances that depend on a firm size parameter that is distributed according to a power-law.
Research on the exponential growth effect on network topology: Theoretical and empirical analysis
NASA Astrophysics Data System (ADS)
Li, Shouwei; You, Zongjun
An integrated circuit (IC) industry network has been built in the Yangtze River Delta with the constant expansion of the IC industry. The IC industry network grows exponentially through the establishment of new companies and their contacts with old firms. Based on preferential attachment and exponential growth, the paper presents analytical results in which the vertex degree of the scale-free network follows the power-law distribution p(k) ~ k^(−γ) (γ = 2β + 1), where the parameter β satisfies 0.5 ≤ β ≤ 1. At the same time, we find that preferential attachment takes place in a dynamic local world whose size is in direct proportion to the size of the whole network. The paper also gives analytical results for non-preferential attachment with exponential growth on random networks. Computer simulations of the model illustrate these analytical results. Through investigations of the enterprises, this paper first presents the distribution of the IC industry and the composition of its industrial chain and service chain. Then, the correlative networks of the industrial chain and service chain are presented and analyzed, together with a correlative analysis of the whole IC industry. Based on the theory of complex networks, an analysis and comparison of the industrial chain network and service chain network in the Yangtze River Delta are provided.
Frequency Selection for Multi-frequency Acoustic Measurement of Suspended Sediment
NASA Astrophysics Data System (ADS)
Chen, X.; HO, H.; Fu, X.
2017-12-01
Multi-frequency acoustic measurement of suspended sediment has found successful applications in marine and fluvial environments. Difficult challenges remain in improving its effectiveness and efficiency when applied to the high concentrations and wide size distributions found in rivers. We performed a multi-frequency acoustic scattering experiment in a cylindrical tank with a suspension of natural sands. The sands range from 50 to 600 μm in diameter with a lognormal size distribution. The bulk concentration of suspended sediment varied from 1.0 to 12.0 g/L. We found that the commonly used linear relationship between the intensity of acoustic backscatter and suspended sediment concentration holds only at sufficiently low concentrations, for instance below 3.0 g/L. It fails at a critical concentration that depends on the measurement frequency and the distance between the transducer and the target point. Instead, an exponential relationship was found to work satisfactorily throughout the entire range of concentration. The coefficient and exponent of the exponential function changed, however, with the measuring frequency and distance. Considering the increased complexity of inverting concentration values when an exponential relationship prevails, we further analyzed the relationship between measurement error and measuring frequency. We found that the inversion error can be effectively controlled within 5% if the frequency is properly set. Compared with concentration, grain size heavily affects the selection of the optimum frequency. A regression relationship for optimum frequency versus grain size was developed based on the experimental results.
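A sketch of the exponential calibration-and-inversion step described above (Python/SciPy). The relation I = a·exp(bC) and all coefficients below are illustrative assumptions, not the experiment's fitted values.

```python
import numpy as np
from scipy.optimize import curve_fit

def backscatter(C, a, b):
    """Assumed exponential backscatter-concentration relation I = a*exp(b*C)."""
    return a * np.exp(b * C)

C_cal = np.linspace(1.0, 12.0, 12)     # g/L, spanning the tank-test range
rng = np.random.default_rng(5)
I_cal = backscatter(C_cal, 2.0, 0.25) * (1.0 + 0.02 * rng.normal(size=C_cal.size))

(a, b), _ = curve_fit(backscatter, C_cal, I_cal, p0=[1.0, 0.1])
C_inv = np.log(I_cal / a) / b          # invert intensity -> concentration
print(np.max(np.abs(C_inv - C_cal)))   # worst-case inversion error, g/L
```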
The decline and fall of Type II error rates
Steve Verrill; Mark Durst
2005-01-01
For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.
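A one-sided z-test makes the claim concrete: the Type II error is β(n) = Φ(z_α − d√n), which falls off at least exponentially in n. A short tabulation (Python/SciPy; the effect size and α are illustrative):

```python
import numpy as np
from scipy import stats

# Type II error of a one-sided z-test with effect size d and alpha = 0.05:
# beta(n) = Phi(z_alpha - d*sqrt(n)).
d, alpha = 0.5, 0.05
z_a = stats.norm.ppf(1.0 - alpha)
for n in (10, 20, 40, 80):
    print(n, stats.norm.cdf(z_a - d * np.sqrt(n)))   # beta shrinks rapidly
```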
Fragment size distribution statistics in dynamic fragmentation of laser shock-loaded tin
NASA Astrophysics Data System (ADS)
He, Weihua; Xin, Jianting; Zhao, Yongqiang; Chu, Genbai; Xi, Tao; Shui, Min; Lu, Feng; Gu, Yuqiu
2017-06-01
This work investigates a geometric statistics method to characterize the size distribution of tin fragments produced in the laser shock-loaded dynamic fragmentation process. In the shock experiments, the ejecta of the tin sample, with an etched V-shaped groove in the free surface, are collected by a soft recovery technique. Subsequently, the produced fragments are automatically detected with fine post-shot analysis techniques including X-ray micro-tomography and the improved watershed method. To characterize the size distributions of the fragments, a theoretical random geometric statistics model based on Poisson mixtures is derived for the dynamic heterogeneous fragmentation problem, which yields a linear combination of exponential distributions. The experimental data on fragment size distributions of the laser shock-loaded tin sample are examined with the proposed theoretical model, and its fitting performance is compared with that of other state-of-the-art fragment size distribution models. The comparison results show that the proposed model provides a far more reasonable fit for the laser shock-loaded tin.
Wang, Ping; Zhang, Lu; Guo, Lixin; Huang, Feng; Shang, Tao; Wang, Ranran; Yang, Yintang
2014-08-25
The average bit error rate (BER) for binary phase-shift keying (BPSK) modulation in free-space optical (FSO) links over turbulence atmosphere modeled by the exponentiated Weibull (EW) distribution is investigated in detail. The effects of aperture averaging on the average BERs for BPSK modulation under weak-to-strong turbulence conditions are studied. The average BERs of EW distribution are compared with Lognormal (LN) and Gamma-Gamma (GG) distributions in weak and strong turbulence atmosphere, respectively. The outage probability is also obtained for different turbulence strengths and receiver aperture sizes. The analytical results deduced by the generalized Gauss-Laguerre quadrature rule are verified by the Monte Carlo simulation. This work is helpful for the design of receivers for FSO communication systems.
1996-05-01
introduced as shown in Fig. 3. [Fig. 3, "The Basic Pump": messages flow into a buffer, with ACKs returned.] The basic Pump [6] places a buffer (size n) between … exponential distribution with mean x. Define Q = f_Y(M_All − T) + k·(N − Fair size), where N is the number of messages in the buffer at the time the message is placed in the buffer, and k·(N − Fair size) is a feedback term. Both k and Fair size can be chosen by a system designer. Note that the moving
K-S Test for Goodness of Fit and Waiting Times for Fatal Plane Accidents
ERIC Educational Resources Information Center
Gwanyama, Philip Wagala
2005-01-01
The Kolmogorov-Smirnov (K-S) test for goodness of fit was developed by Kolmogorov in 1933 [1] and Smirnov in 1939 [2]. Its procedures are suitable for testing the goodness of fit of a data set for most probability distributions regardless of sample size [3-5]. These procedures, modified for the exponential distribution by Lilliefors [5] and…
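A sketch of the Lilliefors-style procedure the abstract refers to (Python/NumPy): the K-S statistic is computed against an exponential whose mean is estimated from the data, and the null distribution of the statistic is calibrated by Monte Carlo. Sample sizes and seeds are arbitrary.

```python
import numpy as np

def lilliefors_exp(x, n_sim=2000, seed=6):
    """K-S test for exponentiality with the mean estimated from the data;
    the null distribution is scale-free, so Exp(1) simulations calibrate it."""
    rng = np.random.default_rng(seed)

    def ks_stat(y):
        y = np.sort(y)
        cdf = 1.0 - np.exp(-y / y.mean())
        i = np.arange(1, y.size + 1)
        return max((i / y.size - cdf).max(), (cdf - (i - 1) / y.size).max())

    x = np.asarray(x, dtype=float)
    d_obs = ks_stat(x)
    d_null = [ks_stat(rng.exponential(1.0, size=x.size)) for _ in range(n_sim)]
    return d_obs, float(np.mean(np.array(d_null) >= d_obs))  # statistic, p-value

rng = np.random.default_rng(8)
print(lilliefors_exp(rng.exponential(3.0, size=200)))  # large p: exponential fits
```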
NASA Astrophysics Data System (ADS)
Carlson, William D.
1989-09-01
The spatial disposition, compositional zoning profiles, and size distributions of garnet crystals in 11 specimens of pelitic schist from the Picuris Range of New Mexico (USA) demonstrate that the kinetics of intergranular diffusion controlled the nucleation and growth mechanisms of porphyroblasts in these rocks. An ordered disposition of garnet centers and a significant correlation between crystal radius and near-neighbor distances manifest suppressed nucleation of new crystals in diffusionally depleted zones surrounding pre-existing crystals. Compositional zoning profiles require diffusionally controlled growth, the rate of which increases exponentially as temperature increases with time; an acceleration factor for growth rate can be estimated from a comparison of compositional profiles for crystals of different sizes in each specimen. Crystal size distributions are interpreted as the result of nucleation rates that accelerate exponentially with increasing temperature early in the crystallization process, but decline in the later stages because of suppression effects in the vicinity of earlier-formed nuclei. Simulations of porphyroblast crystallization, based upon thermally accelerated diffusionally influenced nucleation kinetics and diffusionally controlled growth kinetics, quantitatively replicate textural relations in the rocks. The simulations employ only two variable parameters, which are evaluated by fitting of crystal size distributions. Both have physical significance. The first is an acceleration factor for nucleation, with a magnitude reflecting the prograde increase during the nucleation interval of the chemical affinity for the reaction in undepleted regions of the rock. The second is a measure of the relative sizes of the porphyroblast and the diffusionally depleted zone surrounding it. Crystal size distributions for the Picuris Range garnets correspond very closely to those in the literature from a variety of other localities for garnet and other minerals. The same kinetic model accounts quantitatively for crystal size distributions of porphyroblastic garnet, phlogopite, sphene, and pyroxene in rocks from both regional and contact metamorphic occurrences. These commonalities indicate that intergranular diffusion may be the dominant kinetic factor in the crystallization of porphyroblasts in a wide variety of metamorphic environments.
The mathematical relationship between Zipf’s law and the hierarchical scaling law
NASA Astrophysics Data System (ADS)
Chen, Yanguang
2012-06-01
The empirical studies of city-size distribution show that Zipf's law and the hierarchical scaling law are linked in many ways. The rank-size scaling and hierarchical scaling seem to be two different sides of the same coin, but their relationship has never been revealed by strict mathematical proof. In this paper, the Zipf's distribution of cities is abstracted as a q-sequence. Based on this sequence, a self-similar hierarchy consisting of many levels is defined and the numbers of cities in different levels form a geometric sequence. An exponential distribution of the average size of cities is derived from the hierarchy. Thus we have two exponential functions, from which follows a hierarchical scaling equation. The results can be statistically verified by simple mathematical experiments and observational data of cities. A theoretical foundation is then laid for the conversion from Zipf's law to the hierarchical scaling law, and the latter can show more information about city development than the former. Moreover, the self-similar hierarchy provides a new perspective for studying networks of cities as complex systems. A series of mathematical rules applied to cities such as the allometric growth law, the 2n principle and Pareto's law can be associated with one another by the hierarchical organization.
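A small numerical experiment in the spirit of the paper's construction (Python/NumPy): group a Zipf rank-size sequence into a self-similar hierarchy with a geometric number of cities per level, and observe the roughly exponential decay of mean city size across levels. The constants are illustrative.

```python
import numpy as np

# Zipf rank-size sequence P_k = P1 / k, partitioned into levels with
# N_m = 2**(m-1) cities per level (a geometric sequence of level counts).
P1, levels = 10000.0, 10
sizes = P1 / np.arange(1, 2**levels)          # exactly fills all 10 levels
start = 0
for m in range(1, levels + 1):
    n_m = 2 ** (m - 1)
    S_m = sizes[start:start + n_m].mean()
    print(m, n_m, round(S_m, 2))              # S_m roughly halves each level
    start += n_m
```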
NASA Astrophysics Data System (ADS)
Yang, Jiefan; Lei, Hengchi
2016-02-01
Cloud microphysical properties of a mixed-phase cloud generated by a typical extratropical cyclone in the Tongliao area, Inner Mongolia on 3 May 2014, are analyzed primarily using in situ flight observation data. This study focuses mainly on ice crystal concentration, supercooled cloud water content, and vertical distributions of the fit parameters of snow particle size distributions (PSDs). The results showed several discrepancies between the microphysical properties obtained during two penetrations. During the penetration of precipitating cloud, the maximum ice particle concentration, liquid water content, and ice water content were a factor of 2-3 larger than their counterparts obtained during the penetration of a nonprecipitating cloud. The heavily rimed and irregular ice crystals obtained by the 2D imagery probe, as well as the vertical distributions of fitting parameters within the precipitating cloud, show that the ice particles grow while falling via riming and aggregation, whereas the lightly rimed and pristine ice particles, and the fitting parameters within the non-precipitating cloud, indicate the dominance of sublimation. During the two cloud penetrations, the PSDs were generally better represented by gamma distributions than by the exponential form in terms of the coefficient of determination (R²). The correlations between parameters of the exponential/gamma forms within the two penetrations showed no obvious differences compared with previous studies.
Nathenson, Manuel; Clynne, Michael A.; Muffler, L.J. Patrick
2012-01-01
Chronologies for eruptive activity of the Lassen Volcanic Center and for eruptions from the regional mafic vents in the surrounding area of the Lassen segment of the Cascade Range are here used to estimate probabilities of future eruptions. For the regional mafic volcanism, the ages of many vents are known only within broad ranges, and two models are developed that should bracket the actual eruptive ages. These chronologies are used with exponential, Weibull, and mixed-exponential probability distributions to match the data for time intervals between eruptions. For the Lassen Volcanic Center, the probability of an eruption in the next year is 1.4 × 10⁻⁴ for the exponential distribution and 2.3 × 10⁻⁴ for the mixed-exponential distribution. For the regional mafic vents, the exponential distribution gives a probability of an eruption in the next year of 6.5 × 10⁻⁴, but the mixed-exponential distribution indicates that the current probability, 12,000 years after the last event, could be significantly lower. For the exponential distribution, the highest probability is for an eruption from a regional mafic vent. Data on areas and volumes of lava flows and domes of the Lassen Volcanic Center and of eruptions from the regional mafic vents provide constraints on the probable sizes of future eruptions. Probabilities of lava-flow coverage are similar for the Lassen Volcanic Center and for regional mafic vents, whereas the probable eruptive volumes for the mafic vents are generally smaller. Data have been compiled for large explosive eruptions (>≈5 km³ in deposit volume) in the Cascade Range during the past 1.2 m.y. in order to estimate probabilities of eruption. For erupted volumes >≈5 km³, the rate of occurrence since 13.6 ka is much higher than for the entire period, and we use these data to calculate the annual probability of a large eruption at 4.6 × 10⁻⁴. For erupted volumes ≥10 km³, the rate of occurrence has been reasonably constant from 630 ka to the present, giving more confidence in the estimate, and we use those data to calculate the annual probability of a large eruption in the next year at 1.4 × 10⁻⁵.
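For an exponential model of repose intervals, the annual probability follows directly from the rate: P(event within one year) = 1 − exp(−r) ≈ r for small r. A small check using the abstract's figures (Python/NumPy; treating the quoted annual probabilities as rates is our simplification):

```python
import numpy as np

# Exponentially distributed repose intervals with rate r per year imply
# P(at least one eruption in the next year) = 1 - exp(-r).
for name, r in [("Lassen Volcanic Center", 1.4e-4),
                ("regional mafic vents", 6.5e-4),
                ("large Cascade eruption (>~5 km^3)", 4.6e-4)]:
    print(name, 1.0 - np.exp(-r))   # ~= r itself when r is small
```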
Self-organized Segregation on the Grid
NASA Astrophysics Data System (ADS)
Omidvar, Hamed; Franceschetti, Massimo
2018-02-01
We consider an agent-based model with exponentially distributed waiting times in which two types of agents interact locally over a graph and, based on this interaction and on the value of a common intolerance threshold τ, decide whether to change their types. This is equivalent to a zero-temperature Ising model with Glauber dynamics, an asynchronous cellular automaton with extended Moore neighborhoods, or a Schelling model of self-organized segregation in an open system, and has applications in the analysis of social and biological networks and spin glass systems. Some rigorous results were recently obtained in the theoretical computer science literature, and this work provides several extensions. We enlarge the intolerance interval leading to the expected formation of large segregated regions of agents of a single type from the known size ε > 0 to size ≈ 0.134. Namely, we show that for 0.433 < τ < 1/2 (and by symmetry 1/2 < τ < 0.567), the expected size of the largest segregated region containing an arbitrary agent is exponential in the size of the neighborhood. We further extend the interval leading to expected large segregated regions to size ≈ 0.312 by considering "almost segregated" regions, namely regions where the ratio of the number of agents of one type to the number of agents of the other type vanishes quickly as the size of the neighborhood grows. In this case, we show that for 0.344 < τ ≤ 0.433 (and by symmetry for 0.567 ≤ τ < 0.656) the expected size of the largest almost segregated region containing an arbitrary agent is exponential in the size of the neighborhood. This behavior is reminiscent of supercritical percolation, where small clusters of empty sites can be observed within any sufficiently large region of the occupied percolation cluster. The exponential bounds that we provide also imply that complete segregation, where agents of a single type cover the whole grid, does not occur with high probability for p = 1/2 and the range of intolerance considered.
Magnetic pattern at supergranulation scale: the void size distribution
NASA Astrophysics Data System (ADS)
Berrilli, F.; Scardigli, S.; Del Moro, D.
2014-08-01
The large-scale magnetic pattern observed in the photosphere of the quiet Sun is dominated by the magnetic network. This network, created by photospheric magnetic fields swept into convective downflows, delineates the boundaries of large-scale cells of overturning plasma and exhibits "voids" in magnetic organization. These voids include internetwork fields, which are mixed-polarity sparse magnetic fields that populate the inner part of network cells. To single out voids and to quantify their intrinsic pattern we applied a fast circle-packing-based algorithm to 511 SOHO/MDI high-resolution magnetograms acquired during the unusually long solar activity minimum between cycles 23 and 24. The computed void distribution function shows a quasi-exponential decay behavior in the range 10-60 Mm. The lack of distinct flow scales in this range corroborates the hypothesis of multi-scale motion flows at the solar surface. In addition to the quasi-exponential decay, we have found that the voids depart from a simple exponential decay at about 35 Mm.
Modeling of magnitude distributions by the generalized truncated exponential distribution
NASA Astrophysics Data System (ADS)
Raschke, Mathias
2015-01-01
The probability distribution of the magnitude can be modeled by an exponential distribution according to the Gutenberg-Richter relation. Two alternatives are the truncated exponential distribution (TED) and the cutoff exponential distribution (CED). The TED is frequently used in seismic hazard analysis although it has a weak point: when two TEDs with equal parameters except the upper bound magnitude are mixed, then the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters except the upper bound magnitude. This weakness is a principal problem as seismic regions are constructed scientific objects and not natural units. We overcome it by the generalization of the abovementioned exponential distributions: the generalized truncated exponential distribution (GTED). Therein, identical exponential distributions are mixed by the probability distribution of the correct cutoff points. This distribution model is flexible in the vicinity of the upper bound magnitude and is equal to the exponential distribution for smaller magnitudes. Additionally, the exponential distributions TED and CED are special cases of the GTED. We discuss the possible ways of estimating its parameters and introduce the normalized spacing for this purpose. Furthermore, we present methods for geographic aggregation and differentiation of the GTED and demonstrate the potential and universality of our simple approach by applying it to empirical data. The considerable improvement by the GTED in contrast to the TED is indicated by a large difference between the corresponding values of the Akaike information criterion.
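The weakness described above is easy to reproduce numerically: mixing two TEDs that differ only in their upper bound magnitude yields a density that no single TED can match. A small sketch, with arbitrary illustrative parameter values:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ted_pdf(m, beta, m0, m_max):
    """Truncated exponential density on [m0, m_max] (Gutenberg-Richter
    with an upper bound magnitude)."""
    z = 1.0 - np.exp(-beta * (m_max - m0))
    return np.where((m >= m0) & (m <= m_max),
                    beta * np.exp(-beta * (m - m0)) / z, 0.0)

m0, beta = 4.0, 2.3            # beta = b * ln(10) with b = 1
m = np.linspace(m0, 8.0, 2001)
# Equal-weight mixture of two TEDs differing only in the upper bound:
mix = 0.5 * ted_pdf(m, beta, m0, 7.0) + 0.5 * ted_pdf(m, beta, m0, 8.0)

# Best single TED (same beta, free upper bound) cannot reproduce the mixture.
def sse(m_max):
    return np.sum((ted_pdf(m, beta, m0, m_max) - mix) ** 2)

res = minimize_scalar(sse, bounds=(7.0, 8.0), method="bounded")
best = ted_pdf(m, beta, m0, res.x)
print(f"best single-TED upper bound: {res.x:.3f}")
print(f"max density mismatch: {np.abs(best - mix).max():.4f}")   # clearly nonzero
```

The mixture has a step in its density where the lower-bound component ends, which no single TED can reproduce; the GTED removes this inconsistency by construction.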
Study of the fragment size distribution in dynamic fragmentation of laser shock-loaded tin
NASA Astrophysics Data System (ADS)
He, Weihua; Xin, Jianting; Chu, Genbai; Shui, Min; Xi, Tao; Zhao, Yongqiang; Gu, Yuqiu
2017-06-01
Characterizing the distribution of fragment sizes produced in a dynamic fragmentation process is very important for fundamental science, such as predicting material dynamic response, and for a variety of engineering applications. However, only a few data on fragment mass or size have been obtained, owing to the great challenge of dynamic measurement. This paper focuses on investigating the fragment size distribution from the dynamic fragmentation of laser shock-loaded metal. Material ejected from a tin sample with a wedge-shaped groove in the free surface is collected with a soft-recovery technique. Post-shot analysis combining X-ray micro-tomography with an improved watershed method allows the fragments to be reliably detected. To characterize their size distributions, a random geometric statistics method based on Poisson mixtures is derived for the dynamic heterogeneous fragmentation problem, which leads to a linear combination of exponential distributions. Finally, we examine the size distribution of laser shock-loaded tin with the derived model and provide comparisons with other state-of-the-art models. The resulting comparisons show that our proposed model provides a more reasonable fit for laser shock-loaded metal.
NASA Astrophysics Data System (ADS)
Dong, Siqun; Zhao, Dianli
2018-01-01
This paper studies the subcritical, near-critical and supercritical asymptotic behavior of a reversible random coagulation-fragmentation polymerization process as N → ∞, with the number of distinct ways to form a k-cluster from k units satisfying f(k) = (1 + o(1)) c r^(-k) e^(-k^α) k^(-β), where 0 < α < 1 and β > 0. When the cluster size is small, its distribution is proved to converge to a Gaussian distribution. For medium clusters, the distribution converges to a Poisson distribution in the supercritical stage, and no large clusters exist in this stage. Furthermore, the largest length of polymers of size N is of order ln N in the subcritical stage when α ⩽ 1/2.
Cell Size Regulation in Bacteria
NASA Astrophysics Data System (ADS)
Amir, Ariel
2014-05-01
Various bacteria, such as the canonical gram-negative Escherichia coli or the well-studied gram-positive Bacillus subtilis, divide symmetrically after they approximately double their volume. Their size at division is not constant, but is typically distributed over a narrow range. Here, we propose an analytically tractable model for cell size control, and calculate the cell size and interdivision time distributions, as well as the correlations between these variables. We suggest ways of extracting the model parameters from experimental data, and show that existing data for E. coli support partial size control and a particular explanation: a cell attempts to add a constant volume from the time of initiation of DNA replication to the next initiation event. This hypothesis accounts for the experimentally observed correlations between mother and daughter cells as well as the exponential dependence of size on growth rate.
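The proposed control mechanism is an incremental ("adder") rule: a cell adds a roughly constant volume per cycle and divides in half. A toy simulation along these lines (parameter values are illustrative) reproduces two signatures mentioned above, a narrow size distribution and a mother-daughter birth-size correlation near 0.5:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_adder(n_gen=50_000, delta=1.0, noise=0.1, growth_rate=1.0):
    """Incremental ('adder') size control: each cycle a cell adds a volume
    delta (with multiplicative noise) and divides symmetrically. Sizes grow
    exponentially in time, so the interdivision time is ln(v_d/v_b)/k."""
    vb = np.empty(n_gen)       # birth volumes
    td = np.empty(n_gen)       # interdivision times
    v = delta                  # initial birth volume
    for i in range(n_gen):
        vd = v + delta * rng.lognormal(0.0, noise)   # noisy added volume
        vb[i] = v
        td[i] = np.log(vd / v) / growth_rate
        v = vd / 2.0                                 # symmetric division
    return vb, td

vb, td = simulate_adder()
print("CV of birth size:", vb.std() / vb.mean())
print("mother-daughter birth-size correlation:",
      np.corrcoef(vb[:-1], vb[1:])[0, 1])            # adder predicts ~0.5
```

The 0.5 correlation follows directly from the update rule v_b' = (v_b + Δη)/2: a daughter inherits half of her mother's birth-size fluctuation.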
Cell Division and Evolution of Biological Tissues
NASA Astrophysics Data System (ADS)
Rivier, Nicolas; Arcenegui-Siemens, Xavier; Schliecker, Gudrun
A tissue is a geometrical, space-filling, random cellular network; it remains in this steady state while individual cells divide. Cell division (fragmentation) is a local, elementary topological transformation which establishes statistical equilibrium of the structure. Statistical equilibrium is characterized by observable relations (Lewis, Aboav) between cell shapes, sizes and those of their neighbours, obtained through maximum entropy and topological correlation extending to nearest neighbours only, i.e. maximal randomness. For a two-dimensional tissue (epithelium), the distribution of cell shapes and that of mother and daughter cells can be obtained from elementary geometrical and physical arguments, except for an exponential factor favouring division of larger cells, and exponential and combinatorial factors encouraging a most symmetric division. The resulting distributions are very narrow, and stationarity severely restricts the range of an adjustable structural parameter.
Global time-size distribution of volcanic eruptions on Earth.
Papale, Paolo
2018-05-01
Volcanic eruptions differ enormously in their size and impacts, ranging from quiet lava flow effusions along the volcano flanks to colossal events with the potential to affect our entire civilization. Knowledge of the time and size distribution of volcanic eruptions is of obvious relevance for understanding the dynamics and behavior of the Earth system, as well as for defining global volcanic risk. From the analysis of recent global databases of volcanic eruptions extending back more than 2 million years, I show here that the return times of eruptions with similar magnitude follow an exponential distribution. The associated relative frequency of eruptions with different magnitude displays a power-law, scale-invariant distribution over at least six orders of magnitude. These results suggest that similar mechanisms underlie explosive eruptions from small to colossal, raising concerns about the theoretical possibility of predicting the magnitude and impact of impending volcanic eruptions.
Structure in the Proton and the Neutron
DOE R&D Accomplishments Database
Hofstadter, R.
1958-06-01
A survey of the recent work on the structures of the proton and the neutron carried out by high-energy electron-scattering methods is presented. Early work established finite size effects in the proton and led to information about the charge and magnetic density distributions in the proton. The rms size was established to be close to (0.77 ± 0.10) × 10⁻¹³ cm, and the density distributions of charge and anomalous magnetic moment were shown to be approximately of the same shape. The form factors could be described in terms of several alternative models given, for example, by an exponential, Gaussian, hollow exponential, hollow Gaussian, etc., distribution of densities. Many other shapes were excluded by the experimental data. Recent work by Bumiller and Hofstadter now fixes one among these models that is appropriate to the proton and provides an extremely good fit at all angles between energies of 200 and 650 MeV. The new evidence clearly favors the exponential model with rms radius (0.80 ± 0.04) × 10⁻¹³ cm. Recent studies of the proton have attempted to answer the question: how closely similar are the charge and magnetic form factors? This work now shows that the distributions have the same sizes and shapes to within 10 per cent, and each distribution is given very closely by the exponential model described above with radius (0.80 ± 0.04) × 10⁻¹³ cm. Certain other similar models will be discussed. Early work on the inelastic continuum in the deuteron established that the neutron's magnetic structure was extended and not a point. It was further shown that the neutron's size was approximately the same as that of the proton. This work has recently been extended by Yearian and Hofstadter to a determination of the variation of the neutron's magnetic form factor over the range where the proton's form factor is known. The new results show: (1) the neutron is not a point, (2) the neutron's magnetic structure has a size lying between the limits 0.61 × 10⁻¹³ cm and 0.80 × 10⁻¹³ cm. The first value (0.61 × 10⁻¹³ cm) is determined by examining the total deuteron electro-disintegration cross section at a given angle and incident energy and comparing this cross section with that of the free proton under the same conditions. The second value (0.80 × 10⁻¹³ cm) is found by examining the peak of the deuteron electro-disintegration cross section. Because of possible contributions to the total cross section by mesonic exchange effects, the second method is believed to be slightly more accurate. The neutron size is, therefore, approximately (0.70 ± 0.10) × 10⁻¹³ cm, and probably the larger size 0.80 × 10⁻¹³ cm is correct. Thus the magnetic clouds of the neutron and proton are closely the same. The bearing of these results on the validity of electrodynamics is discussed. Because of the small radius implied by the neutron-electron experiments, there is an anomaly between the neutron and the proton. This is represented by the small charge radius for the neutron and the much larger radius of the proton. Additional information on the structure of the deuteron and on the production of pions by electrons is also furnished by the same experiments and will be discussed at the meeting.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campos, Michael P.; Hendricks, Mark P.; Beecher, Alexander N.
2017-01-19
Here, we report a tunable library of N,N,N'-trisubstituted selenourea precursors and their reaction with lead oleate at 60–150 °C to form carboxylate-terminated PbSe nanocrystals in quantitative yields. Single-exponential conversion kinetics can be tailored over 4 orders of magnitude by adjusting the selenourea structure. The wide range of conversion reactivity allows the extent of nucleation ([nanocrystal] = 4.6–56.7 μM) and the size following complete precursor conversion (d = 1.7–6.6 nm) to be controlled. Narrow size distributions (σ = 0.5–2%) are obtained whose spectral line widths are dominated (73–83%) by the intrinsic single-particle spectral broadening, as observed using spectral hole-burning measurements. The intrinsic broadening decreases with increasing size (fwhm = 320–65 meV, d = 1.6–4.4 nm) and derives from exciton fine structure and exciton–phonon coupling rather than from broadening caused by the size distribution.
Small Craters and Their Diagnostic Potential
NASA Astrophysics Data System (ADS)
Bugiolacchi, R.
2017-07-01
I analysed and compared the size-frequency distributions of craters in the Apollo 17 landing region, comprising six mare terrains with varying morphologies and cratering characteristics, along with three other regions allegedly affected by the same secondary event (the Tycho secondary surge). I propose that for the smaller crater sizes (in this work 9-30 m), a] an exponential curve with exponent -0.18D can approximate crater densities N km⁻² in a regime of equilibrium, while b] a power function D⁻³ closely describes the factorised representation of craters by size (1 m). The saturation level within the Central Area suggests that c] either the modelled rates of crater erosion on the Moon should be revised, or the Tycho event occurred much earlier in time than the current estimate. We propose that d] the size-frequency distribution of small secondary craters may bear the signature (in terms of size-frequency distribution of debris/surge) of the source impact and that this observation should be tested further.
NASA Technical Reports Server (NTRS)
Hacker, Paul T.
1956-01-01
An airborne cloud aeroscope by which droplet size, size distribution, and liquid-water content of clouds can be determined has been developed and tested in flight and in wind tunnels with water sprays. In this aeroscope the cloud droplets are continuously captured in a stream of oil, which is then photographed by a photomicrographic camera. The droplet size and size distribution can be determined directly from the photographs. With the droplet size distribution known, the liquid-water content of the cloud can be computed from the geometry of the aeroscope, the airspeed, and the oil-flow rate. The aeroscope has the following features: Data are obtained semi-automatically, and permanent data are taken in the form of photographs. A single picture usually contains a sufficient number of droplets to establish the droplet size distribution. Cloud droplets are continuously captured in the stream of oil, but pictures are taken at intervals. The aeroscope can be operated in icing and non-icing conditions. Because of mixing of oil in the instrument, the droplet-distribution patterns and liquid-water content values from a single picture are exponentially weighted average values over a path length of about 3/4 mile at 150 miles per hour. The liquid-water contents, volume-median diameters, and distribution patterns obtained on test flights and in the Lewis icing tunnel are similar to previously published data.
Statistical distributions of avalanche size and waiting times in an inter-sandpile cascade model
NASA Astrophysics Data System (ADS)
Batac, Rene; Longjas, Anthony; Monterola, Christopher
2012-02-01
Sandpile-based models have successfully shed light on key features of nonlinear relaxational processes in nature, particularly the occurrence of fat-tailed magnitude distributions and exponential return times, from simple local stress redistributions. In this work, we extend the existing sandpile paradigm into an inter-sandpile cascade, wherein the avalanches emanating from a uniformly-driven sandpile (first layer) are used to trigger the next (second layer), and so on, in a successive fashion. Statistical characterizations reveal that avalanche size distributions evolve from a power law p(S) ~ S^(-1.3) for the first layer to gamma distributions p(S) ~ S^α exp(-S/S₀) for layers far away from the uniformly driven sandpile. The resulting avalanche size statistics is found to be associated with the corresponding waiting time distribution, as explained in an accompanying analytic formulation. Interestingly, both the numerical and analytic models show good agreement with actual inventories of non-uniformly driven events in nature.
Size-dependent standard deviation for growth rates: Empirical results and theoretical modeling
NASA Astrophysics Data System (ADS)
Podobnik, Boris; Horvatic, Davor; Pammolli, Fabio; Wang, Fengzhong; Stanley, H. Eugene; Grosse, I.
2008-05-01
We study annual logarithmic growth rates R of various economic variables such as exports, imports, and foreign debt. For each of these variables we find that the distributions of R can be approximated by double exponential (Laplace) distributions in the central parts and power-law distributions in the tails. For each of these variables we further find a power-law dependence of the standard deviation σ(R) on the average size of the economic variable with a scaling exponent surprisingly close to that found for the gross domestic product (GDP) [Phys. Rev. Lett. 81, 3275 (1998)]. By analyzing annual logarithmic growth rates R of wages of 161 different occupations, we find a power-law dependence of the standard deviation σ(R) on the average value of the wages with a scaling exponent β ≈ 0.14 close to those found for the growth of exports, imports, debt, and the growth of the GDP. In contrast to these findings, we observe for payroll data collected from 50 states of the USA that the standard deviation σ(R) of the annual logarithmic growth rate R increases monotonically with the average value of payroll. However, also in this case we observe a power-law dependence of σ(R) on the average payroll with a scaling exponent β ≈ -0.08. Based on these observations we propose a stochastic process for multiple cross-correlated variables where for each variable (i) the distribution of logarithmic growth rates decays exponentially in the central part, (ii) the distribution of the logarithmic growth rate decays algebraically in the far tails, and (iii) the standard deviation of the logarithmic growth rate depends algebraically on the average size of the stochastic variable.
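The scaling exponent in σ(R) ~ ⟨size⟩^(-β) is typically estimated by binning units by size and regressing the log standard deviation of growth rates on log mean size. A sketch of that estimator, checked on synthetic Laplace-distributed growth rates; all values are illustrative:

```python
import numpy as np

def growth_rate_scaling(sizes, rates, n_bins=10):
    """Estimate beta in sigma(R) ~ <size>^(-beta) by binning units by size
    and regressing log std of growth rates on log mean size."""
    sizes, rates = np.asarray(sizes), np.asarray(rates)
    edges = np.quantile(sizes, np.linspace(0, 1, n_bins + 1))
    idx = np.digitize(sizes, edges[1:-1])
    s_mean = np.array([sizes[idx == k].mean() for k in range(n_bins)])
    r_std = np.array([rates[idx == k].std() for k in range(n_bins)])
    slope, _ = np.polyfit(np.log(s_mean), np.log(r_std), 1)
    return -slope    # beta > 0 means fluctuations shrink with size

# Synthetic check: sigma(R) = S^(-0.15) should give beta ~ 0.15.
rng = np.random.default_rng(2)
S = rng.lognormal(10.0, 2.0, 50_000)
R = rng.laplace(0.0, S ** (-0.15) / np.sqrt(2), 50_000)   # Laplace central part
print(growth_rate_scaling(S, R))
```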
The absolute magnitude distribution of cold classical Kuiper belt objects
NASA Astrophysics Data System (ADS)
Petit, Jean-Marc; Bannister, Michele T.; Alexandersen, Mike; Chen, Ying-Tung; Gladman, Brett; Gwyn, Stephen; Kavelaars, JJ; Volk, Kathryn
2016-10-01
We report measurements of the low-inclination component of the main Kuiper belt showing a size-frequency distribution that is very steep for sizes larger than H_r ~ 6.5-7.0 and then flattens to a shallower slope that is still steeper than the collisional equilibrium slope. The Outer Solar System Origins Survey (OSSOS) is ongoing and is expected to detect over 500 TNOs in a precisely calibrated and characterized survey. Combining our current sample with CFEPS and the Alexandersen et al. (2015) survey, we analyse a sample of ~180 low-inclination main classical (cold) TNOs, with absolute magnitude H_r (SDSS r'-like filter) in the range 5 to 8.8. We confirm that the H_r distribution can be approximated by an exponential with a very steep slope (>1) at the bright end of the distribution, as has long been recognized. A transition to a shallower slope occurs around H_r ~ 6.5-7.0, an H_r mag identified by Fraser et al. (2014). Faintward of this transition, we find a second exponential to be a good approximation at least until H_r ~ 8.5, but with a slope significantly steeper than the one proposed by Fraser et al. (2014) or even the collisional equilibrium value of 0.5. The transition in the cold TNO H_r distribution thus appears to occur at larger sizes than is observed in the high-inclination main classical (hot) belt, an important indicator of a different cosmogony for these two sub-components of the main classical Kuiper belt. Given the steep slope faintward of the transition, the cold population with ~100 km diameter may dominate the mass of the Kuiper belt in the 40 au < a < 47 au region.
Multivariate Analysis and Its Applications
1989-02-14
defined in situations where measurements are taken on natural clusters of individuals, like brothers in a family. A number of problems arise in the study of ... intraclass correlations. How do we estimate it when observations are available on clusters of different sizes? How do we test the hypothesis that the ... the random variable y(X) = g₁X + g₂X² + ... + g_mX^m follows an exponential distribution with mean unity. Such a class of life distributions has a
The perturbed Sparre Andersen model with a threshold dividend strategy
NASA Astrophysics Data System (ADS)
Gao, Heli; Yin, Chuancun
2008-10-01
In this paper, we consider a Sparre Andersen model perturbed by diffusion with generalized Erlang(n)-distributed inter-claim times and a threshold dividend strategy. Integro-differential equations with certain boundary conditions for the moment-generating function and the mth moment of the present value of all dividends until ruin are derived. We also derive integro-differential equations with boundary conditions for the Gerber-Shiu functions. The special case where the inter-claim times are Erlang(2) distributed and the claim size distribution is exponential is considered in some detail.
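A Monte Carlo sketch of the surplus process studied here: Erlang(2) inter-claim times, exponential claim sizes, and a threshold strategy that pays dividends at rate α whenever the surplus exceeds a level b. The diffusion perturbation is omitted and all parameter values are hypothetical; this illustrates the model setup, not the paper's analytical solution:

```python
import numpy as np

rng = np.random.default_rng(3)

def one_path(u0=10.0, b=15.0, c=2.0, alpha=1.0, claim_mean=1.5,
             erlang_rate=2.0, horizon=200.0, delta=0.03):
    """One path of the surplus process. Premiums accrue at rate c below the
    threshold b and at rate c - alpha above it, with the difference paid out
    as discounted dividends. Claims are exponential; inter-claim times are
    Erlang(2). Returns (ruined?, present value of dividends)."""
    disc = lambda t0, t1: (np.exp(-delta * t0) - np.exp(-delta * t1)) / delta
    u, t, div = u0, 0.0, 0.0
    while t < horizon:
        w = rng.gamma(2, 1.0 / erlang_rate)        # Erlang(2) waiting time
        if u >= b:                                 # above threshold throughout
            div += alpha * disc(t, t + w)
            u += (c - alpha) * w
        else:
            t_hit = (b - u) / c                    # time to reach the threshold
            if w <= t_hit:
                u += c * w
            else:
                div += alpha * disc(t + t_hit, t + w)
                u = b + (c - alpha) * (w - t_hit)
        t += w
        u -= rng.exponential(claim_mean)           # claim payment
        if u < 0.0:
            return True, div
    return False, div

paths = [one_path() for _ in range(20_000)]
print("ruin probability:", np.mean([r for r, _ in paths]))
print("mean discounted dividends:", np.mean([d for _, d in paths]))
```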
NASA Astrophysics Data System (ADS)
Thompson, M.; Kluth, P.; Doerner, R. P.; Kirby, N.; Riley, D.; Corr, C. S.
2016-02-01
Grazing-incidence small-angle x-ray scattering was performed on tungsten samples exposed to helium plasma in the MAGPIE and Pisces-A linear plasma devices to measure the size distributions of the resulting helium nano-bubbles. Nano-bubbles were fitted assuming spheroidal particles and an exponential diameter distribution. These particles had mean diameters between 0.36 and 0.62 nm. Pisces-A-exposed samples showed more complex patterns, which may suggest the formation of faceted nano-bubbles or nano-scale surface structures.
NASA Astrophysics Data System (ADS)
Pasari, S.; Kundu, D.; Dikshit, O.
2012-12-01
Earthquake recurrence interval is one of the important ingredients of probabilistic seismic hazard assessment (PSHA) for any location. Exponential, gamma, Weibull and lognormal distributions are well-established probability models in this recurrence interval estimation. However, they have certain shortcomings too. Thus, it is imperative to search for alternative, more sophisticated distributions. In this paper, we introduce a three-parameter (location, scale and shape) exponentiated exponential distribution and investigate the scope of this distribution as an alternative to the afore-mentioned distributions in earthquake recurrence studies. This distribution is a particular member of the exponentiated Weibull family. Despite its complicated form, it is widely accepted in medical and biological applications. Furthermore, it shares many physical properties with the gamma and Weibull families. Unlike the gamma distribution, the hazard function of the generalized exponential distribution can be easily computed even if the shape parameter is not an integer. To contemplate the plausibility of this model, a complete and homogeneous earthquake catalogue of 20 events (M ≥ 7.0) spanning the period 1846 to 1995 from the North-East Himalayan region (20-32 deg N and 87-100 deg E) has been used. The model parameters are estimated using the maximum likelihood estimator (MLE) and the method of moments estimator (MOME). No geological or geophysical evidence has been considered in this calculation. The estimated conditional probability becomes quite high after about a decade for an elapsed time of 17 years (i.e. 2012). Moreover, this study shows that the generalized exponential distribution fits the above data events more closely compared to the conventional models, and hence it is tentatively concluded that the generalized exponential distribution can be effectively considered in earthquake recurrence studies.
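For reference, the three-parameter exponentiated (generalized) exponential distribution has CDF F(x) = (1 - e^(-λ(x-μ)))^α for x > μ, and the conditional probability of an event within a window Δ given an elapsed quiet time t is [F(t+Δ) - F(t)]/[1 - F(t)]. A short sketch; the parameter values below are placeholders, not the estimates obtained from the catalogue:

```python
import numpy as np

def gexp_cdf(x, alpha, lam, mu=0.0):
    """CDF of the three-parameter exponentiated (generalized) exponential
    distribution: F(x) = (1 - exp(-lam*(x - mu)))**alpha for x > mu."""
    x = np.asarray(x, dtype=float)
    return np.where(x > mu, (1.0 - np.exp(-lam * (x - mu))) ** alpha, 0.0)

def conditional_prob(t_elapsed, window, alpha, lam, mu=0.0):
    """P(next event within `window` years | `t_elapsed` years have passed)."""
    num = (gexp_cdf(t_elapsed + window, alpha, lam, mu)
           - gexp_cdf(t_elapsed, alpha, lam, mu))
    return num / (1.0 - gexp_cdf(t_elapsed, alpha, lam, mu))

# Hypothetical parameters for illustration (not the paper's MLE values):
print(conditional_prob(17.0, 10.0, alpha=1.8, lam=0.12))
```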
NASA Astrophysics Data System (ADS)
Patade, Sachin; Prabha, T. V.; Axisa, D.; Gayatri, K.; Heymsfield, A.
2015-10-01
A comprehensive analysis of particle size distributions measured in situ with airborne instrumentation during the Cloud Aerosol Interaction and Precipitation Enhancement Experiment (CAIPEEX) is presented. In situ airborne observations in the developing stage of continental convective clouds during the premonsoon (PRE), transition, and monsoon (MON) periods at temperatures from 25 to -22°C are used in the study. The PRE clouds have narrow drop size and particle size distributions compared to monsoon clouds and showed less development of the size spectra with decrease in temperature. Overall, the PRE cases had much lower values of particle number concentrations and ice water content compared to MON cases, indicating large differences in the ice initiation and growth processes between these cloud regimes. This study provides compelling evidence that, in addition to dynamics, aerosol and moisture are important for modulating ice microphysical processes in PRE and MON clouds through impacts on the cloud drop size distribution. Significant differences are observed in the relationship of the slope and intercept parameters of the fitted particle size distributions (PSDs) with temperature in PRE and MON clouds. The intercept values are higher in MON clouds than in PRE clouds for the exponential distribution, which can be attributed to higher cloud particle number concentrations and ice water content in MON clouds. The PRE clouds tend to have larger values of the dispersion of gamma size distributions than MON clouds, signifying narrower spectra. The relationships between PSD parameters are presented and compared with previous observations.
In Situ Sampling of Terrestrial Dust Devils and Implications for Mars
NASA Astrophysics Data System (ADS)
Raack, J.; Reiss, D.; Balme, M. R.; Taj-Eddine, K.; Ori, G. G.
2017-09-01
We report on the first detailed in situ measurements of the relative dust load and the vertical grain size distribution of terrestrial dust devils, sampled during two field campaigns in Morocco, and their implications for Mars. Our measurements imply (i) a similar internal structure for the sampled dust devils, despite their different strengths and dimensions; (ii) an exponential decrease of particle size with height; and (iii) that between 60 and 70% of all lifted particles can go into atmospheric suspension.
Universal Quake Statistics: From Compressed Nanocrystals to Earthquakes.
Uhl, Jonathan T; Pathak, Shivesh; Schorlemmer, Danijel; Liu, Xin; Swindeman, Ryan; Brinkman, Braden A W; LeBlanc, Michael; Tsekenis, Georgios; Friedman, Nir; Behringer, Robert; Denisov, Dmitry; Schall, Peter; Gu, Xiaojun; Wright, Wendelin J; Hufnagel, Todd; Jennings, Andrew; Greer, Julia R; Liaw, P K; Becker, Thorsten; Dresen, Georg; Dahmen, Karin A
2015-11-17
Slowly-compressed single crystals, bulk metallic glasses (BMGs), rocks, granular materials, and the earth all deform via intermittent slips or "quakes". We find that although these systems span 12 decades in length scale, they all show the same scaling behavior for their slip size distributions and other statistical properties. Remarkably, the size distributions follow the same power law multiplied with the same exponential cutoff. The cutoff grows with applied force for materials spanning length scales from nanometers to kilometers. The tuneability of the cutoff with stress reflects "tuned critical" behavior, rather than self-organized criticality (SOC), which would imply stress-independence. A simple mean field model for avalanches of slipping weak spots explains the agreement across scales. It predicts the observed slip-size distributions and the observed stress-dependent cutoff function. The results enable extrapolations from one scale to another, and from one force to another, across different materials and structures, from nanocrystals to earthquakes.
Isolation of a variant of Candida albicans.
Buckley, H R; Price, M R; Daneo-Moore, L
1982-09-01
During the course of Candida albicans antigen production, a variant of this organism was encountered which did not produce hyphae at 37 degrees C. Presented here are some of the characteristics of this variant. It produces hyphae at 25 degrees C on cornmeal agar and synthetic medium plus N-acetylglucosamine and Tween 80. At 37 degrees C, it does not produce hyphae on these media, although C. albicans normally does produce hyphae under these circumstances. In liquid synthetic medium, this variant does not produce hyphae at 37 degrees C. The variant strain was analyzed for DNA, RNA, protein content, and particle size. After 50 to 70 h in balanced exponential-phase growth, particle size distribution was narrow, and there were no differences in the DNA, RNA, or protein content per particle in the two strains. When balanced exponential-phase cultures were brought into stationary phase, both strains contained the same amount of DNA per cell.
Exponential Sum-Fitting of Dwell-Time Distributions without Specifying Starting Parameters
Landowne, David; Yuan, Bin; Magleby, Karl L.
2013-01-01
Fitting dwell-time distributions with sums of exponentials is widely used to characterize histograms of open- and closed-interval durations recorded from single ion channels, as well as for other physical phenomena. However, it can be difficult to identify the contributing exponential components. Here we extend previous methods of exponential sum-fitting to present a maximum-likelihood approach that consistently detects all significant exponentials without the need for user-specified starting parameters. Instead of searching for exponentials, the fitting starts with a very large number of initial exponentials with logarithmically spaced time constants, so that none are missed. Maximum-likelihood fitting then determines the areas of all the initial exponentials keeping the time constants fixed. In an iterative manner, with refitting after each step, the analysis then removes exponentials with negligible area and combines closely spaced adjacent exponentials, until only those exponentials that make significant contributions to the dwell-time distribution remain. There is no limit on the number of significant exponentials and no starting parameters need be specified. We demonstrate fully automated detection for both experimental and simulated data, as well as for classical exponential-sum-fitting problems.
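A simplified sketch of the strategy described above: start from many log-spaced time constants, fit only the areas by maximum likelihood (here via EM with the time constants held fixed), then drop negligible components and merge near-duplicates. Thresholds, iteration counts, and the merge rule are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def fit_exponential_sum(dwell, n_init=40, n_em=500, tol_area=1e-3,
                        merge_ratio=2.0):
    """Fit a dwell-time distribution with a sum of exponentials without
    starting guesses: many log-spaced time constants, EM updates of the
    areas only, then pruning and merging. Simplified sketch."""
    dwell = np.asarray(dwell, dtype=float)
    taus = np.geomspace(dwell.min() / 2, dwell.max() * 2, n_init)
    w = np.full(n_init, 1.0 / n_init)
    for _ in range(n_em):
        dens = w * np.exp(-dwell[:, None] / taus) / taus   # component densities
        resp = dens / dens.sum(axis=1, keepdims=True)      # responsibilities
        w = resp.mean(axis=0)                              # EM area update
    keep = w > tol_area                                    # prune tiny areas
    taus, w = taus[keep], w[keep] / w[keep].sum()
    merged_t, merged_w = [taus[0]], [w[0]]                 # merge close taus
    for t, a in zip(taus[1:], w[1:]):
        if t / merged_t[-1] < merge_ratio:
            tot = merged_w[-1] + a
            merged_t[-1] = (merged_w[-1] * merged_t[-1] + a * t) / tot
            merged_w[-1] = tot
        else:
            merged_t.append(t)
            merged_w.append(a)
    return np.array(merged_t), np.array(merged_w)

# Synthetic test: two components are recovered with no starting parameters.
rng = np.random.default_rng(4)
data = np.concatenate([rng.exponential(1.0, 4000), rng.exponential(50.0, 2000)])
taus, areas = fit_exponential_sum(data)
print(np.round(taus, 2), np.round(areas, 3))
```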
A Simulation of the ECSS Help Desk with the Erlang a Model
2011-03-01
a popular distribution is the exponential distribution, as shown in Figure 3 (figure caption: Exponential Distribution; Bourke, 2001). ... System Sciences, Vol 8, 235B. Bourke, P. (2001, January). Miscellaneous Functions. Retrieved January 22, 2011, from http://local.wasp.uwa.edu.au
Self-narrowing of size distributions of nanostructures by nucleation antibunching
NASA Astrophysics Data System (ADS)
Glas, Frank; Dubrovskii, Vladimir G.
2017-08-01
We study theoretically the size distributions of ensembles of nanostructures fed from a nanosize mother phase or a nanocatalyst that contains a limited number of the growth species that form each nanostructure. In such systems, the nucleation probability decreases exponentially after each nucleation event, leading to the so-called nucleation antibunching. Specifically, this effect has been observed in individual nanowires grown in the vapor-liquid-solid mode and greatly affects their properties. By performing numerical simulations over large ensembles of nanostructures as well as developing two different analytical schemes (a discrete and a continuum approach), we show that nucleation antibunching completely suppresses fluctuation-induced broadening of the size distribution. As a result, the variance of the distribution saturates to a time-independent value instead of growing infinitely with time. The size distribution widths and shapes primarily depend on the two parameters describing the degree of antibunching and the nucleation delay required to initiate the growth. The resulting sub-Poissonian distributions are highly desirable for improving size homogeneity of nanowires. On a more general level, this unique self-narrowing effect is expected whenever the growth rate is regulated by a nanophase which is able to nucleate an island much faster than it is refilled from a surrounding macroscopic phase.
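A toy version of nucleation antibunching can be simulated directly: let the droplet content refill at a constant rate, let the nucleation hazard increase steeply (here exponentially) with content, and let each nucleation event consume a fixed amount, so that the hazard drops exponentially after every event. The functional forms and parameters below are assumptions chosen to exhibit the effect, a Fano factor of the event counts well below 1 that stops growing with time:

```python
import numpy as np

rng = np.random.default_rng(5)

def fano_trajectory(n_wires=1000, t_max=200.0, dt=0.01, a=5.0, drop=1.0):
    """Each 'wire' has a droplet content x refilling at unit rate; the
    nucleation hazard is exp(a*(x - 1)) and each nucleation removes `drop`,
    suppressing the next event. Tracks the Fano factor (variance/mean)
    of the per-wire event counts."""
    x = np.zeros(n_wires)
    counts = np.zeros(n_wires)
    fano = []
    for step in range(int(t_max / dt)):
        x += dt                                          # refill
        p = 1.0 - np.exp(-np.exp(a * (x - 1.0)) * dt)    # P(nucleate in dt)
        fired = rng.random(n_wires) < p
        counts[fired] += 1
        x[fired] -= drop                                 # event empties droplet
        if step % 4000 == 3999:
            fano.append(counts.var() / counts.mean())
    return fano

print(np.round(fano_trajectory(), 3))   # stays well below 1 (sub-Poissonian)
```

A Poisson process would give a Fano factor of 1 growing toward the exponential-waiting-time limit; the saturation of the variance seen here is the self-narrowing effect described in the abstract.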
Hecker, Suzanne; Abrahamson, N.A.; Wooddell, Kathryn
2013-01-01
To investigate the nature of earthquake‐magnitude distributions on faults, we compare the interevent variability of surface displacement at a point on a fault from a composite global data set of paleoseismic observations with the variability expected from two prevailing magnitude–frequency distributions: the truncated‐exponential model and the characteristic‐earthquake model. We use forward modeling to predict the coefficient of variation (CV) for the alternative earthquake distributions, incorporating factors that would affect observations of displacement at a site. The characteristic‐earthquake model (with a characteristic‐magnitude range of ±0.25) produces CV values consistent with the data (CV∼0.5) only if the variability for a given earthquake magnitude is small. This condition implies that rupture patterns on a fault are stable, in keeping with the concept behind the model. This constraint also bears upon fault‐rupture hazard analysis, which, for lack of point‐specific information, has used global scaling relations to infer variability in average displacement for a given‐size earthquake. Exponential distributions of earthquakes (from M 5 to the maximum magnitude) give rise to CV values that are significantly larger than the empirical constraint. A version of the model truncated at M 7, however, yields values consistent with a larger CV (∼0.6) determined for small‐displacement sites. Although this result allows for a difference in the magnitude distribution of smaller surface‐rupturing earthquakes, it may reflect, in part, less stability in the displacement profile of smaller ruptures and/or the tails of larger ruptures.
Cell Size Clues for the Allee Effect in Vegetative Amoeba Suspension Culture
NASA Astrophysics Data System (ADS)
Franck, Carl; Rappazzo, Brendan; Wang, Xiaoning; Segota, Igor
The fact that cells proliferate at higher rates with increasing density helps us appreciate and understand the development of multicellular behavior through the study of dilute cell systems. However, arduous cell counting with a microscope reveals that in the model eukaryote Dictyostelium discoideum this transition is difficult to ascertain, and thereby to explore further, despite our earlier progress (Phys. Rev. E 77, 041905 (2008)). Here we report preliminary evidence that the slow-proliferation phase is well characterized by reduced cell size, compared to the wide distribution of cell sizes in the familiar exponential proliferation phase at moderate densities. This observation is enabled by a new system for characterizing cells in stirred suspension cultures. Our technique relies on quickly acquiring magnitude distributions of detected flashes of laser light scattered in situ by cell targets.
NASA Technical Reports Server (NTRS)
Heymsfield, Andrew J.; Bansemer, Aaron; Field, Paul R.; Durden, Stephen L.; Stith, Jeffrey L.; Dye, James E.; Hall, William; Grainger, Cedric A.
2002-01-01
In this study, we report on the evolution of particle size distributions (PSDs) and habits as measured during slow, Lagrangian-type spiral descents through deep subtropical and tropical cloud layers in Florida, Brazil, and Kwajalein, Marshall Islands, most of which were precipitating. The objective of the flight patterns was to learn more about how the PSDs evolved in the vertical and to obtain information on the vertical structure of microphysical properties. New instrumentation yielding better information on the concentrations of particles in the size (D) range between 0.2 and 2 cm, as well as improved particle imagery, produced more comprehensive observations for tropical stratiform precipitation regions and anvils than have been available previously. Collocated radar observations provided additional information on the vertical structure of the cloud layers sampled. Most of the spirals began at cloud top, with temperatures (T) as low as -50 C, and ended at cloud base or below the melting layer (ML). The PSDs broadened from cloud top towards cloud base, with the largest particles increasing in size from several millimeters at cloud top to one centimeter or larger towards cloud base. Some continued growth was noted in the upper part of the ML. Concentrations of particles less than 1 mm in size decreased with decreasing height. The result was a consistent change in the PSDs in the vertical. Similarly, systematic changes in the size dependence of the particle cross-sectional area were noted with decreasing height. Aggregation, as ascertained from both the changes in the PSDs and the evolution of particle habits observed in high detail with the cloud particle imager (CPI) probe, was responsible for these trends. The PSDs were generally well-represented by gamma distributions of the form N = N0γ D^μ e^(-λγ D) that were fitted to the PSDs over 1-km horizontal intervals throughout the spirals. The intercept (N0γ), slope (λγ), and dispersion (μ) values were derived for each PSD. Exponential curves (N = N0 e^(-λD); μ = 0) were also fitted to the distributions. The λγ values for given spirals varied systematically with temperature, as did the values of λ (exponential), and the data generally conformed to values found in previous studies involving exponential fits to size distributions in mid-latitude frontal and cirrus layers. Considerable variability often noted in the PSD properties during the loops of individual spirals was manifested primarily in large changes in N0γ and N0, but μ, λγ, and λ remained fairly stable. Temperature is not found to be the sole factor controlling λγ or λ, but it is a primary one. Direct relationships were found between λγ and N0γ or λγ and μ for the gamma distributions, and between λ and N0 for the exponential. The latter relationship was not found as distinctly in earlier studies; observed PSDs in this study had better fidelity with less scatter. The μ values changed monotonically with T over the range of temperatures and were directly related to N0γ or λγ, thereby reducing the number of variables in the PSD functional equation to two. In the upper part of the ML, N0 and λ continued to decrease, and in the lower part these values began to increase as the largest particles melted.
We developed general expressions relating various bulk microphysical, radar, and radiative transfer-related variables to N0γ and λγ, useful for both tropical and mid-latitude clouds. These relationships facilitate the specification of a number of bulk properties in cloud and climate models. The results presented in this paper apply best to temperatures between 0 and -40 C, for which the measured radar reflectivities fall in the range of 0 to 25 dBZe.
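Gamma-form PSDs, N(D) = N0 D^μ e^(-λD), can be fitted by simple linear least squares after taking logarithms, since log N is linear in (1, log D, D). A minimal sketch on synthetic data; the units and values are illustrative:

```python
import numpy as np

def fit_gamma_psd(D, N):
    """Fit N(D) = N0 * D**mu * exp(-lam*D) to a binned size distribution by
    linear least squares on log N (log N = log N0 + mu*log D - lam*D)."""
    A = np.column_stack([np.ones_like(D), np.log(D), -D])
    coef, *_ = np.linalg.lstsq(A, np.log(N), rcond=None)
    logN0, mu, lam = coef
    return np.exp(logN0), mu, lam

# Synthetic check with N0 = 1e4 (per unit volume and size), mu = 1.5,
# lam = 2.0 mm^-1; D in mm:
D = np.linspace(0.1, 10.0, 60)
N = 1e4 * D**1.5 * np.exp(-2.0 * D)
print(np.round(fit_gamma_psd(D, N), 3))   # recovers (1e4, 1.5, 2.0)
```

Setting μ = 0 in the same design matrix (dropping the log D column) reduces the fit to the exponential special case N = N0 e^(-λD) mentioned above.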
Power law versus exponential state transition dynamics: application to sleep-wake architecture.
Chu-Shore, Jesse; Westover, M Brandon; Bianchi, Matt T
2010-12-02
Despite the common experience that interrupted sleep has a negative impact on waking function, the features of human sleep-wake architecture that best distinguish sleep continuity versus fragmentation remain elusive. In this regard, there is growing interest in characterizing sleep architecture using models of the temporal dynamics of sleep-wake stage transitions. In humans and other mammals, the state transitions defining sleep and wake bout durations have been described with exponential and power law models, respectively. However, sleep-wake stage distributions are often complex, and distinguishing between exponential and power law processes is not always straightforward. Although mono-exponential distributions are distinct from power law distributions, multi-exponential distributions may in fact resemble power laws by appearing linear on a log-log plot. To characterize the parameters that may allow these distributions to mimic one another, we systematically fitted multi-exponential-generated distributions with a power law model, and power law-generated distributions with multi-exponential models. We used the Kolmogorov-Smirnov method to investigate goodness of fit for the "incorrect" model over a range of parameters. The "zone of mimicry" of parameters that increased the risk of mistakenly accepting power law fitting resembled empiric time constants obtained in human sleep and wake bout distributions. Recognizing this uncertainty in model distinction impacts interpretation of transition dynamics (self-organizing versus probabilistic), and the generation of predictive models for clinical classification of normal and pathological sleep architecture.
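The mimicry discussed above is easy to reproduce: samples drawn from a multi-exponential distribution with widely spaced time constants can yield a plausible-looking power-law fit. A sketch using the standard continuous power-law MLE with a Kolmogorov-Smirnov distance as the goodness-of-fit measure; the components and xmin are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(6)

def powerlaw_mle_ks(x, xmin):
    """Continuous power-law MLE above xmin, plus the KS distance of the fit."""
    x = np.sort(x[x >= xmin])
    alpha = 1.0 + x.size / np.sum(np.log(x / xmin))
    cdf_fit = 1.0 - (x / xmin) ** (1.0 - alpha)
    cdf_emp = np.arange(1, x.size + 1) / x.size
    return alpha, np.max(np.abs(cdf_fit - cdf_emp))

# Multi-exponential bout durations with widely spaced time constants can
# appear nearly linear on a log-log plot:
bouts = np.concatenate([rng.exponential(t, 5000) for t in (0.5, 5.0, 50.0)])
alpha, ks = powerlaw_mle_ks(bouts, xmin=1.0)
print(f"fitted exponent {alpha:.2f}, KS distance {ks:.3f}")
```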
Rashel, Rakib H; Patiño, Reynaldo
2017-06-01
Salinity (5-30) effects on golden alga growth were determined at a standard laboratory temperature (22°C) and one associated with natural blooms (13°C). Inoculum-size effects were determined over a wide size range (100-100,000 cells ml⁻¹). A strain widely distributed in the USA, UTEX-2797, was the primary study subject, but another of limited distribution, UTEX-995, was used to evaluate growth responses in relation to genetic background. Variables examined were exponential growth rate (r), maximum cell density (max-D) and, when inoculum size was held constant (100 cells ml⁻¹), density at onset of exponential growth (early-D). In UTEX-2797, max-D increased as salinity increased from 5 to ~10-15 and declined thereafter regardless of temperature, but r remained generally stable and only declined at salinity of 25-30. In addition, max-D correlated positively with r and early-D, the latter also being numerically highest at salinity of 15. In UTEX-995, max-D and r responded similarly to changes in salinity: they remained stable at salinity of 5-10 and 5-15, respectively, and declined at higher salinity. Also, max-D correlated with r but not early-D. Inoculum size positively and negatively influenced max-D and r, respectively, in both strains, and these effects were significant even when the absolute size difference was small (100 versus 1000 cells ml⁻¹). When cultured under similar conditions, UTEX-2797 grew faster and to much higher density than UTEX-995. In conclusion, (1) UTEX-2797's superior growth performance may explain its relatively wide distribution in the USA, (2) the biphasic growth response of UTEX-2797 to salinity variation, with peak abundance at salinity of 10-15, generally mirrors golden alga abundance-salinity associations in US inland waters, and (3) early cell density, whether artificially manipulated or naturally attained, can influence UTEX-2797 bloom potential.
NASA Astrophysics Data System (ADS)
Sazuka, Naoya
2007-03-01
We analyze waiting times for price changes in a foreign currency exchange rate. Recent empirical studies of high-frequency financial data support that trades in financial markets do not follow a Poisson process and the waiting times between trades are not exponentially distributed. Here we show that our data are well approximated by a Weibull distribution rather than an exponential distribution in the non-asymptotic regime. Moreover, we quantitatively evaluate how far the empirical data are from an exponential distribution using the Weibull fit. Finally, we discuss a transition between a Weibull law and a power law in the long-time asymptotic regime.
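A sketch of the comparison described above: fit a Weibull distribution to waiting times by maximum likelihood and quantify the improvement over an exponential fit. Synthetic Weibull data stand in for the tick data; the shape value 0.6 is an arbitrary illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Synthetic Weibull waiting times stand in for tick-by-tick data.
wt = 10.0 * rng.weibull(0.6, 20_000)

shape, _, scale = stats.weibull_min.fit(wt, floc=0)     # Weibull MLE
ll_weibull = stats.weibull_min.logpdf(wt, shape, 0, scale).sum()
ll_expon = stats.expon.logpdf(wt, 0, wt.mean()).sum()   # exponential MLE scale
print(f"Weibull shape m = {shape:.3f}")                 # m < 1: non-Poissonian
print(f"log-likelihood gain over exponential: {ll_weibull - ll_expon:.1f}")
```

A shape parameter m = 1 recovers the exponential distribution exactly, so the fitted m measures how far the waiting times depart from a Poisson process.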
Mächtle, W
1999-01-01
Sedimentation velocity is a powerful tool for the analysis of complex solutions of macromolecules. However, sample turbidity imposes an upper limit to the size of molecular complexes currently amenable to such analysis. Furthermore, the breadth of the particle size distribution, combined with possible variations in the density of different particles, makes it difficult to analyze extremely complex mixtures. These same problems are faced in the polymer industry, where dispersions of latices, pigments, lacquers, and emulsions must be characterized. There is a rich history of methods developed for the polymer industry finding use in the biochemical sciences. Two such methods are presented. These use analytical ultracentrifugation to determine the density and size distributions for submicron-sized particles. Both methods rely on Stokes' equations to estimate particle size and density, whereas turbidity, corrected using Mie's theory, provides the concentration measurement. The first method uses the sedimentation time in dispersion media of different densities to evaluate the particle density and size distribution. This method works provided the sample is chemically homogeneous. The second method splices together data gathered at different sample concentrations, thus permitting the high-resolution determination of the size distribution of particle diameters ranging from 10 to 3000 nm. By increasing the rotor speed exponentially from 0 to 40,000 rpm over a 1-h period, size distributions may be measured for extremely broadly distributed dispersions. Presented here is a short history of particle size distribution analysis using the ultracentrifuge, along with a description of the newest experimental methods. Several applications of the methods are provided that demonstrate the breadth of its utility, including extensions to samples containing nonspherical and chromophoric particles.
The generalized truncated exponential distribution as a model for earthquake magnitudes
NASA Astrophysics Data System (ADS)
Raschke, Mathias
2015-04-01
The random distribution of small, medium and large earthquake magnitudes follows an exponential distribution (ED) according to the Gutenberg-Richter relation. But a magnitude distribution is truncated in the range of very large magnitudes because the earthquake energy is finite, and the upper tail of the exponential distribution does not fit observations well. Hence the truncated exponential distribution (TED) is frequently applied for the modelling of magnitude distributions in seismic hazard and risk analysis. The TED has a weak point: when two TEDs with equal parameters, except the upper bound magnitude, are mixed, the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters, except the upper bound magnitude. This weakness is a principal problem as seismic regions are constructed scientific objects and not natural units. It also applies to alternative distribution models. The presented generalized truncated exponential distribution (GTED) overcomes this weakness. The ED and the TED are special cases of the GTED. Different issues of the statistical inference are also discussed and an example of empirical data is presented in the current contribution.
Gravitational Effects on Closed-Cellular-Foam Microstructure
NASA Technical Reports Server (NTRS)
Noever, David A.; Cronise, Raymond J.; Wessling, Francis C.; McMannus, Samuel P.; Mathews, John; Patel, Darayas
1996-01-01
Polyurethane foam has been produced in low gravity for the first time. The cause and distribution of different void or pore sizes are elucidated from direct comparison of unit-gravity and low-gravity samples. Low gravity is found to increase the pore roundness by 17% and reduce the void size by 50%. The standard deviation for pores becomes narrower (a more homogeneous foam is produced) in low gravity. Both a Gaussian and a Weibull model fail to describe the statistical distribution of void areas, and hence the governing dynamics do not combine small voids in either a uniform or a dependent fashion to make larger voids. Instead, the void areas follow an exponential law, which effectively randomizes the production of void sizes in a nondependent fashion consistent more with single nucleation than with multiple or combining events.
Coagulation-Fragmentation Model for Animal Group-Size Statistics
NASA Astrophysics Data System (ADS)
Degond, Pierre; Liu, Jian-Guo; Pego, Robert L.
2017-04-01
We study coagulation-fragmentation equations inspired by a simple model proposed in fisheries science to explain data for the size distribution of schools of pelagic fish. Although the equations lack detailed balance and admit no H-theorem, we are able to develop a rather complete description of equilibrium profiles and large-time behavior, based on recent developments in complex function theory for Bernstein and Pick functions. In the large-population continuum limit, a scaling-invariant regime is reached in which all equilibria are determined by a single scaling profile. This universal profile exhibits power-law behavior crossing over from exponent -2/3 for small size to -3/2 for large size, with an exponential cutoff.
Seamount statistics in the Pacific Ocean
NASA Astrophysics Data System (ADS)
Smith, Deborah K.; Jordan, Thomas H.
1988-04-01
We apply the wide-beam sampling technique of Jordan et al. (1983) to approximately 157,000 km of wide-beam profiles to obtain seamount population statistics for eight regions in the eastern and southern Pacific Ocean. Population statistics derived from wide-beam echograms are compared with seamount counts from Sea Beam swaths and with counts from bathymetric maps. We find that the average number of seamounts with summit heights h ≥ H is well-approximated by the exponential frequency-size distribution: ν(H) = ν₀e^(-βH). The exponential model for seamount sizes, characterized by the single scale parameter β⁻¹, is found to be superior to a power-law (self-similar) model, which has no intrinsic scale, in describing the average distribution of Pacific seamounts, and it appears to be valid over a size spectrum spanning 5 orders of magnitude in abundance. Large-scale regional variations in seamount populations are documented. We observe significant differences in seamount densities across the Murray fracture zone in the North Pacific and the Eltanin fracture zone system in the South Pacific. The Eltanin discontinuity is equally evident on both sides of the Pacific-Antarctic ridge. In the South Pacific, regions symmetrically disposed about the ridge axis have very similar seamount densities, despite the large difference between Pacific plate and Antarctic plate absolute velocities; evidently, any differences in the shear flows at the base of the Pacific and Antarctic plates do not affect seamount emplacement. Systematic variations in ν₀ and β are observed as a function of lithospheric age, with the number of large seamounts increasing more rapidly than small seamounts. These observations have been used to develop a simple model for seamount production under the assumptions that (1) an exponential size-frequency distribution is maintained, (2) production is steady state, and (3) most small seamounts are formed on or near the ridge axis. The limited data available from this study appear to be consistent with the model, but they are insufficient to provide a rigorous test of the assumptions or determine accurately the model parameters. However, the data from the South Pacific indicate that the off-axis production of large seamounts probably accounts for the majority of seamounts with summit heights greater than 1000 m.
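The superiority of the exponential over the power-law (self-similar) model can be checked by comparing maximum-likelihood fits above a completeness threshold, for example with AIC. A sketch on synthetic summit heights with the characteristic scale β⁻¹ = 285 m quoted above; the threshold and sample size are arbitrary:

```python
import numpy as np

def compare_models(h, h0):
    """AIC comparison of exponential vs power-law (Pareto) models for summit
    heights above a completeness threshold h0, both fitted by MLE."""
    h = np.asarray(h, dtype=float)
    h = h[h >= h0]
    beta = 1.0 / np.mean(h - h0)                        # exponential MLE
    ll_exp = h.size * np.log(beta) - beta * np.sum(h - h0)
    alpha = 1.0 + h.size / np.sum(np.log(h / h0))       # Pareto MLE
    ll_pow = h.size * np.log((alpha - 1.0) / h0) - alpha * np.sum(np.log(h / h0))
    return 2 - 2 * ll_exp, 2 - 2 * ll_pow               # (AIC_exp, AIC_pow)

rng = np.random.default_rng(8)
heights = 200.0 + rng.exponential(285.0, 5000)          # beta^-1 = 285 m
print(np.round(compare_models(heights, 200.0), 1))      # exponential wins
```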
Universal Quake Statistics: From Compressed Nanocrystals to Earthquakes
Uhl, Jonathan T.; Pathak, Shivesh; Schorlemmer, Danijel; ...
2015-11-17
Slowly-compressed single crystals, bulk metallic glasses (BMGs), rocks, granular materials, and the earth all deform via intermittent slips or “quakes”. We find that although these systems span 12 decades in length scale, they all show the same scaling behavior for their slip size distributions and other statistical properties. Remarkably, the size distributions follow the same power law multiplied with the same exponential cutoff. The cutoff grows with applied force for materials spanning length scales from nanometers to kilometers. The tuneability of the cutoff with stress reflects “tuned critical” behavior, rather than self-organized criticality (SOC), which would imply stress-independence. A simple mean field model for avalanches of slipping weak spots explains the agreement across scales. It predicts the observed slip-size distributions and the observed stress-dependent cutoff function. In conclusion, the results enable extrapolations from one scale to another, and from one force to another, across different materials and structures, from nanocrystals to earthquakes.
Exponential series approaches for nonparametric graphical models
NASA Astrophysics Data System (ADS)
Janofsky, Eric
Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.
Count distribution for mixture of two exponentials as renewal process duration with applications
NASA Astrophysics Data System (ADS)
Low, Yeh Ching; Ong, Seng Huat
2016-06-01
A count distribution is presented by considering a renewal process where the distribution of the duration is a finite mixture of exponential distributions. This distribution is able to model overdispersion, a feature often found in observed count data. The computation of the probabilities and of the renewal function (expected number of renewals) is examined. Parameter estimation by the method of maximum likelihood is considered, with applications of the count distribution to real frequency count data exhibiting overdispersion. It is shown that the mixture-of-exponentials count distribution fits overdispersed data better than the Poisson process and serves as an alternative to the gamma count distribution.
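To make the overdispersion mechanism concrete, here is a hedged simulation sketch in Python: a renewal process whose durations are a two-component exponential mixture produces counts whose variance exceeds the mean, unlike a Poisson process. All parameter values are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    p, mean1, mean2 = 0.3, 0.2, 2.0   # assumed mixing weight and component means
    T, n_rep = 10.0, 20000            # observation window and replicates

    counts = np.empty(n_rep, dtype=int)
    for i in range(n_rep):
        t, n = 0.0, 0
        while True:
            mean = mean1 if rng.random() < p else mean2
            t += rng.exponential(mean)
            if t > T:
                break
            n += 1
        counts[i] = n

    # For a Poisson process variance/mean = 1; the mixture gives a larger ratio.
    print(f"mean {counts.mean():.2f}  variance/mean {counts.var() / counts.mean():.2f}")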
From the Cover: The growth of business firms: Theoretical framework and empirical evidence
NASA Astrophysics Data System (ADS)
Fu, Dongfeng; Pammolli, Fabio; Buldyrev, S. V.; Riccaboni, Massimo; Matia, Kaushik; Yamasaki, Kazuko; Stanley, H. Eugene
2005-12-01
We introduce a model of proportional growth to explain the distribution Pg(g) of business-firm growth rates. The model predicts that Pg(g) is exponential in the central part and depicts an asymptotic power-law behavior in the tails with an exponent ζ = 3. Because of data limitations, previous studies in this field have been focusing exclusively on the Laplace shape of the body of the distribution. In this article, we test the model at different levels of aggregation in the economy, from products to firms to countries, and we find that the predictions of the model agree with empirical growth distributions and size-variance relationships. Keywords: proportional growth | preferential attachment | Laplace distribution
Mutant number distribution in an exponentially growing population
NASA Astrophysics Data System (ADS)
Keller, Peter; Antal, Tibor
2015-01-01
We present an explicit solution to a classic model of cell-population growth introduced by Luria and Delbrück (1943 Genetics 28 491-511) 70 years ago to study the emergence of mutations in bacterial populations. In this model a wild-type population is assumed to grow exponentially in a deterministic fashion. Proportional to the wild-type population size, mutants arrive randomly and initiate new sub-populations of mutants that grow stochastically according to a supercritical birth and death process. We give an exact expression for the generating function of the total number of mutants at a given wild-type population size. We present a simple expression for the probability of finding no mutants, and a recursion formula for the probability of finding a given number of mutants. In the ‘large population-small mutation’ limit we recover recent results of Kessler and Levine (2014 J. Stat. Phys. doi:10.1007/s10955-014-1143-3) for a fully stochastic version of the process.
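For the classic Lea-Coulson formulation of the Luria-Delbrück model, a simpler limit of the birth-death setting treated above, the mutant-number probabilities satisfy the well-known Ma-Sandri-Sarkar recursion; the sketch below assumes that limit and an illustrative expected mutation count m, and is not the paper's generating-function solution.

    import math

    def luria_delbruck_pmf(m, nmax):
        # Ma-Sandri-Sarkar recursion: p0 = exp(-m),
        # pn = (m/n) * sum_{i<n} p_i / ((n-i)*(n-i+1)).
        p = [math.exp(-m)]
        for n in range(1, nmax + 1):
            s = sum(p[i] / ((n - i) * (n - i + 1)) for i in range(n))
            p.append(m * s / n)
        return p

    probs = luria_delbruck_pmf(m=2.0, nmax=10)
    print("P(no mutants) =", round(probs[0], 4))   # exp(-2) ~ 0.1353
    print("P(1 mutant)   =", round(probs[1], 4))   # (m/2)*exp(-m) ~ 0.1353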
Abe, Sumiyoshi
2002-10-01
The q-exponential distributions, which are generalizations of the Zipf-Mandelbrot power-law distribution, are frequently encountered in complex systems at their stationary states. From the viewpoint of the principle of maximum entropy, they can apparently be derived from three different generalized entropies: the Rényi entropy, the Tsallis entropy, and the normalized Tsallis entropy. Accordingly, mere fittings of observed data by the q-exponential distributions do not lead to identification of the correct physical entropy. Here, stabilities of these entropies, i.e., their behaviors under arbitrary small deformation of a distribution, are examined. It is shown that, among the three, the Tsallis entropy is stable and can provide an entropic basis for the q-exponential distributions, whereas the others are unstable and cannot represent any experimentally observable quantities.
Paul A. Murphy; Robert M. Farrar
1981-01-01
In this study, 588 before-cut and 381 after-cut diameter distributions of uneven-aged loblolly-shortleaf pine stands were fitted to two different forms of the exponential probability density function: the left-truncated and doubly truncated forms.
NASA Astrophysics Data System (ADS)
Celli, Jonathan P.; Rizvi, Imran; Evans, Conor L.; Abu-Yousif, Adnan O.; Hasan, Tayyaba
2010-09-01
Three-dimensional tumor models have emerged as valuable in vitro research tools, though the power of such systems as quantitative reporters of tumor growth and treatment response has not been adequately explored. We introduce an approach combining a 3-D model of disseminated ovarian cancer with high-throughput processing of image data for quantification of growth characteristics and cytotoxic response. We developed custom MATLAB routines to analyze longitudinally acquired dark-field microscopy images containing thousands of 3-D nodules. These data reveal a reproducible bimodal log-normal size distribution. Growth behavior is driven by migration and assembly, causing an exponential decay in spatial density concomitant with increasing mean size. At day 10, cultures are treated with either carboplatin or photodynamic therapy (PDT). We quantify size-dependent cytotoxic response for each treatment on a nodule by nodule basis using automated segmentation combined with ratiometric batch-processing of calcein and ethidium bromide fluorescence intensity data (indicating live and dead cells, respectively). Both treatments reduce viability, though carboplatin leaves micronodules largely structurally intact with a size distribution similar to untreated cultures. In contrast, PDT treatment disrupts micronodular structure, causing punctate regions of toxicity, shifting the distribution toward smaller sizes, and potentially increasing vulnerability to subsequent chemotherapeutic treatment.
Modeling the Dependency Structure of Integrated Intensity Processes
Ma, Yong-Ki
2015-01-01
This paper studies the dependence structure of integrated intensity processes. To model this structure, the intensities within the Cox processes are driven by dependent shot noise processes, where jumps occur simultaneously and their sizes are correlated. The joint survival probability of the integrated intensities is obtained explicitly from the copula with exponential marginal distributions. This result can provide a useful guide for credit risk management. PMID:26270638
Shneidman, Vitaly A
2009-10-28
A typical nucleation-growth process is considered: a system is quenched into a supersaturated state with a small critical radius r*(−) and is allowed to nucleate during a finite time interval t_n, after which the supersaturation is abruptly reduced to a fixed value with a larger critical radius r*(+). The size distribution of nucleated particles f(r,t) further evolves due to their deterministic growth and decay for r larger or smaller than r*(+), respectively. A general analytic expression for f(r,t) is obtained, and it is shown that after a large growth time t this distribution approaches an asymptotic shape determined by two dimensionless parameters: λ, related to t_n, and Λ = r*(+)/r*(−). This shape is strongly asymmetric, with exponential and double-exponential cutoffs at small and large sizes, respectively, and with a broad near-flat top in the case of a long pulse. Conversely, for a short pulse the distribution acquires a distinct maximum at r = r_max(t) and approaches a universal shape exp[ζ − e^ζ], with ζ proportional to r − r_max, independent of the pulse duration. General asymptotic predictions are examined in terms of the Zeldovich-Frenkel nucleation model, where the entire transient behavior can be described in terms of the Lambert W function. Modifications for the Turnbull-Fisher model are also considered, and the analytics are compared with exact numerics. Results are expected to have direct implementations in the analysis of two-step annealing crystallization experiments, although other applications might be anticipated due to the universality of the nucleation pulse technique.
Sampling errors in the measurement of rain and hail parameters
NASA Technical Reports Server (NTRS)
Gertzman, H. S.; Atlas, D.
1977-01-01
Attention is given to a general derivation of the fractional standard deviation (FSD) of any integrated property X such that X(D) = cD^n. This work extends that of Joss and Waldvogel (1969). The equation is applicable to measuring integrated properties of cloud, rain, or hail populations (such as water content, precipitation rate, kinetic energy, or radar reflectivity) which are subject to statistical sampling errors due to the Poisson-distributed fluctuations of particles sampled in each particle size interval and the weighted sum of the associated variances in proportion to their contribution to the integral parameter to be measured. Universal curves are presented which are applicable to the exponential size distribution, permitting FSD estimation of any parameter from n = 0 to n = 6. The equations and curves also permit corrections for finite upper limits in the size spectrum and a realistic fall speed law.
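Under Poisson sampling of an exponential size spectrum N(D) = N₀ exp(−ΛD), the constant c cancels and the FSD reduces to √(Γ(2n+1)·Λ/(V·N₀))/Γ(n+1), a closed form that follows from the assumptions named in the abstract rather than a formula quoted from it. The Python snippet below evaluates it for n = 0 through 6 with placeholder, Marshall-Palmer-like values.

    from math import gamma, sqrt

    def fsd(n, N0, Lam, V):
        # Var[X] = V c^2 N0 Gamma(2n+1)/Lam^(2n+1); E[X] = V c N0 Gamma(n+1)/Lam^(n+1);
        # the constant c cancels in the ratio sqrt(Var[X]) / E[X].
        return sqrt(gamma(2 * n + 1) * Lam / (V * N0)) / gamma(n + 1)

    N0, Lam, V = 8000.0, 4.1, 1.0   # assumed: N0 in m^-3 mm^-1, Lam in mm^-1, V in m^3
    for n in range(7):              # n = 0 (number) up to n = 6 (reflectivity-like)
        print(f"n = {n}: FSD = {fsd(n, N0, Lam, V):.4f}")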
Traction forces during collective cell motion.
Gov, N S
2009-08-01
Collective motion of cell cultures is a process of great interest, as it occurs during morphogenesis, wound healing, and tumor metastasis. During these processes cell cultures move due to the traction forces induced by the individual cells on the surrounding matrix. A recent study [Trepat, et al. (2009). Nat. Phys. 5, 426-430] measured for the first time the traction forces driving collective cell migration and found that they arise throughout the cell culture. The leading 5-10 rows of cells do play a major role in directing the motion of the rest of the culture by having a distinct outwards traction. Fluctuations in the traction forces are an order of magnitude larger than the resultant directional traction at the culture edge and, furthermore, have an exponential distribution. Such exponential distributions are observed for the sizes of adhesion domains within cells, the traction forces produced by single cells, and even in nonbiological nonequilibrium systems, such as sheared granular materials. We discuss these observations and their implications for our understanding of cellular flows within a continuous culture.
NASA Astrophysics Data System (ADS)
Kempa, Wojciech M.
2017-12-01
A finite-capacity queueing system with server breakdowns is investigated, in which successive exponentially distributed failure-free times are followed by repair periods. After processing, a customer either rejoins the queue (feedback) with probability q or definitively leaves the system with probability 1 − q. A system of integral equations for the transient queue-size distribution, conditioned on the initial level of buffer saturation, is built. The solution of the corresponding system, written for Laplace transforms, is found using a linear-algebraic approach. The considered queueing system can be successfully used in modelling production lines with machine failures, in which the parameter q may be interpreted as the typical fraction of items demanding corrections. Moreover, this queueing model can be applied in the analysis of real TCP/IP performance, where q stands for the fraction of packets requiring retransmission.
Bhaskar, Anand; Wang, Y X Rachel; Song, Yun S
2015-02-01
With the recent increase in study sample sizes in human genetics, there has been growing interest in inferring historical population demography from genomic variation data. Here, we present an efficient inference method that can scale up to very large samples, with tens or hundreds of thousands of individuals. Specifically, by utilizing analytic results on the expected frequency spectrum under the coalescent and by leveraging the technique of automatic differentiation, which allows us to compute gradients exactly, we develop a very efficient algorithm to infer piecewise-exponential models of the historical effective population size from the distribution of sample allele frequencies. Our method is orders of magnitude faster than previous demographic inference methods based on the frequency spectrum. In addition to inferring demography, our method can also accurately estimate locus-specific mutation rates. We perform extensive validation of our method on simulated data and show that it can accurately infer multiple recent epochs of rapid exponential growth, a signal that is difficult to pick up with small sample sizes. Lastly, we use our method to analyze data from recent sequencing studies, including a large-sample exome-sequencing data set of tens of thousands of individuals assayed at a few hundred genic regions. © 2015 Bhaskar et al.; Published by Cold Spring Harbor Laboratory Press.
Rashel, Rakib H.; Patino, Reynaldo
2017-01-01
Salinity (5–30) effects on golden alga growth were determined at a standard laboratory temperature (22 °C) and one associated with natural blooms (13 °C). Inoculum-size effects were determined over a wide size range (100–100,000 cells ml−1). A strain widely distributed in the USA, UTEX-2797, was the primary study subject, but another of limited distribution, UTEX-995, was used to evaluate growth responses in relation to genetic background. Variables examined were exponential growth rate (r), maximum cell density (max-D) and, when inoculum size was held constant (100 cells ml−1), density at onset of exponential growth (early-D). In UTEX-2797, max-D increased as salinity increased from 5 to ∼10–15 and declined thereafter regardless of temperature, but r remained generally stable and only declined at salinity of 25–30. In addition, max-D correlated positively with r and early-D, the latter also being numerically highest at salinity of 15. In UTEX-995, max-D and r responded similarly to changes in salinity: they remained stable at salinity of 5–10 and 5–15, respectively, and declined at higher salinity. Also, max-D correlated with r but not early-D. Inoculum size positively and negatively influenced max-D and r, respectively, in both strains, and these effects were significant even when the absolute size difference was small (100 versus 1000 cells ml−1). When cultured under similar conditions, UTEX-2797 grew faster and to much higher density than UTEX-995. In conclusion, (1) UTEX-2797's superior growth performance may explain its relatively wide distribution in the USA, (2) the biphasic growth response of UTEX-2797 to salinity variation, with peak abundance at salinity of 10–15, generally mirrors golden alga abundance-salinity associations in US inland waters, and (3) early cell density, whether artificially manipulated or naturally attained, can influence UTEX-2797 bloom potential.
NASA Astrophysics Data System (ADS)
Iskandar, I.
2018-03-01
The exponential distribution is the most widely used distribution in reliability analysis. It is well suited to representing the lengths of life in many applications and has a simple statistical form. The characteristic of this distribution is a constant hazard rate, and it is a special case of the Weibull family of distributions. In this paper we introduce the basic notions that constitute an exponential competing risks model in reliability analysis using a Bayesian approach and present the corresponding analytic methods. The cases are limited to models with independent causes of failure, and a non-informative prior distribution is used in our analysis. We describe the likelihood function, followed by the posterior function and the point, interval, hazard function, and reliability estimates. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
Reynolds, Andy M; Schultheiss, Patrick; Cheng, Ken
2014-01-07
We suggest that the Australian desert ant Melophorus bagoti approximates a Lévy search pattern by using an intrinsic bi-exponential walk and does so when a Lévy search pattern is advantageous. When attempting to locate its nest, M. bagoti adopt a stereotypical search pattern. These searches begin at the location where the ant expects to find the nest, and comprise loops that start and end at this location, and are directed in different azimuthal directions. Loop lengths are exponentially distributed when searches are in visually familiar surroundings and are well described by a mixture of two exponentials when searches are in unfamiliar landscapes. The latter approximates a power-law distribution, the hallmark of a Lévy search. With the aid of a simple analytically tractable theory, we show that an exponential loop-length distribution is advantageous when the distance to the nest can be estimated with some certainty and that a bi-exponential distribution is advantageous when there is considerable uncertainty regarding the nest location. The best bi-exponential search patterns are shown to be those that come closest to approximating advantageous Lévy looping searches. The bi-exponential search patterns of M. bagoti are found to approximate advantageous Lévy search patterns. Copyright © 2013. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Rhodes, N.; Hurtado, J. M.
2013-05-01
Features such as the Home Plate plateau on Mars, a suspected remnant of a phreatomagmatic eruption, can reveal important information about paleohydrologic conditions. The types and sizes of pyroclastic rocks produced by a phreatomagmatic eruption are indicative of the behavior of the explosion and the characteristics of the groundwater reservoir. Analysis of the pyroclast size distribution can be used to determine magma volatile content. We conduct an analysis of pyroclast size distribution using Ground Penetrating Radar (GPR) to make a quantitative estimate of the presence of past groundwater at Kilbourne Hole, a well-known phreatomagmatic crater located in southern Dona Ana County, New Mexico. As basaltic magma intruded the groundwater reservoir in the mid-Pleistocene, the water vaporized and caused a phreatomagmatic explosion that excavated the 2-km-wide and 200-m-deep depression. The pyroclastic units produced during a phreatomagmatic explosion are proportional to the size and duration of the explosion and the size of the groundwater reservoir, such that the wetter the eruption, the stronger the explosion. In a violent volcanic eruption, magma changes from a liquid into solid fragments, and the explosion releases kinetic energy (Ek) by ejecting liquid water, water vapor (with mass Mw) and solid fragments (with mass Mf) at an ejection velocity (Ve). In order to determine Mw, we must know Ve. The relationship between Ve and the distance from the center of the eruption (R) is such that Ve decreases exponentially with time (t) and R. A numerical model relating pyroclast size and Ve for material ejected in Hawaiian and Plinian eruptions shows that clast size also decreases exponentially with decreasing Ve. Based on these relationships, we use GPR to map the ejected clast size distribution as a function of distance from the edge of Kilbourne Hole in an effort to determine Ve and Mw. GPR surveys were performed in January 2012 and January 2013 using a Noggin 250 MHz radar system. We designed the surveys to detect volcanic bombs in the shallow subsurface and to map radial variations in their sizes. Six GPR lines were extended radially in each cardinal direction from the rim of Kilbourne Hole, and, as a control, fifteen short GPR lines were collected along an accessible cliff where visible volcanic bombs and blocks are exposed. We are able to visualize 58 bombs and blocks along one of the six GPR lines within the maximum penetration depth of 2.4-3.2 m. From the resulting GPR profiles, we measured the width and length of the bombs. The largest dimension of each bomb was plotted against distance from the crater rim, and the obtained exponential relationship between bomb size and distance will be applied to a numerical model of ejecta dispersal from transient volcanic explosions to solve for Ve and Mw. This case study at Kilbourne Hole serves as a planetary analog for similar surveys that could be done on Mars and on the Moon.
The Extended Erlang-Truncated Exponential distribution: Properties and application to rainfall data.
Okorie, I E; Akpanta, A C; Ohakwe, J; Chikezie, D C
2017-06-01
The Erlang-Truncated Exponential (ETE) distribution is modified and the new lifetime distribution is called the Extended Erlang-Truncated Exponential (EETE) distribution. Some statistical and reliability properties of the new distribution are given, and the method of maximum likelihood is proposed for estimating the model parameters. The usefulness and flexibility of the EETE distribution are illustrated with an uncensored data set, and its fit is compared with that of the ETE and three other three-parameter distributions. Results based on the minimized log-likelihood ([Formula: see text]), Akaike information criterion (AIC), Bayesian information criterion (BIC) and the generalized Cramér-von Mises [Formula: see text] statistics show that the EETE distribution provides a more reasonable fit than the other competing distributions.
Zhang, Ling
2017-01-01
The main purpose of this paper is to investigate the strong convergence and mean-square exponential stability of the exponential Euler method applied to semi-linear stochastic delay differential equations (SLSDDEs). It is proved that the exponential Euler approximation converges to the analytic solution with strong order [Formula: see text] for SLSDDEs. The classical stability theorem for SLSDDEs is given by Lyapunov functions; here, however, we study the mean-square exponential stability of the exact solution to SLSDDEs using the definition of the logarithmic norm. The implicit Euler scheme for SLSDDEs is known to be exponentially stable in mean square for any step size; in this article we show, by the property of the logarithmic norm, that the explicit exponential Euler method for SLSDDEs shares the same stability for any step size.
A Field Study of Pixel-Scale Variability of Raindrop Size Distribution in the Mid-Atlantic Region
NASA Technical Reports Server (NTRS)
Tokay, Ali; D'adderio, Leo Pio; Wolff, David P.; Petersen, Walter A.
2016-01-01
The spatial variability of the parameters of the raindrop size distribution and its derivatives is investigated through a field study where collocated Particle Size and Velocity (Parsivel2) and two-dimensional video disdrometers were operated at six sites at Wallops Flight Facility, Virginia, from December 2013 to March 2014. The three-parameter exponential function was employed to determine the spatial variability across the study domain, where the maximum separation distance was 2.3 km. The nugget parameter of the exponential function was set to 0.99, and the correlation distance d0 and shape parameter s0 were retrieved by minimizing the root-mean-square error after fitting the function to the correlations of physical parameters. Fits were very good for almost all 15 physical parameters. The retrieved d0 and s0 were about 4.5 km and 1.1, respectively, for rain rate (RR) when all 12 disdrometers were reporting rainfall with a rain-rate threshold of 0.1 mm h−1 for 1-min averages. The d0 decreased noticeably when one or more disdrometers were required to report rain. The d0 was considerably different for a number of parameters (e.g., mass-weighted diameter) but was about the same for the other parameters (e.g., RR) when the rainfall threshold was reset to 12 and 18 dBZ for Ka- and Ku-band reflectivity, respectively, following the expected Global Precipitation Measurement mission's spaceborne radar minimum detectable signals. The reduction of the database through elimination of a site did not alter d0 as long as the fit was adequate. The correlations of 5-min rain accumulations were lower when disdrometer observations were simulated for a rain gauge at different bucket sizes.
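A sketch of the fitting step described above, assuming the correlogram takes the form ρ(d) = ρ₀ exp[−(d/d0)^s0] with the nugget ρ₀ fixed at 0.99; synthetic correlations are generated around the quoted rain-rate values d0 ≈ 4.5 km and s0 ≈ 1.1 and then re-fitted with scipy.

    import numpy as np
    from scipy.optimize import curve_fit

    def correlogram(d, d0, s0, rho0=0.99):   # nugget rho0 held fixed
        return rho0 * np.exp(-((d / d0) ** s0))

    rng = np.random.default_rng(3)
    d = np.linspace(0.1, 2.3, 30)                          # pair separations, km
    rho = correlogram(d, 4.5, 1.1) + 0.01 * rng.normal(size=d.size)

    (d0, s0), _ = curve_fit(correlogram, d, rho, p0=[3.0, 1.0])
    print(f"d0 = {d0:.2f} km, s0 = {s0:.2f}")              # recovers ~4.5 and ~1.1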
Crowding Induces Complex Ergodic Diffusion and Dynamic Elongation of Large DNA Molecules
Chapman, Cole D.; Gorczyca, Stephanie; Robertson-Anderson, Rae M.
2015-01-01
Despite the ubiquity of molecular crowding in living cells, the effects of crowding on the dynamics of genome-sized DNA are poorly understood. Here, we track single, fluorescent-labeled large DNA molecules (11, 115 kbp) diffusing in dextran solutions that mimic intracellular crowding conditions (0–40%), and determine the effects of crowding on both DNA mobility and conformation. Both DNAs exhibit ergodic Brownian motion and comparable mobility reduction in all conditions; however, crowder size (10 vs. 500 kDa) plays a critical role in the underlying diffusive mechanisms and dependence on crowder concentration. Surprisingly, in 10-kDa dextran, crowder influence saturates at ∼20% with an ∼5× drop in DNA diffusion, in stark contrast to exponentially retarded mobility, coupled to weak anomalous subdiffusion, with increasing concentration of 500-kDa dextran. Both DNAs elongate into lower-entropy states (compared to random coil conformations) when crowded, with elongation states that are gamma distributed and fluctuate in time. However, the broadness of the distribution of states and the time-dependence and length scale of elongation length fluctuations depend on both DNA and crowder size with concentration having surprisingly little impact. Results collectively show that mobility reduction and coil elongation of large crowded DNAs are due to a complex interplay between entropic effects and crowder mobility. Although elongation and initial mobility retardation are driven by depletion interactions, subdiffusive dynamics, and the drastic exponential slowing of DNA, up to ∼300×, arise from the reduced mobility of larger crowders. Our results elucidate the highly important and widely debated effects of cellular crowding on genome-sized DNA. PMID:25762333
A UNIVERSAL NEUTRAL GAS PROFILE FOR NEARBY DISK GALAXIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bigiel, F.; Blitz, L., E-mail: bigiel@uni-heidelberg.de
2012-09-10
Based on sensitive CO measurements from HERACLES and H I data from THINGS, we show that the azimuthally averaged radial distribution of the neutral gas surface density (Σ_HI + Σ_H2) in 33 nearby spiral galaxies exhibits a well-constrained universal exponential distribution beyond 0.2 × r_25 (inside of which the scatter is large), with less than a factor of two scatter out to two optical radii r_25. Scaling the radius to r_25 and the total gas surface density to the surface density at the transition radius, i.e., where Σ_HI and Σ_H2 are equal, as well as removing galaxies that are interacting with their environment, yields a tightly constrained exponential fit with average scale length 0.61 ± 0.06 r_25. In this case, the scatter reduces to less than 40% across the optical disks (and remains below a factor of two at larger radii). We show that the tight exponential distribution of neutral gas implies that the total neutral gas mass of nearby disk galaxies depends primarily on the size of the stellar disk (influenced to some degree by the great variability of Σ_H2 inside 0.2 × r_25). The derived prescription predicts the total gas mass in our sub-sample of 17 non-interacting disk galaxies to within a factor of two. Given the short timescale over which star formation depletes the H_2 content of these galaxies and the large range of r_25 in our sample, there appears to be some mechanism leading to these largely self-similar radial gas distributions in nearby disk galaxies.
Torres-Sanchez, C; Al Mushref, F R A; Norrito, M; Yendall, K; Liu, Y; Conway, P P
2017-08-01
The effect of pore size and porosity on elastic modulus, strength, cell attachment and cell proliferation was studied for Ti porous scaffolds manufactured via powder metallurgy and sintering. Porous scaffolds were prepared in two ranges of porosities so that their mechanical properties could mimic those of cortical and trabecular bone, respectively. Space-holder engineered pore size distributions were carefully determined to study the impact that small changes in pore size may have on mechanical and biological behaviour. The Young's moduli and compressive strengths were correlated with the relative porosity. Linear, power and exponential regressions were studied to confirm the predictability in the characterisation of the manufactured scaffolds and therefore establish them as a design tool for customisation of devices to suit patients' needs. The correlations were stronger for the linear and power-law regressions and poor for the exponential regressions. The optimal pore microarchitecture (i.e. pore size and porosity) for scaffolds to be used in bone grafting for cortical bone was set to <212 μm with volumetric porosity values of 27-37%, and for trabecular tissues to 300-500 μm with volumetric porosity values of 54-58%. The pore size range 212-300 μm with volumetric porosity values of 38-56% was reported as the least favourable to cell proliferation in the longitudinal study of 12 days of incubation. Copyright © 2017 Elsevier B.V. All rights reserved.
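A hedged sketch of the regression comparison described above, using synthetic modulus-porosity data: each candidate model (linear, power law, exponential) is fitted by least squares on a suitable transform and ranked by R². The data-generating power law and noise level are assumptions, not the study's measurements.

    import numpy as np

    rng = np.random.default_rng(4)
    P = np.linspace(0.27, 0.58, 12)                               # relative porosity
    E = 110.0 * (1 - P) ** 2.1 * rng.lognormal(0.0, 0.03, P.size) # synthetic moduli, GPa

    def r2(y, yhat):
        return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

    b, a = np.polyfit(P, E, 1)                                # linear: E = a + b*P
    print("linear      R2:", round(r2(E, a + b * P), 3))
    b, loga = np.polyfit(np.log(1 - P), np.log(E), 1)         # power: E = a*(1-P)^b
    print("power law   R2:", round(r2(E, np.exp(loga) * (1 - P) ** b), 3))
    b, loga = np.polyfit(P, np.log(E), 1)                     # exponential: E = a*e^(b*P)
    print("exponential R2:", round(r2(E, np.exp(loga + b * P)), 3))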
Effect of reaction-step-size noise on the switching dynamics of stochastic populations
NASA Astrophysics Data System (ADS)
Be'er, Shay; Heller-Algazi, Metar; Assaf, Michael
2016-05-01
In genetic circuits, when the messenger RNA lifetime is short compared to the cell cycle, proteins are produced in geometrically distributed bursts, which greatly affects the cellular switching dynamics between different metastable phenotypic states. Motivated by this scenario, we study a general problem of switching or escape in stochastic populations, where influx of particles occurs in groups or bursts, sampled from an arbitrary distribution. The fact that the step size of the influx reaction is a priori unknown and, in general, may fluctuate in time with a given correlation time and statistics, introduces an additional nondemographic reaction-step-size noise into the system. Employing the probability-generating function technique in conjunction with Hamiltonian formulation, we are able to map the problem in the leading order onto solving a stationary Hamilton-Jacobi equation. We show that compared to the "usual case" of single-step influx, bursty influx exponentially decreases the population's mean escape time from its long-lived metastable state. In particular, close to bifurcation we find a simple analytical expression for the mean escape time which solely depends on the mean and variance of the burst-size distribution. Our results are demonstrated on several realistic distributions and compare well with numerical Monte Carlo simulations.
NASA Astrophysics Data System (ADS)
Schneider, Markus P. A.
This dissertation contributes to two areas in economics: the understanding of the distribution of earned income and Bayesian analysis of distributional data. Recently, physicists claimed that the distribution of earned income is exponential (see Yakovenko, 2009). The first chapter explores the perspective that the economy is a statistical mechanical system, and the implication for labor market outcomes is considered critically. The robustness of the empirical results that lead to the physicists' claims, the significance of the exponential distribution in statistical mechanics, and the case for a conservation law in economics are discussed. The conclusion reached is that the physicists' conception of the economy is too narrow even within their chosen framework, but that their overall approach is insightful. The dual labor market theory of segmented labor markets is invoked to understand why the observed distribution may be a mixture of distributional components, corresponding to different generating mechanisms described in Reich et al. (1973). The application of informational entropy in chapter II connects this work to Bayesian analysis and maximum entropy econometrics. The analysis follows E. T. Jaynes's treatment of Wolf's dice data, but is applied to the distribution of earned income based on CPS data. The results are calibrated to account for rounded survey responses using a simple simulation, and answer the graphical analyses by the physicists. The results indicate that neither the income distribution of all respondents nor that of the subpopulation used by the physicists appears to be exponential. The empirics do support the claim that a mixture with exponential and log-normal distributional components fits the data. In the final chapter, a log-linear model is used to fit the exponential to the earned income distribution. Separating the CPS data by gender and marital status reveals that the exponential is only an appropriate model for a limited number of subpopulations, namely the never married and women. The estimated parameter for never-married men's incomes is significantly different from the parameter estimated for never-married women, implying that either the combined distribution is not exponential or that the individual distributions are not exponential. However, it substantiates the existence of a persistent gender income gap among the never-married. References: Reich, M., D. M. Gordon, and R. C. Edwards (1973). A Theory of Labor Market Segmentation. Quarterly Journal of Economics 63, 359-365. Yakovenko, V. M. (2009). Econophysics, Statistical Mechanics Approach to. In R. A. Meyers (Ed.), Encyclopedia of Complexity and System Science. Springer.
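A minimal sketch of such a log-linear fit, run on synthetic exponential data rather than CPS responses: if incomes are exponential with a given mean, the log of the histogram density is linear in income with slope equal to minus the reciprocal of that mean.

    import numpy as np

    rng = np.random.default_rng(5)
    income = rng.exponential(scale=40_000.0, size=50_000)   # placeholder incomes

    density, edges = np.histogram(income, bins=40, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    keep = density > 0
    slope, _ = np.polyfit(centers[keep], np.log(density[keep]), 1)
    print(f"estimated mean income: {-1.0 / slope:,.0f}")    # close to 40,000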
NASA Astrophysics Data System (ADS)
Stünitz, Holger; Keulen, Nynke; Hirose, Takehiro; Heilbronner, Renée
2010-01-01
Microstructures and grain size distributions from high-velocity friction experiments are compared with those of slow deformation experiments of Keulen et al. (2007, 2008) for the same material (Verzasca granitoid). The mechanical behavior of granitoid gouge in fast velocity friction experiments at slip rates of 0.65 and 1.28 m/s and normal stresses of 0.4-0.9 MPa is characterized by slip weakening in a typical exponential friction coefficient vs displacement relationship. The grain size distributions yield similar D-values (slope of the frequency versus grain size curve = 2.2-2.3) to those of slow deformation experiments (D = 2.0-2.3) for grain sizes larger than 1 μm. These values are independent of the total displacement above a shear strain of about γ = 20. The D-values are also independent of the displacement rates in the range of ∼1 μm/s to ∼1.3 m/s and do not vary in the normal stress range between 0.5 MPa and 500 MPa. With increasing displacement, grain shapes evolve towards more rounded and less serrated grains. While the grain size distribution remains constant, the progressive grain shape evolution suggests that grain comminution takes place by attrition at clast boundaries. Attrition produces a range of very small grain sizes by crushing, with a D-value < 1. The results of the study demonstrate that most cataclastic and gouge fault zones may have resulted from seismic deformation, but the distinction between seismic and aseismic deformation cannot be made on the basis of grain size distribution.
Distribution of fixed beneficial mutations and the rate of adaptation in asexual populations
Good, Benjamin H.; Rouzine, Igor M.; Balick, Daniel J.; Hallatschek, Oskar; Desai, Michael M.
2012-01-01
When large asexual populations adapt, competition between simultaneously segregating mutations slows the rate of adaptation and restricts the set of mutations that eventually fix. This phenomenon of interference arises from competition between mutations of different strengths as well as competition between mutations that arise on different fitness backgrounds. Previous work has explored each of these effects in isolation, but the way they combine to influence the dynamics of adaptation remains largely unknown. Here, we describe a theoretical model to treat both aspects of interference in large populations. We calculate the rate of adaptation and the distribution of fixed mutational effects accumulated by the population. We focus particular attention on the case when the effects of beneficial mutations are exponentially distributed, as well as on a more general class of exponential-like distributions. In both cases, we show that the rate of adaptation and the influence of genetic background on the fixation of new mutants is equivalent to an effective model with a single selection coefficient and rescaled mutation rate, and we explicitly calculate these effective parameters. We find that the effective selection coefficient exactly coincides with the most common fixed mutational effect. This equivalence leads to an intuitive picture of the relative importance of different types of interference effects, which can shift dramatically as a function of the population size, mutation rate, and the underlying distribution of fitness effects. PMID:22371564
Polanski, A; Kimmel, M; Chakraborty, R
1998-05-12
Distribution of pairwise differences of nucleotides from data on a sample of DNA sequences from a given segment of the genome has been used in the past to draw inferences about the past history of population size changes. However, all earlier methods assume a given model of population size changes (such as sudden expansion), parameters of which (e.g., time and amplitude of expansion) are fitted to the observed distributions of nucleotide differences among pairwise comparisons of all DNA sequences in the sample. Our theory indicates that for any time-dependent population size N(τ) (in which time τ is counted backward from the present), a time-dependent coalescence process yields the distribution p(τ) of the time of coalescence between two DNA sequences randomly drawn from the population. Prediction of p(τ) and N(τ) requires the use of a reverse Laplace transform known to be unstable. Nevertheless, simulated data obtained from three models of monotone population change (stepwise, exponential, and logistic) indicate that the pattern of a past population size change leaves its signature on the pattern of DNA polymorphism. Application of the theory to the published mtDNA sequences indicates that the current mtDNA sequence variation is not inconsistent with a logistic growth of the human population.
NASA Astrophysics Data System (ADS)
Clotet, Xavier; Santucci, Stéphane; Ortín, Jordi
2016-01-01
We report the results of an experimental investigation of the spatiotemporal dynamics of stable imbibition fronts in a disordered medium, in the regime of capillary disorder, for a wide range of experimental conditions. We have used silicone oils of various viscosities μ and nearly identical oil-air surface tension, and forced them to slowly invade a model open fracture at very different flow rates v. In this second part of the study we have carried out a scale-dependent statistical analysis of the front dynamics. We have specifically analyzed the influence of μ and v on the statistical properties of the velocity Vℓ, the spatial average of the local front velocities over a window of lateral size ℓ. We have varied ℓ from the local scale defined by our spatial resolution up to the lateral system size L. Even though the imposed flow rate is constant, the signals Vℓ(t) present very strong fluctuations which evolve systematically with the parameters μ, v, and ℓ. We have verified that the non-Gaussian fluctuations of the global velocity Vℓ(t) are very well described by a generalized Gumbel statistics. The asymmetric shape and the exponential tail of those distributions are controlled by the number of effective degrees of freedom of the imbibition fronts, given by N_eff = ℓ/ℓ_c (the ratio of the lateral size of the measuring window ℓ to the correlation length ℓ_c ∼ 1/√(μv)). The large correlated excursions of Vℓ(t) correspond to global avalanches, which reflect extra displacements of the imbibition fronts. We show that global avalanches are power-law distributed, both in sizes and durations, with robustly defined exponents, independent of μ, v, and ℓ. Nevertheless, the exponential upper cutoffs of the distributions evolve systematically with those parameters. We have found, moreover, that the maximum sizes ξ_S and maximum durations ξ_T of global avalanches are not controlled by the same mechanism. While ξ_S is also determined by ℓ/ℓ_c, like the amplitude fluctuations of Vℓ(t), ξ_T and the temporal correlations of Vℓ(t) evolve much more strongly with the imposed flow rate v than with the fluid viscosity μ.
Origin of stretched-exponential photoluminescence relaxation in size-separated silicon nanocrystals
Brown, Samuel L.; Krishnan, Retheesh; Elbaradei, Ahmed; ...
2017-05-25
A detailed understanding of the photoluminescence (PL) from silicon nanocrystals (SiNCs) is convoluted by the complexity of the decay mechanism, including a stretched-exponential relaxation and the presence of both nanosecond and microsecond time scales. In this publication, we analyze the microsecond PL decay of size-resolved SiNC fractions in both full-spectrum (FS) and spectrally resolved (SR) configurations, where the stretching exponent and lifetime are used to deduce a probability distribution function (PDF) of decay rates. For the PL decay measured at peak emission, we find a systematic shift and narrowing of the PDF in comparison to the FS measurements. In a similar fashion, we resolve the PL lifetime of the ‘blue’, ‘peak’, and ‘red’ regions of the spectrum and map PL decays of different photon energy onto their corresponding location in the PDF. Furthermore, a general trend is observed where higher and lower photon energies are correlated with shorter and longer lifetimes, respectively, which we relate to the PL line width and electron-phonon coupling.
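As a hedged illustration of the first analysis step, the snippet below fits a stretched-exponential decay I(t) = I₀ exp[−(t/τ)^β] to a synthetic transient; τ and β are the quantities that parameterize the decay-rate PDF discussed above. All numbers are placeholders.

    import numpy as np
    from scipy.optimize import curve_fit

    def stretched(t, i0, tau, beta):
        return i0 * np.exp(-((t / tau) ** beta))

    rng = np.random.default_rng(6)
    t = np.linspace(0.1, 300.0, 200)                   # microseconds (assumed scale)
    signal = stretched(t, 1.0, 60.0, 0.8) * (1.0 + 0.01 * rng.normal(size=t.size))

    (i0, tau, beta), _ = curve_fit(stretched, t, signal, p0=[1.0, 50.0, 1.0])
    print(f"tau = {tau:.1f} us, beta = {beta:.2f}")    # recovers ~60 and ~0.8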
Explaining mortality rate plateaus
Weitz, Joshua S.; Fraser, Hunter B.
2001-01-01
We propose a stochastic model of aging to explain deviations from exponential growth in mortality rates commonly observed in empirical studies. Mortality rate plateaus are explained as a generic consequence of considering death in terms of first passage times for processes undergoing a random walk with drift. Simulations of populations with age-dependent distributions of viabilities agree with a wide array of experimental results. The influence of cohort size is well accounted for by the stochastic nature of the model. PMID:11752476
Multiserver Queueing Model subject to Single Exponential Vacation
NASA Astrophysics Data System (ADS)
Vijayashree, K. V.; Janani, B.
2018-04-01
A multi-server queueing model subject to a single exponential vacation is considered. Arrivals join the queue according to a Poisson process and services take place according to an exponential distribution. Whenever the system becomes empty, all the servers go on vacation and return once the vacation period ends. The servers then start providing service if there are waiting customers; otherwise they wait for customers to arrive. The vacation times are also assumed to be exponentially distributed. In this paper, the stationary and transient probabilities for the number of customers during the idle and functional states of the servers are obtained explicitly. Numerical illustrations are added to visualize the effect of various parameters.
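For orientation, here is a sketch of the stationary queue-length distribution of the plain M/M/c queue without vacations, solved from the standard birth-death balance equations; the vacation mechanism studied above modifies these probabilities, and the arrival rate, service rate, and server count below are assumptions.

    from math import factorial

    def mmc_stationary(lam, mu, c, nmax=200):
        rho = lam / (c * mu)
        assert rho < 1.0, "offered load must keep the queue stable"
        p = [(lam / mu) ** n / factorial(n) for n in range(c + 1)]  # up to c busy servers
        for n in range(c + 1, nmax + 1):                            # geometric tail
            p.append(p[c] * rho ** (n - c))
        z = sum(p)                                                  # normalize (truncated)
        return [x / z for x in p]

    probs = mmc_stationary(lam=3.0, mu=1.0, c=4)
    print("P(system empty) =", round(probs[0], 4))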
NASA Astrophysics Data System (ADS)
Zhang, Fode; Shi, Yimin; Wang, Ruibing
2017-02-01
In the information geometry suggested by Amari (1985) and Amari et al. (1987), a parametric statistical model can be regarded as a differentiable manifold with the parameter space as a coordinate system. Noting that the q-exponential distribution plays an important role in Tsallis statistics (see Tsallis, 2009), this paper investigates the geometry of the q-exponential distribution with dependent competing risks and accelerated life testing (ALT). A copula function based on the q-exponential function, which can be considered a generalized Gumbel copula, is discussed to illustrate the dependence structure of the random variables. Employing two iterative algorithms, simulation results are given to compare the performance of the estimators and the levels of association under different hybrid progressive censoring schemes (HPCSs).
Mathematical Modeling of Extinction of Inhomogeneous Populations
Karev, G.P.; Kareva, I.
2016-01-01
Mathematical models of population extinction have a variety of applications in such areas as ecology, paleontology and conservation biology. Here we propose and investigate two types of sub-exponential models of population extinction. Unlike the more traditional exponential models, the life duration of sub-exponential models is finite. In the first model, the population is assumed to be composed of clones that are independent from each other. In the second model, we assume that the size of the population as a whole decreases according to the sub-exponential equation. We then investigate the “unobserved heterogeneity”, i.e. the underlying inhomogeneous population model, and calculate the distribution of frequencies of clones for both models. We show that the dynamics of frequencies in the first model is governed by the principle of minimum Tsallis information loss. In the second model, the notion of “internal population time” is proposed; with respect to the internal time, the dynamics of frequencies is governed by the principle of minimum Shannon information loss. The results of this analysis show that the principle of minimum information loss is the underlying law for the evolution of a broad class of models of population extinction. Finally, we propose a possible application of this modeling framework to mechanisms underlying time perception. PMID:27090117
Dowty, James G; Byrnes, Graham B; Gertig, Dorota M
2014-12-01
Ductal carcinoma in situ (DCIS) lesions are non-invasive tumours of the breast that are thought to precede most invasive breast cancers (IBCs). As individual DCIS lesions are initiated, grow and invade (i.e. become IBC), the size distribution of the DCIS lesions present in a given human population will evolve. We derive a differential equation governing this evolution and show, for given assumptions about growth and invasion, that there is a unique distribution which does not vary with time. Further, we show that any initial distribution converges to this stationary distribution exponentially quickly. Therefore, it is reasonable to assume that the stationary distribution governs the size of DCIS lesions in human populations which are relatively stable with respect to the determinants of breast cancer. Based on this assumption and the size data of 110 DCIS lesions detected in a mammographic screening programme between 1993 and 2000, we produce maximum likelihood estimates for certain growth and invasion parameters. Assuming that DCIS size is proportional to a positive power p of the time since tumour initiation, we estimate p to be 0.50 with a 95% confidence interval of (0.35, 0.71). Therefore, we estimate that DCIS lesions follow a square-root growth law and hence that they grow rapidly when small and relatively slowly when large. Our approach and results should be useful for other mathematical studies of cancer, especially those investigating biological mechanisms of invasion. © The Authors 2013. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
Phadnis, Milind A; Wetmore, James B; Mayo, Matthew S
2017-11-20
Traditional methods of sample size and power calculations in clinical trials with a time-to-event end point are based on the logrank test (and its variations), Cox proportional hazards (PH) assumption, or comparison of means of 2 exponential distributions. Of these, sample size calculation based on PH assumption is likely the most common and allows adjusting for the effect of one or more covariates. However, when designing a trial, there are situations when the assumption of PH may not be appropriate. Additionally, when it is known that there is a rapid decline in the survival curve for a control group, such as from previously conducted observational studies, a design based on the PH assumption may confer only a minor statistical improvement for the treatment group that is neither clinically nor practically meaningful. For such scenarios, a clinical trial design that focuses on improvement in patient longevity is proposed, based on the concept of proportional time using the generalized gamma ratio distribution. Simulations are conducted to evaluate the performance of the proportional time method and to identify the situations in which such a design will be beneficial as compared to the standard design using a PH assumption, piecewise exponential hazards assumption, and specific cases of a cure rate model. A practical example in which hemorrhagic stroke patients are randomized to 1 of 2 arms in a putative clinical trial demonstrates the usefulness of this approach by drastically reducing the number of patients needed for study enrollment. Copyright © 2017 John Wiley & Sons, Ltd.
A General Exponential Framework for Dimensionality Reduction.
Wang, Su-Jing; Yan, Shuicheng; Yang, Jian; Zhou, Chun-Guang; Fu, Xiaolan
2014-02-01
As a general framework, Laplacian embedding, based on a pairwise similarity matrix, infers low dimensional representations from high dimensional data. However, it generally suffers from three issues: 1) algorithmic performance is sensitive to the size of neighbors; 2) the algorithm encounters the well known small sample size (SSS) problem; and 3) the algorithm de-emphasizes small distance pairs. To address these issues, here we propose exponential embedding using matrix exponential and provide a general framework for dimensionality reduction. In the framework, the matrix exponential can be roughly interpreted by the random walk over the feature similarity matrix, and thus is more robust. The positive definite property of matrix exponential deals with the SSS problem. The behavior of the decay function of exponential embedding is more significant in emphasizing small distance pairs. Under this framework, we apply matrix exponential to extend many popular Laplacian embedding algorithms, e.g., locality preserving projections, unsupervised discriminant projections, and marginal fisher analysis. Experiments conducted on the synthesized data, UCI, and the Georgia Tech face database show that the proposed new framework can well address the issues mentioned above.
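A minimal sketch of the core construction under an assumed Gaussian similarity kernel: exponentiating a symmetric similarity matrix yields a positive-definite matrix, since its eigenvalues are exponentials of real numbers, which is how the SSS problem is sidestepped. Real algorithms in this family operate on Laplacian-type matrices; the data and kernel width here are placeholders.

    import numpy as np
    from scipy.linalg import expm, eigh

    rng = np.random.default_rng(7)
    X = rng.normal(size=(30, 5))                  # 30 samples, 5 features

    # Gaussian similarity matrix with an assumed kernel width.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / d2.mean())

    E = expm(W)                                   # symmetric positive definite
    vals, vecs = eigh(E)                          # eigenvalues in ascending order
    embedding = vecs[:, -2:]                      # two leading spectral coordinates
    print(embedding.shape)                        # (30, 2)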
A Nonequilibrium Rate Formula for Collective Motions of Complex Molecular Systems
NASA Astrophysics Data System (ADS)
Yanao, Tomohiro; Koon, Wang Sang; Marsden, Jerrold E.
2010-09-01
We propose a compact reaction rate formula that accounts for a non-equilibrium distribution of residence times of complex molecules, based on a detailed study of the coarse-grained phase space of a reaction coordinate. We take the structural transition dynamics of a six-atom Morse cluster between two isomers as a prototype of multi-dimensional molecular reactions. Residence time distribution of one of the isomers shows an exponential decay, while that of the other isomer deviates largely from the exponential form and has multiple peaks. Our rate formula explains such equilibrium and non-equilibrium distributions of residence times in terms of the rates of diffusions of energy and the phase of the oscillations of the reaction coordinate. Rapid diffusions of energy and the phase generally give rise to the exponential decay of residence time distribution, while slow diffusions give rise to a non-exponential decay with multiple peaks. We finally make a conjecture about a general relationship between the rates of the diffusions and the symmetry of molecular mass distributions.
Intervention-Based Stochastic Disease Eradication
NASA Astrophysics Data System (ADS)
Billings, Lora; Mier-Y-Teran-Romero, Luis; Lindley, Brandon; Schwartz, Ira
2013-03-01
Disease control is of paramount importance in public health with infectious disease extinction as the ultimate goal. Intervention controls, such as vaccination of susceptible individuals and/or treatment of infectives, are typically based on a deterministic schedule, such as periodically vaccinating susceptible children based on school calendars. In reality, however, such policies are administered as a random process, while still possessing a mean period. Here, we consider the effect of randomly distributed intervention as disease control on large finite populations. We show explicitly how intervention control, based on mean period and treatment fraction, modulates the average extinction times as a function of population size and the speed of infection. In particular, our results show an exponential improvement in extinction times even though the controls are implemented using a random Poisson distribution. Finally, we discover those parameter regimes where random treatment yields an exponential improvement in extinction times over the application of strictly periodic intervention. The implication of our results is discussed in light of the availability of limited resources for control. Supported by the National Institute of General Medical Sciences Award No. R01GM090204
Statistical steady states in turbulent droplet condensation
NASA Astrophysics Data System (ADS)
Bec, Jeremie; Krstulovic, Giorgio; Siewert, Christoph
2017-11-01
We investigate the general problem of turbulent condensation. Using direct numerical simulations, we show that the fluctuations of the supersaturation field offer different conditions for the growth of droplets, which evolve in time due to turbulent transport and mixing. This leads us to propose a Lagrangian stochastic model consisting of a set of integro-differential equations for the joint evolution of the squared radius and the supersaturation along droplet trajectories. The model has two parameters fixed by the total amount of water and the thermodynamic properties, as well as the Lagrangian integral timescale of the turbulent supersaturation. The model reproduces very well the droplet size distributions obtained from direct numerical simulations and their time evolution. A noticeable result is that, after a stage where the squared radius simply diffuses, the system converges exponentially fast to a statistical steady state independent of the initial conditions. The main mechanism involved in this convergence is a loss of memory induced by a significant number of droplets undergoing a complete evaporation before growing again. The statistical steady state is characterised by an exponential tail in the droplet mass distribution.
Quantifying the effect of 3D spatial resolution on the accuracy of microstructural distributions
NASA Astrophysics Data System (ADS)
Loughnane, Gregory; Groeber, Michael; Uchic, Michael; Riley, Matthew; Shah, Megna; Srinivasan, Raghavan; Grandhi, Ramana
The choice of spatial resolution for experimentally-collected 3D microstructural data is often governed by general rules of thumb. For example, serial section experiments often strive to collect at least ten sections through the average feature-of-interest. However, the desire to collect high resolution data in 3D is greatly tempered by the exponential growth in collection times and data storage requirements. This paper explores the use of systematic down-sampling of synthetically-generated grain microstructures to examine the effect of resolution on the calculated distributions of microstructural descriptors such as grain size, number of nearest neighbors, aspect ratio, and Ω3.
Jędrak, Jakub; Ochab-Marcinek, Anna
2016-09-01
We study a stochastic model of gene expression in which protein production takes the form of random bursts whose size distribution is arbitrary, whereas protein decay is a first-order reaction. We find exact analytical expressions for the time evolution of the cumulant-generating function for the most general case, when both the burst size probability distribution and the model parameters depend on time in an arbitrary (e.g., oscillatory) manner and for arbitrary initial conditions. We show that in the case of periodic external activation and a constant protein degradation rate, the response of the gene is analogous to a resistor-capacitor low-pass filter, where slow oscillations of the external driving have a greater effect on gene expression than fast ones. We also demonstrate that the nth cumulant of the protein number distribution depends on the nth moment of the burst size distribution. We use these results to show that different measures of noise (coefficient of variation, Fano factor, fractional change of variance) may vary in time in different ways. Therefore, any biological hypothesis of evolutionary optimization based on the nonmonotonic dependence of a chosen measure of noise on time must justify why it assumes that biological evolution quantifies noise in that particular way. Finally, we show that not only for exponentially distributed burst sizes but also for a wider class of burst size distributions (e.g., Dirac delta and gamma) the control of gene expression level by burst frequency modulation gives rise to proportional scaling of the variance of the protein number distribution to its mean, whereas control by amplitude modulation implies proportionality of the protein number variance to the mean squared.
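One of the claims above — variance proportional to mean under burst-frequency modulation with exponentially distributed burst sizes — is easy to check numerically. The following hedged sketch simulates the constant-parameter special case: bursts arrive as a Poisson process of rate a, each adds an exponentially distributed amount, and the protein decays deterministically between bursts. Parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_bursts(a, b, gamma, T=20_000.0):
    """Protein level x(t): bursts arrive as a Poisson process of rate a, each adding
    an Exp(b)-sized jump; x decays deterministically as dx/dt = -gamma*x in between.
    Sampling just before each burst is unbiased for the time average (PASTA)."""
    t, x, samples = 0.0, 0.0, []
    while t < T:
        dt = rng.exponential(1.0 / a)   # waiting time to the next burst
        x *= np.exp(-gamma * dt)        # first-order decay between bursts
        t += dt
        if t > T / 4:                   # discard the initial transient
            samples.append(x)
        x += rng.exponential(b)         # the burst itself
    return np.asarray(samples)

# burst-frequency modulation: vary a at fixed burst size b -> var/mean stays ~ b
for a in (0.5, 1.0, 2.0):
    s = simulate_bursts(a, b=5.0, gamma=0.1)
    print(f"a={a:3.1f}  mean={s.mean():6.1f}  var/mean={s.var()/s.mean():5.2f}")
```

For exponential bursts the stationary mean is ab/γ and the variance ab²/γ, so var/mean stays near b as the frequency a is modulated, which the printout reflects.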
On the Number of Non-equivalent Ancestral Configurations for Matching Gene Trees and Species Trees.
Disanto, Filippo; Rosenberg, Noah A
2017-09-14
An ancestral configuration is one of the combinatorially distinct sets of gene lineages that, for a given gene tree, can reach a given node of a specified species tree. Ancestral configurations have appeared in recursive algebraic computations of the conditional probability that a gene tree topology is produced under the multispecies coalescent model for a given species tree. For matching gene trees and species trees, we study the number of ancestral configurations, considered up to an equivalence relation introduced by Wu (Evolution 66:763-775, 2012) to reduce the complexity of the recursive probability computation. We examine the largest number of non-equivalent ancestral configurations possible for a given tree size n. Whereas the smallest number of non-equivalent ancestral configurations increases polynomially with n, we show that the largest number increases with [Formula: see text], where k is a constant that satisfies [Formula: see text]. Under a uniform distribution on the set of binary labeled trees with a given size n, the mean number of non-equivalent ancestral configurations grows exponentially with n. The results refine an earlier analysis of the number of ancestral configurations considered without applying the equivalence relation, showing that use of the equivalence relation does not alter the exponential nature of the increase with tree size.
Reducing financial avalanches by random investments
NASA Astrophysics Data System (ADS)
Biondo, Alessio Emanuele; Pluchino, Alessandro; Rapisarda, Andrea; Helbing, Dirk
2013-12-01
Building on similarities between earthquakes and extreme financial events, we use a self-organized criticality-generating model to study herding and avalanche dynamics in financial markets. We consider a community of interacting investors, distributed in a small-world network, who bet on the bullish (increasing) or bearish (decreasing) behavior of the market which has been specified according to the S&P 500 historical time series. Remarkably, we find that the size of herding-related avalanches in the community can be strongly reduced by the presence of a relatively small percentage of traders, randomly distributed inside the network, who adopt a random investment strategy. Our findings suggest a promising strategy to limit the size of financial bubbles and crashes. We also obtain that the resulting wealth distribution of all traders corresponds to the well-known Pareto power law, while that of random traders is exponential. In other words, for technical traders, the risk of losses is much greater than the probability of gains compared to those of random traders.
The impacts of precipitation amount simulation on hydrological modeling in Nordic watersheds
NASA Astrophysics Data System (ADS)
Li, Zhi; Brissette, Francois; Chen, Jie
2013-04-01
Stochastic modeling of daily precipitation is very important for hydrological modeling, especially when no observed data are available. Precipitation is usually represented by a two-component model: occurrence generation and amount simulation. For occurrence simulation, the most common method is the first-order two-state Markov chain, owing to its simplicity and good performance. However, various probability distributions have been used to simulate precipitation amounts, and spatiotemporal differences exist in the applicability of the different distribution models. Assessing the applicability of different distribution models is therefore necessary in order to provide more accurate precipitation information. Six precipitation probability distributions (exponential, gamma, Weibull, skewed normal, mixed exponential, and hybrid exponential/Pareto) are evaluated, directly and indirectly, on their ability to reproduce the observed time series of precipitation amounts. Data from 24 weather stations and two watersheds (the Chute-du-Diable and Yamaska watersheds) in the province of Quebec (Canada) are used for this assessment. Various indices and statistics, such as the mean, variance, frequency distribution, and extreme values, are used to quantify the performance in simulating precipitation and discharge. Performance in reproducing key statistics of the precipitation time series is well correlated with the number of parameters of the distribution function: the three-parameter models outperform the others, with the mixed exponential distribution being the best at simulating daily precipitation. The advantage of using more complex precipitation distributions is not as clear-cut when the simulated time series are used to drive a hydrological model; nonetheless, the mixed exponential distribution appears to be the best candidate for hydrological modeling. The implications of choosing a distribution function for hydrological modeling and climate change impact studies are also discussed.
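The model-selection step described here can be sketched with standard tools. The snippet below fits three of the six candidate distributions to synthetic wet-day amounts and compares AIC; the mixed exponential and hybrid exponential/Pareto would need a short EM or custom likelihood routine, omitted for brevity, and the data are a stand-in, not the Quebec records:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# stand-in for observed wet-day precipitation amounts (mm); not the Quebec data
precip = rng.gamma(shape=0.7, scale=8.0, size=3000)

candidates = {"exponential": stats.expon,
              "gamma": stats.gamma,
              "Weibull": stats.weibull_min}
for name, dist in candidates.items():
    params = dist.fit(precip, floc=0)        # amounts start at zero: fix location
    loglik = dist.logpdf(precip, *params).sum()
    n_free = len(params) - 1                 # location was not estimated
    print(f"{name:12s} AIC = {2 * n_free - 2 * loglik:8.1f}")
```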
NASA Astrophysics Data System (ADS)
Straub, K. M.; Ganti, V. K.; Paola, C.; Foufoula-Georgiou, E.
2010-12-01
Stratigraphy preserved in alluvial basins houses the most complete record of information necessary to reconstruct past environmental conditions. Indeed, the character of the sedimentary record is inextricably related to the surface processes that formed it. In this presentation we explore how the signals of surface processes are recorded in stratigraphy through the use of physical and numerical experiments. We focus on linking surface processes to stratigraphy in 1D by quantifying the probability distributions of processes that govern the evolution of depositional systems to the probability distribution of preserved bed thicknesses. In this study we define a bed as a package of sediment bounded above and below by erosional surfaces. In a companion presentation we document heavy-tailed statistics of erosion and deposition from high-resolution temporal elevation data recorded during a controlled physical experiment. However, the heavy tails in the magnitudes of erosional and depositional events are not preserved in the experimental stratigraphy. Similar to many bed thickness distributions reported in field studies we find that an exponential distribution adequately describes the thicknesses of beds preserved in our experiment. We explore the generation of exponential bed thickness distributions from heavy-tailed surface statistics using 1D numerical models. These models indicate that when the full distribution of elevation fluctuations (both erosional and depositional events) is symmetrical, the resulting distribution of bed thicknesses is exponential in form. Finally, we illustrate that a predictable relationship exists between the coefficient of variation of surface elevation fluctuations and the scale-parameter of the resulting exponential distribution of bed thicknesses.
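A toy version of the 1D surface-to-stratigraphy construction can make this concrete. The sketch below is a minimal reading of such models, not the authors' experiment: heavy-tailed, roughly symmetric elevation increments (with a small net aggradation added so a column accumulates) are filtered by the rule that only material below all future surface elevations survives, and preserved packages are split at erosional events:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# heavy-tailed, roughly symmetric elevation increments; small drift so a column builds
dz = stats.t.rvs(df=1.5, size=200_000, random_state=rng) + 0.05
h = np.cumsum(dz)                       # surface elevation through time

# stratigraphic filter: only material below all future surface elevations survives
preserved = np.minimum.accumulate(h[::-1])[::-1]
dep = np.diff(preserved)                # preserved deposition per step (>= 0)
erosion = np.diff(h) < 0                # erosional events bound the beds

beds, acc = [], 0.0
for d, e in zip(dep, erosion):
    acc += d
    if e and acc > 0:                   # an erosional surface closes the current bed
        beds.append(acc)
        acc = 0.0
beds = np.array(beds)
print(f"{beds.size} beds; CV of thickness = {beds.std() / beds.mean():.2f}"
      " (CV ~ 1 suggests an exponential-like distribution)")
```

A coefficient of variation near one is the quick diagnostic for an exponential-like bed thickness distribution.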
NASA Astrophysics Data System (ADS)
Baidillah, Marlin R.; Takei, Masahiro
2017-06-01
A nonlinear normalization model, called the exponential model, has been developed for electrical capacitance tomography (ECT) with external electrodes under gap-permittivity conditions. The exponential normalization is proposed based on the inherently nonlinear relationship between the mixture permittivity and the measured capacitance caused by the gap permittivity of the inner wall. The parameters of the exponential equation are derived from an exponential curve fit to simulations, and a scaling function is added to adjust for the experimental system conditions. The exponential normalization model was applied to two-dimensional low- and high-contrast dielectric distribution phantoms in both simulation and experimental studies. The proposed model has been compared with other normalization models, i.e., the parallel, series, Maxwell, and Böttcher models. Based on the comparison of image reconstruction results, the exponential model reliably predicts the nonlinear normalization of measured capacitance for low- and high-contrast dielectric distributions.
Fleet Sizing of Automated Material Handling Using Simulation Approach
NASA Astrophysics Data System (ADS)
Wibisono, Radinal; Ai, The Jin; Ratna Yuniartha, Deny
2018-03-01
Automated material handling tends to be chosen over human labor for material handling on the production floors of manufacturing companies. One critical issue in implementing automated material handling is the design phase, which must ensure that material handling is efficient in terms of cost. Fleet sizing is one of the topics in this design phase. In this research, a simulation approach is used to solve the fleet sizing problem in flow shop production, where the optimum means minimum flow time and maximum capacity on the production floor. A simulation approach is used because the flow shop can be modeled as a queueing network whose inter-arrival times do not follow an exponential distribution. The contribution of this research is therefore solving a multi-objective fleet sizing problem in flow shop production using a simulation approach with the ARENA software.
The perturbed compound Poisson risk model with constant interest and a threshold dividend strategy
NASA Astrophysics Data System (ADS)
Gao, Shan; Liu, Zaiming
2010-03-01
In this paper, we consider the compound Poisson risk model perturbed by diffusion with constant interest and a threshold dividend strategy. Integro-differential equations with certain boundary conditions for the moment-generating function and the nth moment of the present value of all dividends until ruin are derived. We also derive integro-differential equations with boundary conditions for the Gerber-Shiu functions. The special case in which the claim size distribution is exponential is considered in some detail.
Quodbach, Julian; Kleinebudde, Peter
2014-11-01
The aim of this study is to introduce a novel apparatus capable of continuously measuring the particle size reduction of disintegrating tablets, and to analyze the obtained results. The apparatus is constructed such that no particles pass directly through the pumping system. Thereby, the overall energy input into the particle suspension is reduced, and continuous measurement is possible without rapid destruction of the generated particles. The detected particle sizes at the beginning and at the end of the measurement differ greatly, depending on the disintegrant used. The median particle sizes at the end of the measurement vary between 621.5 and 178.0 μm for the different disintegrants. It is demonstrated that the particle size reduction follows an exponential function and that the fit parameters can be used to describe the disintegration behavior. A strong correlation between the median particle size of crospovidone disintegrants and the generated particle size of the tablets is observed, which could be due to a more homogeneous distribution of the disintegrant particles in the tablets. Similar trends are observed for sodium starch glycolate and croscarmellose sodium. The new apparatus provides an innovative method for describing disintegrant effectiveness and efficiency.
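The exponential fit mentioned here is a routine curve fit. A minimal sketch with hypothetical median-size data (the numbers below are illustrative, not the study's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

# hypothetical median particle sizes (um) versus disintegration time (s)
t = np.array([0, 10, 20, 40, 60, 90, 120, 180], dtype=float)
d50 = np.array([620, 510, 430, 330, 270, 220, 195, 180], dtype=float)

def decay(t, d_inf, amp, k):
    # d50(t) = plateau + amplitude * exp(-k t)
    return d_inf + amp * np.exp(-k * t)

(p_inf, p_amp, p_k), _ = curve_fit(decay, t, d50, p0=(180.0, 440.0, 0.02))
print(f"plateau {p_inf:.0f} um, amplitude {p_amp:.0f} um, rate {p_k:.3f} 1/s")
```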
Exponentiated power Lindley distribution.
Ashour, Samir K; Eltehiwy, Mahmoud A
2015-11-01
A new generalization of the Lindley distribution was recently proposed by Ghitany et al. [1], called the power Lindley distribution. Another generalization was introduced by Nadarajah et al. [2], named the generalized Lindley distribution. This paper proposes a further generalization of the Lindley distribution that subsumes both. We refer to this new generalization as the exponentiated power Lindley distribution. The new distribution is important since it contains as special sub-models several widely known distributions in addition to the above two, such as the Lindley distribution, among many others. It also provides more flexibility for analyzing complex real data sets. We study some statistical properties of the new distribution and discuss maximum likelihood estimation of its parameters. Least squares estimation is also used to evaluate the parameters. Three algorithms are proposed for generating random data from the proposed distribution. An application to a real data set shows that the exponentiated power Lindley distribution can be used quite effectively in analyzing real lifetime data.
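For random data generation, one simple algorithm (not necessarily one of the three proposed by the authors) is inverse-transform sampling by numerical root finding, assuming the exponentiated power Lindley CDF is the power Lindley CDF raised to the power θ:

```python
import numpy as np
from scipy.optimize import brentq

def epl_cdf(x, lam, alpha, theta):
    """Assumed CDF: the power Lindley CDF raised to the power theta."""
    y = lam * x ** alpha
    return (1.0 - (1.0 + y / (lam + 1.0)) * np.exp(-y)) ** theta

def epl_sample(n, lam, alpha, theta, rng=None):
    """Inverse-transform sampling by root finding (one simple possibility,
    not necessarily one of the paper's three algorithms)."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(size=n)
    return np.array([brentq(lambda x: epl_cdf(x, lam, alpha, theta) - ui, 1e-12, 1e6)
                     for ui in u])

print(epl_sample(5, lam=1.0, alpha=1.5, theta=2.0, rng=np.random.default_rng(0)))
```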
Microgels: Structure, Dynamics, and Possible Applications.
NASA Astrophysics Data System (ADS)
McKenna, John; Streletzky, Kiril
2007-03-01
We cross-linked hydroxypropylcellulose (HPC) polymer chains to produce microgel nanoparticles and studied their structure and dynamics using dynamic light scattering spectroscopy. The complex nature of the fluid and the large size distribution of the particles render the typical characterization algorithm, CONTIN, ineffective and inconsistent. Instead, the particle spectra have been fit to a sum of stretched exponentials, each term representing a single mode and offering three parameters for analysis. The results of this analysis show that the microgels undergo a transition to fewer modes around 41 °C. The CONTIN size distribution analysis shows similar results, but with much less consistency and resolution. Our experiments show that microgel particles shrink through a volume phase transition. The shrinkage is reversible and depends on the amounts of cross-linker, salt, and polymer, and on the rate of heating. The reversibility of the microgel volume phase transition might be particularly useful for controlled drug delivery and release.
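Fitting a sum of stretched exponentials, each mode contributing an amplitude, a relaxation time, and a stretching exponent, can be sketched as follows; the correlation data here are synthetic stand-ins with assumed two-mode parameters, not the HPC measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_mode(t, a1, tau1, b1, a2, tau2, b2):
    # Sum of two stretched exponentials: one (amplitude, tau, beta) triple per mode.
    return a1 * np.exp(-(t / tau1) ** b1) + a2 * np.exp(-(t / tau2) ** b2)

t = np.logspace(-6, 0, 80)                       # lag times (s)
g1 = 0.6 * np.exp(-(t / 1e-4) ** 0.9) + 0.4 * np.exp(-(t / 1e-2) ** 0.6)
g1 += np.random.default_rng(3).normal(0, 0.005, t.size)   # measurement noise

p0 = (0.5, 1e-4, 1.0, 0.5, 1e-2, 0.7)
popt, _ = curve_fit(two_mode, t, g1, p0=p0, bounds=(1e-9, np.inf))
print("amplitudes %.2f / %.2f" % (popt[0], popt[3]))
print("relaxation times %.1e / %.1e s" % (popt[1], popt[4]))
print("stretching exponents %.2f / %.2f" % (popt[2], popt[5]))
```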
Evolution of Pinatubo aerosol near 19 km altitude over western North America
NASA Technical Reports Server (NTRS)
Goodman, Jindra; Snetsinger, K. G.; Pueschel, R. F.; Ferry, G. V.; Verma, S.
1994-01-01
Stratospheric aerosols, collected near 19 km altitude on wire impactors over western North America from August 20, 1991 to May 11, 1993, show strong influence of the June 1991 Mt. Pinatubo eruption. Lognormal size distributions are bimodal; each of the mode radii increases and reaches maximum value at about 15 months after eruption. The second (large particle) mode becomes well developed then, and about 40% of the droplets are larger than 0.4 micron radius. The eruption of Mt. Spurr (Alaska) may also have contributed to this. Sulfate mass loading decays exponentially (e-folding 216 days), similar to El Chichon. Silicates are present in samples only immediately after eruption. Two years after eruption, sulfate mass loading is about 0.4 micrograms/cu m, about an order of magnitude higher than background pre-volcanic values. Aerosol size distributions are still bimodal with a very well-defined large droplet mode.
2010-06-01
GMKPF represents a better and more flexible alternative to the Gaussian Maximum Likelihood (GML) and Exponential Maximum Likelihood (EML) estimators for clock offset estimation in non-Gaussian or non-exponential regimes; it yields more accurate results relative to GML and EML when the network delays are modeled in terms of a single non-Gaussian/non-exponential distribution or as a mixture of distributions.
Zhang, Guodong; Zeng, Zhigang; Hu, Junhao
2018-01-01
This paper is concerned with the global exponential dissipativity of memristive inertial neural networks with discrete and distributed time-varying delays. By constructing appropriate Lyapunov-Krasovskii functionals, some new sufficient conditions ensuring global exponential dissipativity of memristive inertial neural networks are derived. Moreover, globally exponentially attractive sets and positive invariant sets are also presented. In addition, the new results complement and extend earlier publications on conventional and memristive neural network dynamical systems. Finally, numerical simulations are given to illustrate the effectiveness of the obtained results.
NASA Astrophysics Data System (ADS)
He, Xiaozhou; Wang, Yin; Tong, Penger
2018-05-01
Non-Gaussian fluctuations with an exponential tail in their probability density function (PDF) are often observed in nonequilibrium steady states (NESSs), and one does not understand why they appear so often. Turbulent Rayleigh-Bénard convection (RBC) is an example of such a NESS, in which the measured PDF P(δT) of temperature fluctuations δT in the central region of the flow has a long exponential tail. Here we show that, because of the dynamic heterogeneity in RBC, the exponential PDF is generated by a convolution of a set of dynamic modes conditioned on a constant local thermal dissipation rate ɛ. The conditional PDF G(δT|ɛ) of δT under a constant ɛ is found to be of Gaussian form, and its variance σ_T² for different values of ɛ follows an exponential distribution. The convolution of the two distribution functions gives rise to the exponential PDF P(δT). This work thus provides a physical mechanism for the observed exponential distribution of δT in RBC and also sheds light on the origin of non-Gaussian fluctuations in other NESSs.
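The convolution mechanism described here is easy to reproduce numerically: if the variance of a zero-mean Gaussian is itself exponentially distributed, the marginal is a Laplace (two-sided exponential) distribution. A minimal check:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000
var = rng.exponential(scale=1.0, size=n)   # local variance sigma_T^2 ~ exponential
dT = rng.normal(0.0, np.sqrt(var))         # Gaussian fluctuation given that variance

# The marginal should be Laplace: log P(dT) linear in |dT| with slope -sqrt(2)
hist, edges = np.histogram(dT, bins=200, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = hist > 0
slope = np.polyfit(np.abs(centers[mask]), np.log(hist[mask]), 1)[0]
print(f"log-PDF slope vs |dT|: {slope:.3f} (expected about -sqrt(2) = -1.414)")
```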
Johnson, Ian R.; Thornley, John H. M.; Frantz, Jonathan M.; Bugbee, Bruce
2010-01-01
Background and Aims: The distribution of photosynthetic enzymes, or nitrogen, through the canopy affects canopy photosynthesis, as well as plant quality and nitrogen demand. Most canopy photosynthesis models assume an exponential distribution of nitrogen, or protein, through the canopy, although this is rarely consistent with experimental observation. Previous optimization schemes to derive the nitrogen distribution through the canopy generally focus on the distribution of a fixed amount of total nitrogen, which fails to account for the variation in both the actual quantity of nitrogen in response to environmental conditions and the interaction of photosynthesis and respiration at similar levels of complexity.

Model: A model of canopy photosynthesis is presented for C3 and C4 canopies that considers a balanced approach between photosynthesis and respiration as well as plant carbon partitioning. Protein distribution is related to irradiance in the canopy by a flexible equation for which the exponential distribution is a special case. The model is designed to be simple to parameterize for crop, pasture and ecosystem studies. The amount and distribution of protein that maximizes canopy net photosynthesis is calculated.

Key Results: The optimum protein distribution is not exponential, but is quite linear near the top of the canopy, which is consistent with experimental observations. The overall concentration within the canopy is dependent on environmental conditions, including the distribution of direct and diffuse components of irradiance.

Conclusions: The widely used exponential distribution of nitrogen or protein through the canopy is generally inappropriate. The model derives the optimum distribution with characteristics that are consistent with observation, so overcoming limitations of using the exponential distribution. Although canopies may not always operate at an optimum, optimization analysis provides valuable insight into plant acclimation to environmental conditions. Protein distribution has implications for the prediction of carbon assimilation, plant quality and nitrogen demand.
Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A.; Ono, Yutaka
2016-01-01
Background: Several studies have shown that total depressive symptom scores in the general population approximate an exponential pattern, except for the lower end of the distribution. The Center for Epidemiologic Studies Depression Scale (CES-D) consists of 20 items, each of which may take on four scores: "rarely," "some," "occasionally," and "most of the time." Recently, we reported that the item responses for 16 negative affect items commonly exhibit exponential patterns, except for the level of "rarely," leading us to hypothesize that the item responses at the level of "rarely" may be related to the non-exponential pattern typical of the lower end of the distribution. To verify this hypothesis, we investigated how the item responses contribute to the distribution of the sum of the item scores.

Methods: Data collected from 21,040 subjects who had completed the CES-D questionnaire as part of a Japanese national survey were analyzed. To assess the item responses of negative affect items, we used a parameter r, which denotes the ratio of "rarely" to "some" in each item response. The distributions of the sum of negative affect items in various combinations were analyzed using log-normal scales and curve fitting.

Results: The sum of the item scores approximated an exponential pattern regardless of the combination of items, whereas, at the lower end of the distributions, there was a clear divergence between the actual data and the predicted exponential pattern. At the lower end of the distributions, the sum of the item scores with high values of r exhibited higher scores than predicted from the exponential pattern, whereas the sum of the item scores with low values of r exhibited lower scores than predicted.

Conclusions: The distributional pattern of the sum of the item scores could be predicted from the item responses of such items.
Sadasivan, Chander; Brownstein, Jeremy; Patel, Bhumika; Dholakia, Ronak; Santore, Joseph; Al-Mufti, Fawaz; Puig, Enrique; Rakian, Audrey; Fernandez-Prada, Kenneth D; Elhammady, Mohamed S; Farhat, Hamad; Fiorella, David J; Woo, Henry H; Aziz-Sultan, Mohammad A; Lieber, Baruch B
2013-03-01
Endovascular coiling of cerebral aneurysms remains limited by coil compaction and associated recanalization. Recent coil designs which effect higher packing densities may be far from optimal, because the hemodynamic forces causing compaction are not well understood and detailed data regarding the location and distribution of coil masses are unavailable. We present an in vitro methodology to characterize coil masses deployed within aneurysms by quantifying intra-aneurysmal void spaces. Eight identical aneurysms were packed with coils by both balloon- and stent-assist techniques. The samples were embedded, sequentially sectioned and imaged. Empty spaces between the coils were numerically filled with circles (2D) in the planar images and with spheres (3D) in the three-dimensional composite images. The 2D and 3D void size histograms were analyzed for local variations and by fitting theoretical probability distribution functions. Balloon-assist packing densities (31 ± 2%) were lower (p = 0.04) than those of the stent-assist group (40 ± 7%). The maximum and average 2D and 3D void sizes were higher (p = 0.03 to 0.05) in the balloon-assist group than in the stent-assist group. None of the void size histograms were normally distributed; theoretical probability distribution fits suggest that the histograms are most probably exponentially distributed, with decay constants of 6-10 mm. Significant (p ≤ 0.001 to p = 0.03) spatial trends were noted in the void sizes, but correlation coefficients were generally low (|r| ≤ 0.35). The methodology we present can provide valuable input data for numerical calculations of hemodynamic forces impinging on intra-aneurysmal coil masses and can be used to compare and optimize coil configurations as well as coiling techniques.
Dehghani, Nima; Hatsopoulos, Nicholas G.; Haga, Zach D.; Parker, Rebecca A.; Greger, Bradley; Halgren, Eric; Cash, Sydney S.; Destexhe, Alain
2012-01-01
Self-organized critical states are found in many natural systems, from earthquakes to forest fires, and they have also been observed in neural systems, particularly in neuronal cultures. However, the presence of critical states in the awake brain remains controversial. Here, we compared avalanche analyses performed on different in vivo preparations during wakefulness, slow-wave sleep, and REM sleep, using high-density electrode arrays in cat motor cortex (96 electrodes), monkey motor and premotor cortex, and human temporal cortex (96 electrodes) in epileptic patients. For neuronal avalanches defined from units (up to 160 single units), the avalanche sizes never clearly scaled as a power law, but rather scaled exponentially or displayed intermediate scaling. We also analyzed the dynamics of local field potentials (LFPs), in particular LFP negative peaks (nLFPs), across the different electrodes (up to 96 sites in temporal cortex, or up to 128 sites in adjacent motor and premotor cortices). In this case, the avalanches defined from nLFPs displayed power-law scaling in double-logarithmic representations, as reported previously in monkey. However, avalanches defined from positive LFP (pLFP) peaks, which are less directly related to neuronal firing, also displayed apparent power-law scaling. Closer examination of this scaling using the more reliable cumulative distribution function (CDF) and other rigorous statistical measures did not confirm power-law scaling. The same pattern was seen for cat, monkey, and human, as well as for the different brain states of wakefulness and sleep. We also tested alternative distributions; multiple exponential fitting yielded optimal fits of the avalanche dynamics with bi-exponential distributions. Collectively, these results show no clear evidence for power-law scaling or self-organized critical states in the awake and sleeping brain of mammals, from cat to man.
Formation of microbeads during vapor explosions of Field's metal in water
NASA Astrophysics Data System (ADS)
Kouraytem, N.; Li, E. Q.; Thoroddsen, S. T.
2016-06-01
We use high-speed video imaging to investigate vapor explosions during the impact of a molten Field's metal drop onto a pool of water. These explosions occur for temperatures above the Leidenfrost temperature and are observed to occur in up to three stages as the metal temperature is increased, with each explosion being more powerful than the preceding one. The Field's metal drop breaks up into numerous microbeads with an exponential size distribution, in contrast to tin droplets, where the vapor explosion deforms the metal to form porous solid structures. We compare the characteristic bead size to the wavelength of the fastest growing mode of the Rayleigh-Taylor instability.
A Process-Based Transport-Distance Model of Aeolian Transport
NASA Astrophysics Data System (ADS)
Naylor, A. K.; Okin, G.; Wainwright, J.; Parsons, A. J.
2017-12-01
We present a new approach to modeling aeolian transport based on transport distance. Particle fluxes are based on statistical probabilities of particle detachment and distributions of transport lengths, which are functions of particle size classes. A computational saltation model is used to simulate transport distances over a variety of sizes. These are fit to an exponential distribution, which has the advantages of computational economy, concordance with current field measurements, and a meaningful relationship to theoretical assumptions about mean and median particle transport distance. This novel approach includes particle-particle interactions, which are important for sustaining aeolian transport and dust emission. Results from this model are compared with results from both bulk- and particle-sized-specific transport equations as well as empirical wind tunnel studies. The transport-distance approach has been successfully used for hydraulic processes, and extending this methodology from hydraulic to aeolian transport opens up the possibility of modeling joint transport by wind and water using consistent physics. Particularly in nutrient-limited environments, modeling the joint action of aeolian and hydraulic transport is essential for understanding the spatial distribution of biomass across landscapes and how it responds to climatic variability and change.
NASA Astrophysics Data System (ADS)
Verma, Arjun; Privman, Vladimir
2018-02-01
We study the approach to the large-time jammed state of deposited particles in the model of random sequential adsorption. The convergence laws are usually derived from the argument of Pomeau, which assumes that, at large enough times, deposition is dominated by small landing regions into each of which only a single particle can be deposited without overlapping earlier deposited particles, and that after a certain time such regions are no longer created by depositions in larger gaps. A second assumption has been that the size distribution of gaps open for particle-center landing in this large-time small-gap regime is finite in the limit of zero gap size. We report numerical Monte Carlo studies of a recently introduced model of random sequential adsorption on patterned one-dimensional substrates which suggest that the second assumption must be generalized. We argue that a region exists in the parameter space of the studied model in which the gap-size distribution in the Pomeau large-time regime actually vanishes linearly at zero gap size. In another region, the distribution develops a threshold property, i.e., there are no gaps below a certain size. We discuss the implications of these findings for new asymptotic power-law and exponential-modified-by-a-power-law convergences to jamming in irreversible one-dimensional deposition.
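For the classic homogeneous 1D case that this work generalizes, the familiar ~1/t convergence to jamming can be seen directly in a naive continuum simulation of unit rods ("car parking"). The sketch below uses rejection sampling with an interval-overlap check; Rényi's jamming coverage ≈ 0.7476 is used as the reference, and finite-size noise affects the late-time values:

```python
import bisect
import numpy as np

rng = np.random.default_rng(2)

def rsa_coverage(L=20_000.0, t_checks=(5, 10, 20, 40)):
    """Continuum RSA of unit rods on [0, L], attempts at unit rate per unit length.
    Returns {time: coverage} at the requested times."""
    lefts = []                                   # sorted left ends of adsorbed rods
    n = rng.poisson(L * max(t_checks))           # total number of attempts
    times = np.sort(rng.uniform(0, max(t_checks), n))
    xs = rng.uniform(0, L - 1.0, n)
    out, k = {}, 0
    for tc in t_checks:
        while k < n and times[k] <= tc:
            x = xs[k]; k += 1
            j = bisect.bisect_left(lefts, x)
            free = ((j == len(lefts) or lefts[j] >= x + 1.0) and
                    (j == 0 or lefts[j - 1] + 1.0 <= x))
            if free:
                lefts.insert(j, x)               # rod [x, x+1) is adsorbed
        out[tc] = len(lefts) / L
    return out

RENYI = 0.7475979  # Renyi's jamming coverage for the 1D car-parking problem
for tc, theta in rsa_coverage().items():
    print(f"t = {tc:3d}   (theta_J - theta(t)) * t = {(RENYI - theta) * tc:.3f}")
```

The product staying roughly constant as t doubles is the signature of the 1/t law; the modified gap statistics on the patterned substrates studied above change this law.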
NASA Astrophysics Data System (ADS)
Carlsson, Philip T. M.; Zeuch, Thomas
2018-03-01
We have developed a new model, building on our existing kinetic gas-phase models, to simulate the experimental particle size distributions emerging in dry supersaturated H2SO4 vapor homogeneously produced by rapid oxidation of SO2 through stabilized Criegee intermediates from 2-butene ozonolysis. We use a sectional method for simulating the particle dynamics. The particle treatment in the model is based on first principles and takes into account the transition from the kinetic to the diffusion-limited regime. It captures the temporal evolution of size distributions at the end of the ozonolysis experiment well, with a slight underrepresentation of coagulation effects for larger particle sizes. The model correctly predicts the shape and the modes of the experimentally observed particle size distributions. The predicted modes show an extremely high sensitivity to the H2SO4 evaporation rates of the initially formed H2SO4 clusters (dimer to pentamer), which were arbitrarily restricted to decrease exponentially with increasing cluster size. In the future, the analysis presented in this work can be extended to allow a direct validation of quantum-chemically predicted stabilities of small H2SO4 clusters, which are believed to initiate a significant fraction of atmospheric new particle formation events. We discuss the prospects and possible limitations of the presented approach.
Kinetic behaviours of aggregate growth driven by time-dependent migration, birth and death
NASA Astrophysics Data System (ADS)
Zhu, Sheng-Qing; Yang, Shun-You; Ke, Jianhong; Lin, Zhenquan
2008-12-01
We propose a dynamic growth model to mimic some social phenomena, such as the evolution of city populations, in which monomer migrations occur between any two aggregates and monomer birth/death can simultaneously occur in each aggregate. Considering that the rate kernels of the migration, birth and death processes may change with time, we assume that the migration rate kernel is ijf(t), and the self-birth and death rate kernels are ig₁(t) and ig₂(t), respectively. Based on the mean-field rate equation, we obtain the exact solution of this model and then discuss semi-quantitatively the scaling behaviour of the aggregate size distribution at large times. The results show that in the long-time limit, (i) if ∫₀ᵗ g₁(t′) dt′ / ∫₀ᵗ g₂(t′) dt′ ≥ 1 or exp{∫₀ᵗ [g₂(t′) − g₁(t′)] dt′} / ∫₀ᵗ f(t′) dt′ → 0, the aggregate size distribution aₖ(t) can obey a generalized scaling form; (ii) if ∫₀ᵗ g₁(t′) dt′ / ∫₀ᵗ g₂(t′) dt′ → 0 and exp{∫₀ᵗ [g₂(t′) − g₁(t′)] dt′} / ∫₀ᵗ f(t′) dt′ → ∞, aₖ(t) can take a scale-free form and decay exponentially in size k; (iii) aₖ(t) satisfies a modified scaling law in the remaining cases. Moreover, the total mass of aggregates depends strongly on the net birth rate g₁(t) − g₂(t) and evolves exponentially as exp{∫₀ᵗ [g₁(t′) − g₂(t′)] dt′}, which is in qualitative agreement with the evolution of the total population of a country in the real world.
A short note on the maximal point-biserial correlation under non-normality.
Cheng, Ying; Liu, Haiyan
2016-11-01
The aim of this paper is to derive the maximal point-biserial correlation under non-normality. Several widely used non-normal distributions are considered, namely the uniform distribution, t-distribution, exponential distribution, and a mixture of two normal distributions. Results show that the maximal point-biserial correlation, depending on the non-normal continuous variable underlying the binary manifest variable, may not be a function of p (the probability that the dichotomous variable takes the value 1), can be symmetric or non-symmetric around p = .5, and may still lie in the range from -1.0 to 1.0. Researchers should therefore exercise caution when interpreting sample point-biserial correlation coefficients based on the popular beliefs that the maximal point-biserial correlation is always smaller than 1, and that the size of the correlation is further restricted as p deviates from .5.
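The point about non-normal latent variables can be illustrated by direct simulation: for a fixed p, the point-biserial correlation is maximized when the binary variable flags exactly the top-p fraction of the continuous one. With an exponential latent variable (illustrative choice), the maximum is visibly asymmetric in p:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.exponential(size=1_000_000)       # non-normal latent variable (illustrative)

for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    cut = np.quantile(y, 1.0 - p)         # dichotomize so that P(X = 1) = p
    x = (y > cut).astype(float)           # comonotone coding maximizes the correlation
    r = np.corrcoef(x, y)[0, 1]
    print(f"p = {p:.1f}   maximal point-biserial = {r:.3f}")
```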
Statistical theory on the analytical form of cloud particle size distributions
NASA Astrophysics Data System (ADS)
Wu, Wei; McFarquhar, Greg
2017-11-01
Several analytical forms of cloud particle size distributions (PSDs) have been used in numerical modeling and remote sensing retrieval studies of clouds and precipitation, including exponential, gamma, lognormal, and Weibull distributions. However, there is no satisfying physical explanation as to why certain distribution forms preferentially occur instead of others. Theoretically, the analytical form of a PSD can be derived by directly solving the general dynamic equation, but no analytical solutions have been found yet. Instead of a process-level approach, the use of the principle of maximum entropy (MaxEnt) for determining the analytical form of PSDs is examined here from a system perspective. The issue of variability under coordinate transformations that arises when using the Gibbs/Shannon definition of entropy is identified, and the use of the concept of relative entropy to avoid these problems is discussed. Focusing on cloud physics, the four-parameter generalized gamma distribution is proposed as the analytical form of a PSD using the principle of maximum (relative) entropy, with assumptions of power-law relations between state variables, scale invariance, and a further constraint on the expectation of one state variable (e.g., bulk water mass). DOE ASR.
NASA Astrophysics Data System (ADS)
Zhao, Pei; Shao, Ming-an; Horton, Robert
2011-02-01
Soil particle-size distributions (PSDs) have been used to estimate soil hydraulic properties. Various parametric PSD models have been proposed to describe the soil PSD from sparse experimental data, and it is important to determine which PSD model best represents specific soils. Fourteen PSD models were examined in order to determine the best model for representing the deposited soils adjacent to dams on the China Loess Plateau: the Skaggs (S-1, S-2, and S-3), fractal (FR), Jaky (J), Lima and Silva (LS), Morgan (M), Gompertz (G), logarithm (L), exponential (E), log-exponential (LE), Weibull (W), van Genuchten type (VG), and Fredlund (F) models. Four hundred and eighty samples were obtained from soils deposited in the Liudaogou catchment. The coefficient of determination (R²), Akaike's information criterion (AIC), and a modified AIC (mAIC) were used. Based upon R² and AIC, the three- and four-parameter models were both good at describing the PSDs of the deposited soils, and the LE, FR, and E models were the poorest. However, the mAIC, used in conjunction with R² and AIC to emphasize the effect of the number of parameters, indicated that the W model was optimal for describing the PSDs of the deposited soils. This analysis is also helpful for identifying the best model in other settings, and our results are applicable to the China Loess Plateau.
Variability of space climate and its extremes with successive solar cycles
NASA Astrophysics Data System (ADS)
Chapman, Sandra; Hush, Phillip; Tindale, Elisabeth; Dunlop, Malcolm; Watkins, Nicholas
2016-04-01
Auroral geomagnetic indices coupled with in situ solar wind monitors provide a comprehensive data set spanning several solar cycles. Space climate can be considered as the distribution of space weather. We can then characterize these observations in terms of changing space climate by quantifying how the statistical properties of ensembles of the observed variables vary between different phases of the solar cycle. We first consider the AE index burst distribution. Bursts are constructed by thresholding the AE time series; the size of a burst is the sum of the excess in the time series over each time interval during which the threshold is exceeded. The distribution of burst sizes has two components, with a crossover in behaviour at thresholds ≈ 1000 nT. Above this threshold, we find [1] a range over which the mean burst size is almost constant with threshold for both solar maxima and minima. The burst size distribution of the largest events has an exponential functional form. The relative likelihood of these large events varies from one solar maximum and minimum to the next. If the relative overall activity of a solar maximum/minimum can be estimated, these results then constrain the likelihood of extreme events of a given size for that solar maximum/minimum. We next develop and apply a methodology to quantify how the full distribution of geomagnetic indices and upstream solar wind observables changes between and across different solar cycles. This methodology [2] estimates how different quantiles of the distribution, or equivalently the return times of events of a given size, are changing. [1] Hush, P., S. C. Chapman, M. W. Dunlop, and N. W. Watkins (2015), Robust statistical properties of the size of large burst events in AE, Geophys. Res. Lett., 42, doi:10.1002/2015GL066277. [2] Chapman, S. C., D. A. Stainforth, N. W. Watkins (2013), On estimating long term local climate trends, Phil. Trans. Royal Soc. A, 371, 20120287, doi:10.1098/rsta.2012.0287.
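The burst construction defined in this abstract (threshold the series, then sum the excess over each excursion) is compact to implement. The series below is a synthetic bursty stand-in, not the AE index, and the thresholds are taken as quantiles:

```python
import numpy as np

def burst_sizes(series, threshold):
    """Sum of the excess over the threshold within each contiguous excursion."""
    excess = np.clip(series - threshold, 0.0, None)
    above = excess > 0
    edges = np.flatnonzero(np.diff(above.astype(int))) + 1
    return np.array([seg.sum() for seg in np.split(excess, edges) if seg[0] > 0])

rng = np.random.default_rng(5)
x, level = np.empty(200_000), 0.0
for i in range(x.size):
    level = 0.98 * level + rng.exponential(20.0)   # persistent, bursty stand-in
    x[i] = level

for q in (0.90, 0.95, 0.99):
    thr = np.quantile(x, q)
    s = burst_sizes(x, thr)
    print(f"threshold {thr:8.1f}: {s.size:5d} bursts, mean burst size {s.mean():10.1f}")
```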
NASA Technical Reports Server (NTRS)
Peters, C. (Principal Investigator)
1980-01-01
A general theorem is given which establishes the existence and uniqueness of a consistent solution of the likelihood equations given a sequence of independent random vectors whose distributions are not identical but share the same parameter set. In addition, it is shown that the consistent solution is an MLE and that it is asymptotically normal and efficient. Two applications are discussed: one in which independent observations of a normal random vector have missing components, and the other in which the parameters in a mixture from an exponential family are estimated using independent homogeneous sample blocks of different sizes.
Moser, Barry Kurt; Halabi, Susan
2013-01-01
In this paper we develop the methodology for designing clinical trials with any factorial arrangement when the primary outcome is time to event. We provide a matrix formulation for calculating the sample size and study duration necessary to test any effect with a pre-specified type I error rate and power. Assuming that the time to event follows an exponential distribution, we describe the relationships between the effect size, the power, and the sample size. We present examples for illustration purposes. We provide a simulation study to verify the numerical calculations of the expected number of events and the duration of the trial. The change in power produced by a reduced number of observations or by accruing no patients to certain factorial combinations is also described.
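For the exponential special case, a back-of-envelope version of such a calculation (for a single two-arm comparison rather than the paper's general factorial matrix formulation) combines Schoenfeld's event formula with the exponential event probability under uniform accrual; all numbers below are assumptions for illustration:

```python
import numpy as np
from scipy.stats import norm

def events_needed(hr, alpha=0.05, power=0.80):
    """Schoenfeld's approximation: events required for a 1:1 two-arm log-rank test."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return 4.0 * (za + zb) ** 2 / np.log(hr) ** 2

def prob_event(lam, accrual, followup):
    """P(event observed) for Exp(lam) failure times with uniform accrual over
    [0, accrual] and administrative censoring `followup` after accrual ends."""
    a, f = accrual, followup
    return 1.0 - (np.exp(-lam * f) - np.exp(-lam * (a + f))) / (lam * a)

d = events_needed(hr=0.75)                       # assumed hazard ratio
lam = np.log(2) / 24.0                           # assumed median survival: 24 months
p = prob_event(lam, accrual=36.0, followup=24.0) # assumed design, in months
print(f"events needed: {np.ceil(d):.0f}, total sample size: {np.ceil(d / p):.0f}")
```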
Modeling of mixing processes: Fluids, particulates, and powders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ottino, J.M.; Hansen, S.
Work under this grant involves two main areas: (1) Mixing of Viscous Liquids, comprising aggregation, fragmentation and dispersion, and (2) Mixing of Powders. In order to produce a coherent self-contained picture, we report primarily on results obtained under (1), and within this area, mostly on computational studies of particle aggregation in regular and chaotic flows. Numerical simulations show that the average cluster size of compact clusters grows algebraically, while the average cluster size of fractal clusters grows exponentially; companion mathematical arguments are used to describe the initial growth of average cluster size and polydispersity. It is found that when the system is well mixed and the capture radius is independent of mass, the polydispersity is constant at long times and the cluster size distribution is self-similar. Furthermore, our simulations indicate that the fractal nature of the clusters is dependent upon the mixing.
NASA Astrophysics Data System (ADS)
Burel, Maxym; Martin, Sylvain; Bonnefoy, Olivier
2017-06-01
We present the results of an experimental study of the jamming/flowing transition. A suspension of neutrally buoyant large particles flows in a horizontal rectangular duct, where an artificial restriction triggers jamming. We show that the avalanche size distribution is exponential, that is, memoryless. We further demonstrate that the avalanche size diverges as the restriction size approaches a critical value, and that this divergence is well described by a power law. The parameters (critical opening size and divergence velocity) are compared to literature values and show a strong similarity with other systems. Another result of this paper is the study of the influence of particle morphology. We show that, for a moderate restriction size, the dead zone formed just upstream of the restriction is larger for angular particles but, paradoxically, that the avalanche size is larger for polyhedra than for spheres by at least one order of magnitude.
Dose coefficients in pediatric and adult abdominopelvic CT based on 100 patient models.
Tian, Xiaoyu; Li, Xiang; Segars, W Paul; Frush, Donald P; Paulson, Erik K; Samei, Ehsan
2013-12-21
Recent studies have shown the feasibility of estimating patient dose from a CT exam using CTDI(vol)-normalized organ dose (denoted as h), DLP-normalized effective dose (denoted as k), and DLP-normalized risk index (denoted as q). However, previous studies were limited to a small number of phantom models. The purpose of this work was to provide dose coefficients (h, k, and q) across a large number of computational models covering a broad range of patient anatomy, age, size percentile, and gender. The study consisted of 100 patient computer models (age range, 0 to 78 y.o.; weight range, 2-180 kg) including 42 pediatric models (age range, 0 to 16 y.o.; weight range, 2-80 kg) and 58 adult models (age range, 18 to 78 y.o.; weight range, 57-180 kg). Multi-detector array CT scanners from two commercial manufacturers (LightSpeed VCT, GE Healthcare; SOMATOM Definition Flash, Siemens Healthcare) were included. A previously validated Monte Carlo program was used to simulate organ dose for each patient model and each scanner, from which h, k, and q were derived. The relationships between h, k, and q and patient characteristics (size, age, and gender) were ascertained, and the differences in conversion coefficients across the scanners were further characterized. CTDI(vol)-normalized organ dose (h) showed an exponential decrease with increasing patient size. For organs within the image coverage, the average differences of h across scanners were less than 15%; that value increased to 29% for organs on the periphery or outside the image coverage, and was 8% for distributed organs. The DLP-normalized effective dose (k) decreased exponentially with increasing patient size. For a given gender, the DLP-normalized risk index (q) showed an exponential decrease with both increasing patient size and patient age. The average differences in k and q across scanners were 8% and 10%, respectively. This study demonstrated that knowledge of patient information and CTDIvol/DLP values may be used to estimate organ dose, effective dose, and risk index in abdominopelvic CT, based on the coefficients derived from a large population of pediatric and adult patients.
Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Ono, Yutaka; Furukawa, Toshiaki A.
2017-01-01
Background: Several recent studies have shown that total scores on depressive symptom measures in a general population approximate an exponential pattern except for the lower end of the distribution. Furthermore, we confirmed that the exponential pattern is present for the individual item responses on the Center for Epidemiologic Studies Depression Scale (CES-D). To confirm the reproducibility of such findings, we investigated the total score distribution and item responses of the Kessler Screening Scale for Psychological Distress (K6) in a nationally representative study.

Methods: Data were drawn from the National Survey of Midlife Development in the United States (MIDUS), which comprises four subsamples: (1) a national random digit dialing (RDD) sample, (2) oversamples from five metropolitan areas, (3) siblings of individuals from the RDD sample, and (4) a national RDD sample of twin pairs. K6 items are scored using a 5-point scale: "none of the time," "a little of the time," "some of the time," "most of the time," and "all of the time." The pattern of the total score distribution and item responses was analyzed using graphical analysis and an exponential regression model.

Results: The total score distributions of the four subsamples exhibited an exponential pattern with similar rate parameters. The item responses of the K6 approximated a linear pattern from "a little of the time" to "all of the time" on log-normal scales, while the "none of the time" response was not related to this exponential pattern.

Discussion: The total score distribution and item responses of the K6 showed exponential patterns, consistent with other depressive symptom scales.
Parabolic replicator dynamics and the principle of minimum Tsallis information gain
2013-01-01
Background: Non-linear, parabolic (sub-exponential) and hyperbolic (super-exponential) models of prebiological evolution of molecular replicators have been proposed and extensively studied. The parabolic models appear to be the most realistic approximations of real-life replicator systems, due primarily to product inhibition. Unlike the more traditional exponential models, the distribution of individual frequencies in an evolving parabolic population is not described by the Maximum Entropy (MaxEnt) Principle in its traditional form, whereby the distribution with the maximum Shannon entropy is chosen among all the distributions that are possible under the given constraints. We sought to identify a more general form of the MaxEnt principle that would be applicable to parabolic growth.

Results: We consider a model of a population that reproduces according to the parabolic growth law and show that the frequencies of individuals in the population minimize the Tsallis relative entropy (non-additive information gain) at each time moment. Next, we consider a model of a parabolically growing population that maintains a constant total size and provide an "implicit" solution for this system. We show that in this case, the frequencies of the individuals in the population also minimize the Tsallis information gain at each moment of the "internal time" of the population.

Conclusions: The results of this analysis show that the general MaxEnt principle is the underlying law for the evolution of a broad class of replicator systems, including not only exponential but also parabolic and hyperbolic systems. The choice of the appropriate entropy (information) function depends on the growth dynamics of a particular class of systems. The Tsallis entropy is non-additive for independent subsystems, i.e. the information on the subsystems is insufficient to describe the system as a whole. In the context of prebiotic evolution, this "non-reductionist" nature of parabolic replicator systems might reflect the importance of group selection and competition between ensembles of cooperating replicators.

Reviewers: This article was reviewed by Viswanadham Sridhara (nominated by Claus Wilke), Purushottam Dixit (nominated by Sergei Maslov), and Nick Grishin. For the complete reviews, see the Reviewers' Reports section.
Venus small volcano classification and description
NASA Technical Reports Server (NTRS)
Aubele, J. C.
1993-01-01
The high resolution and global coverage of the Magellan radar image data set allows detailed study of the smallest volcanoes on the planet. A modified classification scheme for volcanoes less than 20 km in diameter is shown and described. It is based on observations of all members of the 556 significant clusters or fields of small volcanoes located and described by this author during data collection for the Magellan Volcanic and Magmatic Feature Catalog. This global study of approximately 10^4 volcanoes provides new information for refining small volcano classification based on individual characteristics. The total number of these volcanoes was estimated to be 10^5 to 10^6 planetwide based on pre-Magellan analysis of Venera 15/16, and during preparation of the global catalog, small volcanoes were identified individually or in clusters in every C1-MIDR mosaic of the Magellan data set. Basal diameter (based on 1000 measured edifices) generally ranges from 2 to 12 km with a mode of 3-4 km, and follows an exponential distribution similar to the size frequency distribution of seamounts as measured from GLORIA sonar images. This is a typical distribution for most size-limited natural phenomena, unlike impact craters, which follow a power law distribution and continue to increase in number indefinitely with decreasing size. Using an exponential distribution calculated from measured small volcanoes selected globally at random, we can calculate the total number possible given a minimum size. The paucity of edifice diameters less than 2 km may be due to inability to identify very small volcanic edifices in this data set; however, summit pits are recognizable at smaller diameters, and 2 km may represent a significant minimum diameter related to style of volcanic eruption. Guest et al. discussed four general types of small volcanic edifices on Venus: (1) small lava shields; (2) small volcanic cones; (3) small volcanic domes; and (4) scalloped margin domes ('ticks'). Steep-sided domes or 'pancake domes', larger than 20 km in diameter, were included with the small volcanic domes. For the purposes of this study, only volcanic edifices less than 20 km in diameter are discussed. This forms a convenient cutoff since most of the steep-sided domes ('pancake domes') and scalloped margin domes ('ticks') are 20 to 100 km in diameter, are much less numerous globally than are the smaller diameter volcanic edifices (2 to 3 orders of magnitude lower in total global number), and do not commonly occur in large clusters or fields of large numbers of edifices.
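The extrapolation step mentioned in the abstract, calculating the total number possible given a minimum size, amounts to fitting the exponential cumulative size-frequency relation N(≥d) = N0 exp(-βd) and evaluating it at the chosen minimum diameter. A hedged sketch with placeholder counts, not the catalog's fitted values:

```python
# Illustrative sketch of exponential size-frequency extrapolation:
# N(>= d) = N0 * exp(-beta * d). The (diameter, cumulative count) pairs
# below are hypothetical placeholders, not the Magellan catalog data.
import numpy as np

diameters = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])   # km (hypothetical)
counts_ge = np.array([9000, 4100, 1900, 850, 390, 180])  # cumulative counts

slope, log_n0 = np.polyfit(diameters, np.log(counts_ge), 1)
beta, n0 = -slope, np.exp(log_n0)

# Total number above an assumed minimum diameter of 2 km:
print(f"N(>= 2 km) ~ {n0 * np.exp(-beta * 2.0):,.0f} edifices")
```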
Investigation of non-Gaussian effects in the Brazilian option market
NASA Astrophysics Data System (ADS)
Sosa-Correa, William O.; Ramos, Antônio M. T.; Vasconcelos, Giovani L.
2018-04-01
An empirical study of the Brazilian option market is presented in light of three option pricing models, namely the Black-Scholes model, the exponential model, and a model based on a power law distribution, the so-called q-Gaussian distribution or Tsallis distribution. It is found that the q-Gaussian model performs better than the Black-Scholes model in about one third of the option chains analyzed. Among these cases, however, the exponential model performs better than the q-Gaussian model 75% of the time. The superiority of the exponential model over the q-Gaussian model is particularly impressive for options close to the expiration date, where its success rate rises above 90%.
Determination of the optimal sample size for a clinical trial accounting for the population size.
Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin
2017-07-01
The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Feng-Hua, Zhang; Gui-De, Zhou; Kun, Ma; Wen-Juan, Ma; Wen-Yuan, Cui; Bo, Zhang
2016-07-01
Previous studies have shown that, for the three main stages in the development and evolution of asymptotic giant branch (AGB) star s-process models, the neutron exposure distribution (DNE) in the nucleosynthesis region can always be considered an exponential function, i.e., ρ_AGB(τ) = (C/τ0) exp(-τ/τ0), over an effective range of neutron exposure values. However, the specific expressions of the proportion factor C and the mean neutron exposure τ0 in the exponential distribution function for different models are not completely determined in the related literature. By dissecting the basic method for obtaining the exponential DNE, and systematically analyzing the solution procedures for neutron exposure distribution functions in different stellar models, the general formulae for calculating C and τ0, together with their auxiliary equations, are derived. Given the discrete neutron exposure distribution Pk, the relationships of C and τ0 with the model parameters can be determined. The result of this study effectively solves the problem of analytically calculating the DNE in the current low-mass AGB star s-process nucleosynthesis model of 13C-pocket radiative burning.
NASA Astrophysics Data System (ADS)
van de Giesen, Nicolaas; Hut, Rolf; ten Veldhuis, Marie-claire
2017-04-01
If one can assume that drop size distributions can be effectively described by a generalized gamma function [1], one can estimate this function on the basis of the distribution of time intervals between drops hitting a certain area. The arrival of a single drop is relatively easy to measure with simple consumer devices such as cameras or piezoelectric elements. Here we present an open-hardware design for the electronics and statistical processing of an intervalometer that measures time intervals between drop arrivals. The specific hardware in this case is a piezoelectric element in an appropriate housing, combined with an instrumentation op-amp and an Arduino processor. Although it would not be too difficult to simply register the arrival times of all drops, it is more practical to only report the main statistics. For this purpose, all intervals below a certain threshold during a reporting interval are summed and counted. We also sum the scaled squares, cubes, and fourth powers of the intervals. On the basis of the first four moments, one can estimate the corresponding generalized gamma function and obtain some sense of the accuracy of the underlying assumptions. Special attention is needed to determine the lower threshold of the drop sizes that can be measured. This minimum size often varies over the area being monitored, such as is the case for piezoelectric elements. We describe a simple method to determine these (distributed) minimal drop sizes and present a bootstrap method to make the necessary corrections. Reference [1] Uijlenhoet, R., and J. N. M. Stricker. "A consistent rainfall parameterization based on the exponential raindrop size distribution." Journal of Hydrology 218, no. 3 (1999): 101-127.
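A minimal sketch of the reporting scheme described above: within a reporting interval, only intervals below the threshold are kept, and their count plus sums of powers are accumulated, from which raw moments follow. The function name, threshold, and scale factor are illustrative, not the published design.

```python
# Sketch (assumed names and parameters) of the on-board statistics: sum the
# scaled powers of sub-threshold drop intervals during a reporting interval,
# then derive the first four raw moments used to fit the generalized gamma.
import numpy as np

def accumulate(intervals, threshold=1.0, scale=1.0):
    t = np.asarray(intervals, float)
    t = t[t < threshold] / scale          # keep intervals below the threshold
    n = t.size
    sums = [(t**k).sum() for k in (1, 2, 3, 4)]
    raw_moments = [s / n for s in sums]   # basis for the gamma-function fit
    return n, raw_moments

n, moments = accumulate(np.random.default_rng(1).exponential(0.2, 1000))
print(n, moments)
```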
In situ fragmentation and rock particle sorting on arid hills
NASA Astrophysics Data System (ADS)
McGrath, Gavan S.; Nie, Zhengyao; Dyskin, Arcady; Byrd, Tia; Jenner, Rowan; Holbeche, Georgina; Hinz, Christoph
2013-03-01
Transport processes are often proposed to explain the sorting of rock particles on arid hillslopes, where mean rock particle size often decreases in the downslope direction. Here we show that in situ fragmentation of rock particles can also produce similar patterns. A total of 93,414 rock particles were digitized from 880 photographs of the surface of three mesa hills in the Great Sandy Desert, Australia. Rock particles were characterized by the projected Feret's diameter and circularity. Distance from the duricrust cap was found to be a more robust explanatory variable for diameter than the local hillslope gradient. Mean diameter decreased exponentially downslope, while the fractional area covered by rock particles decreased linearly. Rock particle diameters were distributed lognormally, with both the location and scale parameters decreasing approximately linearly downslope. Rock particle circularity distributions showed little change; only a slight shift in the mode to more circular particles was noted to occur downslope. A dynamic fragmentation model was used to assess whether in situ weathering alone could reproduce the observed downslope fining of diameters. Modeled and observed size distributions agreed well and both displayed a preferential loss of relatively large rock particles and an apparent approach to a terminal size distribution of the rocks downslope. We show this is consistent with a size effect in material strength, where large rocks are more susceptible to fatigue failure under stress than smaller rocks. In situ fragmentation therefore produces qualitatively similar patterns to those that would be expected to arise from selective transport.
Scaling in the distribution of intertrade durations of Chinese stocks
NASA Astrophysics Data System (ADS)
Jiang, Zhi-Qiang; Chen, Wei; Zhou, Wei-Xing
2008-10-01
The distribution of intertrade durations, defined as the waiting times between two consecutive transactions, is investigated based upon the limit order book data of 23 liquid Chinese stocks listed on the Shenzhen Stock Exchange over the whole year 2003. A scaling pattern is observed in the distributions of intertrade durations, where the empirical density functions of the normalized intertrade durations of all 23 stocks collapse onto a single curve. The scaling pattern is also observed in the intertrade duration distributions for filled and partially filled trades and in the conditional distributions. The ensemble distributions for all stocks are modeled by the Weibull and the Tsallis q-exponential distributions. Maximum likelihood estimation shows that the Weibull distribution outperforms the q-exponential for not-too-large intertrade durations, which account for more than 98.5% of the data. Alternatively, nonlinear least-squares estimation selects the q-exponential as a better model, in which the optimization is conducted on the distance between empirical and theoretical values of the logarithmic probability densities. Overall, the distribution of intertrade durations is Weibull in the body, followed by a power-law tail with an asymptotic tail exponent close to 3.
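The maximum likelihood comparison described above can be sketched as follows, with scipy's Lomax distribution standing in for the q-exponential (a q-exponential with q > 1) and synthetic Weibull data standing in for the normalized durations:

```python
# Hedged illustration (synthetic data, not the Shenzhen records): fit a
# Weibull and a Lomax (q-exponential with q > 1) by maximum likelihood and
# compare total log-likelihoods.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
durations = rng.weibull(0.8, size=20_000)
durations /= durations.mean()                     # mean-normalized durations

wb = stats.weibull_min.fit(durations, floc=0)
lx = stats.lomax.fit(durations, floc=0)

ll_wb = stats.weibull_min.logpdf(durations, *wb).sum()
ll_lx = stats.lomax.logpdf(durations, *lx).sum()
print(f"Weibull log-likelihood: {ll_wb:.1f}, q-exponential (Lomax): {ll_lx:.1f}")
```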
2012-09-01
used in this paper to compare probability density functions: the Lilliefors test and the Kullback-Leibler distance. The Lilliefors test is a goodness ... of interest in this study are the Rayleigh distribution and the exponential distribution. The Lilliefors test is used to test goodness-of-fit for ... Lilliefors test for goodness of fit with an exponential distribution. These results suggest that,
Exploring Explanations of Subglacial Bedform Sizes Using Statistical Models.
Hillier, John K; Kougioumtzoglou, Ioannis A; Stokes, Chris R; Smith, Michael J; Clark, Chris D; Spagnolo, Matteo S
2016-01-01
Sediments beneath modern ice sheets exert a key control on their flow, but are largely inaccessible except through geophysics or boreholes. In contrast, palaeo-ice sheet beds are accessible, and typically characterised by numerous bedforms. However, the interaction between bedforms and ice flow is poorly constrained and it is not clear how bedform sizes might reflect ice flow conditions. To better understand this link we present a first exploration of a variety of statistical models to explain the size distribution of some common subglacial bedforms (i.e., drumlins, ribbed moraine, MSGL). By considering a range of models, constructed to reflect key aspects of the physical processes, it is possible to infer that the size distributions are most effectively explained when the dynamics of ice-water-sediment interaction associated with bedform growth is fundamentally random. A 'stochastic instability' (SI) model, which integrates random bedform growth and shrinking through time with exponential growth, is preferred and is consistent with other observations of palaeo-bedforms and geophysical surveys of active ice sheets. Furthermore, we give a proof-of-concept demonstration that our statistical approach can bridge the gap between geomorphological observations and physical models, directly linking measurable size-frequency parameters to properties of ice sheet flow (e.g., ice velocity). Moreover, statistically developing existing models as proposed allows quantitative predictions to be made about sizes, making the models testable; a first illustration of this is given for a hypothesised repeat geophysical survey of bedforms under active ice. Thus, we further demonstrate the potential of size-frequency distributions of subglacial bedforms to assist the elucidation of subglacial processes and better constrain ice sheet models.
A mechanism producing power law etc. distributions
NASA Astrophysics Data System (ADS)
Li, Heling; Shen, Hongjun; Yang, Bin
2017-07-01
Power-law distributions play an increasingly important role in the study of complex systems. Starting from the intractability of complex systems, the idea of incomplete statistics is used and extended: three different exponential factors are introduced into the equations for the normalization condition, the statistical average, and the Shannon entropy, and probability distribution functions of exponential form, of power-law form, and of the product form of a power function and an exponential function are derived from the Shannon entropy and the maximum entropy principle. It is thus shown that the maximum entropy principle can entirely replace the equal-probability hypothesis. Because the power-law distribution and the product-form distribution, which cannot be derived via the equal-probability hypothesis, can be derived with the aid of the maximum entropy principle, it can be concluded that the maximum entropy principle is the more basic principle, one that embodies concepts more extensively and reveals the laws governing the motion of objects more fundamentally. The principle also reveals an intrinsic link between Nature and the various objects of human society and the principles they obey.
Extended q-Gaussian and q-exponential distributions from gamma random variables
NASA Astrophysics Data System (ADS)
Budini, Adrián A.
2015-05-01
The family of q-Gaussian and q-exponential probability densities fit the statistical behavior of diverse complex self-similar nonequilibrium systems. These distributions, independently of the underlying dynamics, can rigorously be obtained by maximizing Tsallis "nonextensive" entropy under appropriate constraints, as well as from superstatistical models. In this paper we provide an alternative and complementary scheme for deriving these objects. We show that q-Gaussian and q-exponential random variables can always be expressed as a function of two statistically independent gamma random variables with the same scale parameter. Their shape index determines the complexity parameter q. This result also allows us to define an extended family of asymmetric q-Gaussian and modified q-exponential densities, which reduce to the standard ones when the shape parameters are the same. Furthermore, we demonstrate that a simple change of variables always allows relating any of these distributions with a beta stochastic variable. The extended distributions are applied in the statistical description of different complex dynamics such as log-return signals in financial markets and motion of point defects in a fluid flow.
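As a numerical illustration of the closely related superstatistical route that the abstract mentions (not the paper's gamma-pair construction itself), a q-exponential with q > 1 can be sampled by drawing an exponential variable whose rate is gamma-distributed; the q-to-shape mapping in the comment is the standard superstatistics result, stated here as an assumption.

```python
# Superstatistics sketch: an exponential variable with a gamma-distributed
# rate follows a q-exponential (Lomax) law with a power-law tail. For gamma
# shape k the standard mapping gives q = (k + 2) / (k + 1).
import numpy as np

rng = np.random.default_rng(7)
shape, scale = 3.0, 1.0                        # gamma parameters of the rate
rates = rng.gamma(shape, scale, size=100_000)
x = rng.exponential(1.0 / rates)               # q-exponential samples

print(np.quantile(x, [0.5, 0.9, 0.999]))       # heavy tail vs. a pure exponential
```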
NASA Astrophysics Data System (ADS)
Jiménez, Santiago; Barroso, Jorge; Pina, Antonio; Ballester, Javier
2016-05-01
In spite of the relevance of residential heating burners in the global emission of soot particles to the atmosphere, relatively little information on their properties (concentration, size distribution) is available in the literature, and even less regarding the dependence of those properties on the operating conditions. Instead, the usual procedure to characterize those emissions is to measure the smoke opacity by several methods, among which the blackening of a paper after filtering a fixed amount of gas (Bacharach test) is predominant. In this work, the size distributions of the particles generated in the combustion of a variety of gaseous and liquid fuels in a laboratory facility equipped with commercial burners have been measured with a size classifier coupled to a particle counter in a broad range of operating conditions (air excesses), with simultaneous determination of the Bacharach index. The shape and evolution of the distribution with progressively smaller oxygen concentrations depends essentially on the state of the fuel: whereas the combustion of the gases results in monomodal distributions that 'shift' towards larger diameters, in the case of the gas-oils an ultrafine mode is always observed, and a secondary mode of coarse particles grows in relevance. In both cases, there is a strong, exponential correlation between the total mass concentration and the Bacharach opacity index, quite similar for both groups of fuels. The empirical expressions proposed may allow other researchers to at least estimate the emissions of numerous combustion facilities routinely characterized by their smoke opacities.
Not all nonnormal distributions are created equal: Improved theoretical and measurement precision.
Joo, Harry; Aguinis, Herman; Bradley, Kyle J
2017-07-01
We offer a four-category taxonomy of individual output distributions (i.e., distributions of cumulative results): (1) pure power law; (2) lognormal; (3) exponential tail (including exponential and power law with an exponential cutoff); and (4) symmetric or potentially symmetric (including normal, Poisson, and Weibull). The four categories are uniquely associated with mutually exclusive generative mechanisms: self-organized criticality, proportionate differentiation, incremental differentiation, and homogenization. We then introduce distribution pitting, a falsification-based method for comparing distributions to assess how well each one fits a given data set. In doing so, we also introduce decision rules to determine the likely dominant shape and generative mechanism among many that may operate concurrently. Next, we implement distribution pitting using 229 samples of individual output for several occupations (e.g., movie directors, writers, musicians, athletes, bank tellers, call center employees, grocery checkers, electrical fixture assemblers, and wirers). Results suggest that for 75% of our samples, exponential tail distributions and their generative mechanism (i.e., incremental differentiation) likely constitute the dominant distribution shape and explanation of nonnormally distributed individual output. This finding challenges past conclusions indicating the pervasiveness of other types of distributions and their generative mechanisms. Our results further contribute to theory by offering premises about the link between past and future individual output. For future research, our taxonomy and methodology can be used to pit distributions of other variables (e.g., organizational citizenship behaviors). Finally, we offer practical insights on how to increase overall individual output and produce more top performers. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
NASA Technical Reports Server (NTRS)
Heymsfield, Gerald M.; Tian, Lin; Li, Lihua; Srivastava, C.
2005-01-01
Two techniques for retrieving the slope and intercept parameters of an assumed exponential raindrop size distribution (RSD), vertical air velocity, and attenuation by precipitation and water vapor in light stratiform rain using observations by airborne, nadir looking dual-wavelength (X-band, 3.2 cm and W-band, 3.2 mm) radars are presented. In both techniques, the slope parameter of the RSD and the vertical air velocity are retrieved using only the mean Doppler velocities at the two wavelengths. In the first method, the intercept of the RSD is estimated from the observed reflectivity at the longer wavelength assuming no attenuation at that wavelength. The attenuation of the shorter wavelength radiation by precipitation and water vapor are retrieved using the observed reflectivity at the shorter wavelength. In the second technique, it is assumed that the longer wavelength suffers attenuation only in the melting band. Then, assuming a distribution of water vapor, the melting band attenuation at both wavelengths and the rain attenuation at the shorter wavelength are retrieved. Results of the retrievals are discussed and several physically meaningful results are presented.
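To make the role of the assumed exponential RSD concrete: in the Rayleigh regime, which applies at the longer wavelength, the reflectivity factor is the sixth moment of N(D) = N0 exp(-ΛD), i.e. Z = N0·6!/Λ^7, which is what ties the observed reflectivity to the intercept once the slope is known. A quick numerical check of this standard identity (with illustrative units, N0 in mm⁻¹ m⁻³ and Λ in mm⁻¹):

```python
# Worked check of a standard property of the exponential raindrop size
# distribution N(D) = N0*exp(-lam*D): the Rayleigh-regime reflectivity is
# its sixth moment, Z = N0 * 6! / lam**7.
import math
from scipy.integrate import quad

n0, lam = 8000.0, 2.0
z_analytic = n0 * math.factorial(6) / lam**7
z_numeric, _ = quad(lambda d: n0 * math.exp(-lam * d) * d**6, 0, math.inf)
print(z_analytic, z_numeric)  # the two values agree
```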
Beyond the power law: Uncovering stylized facts in interbank networks
NASA Astrophysics Data System (ADS)
Vandermarliere, Benjamin; Karas, Alexei; Ryckebusch, Jan; Schoors, Koen
2015-06-01
We use daily data on bilateral interbank exposures and monthly bank balance sheets to study network characteristics of the Russian interbank market over August 1998-October 2004. Specifically, we examine the distributions of (un)directed (un)weighted degree, nodal attributes (bank assets, capital and capital-to-assets ratio) and edge weights (loan size and counterparty exposure). We search for the theoretical distribution that fits the data best and report the "best" fit parameters. We observe that all studied distributions are heavy tailed. The fat tail typically contains 20% of the data and can be mostly described well by a truncated power law. Also the power law, stretched exponential and log-normal provide reasonably good fits to the tails of the data. In most cases, however, separating the bulk and tail parts of the data is hard, so we proceed to study the full range of the events. We find that the stretched exponential and the log-normal distributions fit the full range of the data best. These conclusions are robust to (1) whether we aggregate the data over a week, month, quarter or year; (2) whether we look at the "growth" versus "maturity" phases of interbank market development; and (3) with minor exceptions, whether we look at the "normal" versus "crisis" operation periods. In line with prior research, we find that the network topology changes greatly as the interbank market moves from a "normal" to a "crisis" operation period.
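A sketch of the kind of full-range model comparison reported above, using maximum likelihood fits of a stretched exponential (Weibull) and a log-normal and comparing total log-likelihoods; the data here are synthetic stand-ins, not the interbank records:

```python
# Illustrative full-range comparison: ML fits of a stretched exponential
# (Weibull) and a log-normal to positive-valued data, ranked by likelihood.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
data = rng.lognormal(mean=1.0, sigma=1.2, size=10_000)  # stand-in sample

weib = stats.weibull_min.fit(data, floc=0)   # stretched exponential
logn = stats.lognorm.fit(data, floc=0)

ll = {
    "stretched exponential": stats.weibull_min.logpdf(data, *weib).sum(),
    "log-normal": stats.lognorm.logpdf(data, *logn).sum(),
}
print(max(ll, key=ll.get), ll)
```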
Nonlinear stability of the 1D Boltzmann equation in a periodic box
NASA Astrophysics Data System (ADS)
Wu, Kung-Chien
2018-05-01
We study the nonlinear stability of the Boltzmann equation in a 1D periodic box whose size is set by the Knudsen number. The convergence rate differs between the small-time and large-time regions and is exponential in the large-time region; moreover, the exponential rate depends on the size of the domain (the Knudsen number). The problem is highly nonlinear, and hence more careful analysis is needed to control the nonlinear term.
Socio-Economic Instability and the Scaling of Energy Use with Population Size
DeLong, John P.; Burger, Oskar
2015-01-01
The size of the human population is relevant to the development of a sustainable world, yet the forces setting growth or declines in the human population are poorly understood. Generally, population growth rates depend on whether new individuals compete for the same energy (leading to Malthusian or density-dependent growth) or help to generate new energy (leading to exponential and super-exponential growth). It has been hypothesized that exponential and super-exponential growth in humans has resulted from carrying capacity, which is in part determined by energy availability, keeping pace with or exceeding the rate of population growth. We evaluated the relationship between energy use and population size for countries with long records of both and the world as a whole to assess whether energy yields are consistent with the idea of an increasing carrying capacity. We find that on average energy use has indeed kept pace with population size over long time periods. We also show, however, that the energy-population scaling exponent plummets during, and its temporal variability increases preceding, periods of social, political, technological, and environmental change. We suggest that efforts to increase the reliability of future energy yields may be essential for stabilizing both population growth and the global socio-economic system. PMID:26091499
An allometric scaling relation based on logistic growth of cities
NASA Astrophysics Data System (ADS)
Chen, Yanguang
2014-08-01
The relationships between urban area and population size have been empirically demonstrated to follow the scaling law of allometric growth. This allometric scaling is based on exponential growth of city size and can be termed "exponential allometry", which is associated with the concepts of fractals. However, both city population and urban area comply with the course of logistic growth rather than exponential growth. In this paper, I will present a new allometric scaling based on logistic growth to solve the abovementioned problem. The logistic growth is a process of replacement dynamics. Defining a pair of replacement quotients as new measurements, which are functions of urban area and population, we can derive an allometric scaling relation from the logistic processes of urban growth, which can be termed "logistic allometry". The exponential allometric relation between urban area and population is the approximate expression of the logistic allometric equation when the city size is not large enough. The proper range of the allometric scaling exponent value is reconsidered through the logistic process. Then, a medium-sized city of Henan Province, China, is employed as an example to validate the new allometric relation. The logistic allometry is helpful for further understanding the fractal property and self-organized process of urban evolution in the right perspective.
Observations of sea ice ridging in the Weddell Sea
NASA Astrophysics Data System (ADS)
Granberg, Hardy B.; Leppäranta, Matti
1999-11-01
Sea ice surface topography data were obtained by helicopter-borne laser profiling during the First Finnish Antarctic Expedition (FINNARP-89). The measurements were made near the ice margin at about 73°S, 27°W in the eastern Weddell Sea on December 31, 1989, and January 1, 1990. Five transects, ranging in length from 127 to 163 km and covering a total length of 724 km, are analyzed. With a lower cutoff of 0.91 m the overall ridge frequency was 8.4 ridges/km and the average ridge height was 1.32 m. The spatial variations in ridging were large; for 36 individual 20-km segments the frequencies were 2-16 ridges/km and the mean heights were 1.16-1.56 m. The frequencies and mean heights were weakly correlated. The distributions of the ridge heights followed the exponential distribution; the spacings did not pass tests for either the exponential or the lognormal distribution, but the latter was much closer. In the 20-km segments the areally averaged thickness of ridged ice was 0.51±0.28 m, ranging from 0.10 to 1.15 m. The observed ridge size and frequency are greater than those known for the Ross Sea. Compared with the central Arctic, the Weddell Sea ridging frequencies are similar but the ridge heights are smaller, possibly as a result of differences in snow accumulation.
Aston, Elizabeth; Channon, Alastair; Day, Charles; Knight, Christopher G.
2013-01-01
Understanding the effect of population size on the key parameters of evolution is particularly important for populations nearing extinction. There are evolutionary pressures to evolve sequences that are both fit and robust. At high mutation rates, individuals with greater mutational robustness can outcompete those with higher fitness. This is survival-of-the-flattest, and has been observed in digital organisms, theoretically, in simulated RNA evolution, and in RNA viruses. We introduce an algorithmic method capable of determining the relationship between population size, the critical mutation rate at which individuals with greater robustness to mutation are favoured over individuals with greater fitness, and the error threshold. Verification for this method is provided against analytical models for the error threshold. We show that the critical mutation rate for increasing haploid population sizes can be approximated by an exponential function, with much lower mutation rates tolerated by small populations. This is in contrast to previous studies which identified that critical mutation rate was independent of population size. The algorithm is extended to diploid populations in a system modelled on the biological process of meiosis. The results confirm that the relationship remains exponential, but show that both the critical mutation rate and error threshold are lower for diploids, rather than higher as might have been expected. Analyzing the transition from critical mutation rate to error threshold provides an improved definition of critical mutation rate. Natural populations with their numbers in decline can be expected to lose genetic material in line with the exponential model, accelerating and potentially irreversibly advancing their decline, and this could potentially affect extinction, recovery and population management strategy. The effect of population size is particularly strong in small populations with 100 individuals or less; the exponential model has significant potential in aiding population management to prevent local (and global) extinction events. PMID:24386200
Inouye, David I.; Ravikumar, Pradeep; Dhillon, Inderjit S.
2016-01-01
We develop Square Root Graphical Models (SQR), a novel class of parametric graphical models that provides multivariate generalizations of univariate exponential family distributions. Previous multivariate graphical models (Yang et al., 2015) did not allow positive dependencies for the exponential and Poisson generalizations. However, in many real-world datasets, variables clearly have positive dependencies. For example, the airport delay time in New York—modeled as an exponential distribution—is positively related to the delay time in Boston. With this motivation, we give an example of our model class derived from the univariate exponential distribution that allows for almost arbitrary positive and negative dependencies with only a mild condition on the parameter matrix—a condition akin to the positive definiteness of the Gaussian covariance matrix. Our Poisson generalization allows for both positive and negative dependencies without any constraints on the parameter values. We also develop parameter estimation methods using node-wise regressions with ℓ1 regularization and likelihood approximation methods using sampling. Finally, we demonstrate our exponential generalization on a synthetic dataset and a real-world dataset of airport delay times. PMID:27563373
Conditional optimal spacing in exponential distribution.
Park, Sangun
2006-12-01
In this paper, we propose the conditional optimal spacing, defined as the optimal spacing after specifying a predetermined order statistic. If we specify a censoring time, then the optimal inspection times for grouped inspection can be determined from this conditional optimal spacing. We take the exponential distribution as an example and provide a simple method for finding the conditional optimal spacing.
NASA Astrophysics Data System (ADS)
Ke, Jyh-Bin; Lee, Wen-Chiung; Wang, Kuo-Hsiung
2007-07-01
This paper presents the reliability and sensitivity analysis of a system with M primary units, W warm standby units, and R unreliable service stations, in which warm standby units may fail while switching to the primary state. Failure times of primary and warm standby units are assumed to have exponential distributions, and service times of the failed units are exponentially distributed. In addition, breakdown times and repair times of the service stations also follow exponential distributions. Expressions for the system reliability, R_Y(t), and the mean time to system failure, MTTF, are derived. Sensitivity analysis and relative sensitivity analysis of the system reliability and the mean time to failure with respect to system parameters are also investigated.
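For readers who want the mechanics: in Markovian reliability models of this kind, MTTF is obtained from the generator of the continuous-time Markov chain restricted to the working (transient) states, by solving Q_T m = -1. The sketch below uses a deliberately tiny two-state example with assumed rates, not the paper's M-unit, W-standby model.

```python
# Generic MTTF sketch (not the paper's exact model): restrict the CTMC
# generator to the transient states Q_T; expected absorption times solve
# Q_T @ m = -1, and MTTF is the entry of m for the initial state.
import numpy as np

lam, mu = 0.01, 0.5            # assumed failure and repair rates
# Toy chain: "2 units up" -> "1 up" -> failed (absorbing), repair 1up -> 2up.
q_t = np.array([[-2 * lam, 2 * lam],
                [mu, -(mu + lam)]])
m = np.linalg.solve(q_t, -np.ones(2))
print(f"MTTF from the all-up state: {m[0]:.1f} time units")
```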
An improved rainfall disaggregation technique for GCMs
NASA Astrophysics Data System (ADS)
Onof, C.; Mackay, N. G.; Oh, L.; Wheater, H. S.
1998-08-01
Meteorological models represent rainfall as a mean value for a grid square so that when the latter is large, a disaggregation scheme is required to represent the spatial variability of rainfall. In general circulation models (GCMs) this is based on an assumption of exponentiality of rainfall intensities and a fixed value of areal rainfall coverage, dependent on rainfall type. This paper examines these two assumptions on the basis of U.K. and U.S. radar data. Firstly, the coverage of an area is strongly dependent on its size, and this dependence exhibits a scaling law over a range of sizes. Secondly, the coverage is, of course, dependent on the resolution at which it is measured, although this dependence is weak at high resolutions. Thirdly, the time series of rainfall coverages has a long-tailed autocorrelation function which is comparable to that of the mean areal rainfalls. It is therefore possible to reproduce much of the temporal dependence of coverages by using a regression of the log of the mean rainfall on the log of the coverage. The exponential assumption is satisfactory in many cases but not able to reproduce some of the long-tailed dependence of some intensity distributions. Gamma and lognormal distributions provide a better fit in these cases, but they have their shortcomings and require a second parameter. An improved disaggregation scheme for GCMs is proposed which incorporates the previous findings to allow the coverage to be obtained for any area and any mean rainfall intensity. The parameters required are given and some of their seasonal behavior is analyzed.
Kinetics of a Migration-Driven Aggregation-Fragmentation Process
NASA Astrophysics Data System (ADS)
Zhuang, You-Yi; Lin, Zhen-Quan; Ke, Jian-Hong
2003-08-01
We propose a reversible model of the migration-driven aggregation-fragmentation process with the symmetric migration rate kernels K(k;j) = K'(k;j) = λkj^υ, constant aggregation rates I1, I2, and fragmentation rates J1, J2. Based on mean-field theory, we investigate the evolution of the aggregate size distributions in several cases with different values of the index υ. We find that the fragmentation reaction plays a more important role in the kinetic behavior of the system than aggregation and migration. When J1 = 0 and J2 = 0, the aggregate size distributions a_k(t) and b_k(t) obey the conventional scaling law, while when J1 > 0 and J2 > 0, they obey a modified scaling law with an exponential scaling function. The total mass of either species remains conserved. The project was supported by the National Natural Science Foundation of China under Grant Nos. 10275048 and 10175008, and the Natural Science Foundation of Zhejiang Province of China under Grant No. 102067.
Rigby, Robert A; Stasinopoulos, D Mikis
2004-10-15
The Box-Cox power exponential (BCPE) distribution, developed in this paper, provides a model for a dependent variable Y exhibiting both skewness and kurtosis (leptokurtosis or platykurtosis). The distribution is defined by a power transformation Y^(ν) having a shifted and scaled (truncated) standard power exponential distribution with parameter τ. The distribution has four parameters and is denoted BCPE(μ, σ, ν, τ). The parameters μ, σ, ν and τ may be interpreted as relating to location (median), scale (approximate coefficient of variation), skewness (transformation to symmetry) and kurtosis (power exponential parameter), respectively. Smooth centile curves are obtained by modelling each of the four parameters of the distribution as a smooth non-parametric function of an explanatory variable. A Fisher scoring algorithm is used to fit the non-parametric model by maximizing a penalized likelihood. The first and expected second and cross derivatives of the likelihood, with respect to μ, σ, ν and τ, required for the algorithm, are provided. The centiles of the BCPE distribution are easy to calculate, so it is highly suited to centile estimation. This application of the BCPE distribution to smooth centile estimation provides a generalization of the LMS method of centile estimation to data exhibiting kurtosis (as well as skewness) different from that of a normal distribution and is named here the LMSP method of centile estimation. The LMSP method of centile estimation is applied to modelling the body mass index of Dutch males against age. 2004 John Wiley & Sons, Ltd.
Salje, Ekhard K H; Planes, Antoni; Vives, Eduard
2017-10-01
Crackling noise can be initiated by competing or coexisting mechanisms. These mechanisms can combine to generate an approximate scale invariant distribution that contains two or more contributions. The overall distribution function can be analyzed, to a good approximation, using maximum-likelihood methods and assuming that it follows a power law although with nonuniversal exponents depending on a varying lower cutoff. We propose that such distributions are rather common and originate from a simple superposition of crackling noise distributions or exponential damping.
Xiang, T X
1993-01-01
A novel combined approach of molecular dynamics (MD) and Monte Carlo simulations is developed to calculate various free-volume distributions as a function of position in a lipid bilayer membrane at 323 K. The model bilayer consists of 2 x 100 chain molecules, with each chain molecule having 15 carbon segments and one head group and subject to forces restricting bond stretching, bending, and torsional motions. At a surface density of 30 Å²/chain molecule, the probability density of finding effective free volume available to spherical permeants displays a distribution with two exponential components. Both pre-exponential factors, p1 and p2, remain roughly constant in the highly ordered chain region, with average values of 0.012 and 0.00039 Å⁻³, respectively, and increase to 0.049 and 0.0067 Å⁻³ at the mid-plane. The first characteristic cavity size V1 is only weakly dependent on position in the bilayer interior, with an average value of 3.4 ų, while the second characteristic cavity size V2 varies more dramatically, from a plateau value of 12.9 ų in the highly ordered chain region to 9.0 ų in the center of the bilayer. The mean cavity shape is described in terms of a probability distribution for the angle at which the test permeant is in contact with one of, and does not overlap with any of, the chain segments in the bilayer. The results show that (a) free volume is elongated in the highly ordered chain region, with its long axis normal to the bilayer interface, approaching spherical symmetry in the center of the bilayer, and (b) small free volume is more elongated than large free volume. The order and conformational structures relevant to the free-volume distributions are also examined. It is found that both overall and internal motions make comparable contributions to local disorder and couple strongly with each other, and that the occurrence of kink defects has a higher probability than predicted from an independent-transition model. PMID:8241390
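The two-exponential form reported for the free-volume probability density, p(V) = p1·exp(-V/V1) + p2·exp(-V/V2), can be recovered from histogram data by nonlinear least squares. The sketch below generates synthetic data from the quoted average parameter values and refits them, purely as an illustration, not the paper's procedure.

```python
# Sketch: recover the two-component cavity-size distribution
# p(V) = p1*exp(-V/V1) + p2*exp(-V/V2) by nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def two_exp(v, p1, v1, p2, v2):
    return p1 * np.exp(-v / v1) + p2 * np.exp(-v / v2)

v = np.linspace(0.0, 40.0, 200)
true = two_exp(v, 0.012, 3.4, 0.00039, 12.9)   # averages quoted in the abstract
noisy = true * (1 + 0.05 * np.random.default_rng(5).standard_normal(v.size))

params, _ = curve_fit(two_exp, v, noisy, p0=[0.01, 3.0, 0.001, 10.0])
print(params)  # recovers (p1, V1, p2, V2) within the noise level
```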
The shock waves in decaying supersonic turbulence
NASA Astrophysics Data System (ADS)
Smith, M. D.; Mac Low, M.-M.; Zuev, J. M.
2000-04-01
Here we analyse numerical simulations of supersonic, hypersonic and magnetohydrodynamic turbulence that is free to decay. Our goals are to understand the dynamics of the decay and the characteristic properties of the shock waves produced. This will be useful for interpretation of observations of both motions in molecular clouds and sources of non-thermal radiation. We find that decaying hypersonic turbulence possesses an exponential tail of fast shocks and an exponential decay in time, i.e. the number of shocks is proportional to t exp(-ktv) for shock velocity jump v and mean initial wavenumber k. In contrast to the velocity gradients, the velocity probability distribution function remains Gaussian with a more complex decay law. The energy is dissipated not by fast shocks but by a large number of low Mach number shocks. The power loss peaks near a low-speed turn-over in an exponential distribution. An analytical extension of the mapping closure technique is able to predict the basic decay features. Our analytic description of the distribution of shock strengths should prove useful for direct modelling of observable emission. We note that an exponential distribution of shocks such as we find will, in general, generate very low excitation shock signatures.
Universal patterns of inequality
NASA Astrophysics Data System (ADS)
Banerjee, Anand; Yakovenko, Victor M.
2010-07-01
Probability distributions of money, income and energy consumption per capita are studied for ensembles of economic agents. The principle of entropy maximization for partitioning of a limited resource gives exponential distributions for the investigated variables. A non-equilibrium difference of money temperatures between different systems generates net fluxes of money and population. To describe income distribution, a stochastic process with additive and multiplicative components is introduced. The resultant distribution interpolates between exponential at the low end and power law at the high end, in agreement with the empirical data for the USA. We show that the increase in income inequality in the USA originates primarily from the increase in the income fraction going to the upper tail, which now exceeds 20% of the total income. Analyzing the data from the World Resources Institute, we find that the distribution of energy consumption per capita around the world can be approximately described by the exponential function. Comparing the data for 1990, 2000 and 2005, we discuss the effect of globalization on the inequality of energy consumption.
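The exponential (Boltzmann-Gibbs) money distribution that entropy maximization predicts can be reproduced with a minimal kinetic exchange simulation: randomly chosen agents transfer a fixed amount, subject to non-negativity. This is a generic illustration of the mechanism with arbitrary parameters, not the authors' analysis.

```python
# Generic kinetic exchange sketch: random pairwise transfers of a fixed
# amount with a non-negativity constraint drive the money distribution
# toward the exponential Boltzmann-Gibbs form, for which std ~ mean.
import numpy as np

rng = np.random.default_rng(11)
n_agents, steps, dm = 2_000, 400_000, 1.0
money = np.full(n_agents, 100.0)

for _ in range(steps):
    i, j = rng.integers(n_agents, size=2)
    if money[i] >= dm:            # pay only if solvent
        money[i] -= dm
        money[j] += dm

print(f"mean {money.mean():.1f}, std {money.std():.1f}")  # std approaches mean
```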
Study on probability distributions for evolution in modified extremal optimization
NASA Astrophysics Data System (ADS)
Zeng, Guo-Qiang; Lu, Yong-Zai; Mao, Wei-Jie; Chu, Jian
2010-05-01
It is widely believed that the power law is a proper probability distribution for evolution in τ-EO (extremal optimization), a general-purpose stochastic local-search approach inspired by self-organized criticality, and in its applications to NP-hard problems, e.g., graph partitioning, graph coloring, and spin glasses. In this study, we discover that the exponential distributions, or hybrids of power law and exponential (e.g., power laws with exponential cutoff) popularly used in network science, may replace the original power laws in a modified τ-EO method called the self-organized algorithm (SOA), and provide better performance than other statistical-physics-oriented methods such as simulated annealing and τ-EO, judging from experimental results on random Euclidean traveling salesman problems (TSP) and non-uniform instances. From the perspective of optimization, our results appear to demonstrate that the power law is not the only proper probability distribution for evolution in EO-like methods, at least for the TSP; the exponential and hybrid distributions may be other choices.
Observed microphysical structure of nimbostratus in northeast cold vortex over China
NASA Astrophysics Data System (ADS)
Zhao, Zhen; Lei, Hengchi
2014-06-01
Airborne measurements were collected during a stepwise ascent within a nimbostratus cloud associated with a northeast cold vortex in Jilin Province, China, on 20 June 2005 to study cloud structure and ice particle spectra. The microphysical structure of the nimbostratus was elucidated by a King liquid water probe and Particle Measuring Systems (PMS) probes aboard the research aircraft. The PMS 2D images provide detailed information on crystal habits. A thick layer of supercooled cloud is observed, and the Hallett-Mossop ice multiplication process is invoked to explain very high ice particle concentrations in the temperature region between -3 °C and -6 °C. From near cloud top to the melting layer, ice crystals take the form of columns, needles, aggregates, and plates. In addition, significant horizontal variability was evident on the scale of a few hundred meters. Ice particle spectra in this cloud were adequately described by exponential relationships. The relationship between the intercept (N0) and slope (λ) parameters of an exponential size distribution was well characterized by a power law.
A fractal process of hydrogen diffusion in a-Si:H with exponential energy distribution
NASA Astrophysics Data System (ADS)
Hikita, Harumi; Ishikawa, Hirohisa; Morigaki, Kazuo
2017-04-01
Hydrogen diffusion in a-Si:H with an exponential distribution of states in energy exhibits a fractal structure. It is shown that the probability P(t) of pausing time t has the form t^α (α: fractal dimension), and that the fractal dimension α = T_r/T_0 (T_r: hydrogen temperature; T_0: the temperature corresponding to the width of the exponential distribution of states in energy) is in agreement with the Hausdorff dimension. The fractal graph for the case α ≤ 1 is like the Cantor set; that for the case α > 1 is like the Koch curve. At α = ∞, hydrogen migration exhibits Brownian motion. Hydrogen diffusion in a-Si:H should thus be a fractal process.
Photocounting distributions for exponentially decaying sources.
Teich, M C; Card, H C
1979-05-01
Exact photocounting distributions are obtained for a pulse of light whose intensity is exponentially decaying in time, when the underlying photon statistics are Poisson. It is assumed that the starting time for the sampling interval (which is of arbitrary duration) is uniformly distributed. The probability of registering n counts in the fixed time T is given in terms of the incomplete gamma function for n ≥ 1 and in terms of the exponential integral for n = 0. Simple closed-form expressions are obtained for the count mean and variance. The results are expected to be of interest in certain studies involving spontaneous emission, radiation damage in solids, and nuclear counting. They will also be useful in neurobiology and psychophysics, since habituation and sensitization processes may sometimes be characterized by the same stochastic model.
Weblog patterns and human dynamics with decreasing interest
NASA Astrophysics Data System (ADS)
Guo, J.-L.; Fan, C.; Guo, Z.-H.
2011-06-01
In order to describe the phenomenon that people's interest in doing something is high in the beginning and gradually decreases until reaching a balance, a model describing the attenuation of interest is proposed to reflect the fact that people's interest becomes more stable after a long time. We give a rigorous analysis of this model using non-homogeneous Poisson processes. Our analysis indicates that the interarrival-time distribution is a mixed distribution with exponential and power-law features, i.e., a power law with an exponential cutoff. We then collect blogs on ScienceNet.cn and carry out an empirical study of the interarrival time distribution. The empirical results agree well with the theoretical analysis, obeying a power law with an exponential cutoff, that is, a special kind of Gamma distribution. These empirical results verify the model, providing evidence for a new class of phenomena in human dynamics. It can be concluded that, besides power-law distributions, there are other distributions in human dynamics. These findings demonstrate the variety of human behavioral dynamics.
The Experiment of the Clog Reduction in a Plane Silo
NASA Astrophysics Data System (ADS)
Sun, Ai-Le; Zhang, Jie
2017-06-01
The flow of particles may clog when they pass through a narrow orifice. Many factors can change the probability of clogging, such as the outlet size, the presence of obstacles, and external perturbation, but the detailed mechanisms are still unclear. In this paper, we present an experimental study of the reduction of the clogging probability in a horizontal plane silo, which consists of a layer of elastic particles transported on an annular flat plate rotating with a constant angular velocity through a hopper structure. We found exponential distributions of the avalanche size for different orifice sizes and power-law tails of the passing time between two particles. We could not confirm whether there is a critical orifice size above which clogging becomes impossible. We also explored the effect of an obstacle on the probability of clogging: with a proper obstacle placed at a proper position, the probability of clogging could be reduced by a factor of about seven.
Performance and state-space analyses of systems using Petri nets
NASA Technical Reports Server (NTRS)
Watson, James Francis, III
1992-01-01
The goal of any modeling methodology is to develop a mathematical description of a system that is accurate in its representation and also permits analysis of structural and/or performance properties. Inherently, trade-offs exist between the level of detail in the model and the ease with which analysis can be performed. Petri nets (PNs), a highly graphical modeling methodology for discrete event dynamic systems, permit representation of shared resources, finite capacities, conflict, synchronization, concurrency, and timing between state changes. By restricting the state transition time delays to the family of exponential density functions, Markov chain analysis of performance problems is possible. One major drawback of PNs is the tendency for the state space to grow rapidly (exponential complexity) compared to increases in the PN constructs. It is the state space, or the Markov chain obtained from it, that is needed in the solution of many problems. The theory of state-space size estimation for PNs is introduced. The problem of state-space size estimation is defined, its complexities are examined, and estimation algorithms are developed. Both top-down and bottom-up approaches are pursued, and the advantages and disadvantages of each are described. Additionally, the author's research in non-exponential transition modeling for PNs is discussed. An algorithm for approximating non-exponential transitions is developed. Since only basic PN constructs are used in the approximation, theory already developed for PNs remains applicable. Comparison to results from entropy theory shows that the transition performance is close to the theoretical optimum. Inclusion of non-exponential transition approximations improves performance results at the expense of increased state-space size. The state-space size estimation theory provides insight and algorithms for evaluating this trade-off.
Khan, Junaid Ahmad; Mustafa, M.; Hayat, T.; Sheikholeslami, M.; Alsaedi, A.
2015-01-01
This work deals with the three-dimensional flow of a nanofluid over a bi-directional exponentially stretching sheet. The effects of Brownian motion and thermophoretic diffusion of nanoparticles are considered in the mathematical model. The temperature and nanoparticle volume fraction at the sheet are also distributed exponentially. Local similarity solutions are obtained by an implicit finite difference scheme known as the Keller-box method. The results are compared with existing studies in some limiting cases and found to be in good agreement. The results reveal the existence of interesting Sparrow-Gregg-type hills in the temperature distribution for some ranges of parameter values. PMID:25785857
Velocity distributions of granular gases with drag and with long-range interactions.
Kohlstedt, K; Snezhko, A; Sapozhnikov, M V; Aranson, I S; Olafsen, J S; Ben-Naim, E
2005-08-05
We study velocity statistics of electrostatically driven granular gases. For two different experiments, (i) nonmagnetic particles in a viscous fluid and (ii) magnetic particles in air, the velocity distribution is non-Maxwellian, and its high-energy tail is exponential, P(v) ~ exp(-|v|). This behavior is consistent with the kinetic theory of driven dissipative particles. For particles immersed in a fluid, viscous damping is responsible for the exponential tail, while for magnetic particles, long-range interactions cause the exponential tail. We conclude that velocity statistics of dissipative gases are sensitive to the fluid environment and to the form of the particle interaction.
Evolution of resistance and progression to disease during clonal expansion of cancer.
Durrett, Richard; Moseley, Stephen
2010-02-01
Inspired by previous work of Iwasa et al. (2006) and Haeno et al. (2007), we consider an exponentially growing population of cancerous cells that will evolve resistance to treatment after one mutation or display a disease phenotype after two or more mutations. We prove results about the distribution of the first time when k mutations have accumulated in some cell, and about the growth of the number of type-k cells. We show that our results can be used to derive the previous results about a tumor grown to a fixed size.
Exponential order statistic models of software reliability growth
NASA Technical Reports Server (NTRS)
Miller, D. R.
1985-01-01
Failure times of a software reliability growth process are modeled as order statistics of independent, nonidentically distributed exponential random variables. The Jelinski-Moranda, Goel-Okumoto, Littlewood, Musa-Okumoto Logarithmic, and Power Law models are all special cases of Exponential Order Statistic Models, but there are many additional examples as well. Various characterizations, properties and examples of this class of models are developed and presented.
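The defining construction is easy to simulate: draw one exponential detection time per latent fault and sort them. A short sketch with made-up per-fault rates (equal rates would recover the Jelinski-Moranda special case):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-fault detection rates (non-identical rates give a general
# exponential order statistic model).
rates = rng.uniform(0.1, 2.0, size=50)

def one_realization(rates, rng):
    # Each latent fault i is detected at an independent Exp(rates[i]) time;
    # the observed failure times of the growth process are these times sorted.
    return np.sort(rng.exponential(1.0 / rates))

times = np.array([one_realization(rates, rng) for _ in range(5000)])

# Mean time to the first failure across realizations, versus theory:
# the minimum of independent exponentials is exponential with the summed rate.
print("mean first-failure time :", times[:, 0].mean())
print("theoretical E[min]      :", 1.0 / rates.sum())
```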
NASA Astrophysics Data System (ADS)
Sumi, Ayako; Olsen, Lars Folke; Ohtomo, Norio; Tanaka, Yukio; Sawamura, Sadashi
2003-02-01
We have carried out spectral analysis of measles notifications in several communities in Denmark, the UK and the USA. The results confirm that each power spectral density (PSD) shows exponential characteristics, which are universally observed in the PSDs of time series generated from nonlinear dynamical systems. The exponential gradient increases with the population size. For almost all communities, many spectral lines observed in each PSD can be fully assigned to linear combinations of several fundamental periods, suggesting that the measles data are substantially noise-free. The optimum least squares fitting curve calculated using these fundamental periods essentially reproduces the underlying variation of the measles data, and an extension of the curve can be used to predict measles epidemics. For communities with large population sizes, some PSD patterns obtained from segment time series analysis show a close resemblance to the PSD patterns at the initial stages of a period-doubling bifurcation process for the so-called susceptible/exposed/infectious/recovered (SEIR) model with seasonal forcing. The meaning of the relationship between the exponential gradient and the population size is discussed.
NASA Astrophysics Data System (ADS)
Gergely, Mathias; Cooper, Steven J.; Garrett, Timothy J.
2017-10-01
The snowflake microstructure determines the microwave scattering properties of individual snowflakes and has a strong impact on snowfall radar signatures. In this study, individual snowflakes are represented by collections of randomly distributed ice spheres where the size and number of the constituent ice spheres are specified by the snowflake mass and surface-area-to-volume ratio (SAV) and the bounding volume of each ice sphere collection is given by the snowflake maximum dimension. Radar backscatter cross sections for the ice sphere collections are calculated at X-, Ku-, Ka-, and W-band frequencies and then used to model triple-frequency radar signatures for exponential snowflake size distributions (SSDs). Additionally, snowflake complexity values obtained from high-resolution multi-view snowflake images are used as an indicator of snowflake SAV to derive snowfall triple-frequency radar signatures. The modeled snowfall triple-frequency radar signatures cover a wide range of triple-frequency signatures that were previously determined from radar reflectivity measurements and illustrate characteristic differences related to snow type, quantified through snowflake SAV, and snowflake size. The results show high sensitivity to snowflake SAV and SSD maximum size but are generally less affected by uncertainties in the parameterization of snowflake mass, indicating the importance of snowflake SAV for the interpretation of snowfall triple-frequency radar signatures.
NASA Astrophysics Data System (ADS)
Sumitomo, Yoske; Tye, S.-H. Henry; Wong, Sam S. C.
2013-07-01
We study a racetrack model in the presence of the leading α'-correction in flux compactification in Type IIB string theory, with the aim of obtaining viable de-Sitter vacua in the large compactified volume approximation. Unlike the Kähler Uplift model studied previously, the α'-correction is more controllable for the meta-stable de-Sitter vacua in the racetrack case, since the constraint on the compactified volume size is much relaxed. We find that the vacuum energy density Λ for de-Sitter vacua approaches zero exponentially as the volume grows. We also analyze properties of the probability distribution of Λ in this class of models. As in other cases studied earlier, the probability distribution again peaks sharply at Λ = 0. We also study the Racetrack Kähler Uplift model in a Swiss-Cheese type model.
Resistance distribution in the hopping percolation model.
Strelniker, Yakov M; Havlin, Shlomo; Berkovits, Richard; Frydman, Aviad
2005-07-01
We study the distribution function P(ρ) of the effective resistance ρ in two- and three-dimensional random resistor networks of linear size L in the hopping percolation model. In this model each bond has a conductivity taken from an exponential form σ ∝ exp(-κr), where κ is a measure of disorder and r is a random number, 0 ≤ r ≤ 1. We find that in both the usual strong-disorder regime L/κ^ν > 1 (not sensitive to removal of any single bond) and the extreme-disorder regime L/κ^ν < 1 (very sensitive to such a removal) the distribution depends only on L/κ^ν and can be well approximated by a log-normal function with dispersion bκ^ν/L, where b is a coefficient which depends on the type of lattice, and ν is the correlation critical exponent.
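The model is straightforward to reproduce numerically: assign each bond of a lattice a conductance g = exp(-κr) with r uniform on [0, 1], solve Kirchhoff's equations, and inspect the distribution of the resulting resistance. A minimal two-dimensional sketch (the lattice size, κ, and sample count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def effective_resistance(L, kappa, rng):
    """Effective resistance of an LxL lattice with bond conductances
    g = exp(-kappa * r), r ~ U(0,1), between the left and right edges."""
    n = L * L
    idx = lambda i, j: i * L + j
    G = np.zeros((n, n))                      # weighted graph Laplacian
    for i in range(L):
        for j in range(L):
            for di, dj in ((0, 1), (1, 0)):   # right and down neighbors
                if i + di < L and j + dj < L:
                    g = np.exp(-kappa * rng.random())
                    a, b = idx(i, j), idx(i + di, j + dj)
                    G[a, a] += g; G[b, b] += g
                    G[a, b] -= g; G[b, a] -= g
    left = [idx(i, 0) for i in range(L)]
    right = [idx(i, L - 1) for i in range(L)]
    free = [k for k in range(n) if k not in left + right]
    v = np.zeros(n); v[left] = 1.0            # 1 V across the sample
    # Kirchhoff: (G v) = 0 on the interior nodes, boundary voltages fixed.
    v[free] = np.linalg.solve(G[np.ix_(free, free)],
                              -G[np.ix_(free, left)].sum(axis=1))
    current = -(G @ v)[right].sum()           # total current into the right bus
    return 1.0 / current

rhos = [effective_resistance(16, kappa=10.0, rng=rng) for _ in range(200)]
log_rho = np.log(rhos)
print(f"mean log(rho) = {log_rho.mean():.2f}, dispersion = {log_rho.std():.2f}")
```

Plotting a histogram of `log_rho` is a quick visual check of the approximately log-normal form reported in the abstract.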
Crowding Effects in Vehicular Traffic
Combinido, Jay Samuel L.; Lim, May T.
2012-01-01
While the impact of crowding on the diffusive transport of molecules within a cell is widely studied in biology, it has thus far been neglected in traffic systems where bulk behavior is the main concern. Here, we study the effects of crowding due to car density and driving fluctuations on the transport of vehicles. Using a microscopic model for traffic, we found that crowding can push car movement from a superballistic down to a subdiffusive state. The transition is also associated with a change in the shape of the probability distribution of positions from a negatively-skewed normal to an exponential distribution. Moreover, crowding broadens the distribution of cars’ trap times and cluster sizes. At steady state, the subdiffusive state persists only when there is a large variability in car speeds. We further relate our work to prior findings from random walk models of transport in cellular systems.
A spatial scan statistic for survival data based on Weibull distribution.
Bhatt, Vijaya; Tiwari, Neeraj
2014-05-20
The spatial scan statistic has been developed as a geographical cluster detection analysis tool for different types of data sets such as Bernoulli, Poisson, ordinal, normal and exponential. We propose a scan statistic for survival data based on the Weibull distribution. It may also be used for other survival distributions, such as exponential, gamma, and log normal. The proposed method is applied to the survival data of tuberculosis patients for the years 2004-2005 in Nainital district of Uttarakhand, India. Simulation studies reveal that the proposed method performs well for different survival distribution functions.
The Area Coverage of Geophysical Fields as a Function of Sensor Field-of-View
NASA Technical Reports Server (NTRS)
Key, Jeffrey R.
1994-01-01
In many remote sensing studies of geophysical fields such as clouds, land cover, or sea ice characteristics, the fractional area coverage of the field in an image is estimated as the proportion of pixels that have the characteristic of interest (i.e., are part of the field) as determined by some thresholding operation. The effect of sensor field-of-view on this estimate is examined by modeling the unknown distribution of subpixel area fraction with the beta distribution, whose two parameters depend upon the true fractional area coverage, the pixel size, and the spatial structure of the geophysical field. Since it is often not possible to relate digital number, reflectance, or temperature to subpixel area fraction, the statistical models described are used to determine the effect of pixel size and thresholding operations on the estimate of area fraction for hypothetical geophysical fields. Examples are given for simulated cumuliform clouds and linear openings in sea ice, whose spatial structures are described by an exponential autocovariance function. It is shown that the rate and direction of change in total area fraction with changing pixel size depends on the true area fraction, the spatial structure, and the thresholding operation used.
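The effect of thresholding on the area-fraction estimate is easy to reproduce: draw subpixel fractions from a beta distribution whose mean is the true coverage, then threshold. A small sketch with assumed parameter values (not those of the paper):

```python
import numpy as np
from scipy import stats

# Hypothetical field: true area fraction p, with the subpixel area fraction of
# each pixel drawn from a Beta(a, b) whose mean equals p.  Larger pixels
# (more subpixel mixing) correspond to a smaller concentration a + b here.
p_true, concentration = 0.3, 2.0            # assumed values for illustration
a = p_true * concentration
b = (1 - p_true) * concentration

frac = stats.beta.rvs(a, b, size=100_000, random_state=0)

for threshold in (0.25, 0.5, 0.75):
    # Thresholding: a pixel "belongs to the field" if its fraction exceeds t.
    estimate = (frac > threshold).mean()
    print(f"threshold {threshold:.2f}: estimated fraction {estimate:.3f} "
          f"(true {p_true})")
```

The sign and size of the bias depend on the threshold relative to the beta shape, which is the qualitative effect the abstract describes.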
Haslinger, Robert; Pipa, Gordon; Brown, Emery
2010-01-01
One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time rescaling theorem provides a goodness of fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model’s spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies upon assumptions of continuously defined time and instantaneous events. However spikes have finite width and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time rescaling theorem which analytically corrects for the effects of finite resolution. This allows us to define a rescaled time which is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting Generalized Linear Models (GLMs) to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false positive rate of the KS test and greatly increasing the reliability of model evaluation based upon the time rescaling theorem.
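The continuous-time procedure the paper starts from can be sketched in a few lines: integrate the (here known, in practice fitted) intensity between spikes and KS-test the rescaled intervals against Exp(1). Everything below, including the intensity function, is illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Simulate an inhomogeneous Poisson spike train by thinning, with a known
# (assumed) intensity lam(t); a fitted model intensity could be used instead.
lam = lambda t: 20 + 15 * np.sin(2 * np.pi * t)      # spikes/s, illustrative
T, lam_max = 50.0, 35.0
cand = np.cumsum(rng.exponential(1 / lam_max, size=5000))
cand = cand[cand < T]
spikes = cand[rng.random(cand.size) < lam(cand) / lam_max]

# Time rescaling: z_k = integral of lam between consecutive spikes.
grid = np.linspace(0, T, 200_001)
Lam = np.concatenate(([0], np.cumsum(lam(grid[:-1]) * np.diff(grid))))
Lam_at = np.interp(spikes, grid, Lam)
z = np.diff(Lam_at)

# If the model is correct, z should be Exp(1); check with a KS test.
ks = stats.kstest(z, "expon")
print(f"KS statistic {ks.statistic:.4f}, p-value {ks.pvalue:.3f}")
```

Binning the simulation into coarse time steps before rescaling reproduces the failure mode the paper analyzes: the rescaled intervals are no longer exponential even though the model is exact.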
Discrete Deterministic and Stochastic Petri Nets
NASA Technical Reports Server (NTRS)
Zijal, Robert; Ciardo, Gianfranco
1996-01-01
Petri nets augmented with timing specifications have gained wide acceptance in the area of performance and reliability evaluation of complex systems exhibiting concurrency, synchronization, and conflicts. The state space of a time-extended Petri net is mapped onto its basic underlying stochastic process, which can be shown to be Markovian under the assumption of exponentially distributed firing times. The integration of exponentially and non-exponentially distributed timing is still one of the major problems for the analysis, and was first attacked for continuous time Petri nets at the cost of structural or analytical restrictions. We propose a discrete deterministic and stochastic Petri net (DDSPN) formalism with no imposed structural or analytical restrictions, where transitions can fire either in zero time or according to arbitrary firing times that can be represented as the time to absorption in a finite absorbing discrete time Markov chain (DTMC). Exponentially distributed firing times are then approximated arbitrarily well by geometric distributions. Deterministic firing times are a special case of the geometric distribution. The underlying stochastic process of a DDSPN is then also a DTMC, from which the transient and stationary solutions can be obtained by standard techniques. A comprehensive algorithm and some state-space reduction techniques for the analysis of DDSPNs are presented, including the automatic detection of conflicts and confusions, which removes a major obstacle for the analysis of discrete time models.
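The geometric approximation of an exponential firing time mentioned above amounts to firing with probability p = 1 - exp(-λΔt) in each step of length Δt. A quick numerical check (the rate and step size are arbitrary):

```python
import numpy as np

# Discretize an exponential firing delay (rate lam) with time step dt: the
# transition fires in each step with probability p = 1 - exp(-lam * dt),
# giving a geometric number of steps whose distribution converges to the
# exponential as dt -> 0.
lam, dt = 2.0, 0.01
p = 1.0 - np.exp(-lam * dt)

rng = np.random.default_rng(4)
steps = rng.geometric(p, size=200_000)      # number of DTMC steps to absorption
firing_times = steps * dt

print(f"mean firing time: {firing_times.mean():.4f} (exponential mean {1/lam})")
print(f"P(T > 1): {(firing_times > 1).mean():.4f} "
      f"(exact exp(-lam) = {np.exp(-lam):.4f})")
```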
Development of a methodology to evaluate material accountability in pyroprocess
NASA Astrophysics Data System (ADS)
Woo, Seungmin
This study investigates the effect of the non-uniform nuclide composition in spent fuel on material accountancy in the pyroprocess. High-fidelity depletion simulations are performed using the Monte Carlo code SERPENT in order to determine nuclide composition as a function of axial and radial position within fuel rods and assemblies, and of burnup. For improved accuracy, the simulations use short burnup steps (25 days or less), a Xe-equilibrium treatment (to avoid oscillations over burnup steps), an axial moderator temperature distribution, and 30 axial meshes. Analytical solutions of simplified depletion equations are constructed to understand the axial non-uniformity of nuclide composition in spent fuel. The cosine shape of the axial neutron flux distribution dominates this non-uniformity. Cross sections and time also contribute, since the exponential term in the analytical solution combines the neutron flux, the cross section, and time. The axial concentration distribution of a nuclide with a small cross section is steeper than that of a nuclide with a large cross section, because the axial flux is weighted by the cross section in the exponential term. Similarly, the non-uniformity flattens with increasing burnup, because the time term in the exponential grows. Based on the developed numerical recipes, and by treating the axial distributions separately from predetermined representative radial distributions matched by axial height, the axial and radial composition distributions for representative spent nuclear fuel assemblies (the Type-0, -1, and -2 assemblies after 1, 2, and 3 depletion cycles) are obtained. These data are then modified to represent the head-end steps of the pyroprocess: chopping, voloxidation, and granulation. The expectation and standard deviation of the Pu-to-244Cm ratio under single-granule sampling are calculated using the central limit theorem and the Geary-Hinkley transformation. Uncertainty propagation through the key pyroprocess is then conducted to analyze the Material Unaccounted For (MUF), a random variable defined as the receipt minus the shipment of a process. A random variable, LOPu, defined as the original Pu mass minus the Pu mass after a missing scenario, is used to evaluate the non-detection probability at each Key Measurement Point (KMP); the number of assemblies required for LOPu to reach 8 kg is considered in this calculation. The probability of detecting an 8 kg LOPu is evaluated with respect to the granule and powder sizes using event tree analysis and hypothesis testing; in some cases this probability falls below 95%. To enhance the detection rate, a new Material Balance Area (MBA) model is defined for the key pyroprocess, under which the probabilities of detection for all spent fuel types exceed 99%. Furthermore, the probability of detection increases significantly when larger granule samples are used to evaluate the Pu-to-244Cm ratio before the key pyroprocess.
Based on these observations, although Pu material accountability in the pyroprocess is affected by the non-uniformity of nuclide composition when the Pu-to-244Cm-ratio method is applied, this effect can be overcome by reducing the uncertainty of the measured ratio through larger sample sizes and by modifying the MBAs and KMPs.
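For readers unfamiliar with the Geary-Hinkley transformation used above: it maps the ratio of two (nearly) normal variables to an approximately standard normal one. A sketch with invented moments standing in for the Pu and 244Cm measurements:

```python
import numpy as np
from scipy import stats

def geary_hinkley(t, mu_x, mu_y, sig_x, sig_y, rho=0.0):
    """Approximate standard-normal transform of the ratio T = X/Y of two
    (nearly) normal variables, valid when Y is unlikely to be near zero."""
    num = mu_y * t - mu_x
    den = np.sqrt(sig_y**2 * t**2 - 2 * rho * sig_x * sig_y * t + sig_x**2)
    return num / den

# Illustrative check with made-up moments (not values from the dissertation).
rng = np.random.default_rng(5)
x = rng.normal(50.0, 1.5, size=20_000)      # stand-in "Pu" measurement
y = rng.normal(10.0, 0.2, size=20_000)      # stand-in "244Cm" measurement
z = geary_hinkley(x / y, 50.0, 10.0, 1.5, 0.2)

# z should be close to N(0, 1) if the approximation holds.
print(f"mean {z.mean():.3f}, std {z.std():.3f}")
print("KS p-value vs N(0,1):", stats.kstest(z, "norm").pvalue)
```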
AN EMPIRICAL FORMULA FOR THE DISTRIBUTION FUNCTION OF A THIN EXPONENTIAL DISC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Sanjib; Bland-Hawthorn, Joss
2013-08-20
An empirical formula for a Shu distribution function that reproduces a thin disc with exponential surface density to good accuracy is presented. The formula has two free parameters that specify the functional form of the velocity dispersion. Conventionally, this requires the use of an iterative algorithm to produce the correct solution, which is computationally taxing for applications like Markov Chain Monte Carlo model fitting. The formula has been shown to work for flat, rising, and falling rotation curves. Application of this methodology to one of the Dehnen distribution functions is also shown. Finally, an extension of this formula to reproduce velocity dispersion profiles that are an exponential function of radius is also presented. Our empirical formula should greatly aid the efficient comparison of disc models with large stellar surveys or N-body simulations.
Exponential Boundary Observers for Pressurized Water Pipe
NASA Astrophysics Data System (ADS)
Hermine Som, Idellette Judith; Cocquempot, Vincent; Aitouche, Abdel
2015-11-01
This paper deals with state estimation for a pressurized water pipe modeled by nonlinear coupled distributed hyperbolic equations for non-conservative laws with three known boundary measurements. Our objective is to estimate the fourth boundary variable, which will be useful for leakage detection. Two approaches are studied. Firstly, the distributed hyperbolic equations are discretized through a finite-difference scheme. By using the Lipschitz property of the nonlinear term and a Lyapunov function, the exponential stability of the estimation error is proven by solving Linear Matrix Inequalities (LMIs). Secondly, the distributed hyperbolic system is preserved for state estimation. After state transformations, a Luenberger-like PDE boundary observer based on backstepping mathematical tools is proposed. An exponential Lyapunov function is used to prove the stability of the resulting estimation error. The performance of the two observers is shown on a simulated water pipe prototype example.
Particle shape inhomogeneity and plasmon-band broadening of solar-control LaB6 nanoparticles
NASA Astrophysics Data System (ADS)
Machida, Keisuke; Adachi, Kenji
2015-07-01
The ensemble inhomogeneity of a dispersion of non-spherical LaB6 nanoparticles has been analyzed with Mie theory to account for the observed broad plasmon band. The LaB6 particle shape has been characterized using small-angle X-ray scattering (SAXS) and electron tomography (ET). The SAXS scattering intensity is found to decay with an exponent of -3.10, indicating particle shapes between disk and sphere. ET analysis revealed a dually grouped distribution of the nanoparticle dispersion: one group is large-sized with small aspect ratios, the other small-sized with scattered high aspect ratios, reflecting the dual fragmentation modes of the milling process. Mie extinction calculations have been integrated over 100 000 particles of varying aspect ratio, produced randomly using the Box-Muller method. The Mie integration method produces a broad and smooth absorption band extended towards low energy, in remarkable agreement with experimental profiles when the SAXS- and ET-derived shape distribution is assumed, i.e., a majority of disks with a small admixture of rods and spheres. The analysis points to a high potential for LaB6 with further increased visible transparency and plasmon peak upon control of particle shape and its distribution.
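The Box-Muller step is the standard transform of two uniforms into a standard normal deviate. A small sketch of how a normally distributed aspect-ratio ensemble might be generated for such an integration (the mean and spread are placeholders, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(6)

def box_muller(n, rng):
    """Generate n standard-normal samples from pairs of uniforms."""
    u1 = 1.0 - rng.random(n)                 # shift to (0, 1] to avoid log(0)
    u2 = rng.random(n)
    return np.sqrt(-2.0 * np.log(u1)) * np.cos(2.0 * np.pi * u2)

# Hypothetical aspect-ratio ensemble for the Mie integration: normal with an
# assumed mean and spread, truncated at zero (values here are illustrative).
mean_ar, sd_ar = 0.7, 0.25
aspect = mean_ar + sd_ar * box_muller(100_000, rng)
aspect = aspect[aspect > 0]                  # discard unphysical ratios

print(f"{aspect.size} particles, mean aspect ratio {aspect.mean():.3f}")
```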
Extinction risk and structure of a food web model
NASA Astrophysics Data System (ADS)
Pękalski, Andrzej; Szwabiński, Janusz; Bena, Ioana; Droz, Michel
2008-03-01
We investigate in detail the model of a trophic web proposed by Amaral and Meyer [Phys. Rev. Lett. 82, 652 (1999)]. We focus on small-size systems that are relevant for real biological food webs and for which the fluctuations play an important role. We show, using Monte Carlo simulations, that such webs can be nonviable, leading to extinction of all species in small and/or weakly coupled systems. Estimations of the extinction times and survival chances are also given. We show that before the extinction the fraction of highly connected species (“omnivores”) is increasing. Viable food webs exhibit a pyramidal structure, where the density of occupied niches is higher at lower trophic levels, and moreover the occupations of adjacent levels are closely correlated. We also demonstrate that the distribution of the lengths of food chains has an exponential character and changes weakly with the parameters of the model. On the contrary, the distribution of avalanche sizes of the extinct species depends strongly on the connectedness of the web. For rather loosely connected systems, we recover the power-law type of behavior with the same exponent as found in earlier studies, while for densely connected webs the distribution is not of a power-law type.
An application of the Krylov-FSP-SSA method to parameter fitting with maximum likelihood
NASA Astrophysics Data System (ADS)
Dinh, Khanh N.; Sidje, Roger B.
2017-12-01
Monte Carlo methods such as the stochastic simulation algorithm (SSA) have traditionally been employed in gene regulation problems. However, there has been increasing interest in directly obtaining the probability distribution of the molecules involved by solving the chemical master equation (CME). This requires addressing the curse of dimensionality that is inherent in most gene regulation problems. The finite state projection (FSP) seeks to address this challenge, and there have been variants that further reduce the size of the projection or accelerate the resulting matrix exponential. The Krylov-FSP-SSA variant has proved numerically efficient by combining, on the one hand, the SSA to adaptively drive the FSP, and on the other hand, adaptive Krylov techniques to evaluate the matrix exponential. Here we apply the Krylov-FSP-SSA to a mutually inhibitory gene network synthetically engineered in Saccharomyces cerevisiae, in which bimodality arises. We show numerically that the approach can efficiently approximate the transient probability distribution, and this has important implications for parameter fitting, where the CME has to be solved for many different parameter sets. The fitting scheme amounts to an optimization problem of finding the parameter set so that the transient probability distributions fit the observations with maximum likelihood. We compare five optimization schemes for this difficult problem, thereby providing further insights into this approach to parameter estimation, which is often applied to models in systems biology where free parameters must be calibrated. Work supported by NSF grant DMS-1320849.
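For a CME small enough to handle directly, the FSP reduces to exponentiating a truncated generator. A toy birth-death example (not the mutual-inhibition network of the study):

```python
import numpy as np
from scipy.linalg import expm

# Finite state projection of a birth-death CME (production rate k, degradation
# rate g per molecule), truncated at N copies.
k, g, N = 10.0, 1.0, 60
A = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    if n < N:
        A[n + 1, n] += k        # birth: n -> n+1
        A[n, n] -= k
    if n > 0:
        A[n - 1, n] += g * n    # death: n -> n-1
        A[n, n] -= g * n

p0 = np.zeros(N + 1); p0[0] = 1.0            # start with zero molecules
p_t = expm(A * 2.0) @ p0                     # transient distribution at t = 2

print(f"probability mass kept by the projection: {p_t.sum():.6f}")
mean = (np.arange(N + 1) * p_t).sum()
exact = (k / g) * (1 - np.exp(-g * 2.0))     # known birth-death transient mean
print(f"mean copy number at t=2: {mean:.3f} (exact {exact:.3f})")
```

The Krylov variants of the paper avoid forming `expm` of the full matrix and instead approximate its action on `p0`, which is what makes larger projections tractable.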
The coherent interlayer resistance of a single, rotated interface between two stacks of AB graphite
NASA Astrophysics Data System (ADS)
Habib, K. M. Masum; Sylvia, Somaia S.; Ge, Supeng; Neupane, Mahesh; Lake, Roger K.
2013-12-01
The coherent, interlayer resistance of a misoriented, rotated interface between two stacks of AB graphite is determined for a variety of misorientation angles. The quantum resistance of the ideal AB stack is on the order of 1 to 10 mΩ μm². For small rotation angles, the coherent interlayer resistance exponentially approaches the ideal quantum resistance at energies away from the charge neutrality point. Over a range of intermediate angles, the resistance increases exponentially with cell size for minimum size unit cells. Larger cell sizes, of similar angles, may not follow this trend. The energy dependence of the interlayer transmission is described.
McGee, Monnie; Chen, Zhongxue
2006-01-01
There are many methods of correcting microarray data for non-biological sources of error. Authors routinely supply software or code so that interested analysts can implement their methods. Even with a thorough reading of associated references, it is not always clear how requisite parts of the method are calculated in the software packages. However, it is important to have an understanding of such details, as this understanding is necessary for proper use of the output, or for implementing extensions to the model. In this paper, the calculation of parameter estimates used in Robust Multichip Average (RMA), a popular preprocessing algorithm for Affymetrix GeneChip brand microarrays, is elucidated. The background correction method for RMA assumes that the perfect match (PM) intensities observed result from a convolution of the true signal, assumed to be exponentially distributed, and a background noise component, assumed to have a normal distribution. A conditional expectation is calculated to estimate signal. Estimates of the mean and variance of the normal distribution and the rate parameter of the exponential distribution are needed to calculate this expectation. Simulation studies show that the current estimates are flawed; therefore, new ones are suggested. We examine the performance of preprocessing under the exponential-normal convolution model using several different methods to estimate the parameters.
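The conditional expectation at the heart of this background correction has a commonly documented closed form under the exponential-normal convolution; the sketch below implements that form and checks it on synthetic data. Parameter estimation itself, the subject of the paper, is not addressed here:

```python
import numpy as np
from scipy import stats

def rma_background(o, mu, sigma, alpha):
    """E[signal | observed] for exponential(alpha) signal plus N(mu, sigma^2)
    background, the standard form of the RMA conditional expectation.
    Parameter estimates (mu, sigma, alpha) must be supplied externally."""
    a = o - mu - sigma**2 * alpha
    phi, Phi = stats.norm.pdf, stats.norm.cdf
    num = phi(a / sigma) - phi((o - a) / sigma)
    den = Phi(a / sigma) + Phi((o - a) / sigma) - 1.0
    return a + sigma * num / den

# Sanity check on synthetic intensities with known parameters.
rng = np.random.default_rng(7)
signal = rng.exponential(1 / 0.02, size=100_000)       # alpha = 0.02
observed = signal + rng.normal(100.0, 15.0, size=signal.size)
corrected = rma_background(observed, 100.0, 15.0, 0.02)
print(f"mean true signal {signal.mean():.1f}, "
      f"mean corrected {corrected.mean():.1f}")
```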
Drude weight fluctuations in many-body localized systems
NASA Astrophysics Data System (ADS)
Filippone, Michele; Brouwer, Piet W.; Eisert, Jens; von Oppen, Felix
2016-11-01
We numerically investigate the distribution of Drude weights D of many-body states in disordered one-dimensional interacting electron systems across the transition to a many-body localized phase. Drude weights are proportional to the spectral curvatures induced by magnetic fluxes in mesoscopic rings. They offer a method to relate the transition to the many-body localized phase to transport properties. In the delocalized regime, we find that the Drude weight distribution at a fixed disorder configuration agrees well with the random-matrix-theory prediction P(D) ∝ (γ² + D²)^(-3/2), although the distribution width γ strongly fluctuates between disorder realizations. A crossover is observed towards a distribution with different large-D asymptotics deep in the many-body localized phase, which however differs from the commonly expected Cauchy distribution. We show that the average distribution width ⟨γ⟩, rescaled by LΔ, where Δ is the average level spacing in the middle of the spectrum and L the system size, is an efficient probe of the many-body localization transition, as it increases (vanishes) exponentially in the delocalized (localized) phase.
NASA Astrophysics Data System (ADS)
Buldyrev, S. V.; Pammolli, F.; Riccaboni, M.; Yamasaki, K.; Fu, D.-F.; Matia, K.; Stanley, H. E.
2007-05-01
We present a preferential attachment growth model to obtain the distribution P(K) of the number of units K in classes which may represent business firms or other socio-economic entities. We find that P(K) is described in its central part by a power law with an exponent ϕ = 2 + b/(1-b), which depends on the probability of entry of new classes, b. In the particular problem of city populations, this distribution is equivalent to the well-known Zipf law. In the absence of new class entry, the distribution P(K) is exponential. Using the analytical form of P(K) and assuming proportional growth for units, we derive P(g), the distribution of business firm growth rates. The model predicts that P(g) has a Laplacian cusp in the central part and asymptotic power-law tails with an exponent ζ = 3. We test the analytical expressions derived using heuristic arguments by simulations. The model might also explain the size-variance relationship of the firm growth rates.
Observation of arrival times of EAS with energies ≥ 6 × 10^14 eV
NASA Technical Reports Server (NTRS)
Sun, L.
1985-01-01
The Earth's atmosphere is continually being bombarded by primary cosmic ray particles, which are generally believed to be high-energy nuclei. The fact that the majority of cosmic ray primaries are charged particles and that space is permeated with random magnetic fields means that the particles do not travel in straight lines. The arrival time distribution of EAS may also carry some information about the primary particles. Indeed, if the particles arrive at the Earth in a completely random process, the arrival time distribution of pairs of successive particles should follow an exponential law. The work reported here was carried out at Sydney University from May 1982 to January 1983. All the data are used to plot the arrival-time distribution of the events, that is, the distribution of time separations between consecutive events with a 1 minute bin size. During this period more than 2300 showers were recorded. The results are discussed and compared with those of some other experiments.
1/f oscillations in a model of moth populations oriented by diffusive pheromones
NASA Astrophysics Data System (ADS)
Barbosa, L. A.; Martins, M. L.; Lima, E. R.
2005-01-01
An individual-based model for the population dynamics of Spodoptera frugiperda in a homogeneous environment is proposed. The model involves moths feeding on plants, mating through an anemotaxis search (i.e., oriented by odor dispersed in a current of air), and dying due to resource competition or at a maximum age. As observed in the laboratory, the females release pheromones at exponentially distributed time intervals, and it is assumed that the ranges of the male flights follow a power-law distribution. Computer simulations of the model reveal the central role of the anemotaxis search for the persistence of the moth population. Such stationary populations are exponentially distributed in age, exhibit random temporal fluctuations with a 1/f spectrum, and self-organize into disordered spatial patterns with long-range correlations. In addition, the model results demonstrate that pest control through pheromone mass trapping is effective only if the amounts of pheromone released by the traps decay much more slowly than the exponential distribution for calling females.
NASA Astrophysics Data System (ADS)
Milledge, David; Bellugi, Dino; McKean, Jim; Dietrich, William E.
2013-04-01
Current practice in regional-scale shallow landslide hazard assessment is to adopt a one-dimensional slope stability representation. Such a representation cannot produce discrete landslides and thus cannot make predictions on landslide size. Furthermore, one-dimensional approaches cannot include lateral effects, which are known to be important in defining instability. Here we derive an alternative model that accounts for lateral resistance by representing the forces acting on each margin of an unstable block of soil. We model boundary frictional resistances using 'at rest' earth pressure on the lateral sides, and 'active' and 'passive' pressure, using the log-spiral method, on the upslope and downslope margins. We represent root reinforcement on each margin assuming that root cohesion declines exponentially with soil depth. We test our model's ability to predict failure of an observed landslide where the relevant parameters are relatively well constrained and find that our model predicts failure at the observed location and predicts that larger or smaller failures conformal to the observed shape are indeed more stable. We use a sensitivity analysis of the model to show that lateral reinforcement sets a minimum landslide size, and that the additional strength at the downslope boundary results in optimal shapes that are longer in the downslope direction. However, reinforcement effects alone cannot fully explain the size or shape distributions of observed landslides, highlighting the importance of the spatial pattern of key parameters (e.g. pore water pressure and soil depth) at the watershed scale. The application of the model at this scale requires an efficient method to find unstable shapes among an exponential number of candidates. In this context, the model allows a more extensive examination of the controls on landslide size, shape and location.
NASA Astrophysics Data System (ADS)
Belkina, T. A.; Konyukhova, N. B.; Kurochkin, S. V.
2016-01-01
Previous and new results are used to compare two mathematical insurance models with identical insurance company strategies in a financial market, namely, when the entire current surplus or a constant fraction of it is invested in risky assets (stocks), while the rest of the surplus is invested in a risk-free asset (bank account). Model I is the classical Cramér-Lundberg risk model with an exponential claim size distribution. Model II is a modification of the classical risk model (a risk process with stochastic premiums) with exponential distributions of claim and premium sizes. For the survival probability of an insurance company over infinite time (as a function of its initial surplus), there arise singular problems for second-order linear integrodifferential equations (IDEs) defined on a semi-infinite interval and having nonintegrable singularities at zero: model I leads to a singular constrained initial value problem for an IDE with a Volterra integral operator, while model II leads to a more complicated nonlocal constrained problem for an IDE with a non-Volterra integral operator. A brief overview of previous results for these two problems depending on several positive parameters is given, and new results are presented. Additional results concern the formulation, analysis, and numerical study of "degenerate" problems for both models, i.e., problems in which some of the IDE parameters vanish; moreover, the passages to the limit with respect to the parameters through which we proceed from the original problems to the degenerate ones are singular for small and/or large argument values. Such problems are of mathematical and practical interest in themselves. Along with insurance models without investment, they describe the case of surplus completely invested in risk-free assets, as well as some noninsurance models of surplus dynamics, for example, charity-type models.
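For model I without investment, the classical Cramér-Lundberg model with exponential claims admits a closed-form ruin probability, which makes a convenient sanity check for simulation. A sketch with illustrative parameters (the finite horizon T approximates infinite time):

```python
import numpy as np

# Cramér-Lundberg without investment: exponential claims of mean mu, Poisson
# claim arrivals of rate lam, premium rate c = (1 + theta) * lam * mu.  The
# infinite-horizon ruin probability then has the closed form
#   psi(u) = exp(-theta * u / ((1 + theta) * mu)) / (1 + theta).
mu, lam, theta, u = 1.0, 1.0, 0.2, 5.0
c = (1 + theta) * lam * mu

def ruined(u, T, rng):
    t, surplus = 0.0, u
    while t < T:
        w = rng.exponential(1 / lam)         # waiting time to the next claim
        t += w
        surplus += c * w - rng.exponential(mu)
        if surplus < 0:
            return True
    return False

rng = np.random.default_rng(8)
est = np.mean([ruined(u, T=200.0, rng=rng) for _ in range(5000)])
exact = np.exp(-theta * u / ((1 + theta) * mu)) / (1 + theta)
print(f"Monte Carlo psi(u) = {est:.4f}, closed form = {exact:.4f}")
```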
A study on some urban bus transport networks
NASA Astrophysics Data System (ADS)
Chen, Yong-Zhou; Li, Nan; He, Da-Ren
2007-03-01
In this paper, we present empirical results on the urban bus transport networks (BTNs) of four major cities in China. In a BTN, nodes are bus stops; two nodes are connected by an edge when the stops are serviced by a common bus route. The empirical results show that the degree distributions of BTNs take exponential forms. Two other statistical properties of BTNs are also considered, namely the distribution of the number of stops in a bus route (denoted S) and the distribution of the number of bus routes a stop joins (denoted R). The distributions of R also show exponential forms, while the distributions of S follow asymmetric, unimodal functions. To explain these empirical results and to simulate a possible evolution process of BTNs, we introduce a model whose analytical and numerical results agree well with the empirical facts. Finally, we also discuss some other possible evolution cases, where the degree distribution shows a power law or an interpolation between the power law and the exponential decay.
Hysteresis in suspended sediment to turbidity relations due to changing particle size distributions
Landers, Mark N.; Sturm, Terry W.
2013-01-01
Turbidity (T) is the most ubiquitous of surrogate technologies used to estimate suspended-sediment concentration (SSC). The effects of sediment size on turbidity are well documented; however, effects from changes in particle size distributions (PSD) are rarely evaluated. Hysteresis in relations of SSC-to-turbidity (SSC~T) for single stormflow events was observed and quantified for a data set of 195 concurrent measurements of SSC, turbidity, discharge, velocity, and volumetric PSD collected during five stormflows in 2009–2010 on Yellow River at Gees Mill Road in metropolitan Atlanta, Georgia. Regressions of SSC-normalized turbidity (T/SSC) on concurrently measured PSD percentiles show an inverse, exponential influence of particle size on turbidity that is not constant across the size range of the PSD. The majority of the influence of PSD on T/SSC is from particles of fine-silt and smaller sizes (finer than 16 microns). This study shows that small changes in the often assumed stability of the PSD are significant to SSC~T relations. Changes of only 5 microns in the fine silt and smaller size fractions of suspended sediment PSD can produce hysteresis in the SSC~T rating that can increase error and produce bias. Observed SSC~T hysteresis may be an indicator of changes in sediment properties during stormflows and of potential changes in sediment sources. Trends in the PSD time series indicate that sediment transport is capacity-limited for sand-sized sediment in the channel and supply-limited for fine silt and smaller sediment from the hillslope.
An understanding of human dynamics in urban subway traffic from the Maximum Entropy Principle
NASA Astrophysics Data System (ADS)
Yong, Nuo; Ni, Shunjiang; Shen, Shifei; Ji, Xuewei
2016-08-01
We studied the distribution of entry time intervals in Beijing subway traffic by analyzing smart card transaction data, and then deduced the probability distribution function of the entry time interval from the Maximum Entropy Principle. Both theoretical derivation and data statistics indicated that the entry time interval obeys a power-law distribution with an exponential cutoff. In addition, we pointed out the constraint conditions for this distribution form and discussed how the constraints affect the distribution function. It is speculated that for bursts and heavy tails in human dynamics, a fitted power exponent less than 1.0 cannot correspond to a pure power-law distribution but must include an exponential cutoff, something that may have been overlooked in previous studies.
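The maximum-entropy argument can be sketched briefly: constraining both the mean and the mean logarithm of the interval forces exactly a power law with exponential cutoff. A minimal LaTeX rendering of that derivation (the choice of constraints is our reading of the abstract, not a quotation from the paper):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Maximize $S[p] = -\int p(x)\ln p(x)\,dx$ subject to normalization and to the
constraints $\langle x \rangle = m_1$ and $\langle \ln x \rangle = m_2$.
The Lagrangian functional is
\begin{equation}
  \mathcal{L}[p] = -\int p\ln p \,dx
  - \lambda_0\left(\int p\,dx - 1\right)
  - \lambda_1\left(\int x\,p\,dx - m_1\right)
  - \lambda_2\left(\int (\ln x)\, p\,dx - m_2\right),
\end{equation}
and setting $\delta\mathcal{L}/\delta p = 0$ gives
\begin{equation}
  p(x) \;\propto\; e^{-\lambda_2 \ln x - \lambda_1 x}
        \;=\; x^{-\lambda_2}\, e^{-\lambda_1 x},
\end{equation}
i.e.\ a power law with exponent $\lambda_2$ and exponential cutoff at
$x_c = 1/\lambda_1$, matching the form inferred from the smart-card data.
\end{document}
```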
A Decreasing Failure Rate, Mixed Exponential Model Applied to Reliability.
1981-06-01
Trident missile systems have been observed. The mixed exponential distribution has been shown to fit the life data for the electronic equipment on…these systems. This paper discusses some of the estimation problems which occur with the decreasing failure rate mixed exponential distribution when…assumption of constant or increasing failure rate seemed to be incorrect. 2. However, the design of this electronic equipment indicated that
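The qualitative point, namely that a mixture of exponentials has a decreasing failure rate, is easy to verify numerically. A sketch with invented weights and rates (not estimates from the Trident data):

```python
import numpy as np

# A two-component exponential mixture always has a decreasing failure rate:
# h(t) = f(t) / S(t) falls from the mixture-average rate toward the smallest
# component rate.
w, lam = np.array([0.6, 0.4]), np.array([2.0, 0.2])

t = np.linspace(0, 10, 6)
S = (w * np.exp(-np.outer(t, lam))).sum(axis=1)        # survival function
f = (w * lam * np.exp(-np.outer(t, lam))).sum(axis=1)  # density
for ti, hi in zip(t, f / S):
    print(f"t = {ti:4.1f}   hazard = {hi:.3f}")
```

The printed hazard starts at the weighted average rate (1.28 here) and decays toward the smaller rate (0.2), the signature behavior that complicates estimation under a constant-failure-rate assumption.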
Snow fracture: From micro-cracking to global failure
NASA Astrophysics Data System (ADS)
Capelli, Achille; Reiweger, Ingrid; Schweizer, Jürg
2017-04-01
Slab avalanches are caused by a crack forming and propagating in a weak layer within the snow cover, which eventually causes the detachment of the overlying cohesive slab. The gradual damage process leading to the nucleation of the initial failure is still not entirely understood. We therefore studied the damage process preceding snow failure by analyzing the acoustic emissions (AE) generated by bond failure or micro-cracking; the AE allow the ongoing progressive failure to be studied in a non-destructive way. We performed fully load-controlled failure experiments on snow samples containing a weak layer and recorded the generated AE. The size and frequency of the generated AE increased before failure, revealing an acceleration of the damage process with increasing size and frequency of damage and/or microscopic cracks. The AE energy was power-law distributed, and the exponent (b-value) decreased approaching failure. The waiting time followed an exponential distribution whose exponential coefficient λ increased before failure. The decrease of the b-value and the increase of λ correspond to a change in the event distribution statistics, indicating a transition from homogeneously distributed uncorrelated damage producing mostly small AE to localized damage causing larger correlated events, which leads to brittle failure. We observed brittle failure in the fast experiments and more ductile behavior in the slow experiments. This rate dependence was also reflected in the AE signature. In the slow experiments the b-value and λ were almost constant, and the energy rate increase was moderate, indicating that the damage process was in a stable state, suggesting the damage and healing processes to be balanced. On a shorter time scale, however, the AE parameters varied, indicating that the damage process was not steady but consisted of a sum of small bursts. We assume that the bursts may have been generated by cascades of correlated micro-cracks caused by localization of stresses at a small scale. The healing process may then have prevented the self-organization of this small-scale damage and, therefore, the total failure of the sample.
Size distributions of manure particles released under simulated rainfall.
Pachepsky, Yakov A; Guber, Andrey K; Shelton, Daniel R; McCarty, Gregory W
2009-03-01
Manure and animal waste deposited on cropland and grazing lands serve as a source of microorganisms, some of which may be pathogenic. These microorganisms are released along with particles of dissolved manure during rainfall events. Relatively little if anything is known about the amounts and sizes of manure particles released during rainfall, which may subsequently serve as carriers, abode, and nutritional source for microorganisms. The objective of this work was to obtain and present the first experimental data on sizes of bovine manure particles released to runoff during simulated rainfall and leached through soil during subsequent infiltration. Experiments were conducted using 200 cm long boxes containing turfgrass soil sod; the boxes were designed so that rates of manure dissolution and subsequent infiltration and runoff could be monitored independently. Dairy manure was applied on the upper portion of the boxes. Simulated rainfall (ca. 32.4 mm h^-1) was applied for 90 min on boxes with stands of either live or dead grass. Electrical conductivity, turbidity, and particle size distributions obtained from laser diffractometry were determined in manure runoff and soil leachate samples. Turbidity of leachates and manure runoff samples decreased exponentially. Turbidity of manure runoff samples was on average 20% less than that of soil leachate samples. Turbidity of leachate samples from boxes with dead grass was on average 30% less than from boxes with live grass. Particle size distributions in manure runoff and leachate suspensions remained remarkably stable after 15 min of runoff initiation, although the turbidity continued to decrease. Particles had a median diameter of 3.8 μm, and 90% of particles were between 0.6 and 17.8 μm. The particle size distributions were not affected by the grass status. Because manure particles are known to affect transport and retention of microbial pathogens in soil, more information needs to be collected about the concurrent release of pathogens and manure particles during rainfall events.
Analysis of the Chinese air route network as a complex network
NASA Astrophysics Data System (ADS)
Cai, Kai-Quan; Zhang, Jun; Du, Wen-Bo; Cao, Xian-Bin
2012-02-01
The air route network, which supports all the flight activities of civil aviation, is the most fundamental infrastructure of the air traffic management system. In this paper, we study the Chinese air route network (CARN) within the framework of complex networks. We find that CARN is a geographical network possessing an exponential degree distribution, a low clustering coefficient, a large shortest path length, and an exponential spatial distance distribution, which is obviously different from that of the Chinese airport network (CAN). Besides, by investigating flight data from 2002 to 2010, we demonstrate that the topological structure of CARN is homogeneous, although the distribution of flight flow on CARN is rather heterogeneous. In addition, the traffic on CARN keeps growing exponentially, and the growth rate of west China is remarkably larger than that of east China. Our work will be helpful to better understand Chinese air traffic systems.
Colloquium: Statistical mechanics of money, wealth, and income
NASA Astrophysics Data System (ADS)
Yakovenko, Victor M.; Rosser, J. Barkley, Jr.
2009-10-01
This Colloquium reviews statistical models for money, wealth, and income distributions developed in the econophysics literature since the late 1990s. By analogy with the Boltzmann-Gibbs distribution of energy in physics, it is shown that the probability distribution of money is exponential for certain classes of models with interacting economic agents. Alternative scenarios are also reviewed. Data analysis of the empirical distributions of wealth and income reveals a two-class distribution. The majority of the population belongs to the lower class, characterized by the exponential (“thermal”) distribution, whereas a small fraction of the population in the upper class is characterized by the power-law (“superthermal”) distribution. The lower part is very stable, stationary in time, whereas the upper part is highly dynamical and out of equilibrium.
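The emergence of the exponential ("thermal") money distribution can be reproduced with a minimal random-exchange simulation; the exchange rule below is one common choice from this literature, not necessarily one of the specific models reviewed:

```python
import numpy as np

rng = np.random.default_rng(9)

# Random-exchange economy: N agents, fixed total money M.  In each step a
# random pair splits their combined money at a uniformly random fraction
# (a conservative exchange rule); the stationary distribution is exponential,
# in analogy with the Boltzmann-Gibbs distribution of energy.
N, M, steps = 2000, 2000.0, 1_000_000
money = np.full(N, M / N)

for _ in range(steps):
    i, j = rng.integers(N, size=2)
    if i != j:
        pool = money[i] + money[j]
        money[i] = rng.random() * pool
        money[j] = pool - money[i]

# The effective "temperature" equals the average money per agent; for an
# exponential distribution, 1 - 1/e of agents sit below the mean.
print(f"mean money {money.mean():.2f}; "
      f"fraction below the mean {np.mean(money < money.mean()):.3f} "
      f"(exponential predicts {1 - np.exp(-1):.3f})")
```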
NASA Astrophysics Data System (ADS)
Tang, Huanfeng; Huang, Zaiyin; Xiao, Ming; Liang, Min; Chen, Liying; Tan, XueCai
2017-09-01
The activities, selectivities, and stabilities of nanoparticles in heterogeneous reactions are size-dependent. In order to investigate the influence of particle size and temperature on kinetic parameters in heterogeneous reactions, cubic nano-Cu2O particles of four different sizes in the range of 40-120 nm have been controllably synthesized. In situ microcalorimetry has been used to obtain thermodynamic data on the reaction of Cu2O with aqueous HNO3 and, combined with thermodynamic principles and kinetic transition-state theory, the relevant reaction kinetic parameters have been evaluated. The size dependences of the kinetic parameters are discussed in terms of the established kinetic model and the experimental results. It was found that the reaction rate constants increased with decreasing particle size. Accordingly, the apparent activation energy, pre-exponential factor, activation enthalpy, activation entropy, and activation Gibbs energy decreased with decreasing particle size. The reaction rate constants and activation Gibbs energies increased with increasing temperature. Moreover, the logarithms of the apparent activation energies, pre-exponential factors, and rate constants were found to be linearly related to the reciprocal of particle size, consistent with the kinetic models. The influence of particle size on these reaction kinetic parameters may be explained as follows: the apparent activation energy is affected by the partial molar enthalpy, the pre-exponential factor is affected by the partial molar entropy, and the reaction rate constant is affected by the partial molar Gibbs energy.
Flow and clogging of a sheep herd passing through a bottleneck.
Garcimartín, A; Pastor, J M; Ferrer, L M; Ramos, J J; Martín-Gómez, C; Zuriguel, I
2015-02-01
We present an experimental study of a flock passing through a narrow door. Video monitoring of daily routines in a farm has enabled us to collect a sizable amount of data. By measuring the time lapse between the passage of consecutive animals, some features of the flow regime can be assessed. A quantitative definition of clogging is demonstrated based on the passage time statistics. These display broad tails, which can be fitted by power laws with a relatively large exponent. On the other hand, the distribution of burst sizes robustly evidences exponential behavior. Finally, borrowing concepts from granular physics and statistical mechanics, we evaluate the effect of increasing the door size and the performance of an obstacle placed in front of it. The success of these techniques opens new possibilities regarding their eventual extension to the management of human crowds.
Real-Time System Verification by k-Induction
NASA Technical Reports Server (NTRS)
Pike, Lee S.
2005-01-01
We report the first formal verification of a reintegration protocol for a safety-critical, fault-tolerant, real-time distributed embedded system. A reintegration protocol increases system survivability by allowing a node that has suffered a fault to regain state consistent with the operational nodes. The protocol is verified in the Symbolic Analysis Laboratory (SAL), where bounded model checking and decision procedures are used to verify infinite-state systems by k-induction. The protocol and its environment are modeled as synchronizing timeout automata. Because k-induction is exponential with respect to k, we optimize the formal model to reduce the size of k. Also, the reintegrator's event-triggered behavior is conservatively modeled as time-triggered behavior to further reduce the size of k and to make it invariant to the number of nodes modeled. A corollary is that a clique avoidance property is satisfied.
Evidence for a scale-limited low-frequency earthquake source process
NASA Astrophysics Data System (ADS)
Chestler, S. R.; Creager, K. C.
2017-04-01
We calculate the seismic moments for 34,264 low-frequency earthquakes (LFEs) beneath the Olympic Peninsula, Washington. LFE moments range from 1.4 × 10^10 to 1.9 × 10^12 N m (Mw = 0.7-2.1). While regular earthquakes follow a power law moment-frequency distribution with a b value near 1 (the number of events increases by a factor of 10 for each unit decrease in Mw), we find that the b value is 6 for large LFEs and <1 for small LFEs. The magnitude-frequency distribution for all LFEs is best fit by an exponential distribution with a mean seismic moment (characteristic moment) of 2.0 × 10^11 N m. The moment-frequency distributions for each of the 43 LFE families, or spots on the plate interface where LFEs repeat, can also be fit by exponential distributions. An exponential moment-frequency distribution implies a scale-limited source process. We consider two end-member models where LFE moment is limited by (1) the amount of slip or (2) slip area. We favor the area-limited model. Based on the observed exponential distribution of LFE moment and geodetically observed total slip, we estimate that the total area that slips within an LFE family has a diameter of 300 m. Assuming an area-limited model, we estimate the slips, subpatch diameters, stress drops, and slip rates for LFEs during episodic tremor and slip events. We allow for LFEs to rupture smaller subpatches within the LFE family patch. Models with 1-10 subpatches produce slips of 0.1-1 mm, subpatch diameters of 80-275 m, and stress drops of 30-1000 kPa. While one subpatch is often assumed, we believe 3-10 subpatches are more likely.
Bennett, Kevin M; Schmainda, Kathleen M; Bennett, Raoqiong Tong; Rowe, Daniel B; Lu, Hanbing; Hyde, James S
2003-10-01
Experience with diffusion-weighted imaging (DWI) shows that signal attenuation is consistent with a multicompartmental theory of water diffusion in the brain. The source of this so-called nonexponential behavior is a topic of debate, because the cerebral cortex contains considerable microscopic heterogeneity and is therefore difficult to model. To account for this heterogeneity and understand its implications for current models of diffusion, a stretched-exponential function was developed to describe diffusion-related signal decay as a continuous distribution of sources decaying at different rates, with no assumptions made about the number of participating sources. DWI experiments were performed using a spin-echo diffusion-weighted pulse sequence with b-values of 500-6500 s/mm² in six rats. Signal attenuation curves were fit to a stretched-exponential function, and 20% of the voxels were better fit to the stretched-exponential model than to a biexponential model, even though the latter model has one more adjustable parameter. Based on the calculated intravoxel heterogeneity measure, the cerebral cortex contains considerable heterogeneity in diffusion. The use of a distributed diffusion coefficient (DDC) is suggested to measure mean intravoxel diffusion rates in the presence of such heterogeneity.
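The stretched-exponential fit described here is a two-parameter nonlinear regression of the form S/S0 = exp(-(b·DDC)^α). A sketch on synthetic decay data with assumed DDC and heterogeneity values (not fitted values from the study):

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched(b, ddc, alpha):
    """Stretched-exponential DWI signal decay, S/S0 = exp(-(b*DDC)**alpha)."""
    return np.exp(-(b * ddc) ** alpha)

# Synthetic signal over the b-value range used in the study, with assumed
# parameters and a little noise standing in for measurements.
b = np.linspace(500, 6500, 13)               # s/mm^2
rng = np.random.default_rng(10)
y = stretched(b, 0.8e-3, 0.8) + rng.normal(0, 0.005, b.size)

(ddc_fit, alpha_fit), _ = curve_fit(
    stretched, b, y, p0=(1e-3, 0.9),
    bounds=([1e-5, 0.1], [1e-2, 1.0]))       # keep DDC and alpha physical
print(f"fitted DDC = {ddc_fit:.2e} mm^2/s, "
      f"heterogeneity index alpha = {alpha_fit:.3f}")
```

The heterogeneity index α plays the role described in the abstract: α = 1 recovers monoexponential decay, and smaller α indicates a broader distribution of intravoxel diffusion rates.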
Competition for popularity in bipartite networks.
Díaz, Mariano Beguerisse; Porter, Mason A; Onnela, Jukka-Pekka
2010-12-01
We present a dynamical model for rewiring and attachment in bipartite networks. Edges are placed between nodes that belong to catalogs that can either be fixed in size or growing in size. The model is motivated by an empirical study of data from the video rental service Netflix, which invites its users to give ratings to the videos available in its catalog. We find that the distribution of the number of ratings given by users and that of the number of ratings received by videos both follow a power law with an exponential cutoff. We also examine the activity patterns of Netflix users and find bursts of intense video-rating activity followed by long periods of inactivity. We derive ordinary differential equations to model the acquisition of edges by the nodes over time and obtain the corresponding time-dependent degree distributions. We then compare our results with the Netflix data and find good agreement. We conclude with a discussion of how catalog models can be used to study systems in which agents are forced to choose, rate, or prioritize their interactions from a large set of options.
NASA Astrophysics Data System (ADS)
Golombek, M. P.; Haldemann, A. F. C.; Forsberg-Taylor, N. K.; DiMaggio, E. N.; Schroeder, R. D.; Jakosky, B. M.; Mellon, M. T.; Matijevic, J. R.
2003-10-01
The cumulative fractional area covered by rocks versus diameter measured at the Pathfinder site was predicted by a rock distribution model that follows simple exponential functions that approach the total measured rock abundance (19%), with a steep decrease in rocks with increasing diameter. The distribution of rocks >1.5 m diameter visible in rare boulder fields also follows this steep decrease with increasing diameter. The effective thermal inertia of rock populations calculated from a simple empirical model of the effective inertia of rocks versus diameter shows that most natural rock populations have cumulative effective thermal inertias of 1700-2100 J m^-2 s^-0.5 K^-1 and are consistent with the model rock distributions applied to total rock abundance estimates. The Mars Exploration Rover (MER) airbags have been successfully tested against extreme rock distributions with a higher percentage of potentially hazardous triangular buried rocks than observed at the Pathfinder and Viking landing sites. The probability of the lander impacting a >1 m diameter rock in the first 2 bounces is <3% and <5% for the Meridiani and Gusev landing sites, respectively, and is <0.14% and <0.03% for rocks >1.5 m and >2 m diameter, respectively. Finally, the model rock size-frequency distributions indicate that rocks >0.1 m and >0.3 m in diameter, large enough to place contact sensor instruments against and abrade, respectively, should be plentiful within a single sol's drive at the Meridiani and Gusev landing sites.
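The rock-coverage model named in the abstract is a simple exponential in diameter that approaches the total rock abundance as diameter goes to zero. A minimal sketch of that functional form; the decay parameter q used here is a placeholder, not the calibrated Mars value.

```python
import numpy as np

def cumulative_rock_area(D, k=0.19, q=1.8):
    """Cumulative fractional area covered by rocks with diameter >= D (m).

    k is the total rock abundance (19% at the Pathfinder site, per the
    abstract); q (1/m) sets how steeply coverage falls off with diameter
    and is an illustrative placeholder here.
    """
    return k * np.exp(-q * np.asarray(D))

for d in (0.1, 0.3, 1.0, 1.5):
    print(f"area fraction covered by rocks >= {d} m: {cumulative_rock_area(d):.4f}")
```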
Time-dependent breakdown of fiber networks: Uncertainty of lifetime
NASA Astrophysics Data System (ADS)
Mattsson, Amanda; Uesaka, Tetsu
2017-05-01
Materials often fail when subjected to stresses over a prolonged period. The time to failure, also called the lifetime, is known to exhibit large variability for many materials, particularly brittle and quasibrittle ones: the coefficient of variation can reach 100% or more, and the distribution shape is highly skewed toward zero lifetime, implying a large number of premature failures. This behavior contrasts with that of normal strength, which shows a variation of only 4%-10% and a nearly bell-shaped distribution. The fundamental cause of this large and unique variability of lifetime is not well understood because of the complex interplay between stochastic processes taking place on the molecular level and the hierarchical and disordered structure of the material. We have constructed fiber network models, both regular and random, as a paradigm for general material structures. With such networks, we have performed Monte Carlo simulations of creep failure to establish explicit relationships among fiber characteristics, network structure, system size, and lifetime distribution. We found that fiber characteristics have large, sometimes dominating, influences on the lifetime variability of a network. Among the factors investigated, geometrical disorder of the network was found to be essential to explain the large variability and highly skewed shape of the lifetime distribution. With increasing network size, the distribution asymptotically approaches a double-exponential form. The implication of this result is that so-called "infant mortality," which is often predicted by the Weibull approximation of the lifetime distribution, may not exist for a large system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freitas, R. J.; Shimakawa, K.; Department of Electrical and Electronic Engineering, Gifu University, Gifu 501-1193
The article discusses the dynamics of photoinduced defect creation (PDC) in amorphous chalcogenides, which is described by a stretched exponential function (SEF), whereas the well-known photodarkening (PD) and photoinduced volume expansion (PVE) are governed by simple exponential functions. It is shown that an exponential distribution of the thermal activation barrier produces the SEF in PDC, suggesting that thermal energy, as well as photon energy, is involved in the PDC mechanism. The differences in dynamics among the three major photoinduced effects (PD, PVE, and PDC) in amorphous chalcogenides are now well understood.
Sznajd Opinion Dynamics with Global and Local Neighborhood
NASA Astrophysics Data System (ADS)
Schulze, Christian
In this modification of the Sznajd consensus model on the square lattice, two people of arbitrary distance who agree in their opinions convince their nearest neighbors of this opinion. Similarly to the mean field theory of Slanina and Lavicka, the times needed to reach consensus are distributed exponentially and are quite small. The width of the phase transition vanishes reciprocally to the linear lattice dimension. Advertising has effects independent of the system size. For more than two opinions, three opinions reach a consensus in roughly half of the samples, and four only rarely and only for small lattices. Up to 10^9 agents were simulated.
Localization through surface folding in solid foams under compression.
Reis, P M; Corson, F; Boudaoud, A; Roman, B
2009-07-24
We report a combined experimental and theoretical study of the compression of a solid foam coated with a thin elastic film. Past a critical compression threshold, a pattern of localized folds emerges with a characteristic size that is imposed by an instability of the thin surface film. We perform optical surface measurements of the statistical properties of these localization zones and find that they are characterized by robust exponential tails in the strain distributions. Following a hybrid continuum and statistical approach, we develop a theory that accurately describes the nucleation and length scale of these structures and predicts the characteristic strains associated with the localized regions.
NASA Astrophysics Data System (ADS)
Song, Qiankun; Cao, Jinde
2007-05-01
A bidirectional associative memory neural network model with distributed delays is considered. By constructing a new Lyapunov functional and employing homeomorphism theory, M-matrix theory, and the inequality a∏_{k=1}^{m} b_k^{q_k} ≤ (1/r)(a^r + Σ_{k=1}^{m} q_k b_k^r) (for a ≥ 0, b_k ≥ 0, q_k > 0 with Σ_{k=1}^{m} q_k = r − 1, and r > 1), a sufficient condition is obtained to ensure the existence, uniqueness and global exponential stability of the equilibrium point for the model. Moreover, the exponential converging velocity index is estimated, which depends on the delay kernel functions and the system parameters. The results generalize and improve earlier publications, and remove the usual assumption that the activation functions are bounded. Two numerical examples are given to show the effectiveness of the obtained results.
Income inequality in Romania: The exponential-Pareto distribution
NASA Astrophysics Data System (ADS)
Oancea, Bogdan; Andrei, Tudorel; Pirjol, Dan
2017-03-01
We present a study of the distribution of gross personal income and income inequality in Romania, using individual tax income data and both non-parametric and parametric methods. Comparing with official results based on household budget surveys (the Family Budgets Survey and the EU-SILC data), we find that the latter underestimate the income share of the high-income region, and thus the overall income inequality. A parametric study shows that the income distribution is well described by an exponential distribution in the low- and middle-income region, and by a Pareto distribution in the high-income region with Pareto coefficient α = 2.53. We note an anomaly in the distribution in the low-income region (∼9,250 RON), and present a model which explains it in terms of partial income reporting.
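A sketch of the two-regime fit described above: an exponential (Boltzmann-Gibbs-like) bulk plus a Pareto tail estimated with the Hill maximum-likelihood estimator. The synthetic incomes, the 99th-percentile split point, and the scale are assumptions standing in for the Romanian tax data.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic stand-in incomes; a real analysis would use the tax records.
incomes = rng.exponential(scale=20000.0, size=100000)

threshold = np.quantile(incomes, 0.99)        # assumed bulk/tail split point
bulk, tail = incomes[incomes <= threshold], incomes[incomes > threshold]

# Exponential "temperature" of the bulk (the MLE is the sample mean).
T = bulk.mean()
# Pareto exponent of the tail via the Hill maximum-likelihood estimator.
alpha = 1.0 + tail.size / np.sum(np.log(tail / threshold))

print(f"bulk scale T = {T:.0f}, tail Pareto exponent alpha = {alpha:.2f}")
```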
Radio pulsar glitches as a state-dependent Poisson process
NASA Astrophysics Data System (ADS)
Fulgenzi, W.; Melatos, A.; Hughes, B. D.
2017-10-01
Gross-Pitaevskii simulations of vortex avalanches in a neutron star superfluid are limited computationally to ≲10^2 vortices and ≲10^2 avalanches, making it hard to study the long-term statistics of radio pulsar glitches in realistically sized systems. Here, an idealized, mean-field model of the observed Gross-Pitaevskii dynamics is presented, in which vortex unpinning is approximated as a state-dependent, compound Poisson process in a single random variable, the spatially averaged crust-superfluid lag. Both the lag-dependent Poisson rate and the conditional distribution of avalanche-driven lag decrements are inputs into the model, which is solved numerically (via Monte Carlo simulations) and analytically (via a master equation). The output statistics are controlled by two dimensionless free parameters: α, the glitch rate at a reference lag, multiplied by the critical lag for unpinning, divided by the spin-down rate; and β, the minimum fraction of the lag that can be restored by a glitch. The system evolves naturally to a self-regulated stationary state, whose properties are determined by α/αc(β), where αc(β) ≈ β^(-1/2) is a transition value. In the regime α ≳ αc(β), one recovers qualitatively the power-law size and exponential waiting-time distributions observed in many radio pulsars and Gross-Pitaevskii simulations. For α ≪ αc(β), the size and waiting-time distributions are both power-law-like, and a correlation emerges between size and waiting time until the next glitch, contrary to what is observed in most pulsars. Comparisons with astrophysical data are restricted by the small sample sizes available at present, with ≤35 events observed per pulsar.
NASA Astrophysics Data System (ADS)
Nadarajah, Saralees; Kotz, Samuel
2007-04-01
Various q-type distributions have appeared in the physics literature in recent years; see e.g. L.C. Malacarne, R.S. Mendes, E.K. Lenzi, q-exponential distribution in urban agglomeration, Phys. Rev. E 65 (2002) 017106; S.M.D. Queiros, On a possible dynamical scenario leading to a generalised Gamma distribution, xxx.lanl.gov-physics/0411111; U.M.S. Costa, V.N. Freire, L.C. Malacarne, R.S. Mendes, S. Picoli Jr., E.A. de Vasconcelos, E.F. da Silva Jr., An improved description of the dielectric breakdown in oxides based on a generalized Weibull distribution, Physica A 361 (2006) 215; S. Picoli Jr., R.S. Mendes, L.C. Malacarne, q-exponential, Weibull, and q-Weibull distributions: an empirical analysis, Physica A 324 (2003) 678-688; A.M.C. de Souza, C. Tsallis, Student's t- and r-distributions: unified derivation from an entropic variational principle, Physica A 236 (1997) 52-57. It is pointed out in the paper that many of these are the same as, or particular cases of, distributions long known in the statistics literature. Several of these statistical distributions are discussed and references provided. We feel that this paper could be of assistance for modeling problems of the type considered in the works cited above and others.
Characterization of plasma current quench during disruptions at HL-2A
NASA Astrophysics Data System (ADS)
Zhu, Jinxia; Zhang, Yipo; Dong, Yunbo; HL-2A Team
2017-05-01
The most essential physics assumptions for evaluating the electromagnetic forces exerted on plasma-facing components by disruption-induced eddy currents concern the characteristics of the plasma current quench, including the quench rate and its waveform. The characteristics of plasma current quenches at HL-2A have been analyzed for spontaneous disruptions. Both linear and exponential decays are found in the disruptions with the fastest current quenches. In the slow-quench case, however, the current quench proceeds in two stages: a first stage with exponential decay, followed by a second stage of rapid linear decay. Faster current quench rates correspond to faster plasma displacement. The parameter regimes for the current quench time and quench rate have been obtained from disruption statistics at HL-2A; no remarkable difference is found between the distributions obtained for the limiter and divertor configurations. These HL-2A data provide a basis for deriving design criteria for a large-sized machine during the current decay phase of disruptions.
NASA Astrophysics Data System (ADS)
Sala, Adrien; Shoaib, Muhammad; Anufrieva, Olga; Mutharasu, Gnanavel; Jahan Hoque, Rawnak; Yli-Harja, Olli; Kandhavelu, Meenakshisundaram
2015-05-01
In E. coli, promoter closed and open complexes are key steps in transcription initiation, where magnesium-dependent RNA polymerase catalyzes RNA synthesis. However, the exact mechanism of initiation remains to be fully elucidated. Here, using single-mRNA detection and dual-reporter studies, we show that increased intracellular magnesium concentration affects Plac initiation complex formation, resulting in a highly dynamic process over the cell growth phases. Mg2+ regulates the transcription transition, which modulates the bimodality of the mRNA distribution in the exponential phase. We reveal that Mg2+ regulates the size and frequency of mRNA bursts by changing the open complex duration. Moreover, increasing magnesium concentration leads to higher intrinsic and extrinsic noise in the exponential phase. Simulation of the RNAP-Mg2+ interaction reveals critical movements creating a shorter contact distance between aspartic acid and nucleotide triphosphate residues, and increasing electrostatic charge in the active site. Our findings provide unique biophysical insights into the balanced mechanism of genetic determinants and magnesium ions in the regulation of transcription initiation during cell growth.
Deviation of Zipf's and Heaps' Laws in Human Languages with Limited Dictionary Sizes
Lü, Linyuan; Zhang, Zi-Ke; Zhou, Tao
2013-01-01
Zipf's law on word frequency and Heaps' law on the growth of distinct words are observed in the Indo-European language family, but they do not hold for languages like Chinese, Japanese and Korean, which consist of characters and have very limited dictionary sizes. Extensive experiments show that: (i) The character frequency distribution follows a power law with exponent close to one, at which the corresponding Zipf's exponent diverges; indeed, the character frequency decays exponentially in the Zipf plot. (ii) The number of distinct characters grows with the text length in three stages: it grows linearly in the beginning, then turns to a logarithmic form, and eventually saturates. A theoretical model for the writing process is proposed, which embodies the rich-get-richer mechanism and the effects of limited dictionary size. Experiments, simulations and analytical solutions agree well with each other. This work refines the understanding of Zipf's and Heaps' laws in human language systems.
NASA Astrophysics Data System (ADS)
Mukherjee, Siddhartha; Goswami, Prakash; Dhar, Jayabrata; Dasgupta, Sunando; Chakraborty, Suman
2017-07-01
We report a study on the ion-size dependent electroosmosis of viscoelastic fluids in microfluidic channels with interfacial slip. Here, we derive an analytical solution for the potential distribution in a parallel plate microchannel, where the effects of finite sized ionic species are taken into account by invoking the free energy formalism. Following this, a purely electroosmotic flow of a simplified Phan-Thien-Tanner (sPTT) fluid is considered. For the sPTT model, linear, quadratic, and exponential kernels are chosen for the stress coefficient function describing its viscoelastic nature across various ranges of Deborah number. The theoretical framework presented in our analysis has been successfully compared with experimental results available in the literature. We believe that the implications of the considered effects on the net volumetric throughput will not only provide a deeper theoretical insight to interpret the electrokinetic data in the presence of ionic species but also serve as a fundamental design tool for novel electrokinetically driven lab-on-a-chip biofluidic devices.
Nayeb Yazdi, Mohammad; Delavarrafiee, Maryam; Arhami, Mohammad
2015-12-15
A field sampling campaign was implemented to evaluate the variation in air pollutant levels near a highway in Tehran, Iran (Hemmat highway). The field measurements were used to estimate road-link-based emission factors for the average vehicle fleet, and these factors were compared with results of an in-tunnel measurement campaign (in Resalat tunnel). Roadside and in-tunnel measurements of carbon monoxide (CO) and size-fractionated particulate matter (PM) were conducted during the field campaign. The concentration gradient diagrams showed exponential decay, with a substantial reduction of 50-80% in air pollutant levels within 100 to 150 m of the highway. Changes in the particle size distribution with distance from the highway were also captured and evaluated; the results showed the distribution shifting toward larger particles with increasing distance. Empirical emission factors were obtained from the roadside and in-tunnel measurements using a hypothetical box model, a floating machine model, CALINE4, CT-EMFAC or COPERT. Average CO emission factors were estimated to be in the range of 4 to 12 g/km, and those of PM10 were 0.1 to 0.2 g/km, depending on traffic conditions. Variations of these emission factors with speed under real working conditions were determined.
NASA Astrophysics Data System (ADS)
Xu, Feng; Davis, Anthony B.; Diner, David J.
2016-11-01
A Markov chain formalism is developed for computing the transport of polarized radiation according to Generalized Radiative Transfer (GRT) theory, which was developed recently to account for unresolved random fluctuations of scattering particle density and can also be applied to unresolved spectral variability of gaseous absorption as an improvement over the standard correlated-k method. Using a Gamma distribution to describe the probability density function of the extinction or absorption coefficient, a shape parameter a that quantifies the variability is introduced, defined as the square of the mean extinction or absorption coefficient divided by its variance. It controls the decay rate of a power-law transmission that replaces the usual exponential Beer-Lambert-Bouguer law; exponential transmission, hence classic RT, is recovered as a→∞. The new approach is verified to high accuracy against numerical benchmark results obtained with a custom Monte Carlo method. For a<∞, angular reciprocity is violated to a degree that increases with the spatial variability, as observed for finite portions of real-world cloudy scenes. While the degree of linear polarization in liquid water cloudbows, supernumerary bows, and glories is affected by spatial heterogeneity, the positions in scattering angle of these features are relatively unchanged. As a result, a single-scattering model based on the assumption of subpixel homogeneity can still be used to derive droplet size distributions from polarimetric measurements of extended stratocumulus clouds.
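The power-law transmission implied by Gamma-distributed extinction can be written down directly. A minimal sketch, assuming the generalized transmission T(τ) = (1 + τ/a)^(-a), which recovers the Beer-Lambert-Bouguer exponential as a → ∞; the optical-thickness values are illustrative.

```python
import numpy as np

def transmission(tau, a=np.inf):
    """Direct transmission over mean optical thickness tau.

    a = inf reproduces the classic Beer-Lambert-Bouguer exponential law;
    finite a gives the power-law transmission implied by Gamma-distributed
    extinction with shape parameter a (mean squared over variance).
    """
    tau = np.asarray(tau, dtype=float)
    if np.isinf(a):
        return np.exp(-tau)
    return (1.0 + tau / a) ** (-a)

tau = np.array([0.5, 1.0, 2.0, 5.0])
print(transmission(tau))          # classic exponential law
print(transmission(tau, a=2.0))   # heavier tail: more direct transmission
```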
Small-Scale, Local Area, and Transitional Millimeter Wave Propagation for 5G Communications
NASA Astrophysics Data System (ADS)
Rappaport, Theodore S.; MacCartney, George R.; Sun, Shu; Yan, Hangsong; Deng, Sijia
2017-12-01
This paper studies radio propagation mechanisms that impact handoffs, air interface design, beam steering, and MIMO for 5G mobile communication systems. Knife edge diffraction (KED) and a creeping wave linear model are shown to predict diffraction loss around typical building objects from 10 to 26 GHz, and human blockage measurements at 73 GHz are shown to fit a double knife-edge diffraction (DKED) model which incorporates antenna gains. Small-scale spatial fading of millimeter wave received signal voltage amplitude is generally Ricean-distributed for both omnidirectional and directional receive antenna patterns under both line-of-sight (LOS) and non-line-of-sight (NLOS) conditions in most cases, although the log-normal distribution fits measured data better for the omnidirectional receive antenna pattern in the NLOS environment. Small-scale spatial autocorrelations of received voltage amplitudes are shown to fit sinusoidal exponential and exponential functions for LOS and NLOS environments, respectively, with small decorrelation distances of 0.27 cm to 13.6 cm (smaller than the size of a handset) that are favorable for spatial multiplexing. Local area measurements using cluster and route scenarios show how the received signal changes as the mobile moves and transitions from LOS to NLOS locations, with reasonably stationary signal levels within clusters. Wideband mmWave power levels are shown to fade from 0.4 dB/ms to 40 dB/s, depending on travel speed and surroundings.
Patel, Mainak; Rangan, Aaditya
2017-08-07
Infant rats randomly cycle between the sleeping and waking states, which are tightly correlated with the activity of mutually inhibitory brainstem sleep and wake populations. Bouts of sleep and wakefulness are random; from P2-P10, sleep and wake bout lengths are exponentially distributed with increasing means, while during P10-P21, the sleep bout distribution remains exponential while the distribution of wake bouts gradually transforms to a power law. The locus coeruleus (LC), via an undeciphered interaction with sleep and wake populations, has been shown experimentally to be responsible for the exponential to power law transition. Concurrently during P10-P21, the LC undergoes striking physiological changes - the LC exhibits strong global 0.3 Hz oscillations up to P10, but the oscillation frequency gradually rises and synchrony diminishes from P10-P21, with oscillations and synchrony vanishing at P21 and beyond. In this work, we construct a biologically plausible Wilson-Cowan-style model consisting of the LC along with sleep and wake populations. We show that external noise and strong reciprocal inhibition can lead to switching between sleep and wake populations and exponentially distributed sleep and wake bout durations as during P2-P10, with the parameters of inhibition between the sleep and wake populations controlling mean bout lengths. Furthermore, we show that the changing physiology of the LC from P10-P21, coupled with reciprocal excitation between the LC and wake population, can explain the shift from exponential to power law of the wake bout distribution. To our knowledge, this is the first study that proposes a plausible biological mechanism, incorporating the known changing physiology of the LC, for tying the developing sleep-wake circuit and its interaction with the LC to the transformation of sleep and wake bout dynamics from P2-P21.
Statistical mechanics of money and income
NASA Astrophysics Data System (ADS)
Dragulescu, Adrian; Yakovenko, Victor
2001-03-01
Money: In a closed economic system, money is conserved. Thus, by analogy with energy, the equilibrium probability distribution of money will assume the exponential Boltzmann-Gibbs form characterized by an effective temperature. We demonstrate how the Boltzmann-Gibbs distribution emerges in computer simulations of economic models. We discuss thermal machines, the role of debt, and models with broken time-reversal symmetry for which the Boltzmann-Gibbs law does not hold. Reference: A. Dragulescu and V. M. Yakovenko, "Statistical mechanics of money", Eur. Phys. J. B 17, 723-729 (2000), [cond-mat/0001432]. Income: Using tax and census data, we demonstrate that the distribution of individual income in the United States is exponential. Our calculated Lorenz curve without fitting parameters and Gini coefficient 1/2 agree well with the data. We derive the distribution function of income for families with two earners and show that it also agrees well with the data. The family data for the period 1947-1994 fit the Lorenz curve and Gini coefficient 3/8=0.375 calculated for two-earners families. Reference: A. Dragulescu and V. M. Yakovenko, "Evidence for the exponential distribution of income in the USA", cond-mat/0008305.
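The Boltzmann-Gibbs result quoted above is easy to reproduce with a toy conserved-money exchange simulation. A minimal sketch, assuming one common exchange rule (random redistribution of a pair's combined money); agent count, total money, and step count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(4)
N, M, steps = 5000, 500000.0, 500000
money = np.full(N, M / N)  # equal initial holdings; total money is conserved

for _ in range(steps):
    i, j = rng.integers(N, size=2)
    if i == j:
        continue
    # Randomly redistribute the pair's combined money. This is one standard
    # exchange rule; the exponential steady state is insensitive to the details.
    total = money[i] + money[j]
    share = rng.random() * total
    money[i], money[j] = share, total - share

# The stationary distribution approaches P(m) ~ exp(-m/T) with "temperature"
# T = M/N, the average money per agent.
print("effective temperature T =", money.mean())
counts, _ = np.histogram(money, bins=30)
print(counts[:10])  # counts decay roughly geometrically across equal-width bins
```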
NASA Astrophysics Data System (ADS)
Figueroa-Morales, N.; Rivera, A.; Altshuler, E.; Darnige, T.; Douarche, C.; Soto, R.; Lindner, A.; Clément, E.
The motility of E. coli bacteria is described as a run-and-tumble process, in which changes of direction correspond to a switch in the flagellar motor rotation. The run time distribution is usually described as an exponential decay with a characteristic time close to 1 s. Remarkably, it has been demonstrated that the generic response for the distribution of run times is not exponential, but a heavy-tailed power-law decay, which is at odds with the motility findings. We investigate the consequences of the motor statistics for macroscopic bacterial transport. During upstream contamination processes in very confined channels, we have identified very long contamination tongues. Using a stochastic model that relates bacterial dwelling times on the surfaces to the run times, we are able to reproduce qualitatively and quantitatively the evolution of the contamination profiles when the power-law run time distribution is considered. However, the model fails to reproduce the qualitative dynamics when the classical exponential run-and-tumble distribution is used. Moreover, we have corroborated the existence of a power-law run time distribution by means of 3D Lagrangian tracking. We then argue that the macroscopic transport of bacteria is essentially determined by the motor rotation statistics.
Redshift data and statistical inference
NASA Technical Reports Server (NTRS)
Newman, William I.; Haynes, Martha P.; Terzian, Yervant
1994-01-01
Frequency histograms and the 'power spectrum analysis' (PSA) method, the latter developed by Yu & Peebles (1969), have been widely employed as techniques for establishing the existence of periodicities. We provide a formal analysis of these two classes of methods, including controlled numerical experiments, to better understand their proper use and application. In particular, we note that typical published applications of frequency histograms commonly employ far greater numbers of class intervals, or bins, than is advisable by statistical theory, sometimes giving rise to the appearance of spurious patterns. The PSA method generates a sequence of random numbers from observational data which, it is claimed, is exponentially distributed with unit mean and variance, essentially independent of the distribution of the original data. We show that the derived random process is nonstationary and produces a small but systematic bias in the usual estimates of the mean and variance. Although the derived variable may be reasonably described by an exponential distribution, the tail of the distribution is far removed from that of an exponential, thereby rendering statistical inference and confidence testing based on the tail of the distribution completely unreliable. Finally, we examine a number of astronomical examples wherein these methods have been used, giving rise to widespread acceptance of statistically unconfirmed conclusions.
Human mobility in space from three modes of public transportation
NASA Astrophysics Data System (ADS)
Jiang, Shixiong; Guan, Wei; Zhang, Wenyi; Chen, Xu; Yang, Liu
2017-10-01
Human mobility patterns have drawn much attention from researchers for decades, owing to their importance for urban planning and traffic management. In this study, taxi GPS trajectories and smart card transaction data for subway and bus travel in Beijing are used to model human mobility in space. The original datasets are cleaned and processed to obtain the displacement of each trip from its origin and destination locations. The Akaike information criterion is then used to select, for each mode, the best-fitting distribution from the candidates. The results indicate that displacements of taxi trips follow an exponential distribution. The exponential distribution also fits displacements of bus trips well, though with a significantly different exponent. Displacements of subway trips behave quite differently and are well fitted by a gamma distribution. Human mobility thus clearly differs by mode. To explore overall human mobility, the three datasets are combined into a fusion dataset according to the annual ridership proportions. The fusion displacements follow a power-law distribution with an exponential cutoff. Combining different transportation modes in this way offers a novel view of human mobility in the city.
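A sketch of the AIC-based screening step, assuming scipy.stats candidates and synthetic displacements standing in for the Beijing datasets; the gamma and exponential candidates come from the abstract, while the lognormal is added here purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Stand-in trip displacements (km); the real study used taxi, bus and subway data.
disp = rng.gamma(shape=2.0, scale=3.0, size=10000)

candidates = {
    "exponential": stats.expon,
    "gamma": stats.gamma,
    "lognormal": stats.lognorm,
}
for name, dist in candidates.items():
    params = dist.fit(disp, floc=0)              # MLE fit with location pinned at 0
    ll = np.sum(dist.logpdf(disp, *params))
    aic = 2 * (len(params) - 1) - 2 * ll         # floc=0 is fixed, not a free parameter
    print(f"{name:12s} AIC = {aic:.1f}")         # lowest AIC wins
```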
Zafra, C A; Temprano, J; Tejero, I
2011-07-01
The heavy metal pollution caused by road run-off water constitutes a problem in urban areas. The metallic load associated with road sediment must be determined in order to study its impact on drainage systems and receiving waters, and to improve the design of prevention systems. This paper presents data on the sediment collected from road surfaces in the city of Torrelavega (northern Spain) over a period of 65 days (132 samples). Two sample types were collected: vacuum-dried samples and those swept up following vacuuming. The sediment loading (g m^-2), particle size distribution (63-2800 µm) and heavy metal concentrations were determined. The data showed that the concentration of heavy metals tends to increase as particle diameter decreases (an exponential tendency). The concentrations of Pb, Zn, Cu, Cr, Ni, Cd, Fe, Mn and Co in the size fraction <63 µm were 350, 630, 124, 57, 56, 38, 3231, 374 and 51 mg kg^-1, respectively (average traffic density: 3800 vehicles day^-1). As the residence time of the sediment increases, the concentration increases, whereas the ratio of concentrations between the different size fractions decreases. The concentration across the road diminishes as the distance between the roadway and the sampling site increases; with increasing distance, the ratio between size fractions for heavy metal concentrations increases. Finally, the main sources of heavy metals are the particles detached by braking (brake pads) and tyre wear (rubber), and these are associated with particle sizes <125 µm.
NASA Astrophysics Data System (ADS)
Gao, Weihong; Rigout, Muriel; Owens, Huw
2016-12-01
In this work, the Stöber process was applied to produce uniform silica nanoparticles (SNPs) in the meso-scale size range. The novel aspect of this work was to control the produced silica particle size by only varying the volume of the solvent ethanol used, whilst fixing the other reaction conditions. Using this one-step Stöber-based solvent varying (SV) method, seven batches of SNPs with target diameters ranging from 70 to 400 nm were repeatedly reproduced, and the size distribution in terms of the polydispersity index (PDI) was well maintained (within 0.1). An exponential equation was used to fit the relationship between the particle diameter and ethanol volume. This equation allows the prediction of the amount of ethanol required in order to produce particles of any target diameter within this size range. In addition, it was found that the reaction was completed in approximately 2 h for all batches regardless of the volume of ethanol. Structurally coloured artificial opal photonic crystals (PCs) were fabricated from the prepared SNPs by self-assembly under gravity sedimentation.
Voter model with non-Poissonian interevent intervals
NASA Astrophysics Data System (ADS)
Takaguchi, Taro; Masuda, Naoki
2011-09-01
Recent analysis of social communications among humans has revealed that the interval between interactions for a pair of individuals and for an individual often follows a long-tail distribution. We investigate the effect of such a non-Poissonian nature of human behavior on dynamics of opinion formation. We use a variant of the voter model and numerically compare the time to consensus of all the voters with different distributions of interevent intervals and different networks. Compared with the exponential distribution of interevent intervals (i.e., the standard voter model), the power-law distribution of interevent intervals slows down consensus on the ring. This is because of the memory effect; in the power-law case, the expected time until the next update event on a link is large if the link has not had an update event for a long time. On the complete graph, the consensus time in the power-law case is close to that in the exponential case. Regular graphs bridge these two results such that the slowing down of the consensus in the power-law case as compared to the exponential case is less pronounced as the degree increases.
Line transect estimation of population size: the exponential case with grouped data
Anderson, D.R.; Burnham, K.P.; Crain, B.R.
1979-01-01
Gates, Marshall, and Olson (1968) investigated the line transect method of estimating grouse population densities in the case where sighting probabilities are exponential. This work is followed by a simulation study in Gates (1969). A general overview of line transect analysis is presented by Burnham and Anderson (1976). These articles all deal with the ungrouped data case. In the present article, an analysis of line transect data is formulated under the Gates framework of exponential sighting probabilities and in the context of grouped data.
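Grouped line-transect data of the kind analyzed here can be fit by maximizing a multinomial likelihood whose cell probabilities come from the exponential sighting model. A minimal sketch, with hypothetical distance bands and counts (the article's grouped-data estimator may differ in detail).

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical grouped data: counts of sightings per perpendicular-distance band (m).
edges = np.array([0.0, 10.0, 20.0, 30.0, 50.0, 80.0])
counts = np.array([62, 38, 21, 18, 6])

def neg_log_lik(lam):
    # Exponential sighting model: cell probabilities from F(x) = 1 - exp(-lam*x),
    # renormalized to the truncated observation window [0, 80] m.
    cdf = 1.0 - np.exp(-lam * edges)
    p = np.diff(cdf) / cdf[-1]
    return -np.sum(counts * np.log(p))

res = minimize_scalar(neg_log_lik, bounds=(1e-4, 1.0), method="bounded")
print(f"MLE decay rate lambda = {res.x:.4f} per meter")
```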
Non-Poissonian Distribution of Tsunami Waiting Times
NASA Astrophysics Data System (ADS)
Geist, E. L.; Parsons, T.
2007-12-01
Analysis of the global tsunami catalog indicates that tsunami waiting times deviate from the exponential distribution one would expect from a Poisson process. Empirical density distributions of tsunami waiting times were determined using both global tsunami origin times and tsunami arrival times at a particular site with a sufficient catalog: Hilo, Hawai'i. Most sources for the tsunamis in the catalog are earthquakes; other sources include landslides and volcanogenic processes. Both datasets indicate an over-abundance of short waiting times in comparison to an exponential distribution. Two types of probability models are investigated to explain this observation. Model (1) is a universal scaling law that describes long-term clustering of sources with a gamma distribution. The shape parameter (γ) for the global tsunami distribution is similar to that of the global earthquake catalog, γ=0.63-0.67 [Corral, 2004]. For the Hilo catalog, γ is slightly greater (0.75-0.82) and closer to an exponential distribution. This is explained by the fact that tsunamis from smaller triggered earthquakes or landslides are less likely to be recorded at a far-field station such as Hilo in comparison to the global catalog, which includes a greater proportion of local tsunamis. Model (2) is based on two distributions derived from Omori's law for the temporal decay of triggered sources (aftershocks). The first is the ETAS distribution derived by Saichev and Sornette [2007], which is shown to fit the distribution of observed tsunami waiting times. The second is a simpler two-parameter distribution that augments the exponential distribution with a linear decay in aftershocks multiplied by a time constant Ta. Examination of the sources associated with short tsunami waiting times indicates that triggered events include both earthquake and landslide tsunamis that begin in the vicinity of the primary source. Triggered seismogenic tsunamis do not necessarily originate from the same fault zone, however; for example, subduction-thrust and outer-rise earthquake pairs are evident, such as the November 2006 and January 2007 Kuril Islands tsunamigenic pair. Because of variations in tsunami source parameters, such as water depth above the source, triggered tsunami events with short waiting times are not systematically smaller than the primary tsunami.
DOE R&D Accomplishments Database
Chambers, E. E.; Hofstadter, R.
1956-04-01
The structure and size of the proton have been studied by means of the methods of high-energy electron scattering. The elastic scattering of electrons from protons in polyethylene has been investigated at the following energies in the laboratory system: 200, 300, 400, 500, 550 Mev. The range of laboratory angles examined has been 30 degrees to 135 degrees. At the largest angles and the highest energy, the cross section for scattering shows a deviation below that expected from a point proton by a factor of about nine. The magnitude and variation with angle of the deviations determine a structure factor for the proton, and thereby determine the size and shape of the charge and magnetic-moment distributions within the proton. An interpretation, consistent at all energies and angles and agreeing with earlier results from this laboratory, fixes the rms radius at (0.77 ± 0.10) × 10^-13 cm for each of the charge and moment distributions. The shape of the density function is not far from a Gaussian with rms radius 0.70 × 10^-13 cm or an exponential with rms radius 0.80 × 10^-13 cm. An equivalent interpretation of the experiments would ascribe the apparent size to a breakdown of the Coulomb law and the conventional theory of electromagnetism.
Periodic bidirectional associative memory neural networks with distributed delays
NASA Astrophysics Data System (ADS)
Chen, Anping; Huang, Lihong; Liu, Zhigang; Cao, Jinde
2006-05-01
Some sufficient conditions are obtained for the existence and global exponential stability of a periodic solution to the general bidirectional associative memory (BAM) neural network with distributed delays, by using the continuation theorem of Mawhin's coincidence degree theory, the Lyapunov functional method, and Young's inequality. These results are helpful for designing a globally exponentially stable, periodically oscillatory BAM neural network, and the conditions can be easily verified and applied in practice. An example is also given to illustrate our results.
Zhao, Kaihong
2018-12-01
In this paper, we study the n-species impulsive Gilpin-Ayala competition model with discrete and distributed time delays. The existence of a positive periodic solution is proved by employing the fixed point theorem on cones. By constructing an appropriate Lyapunov functional, we also obtain the global exponential stability of the positive periodic solution of this system. As an application, an interesting example is provided to illustrate the validity of our main results.
Probability distribution functions for intermittent scrape-off layer plasma fluctuations
NASA Astrophysics Data System (ADS)
Theodorsen, A.; Garcia, O. E.
2018-03-01
A stochastic model for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas has been constructed based on a superposition of uncorrelated pulses arriving according to a Poisson process. In the most common applications of the model, the pulse amplitudes are assumed to be exponentially distributed, supported by conditional averaging of large-amplitude fluctuations in experimental measurement data. This basic assumption has two potential limitations. First, statistical analysis of measurement data using conditional averaging only reveals the tail of the amplitude distribution to be exponentially distributed. Second, exponentially distributed amplitudes lead to a positive definite signal, which cannot capture fluctuations in, for example, electric potential and radial velocity. Assuming pulse amplitudes which are not positive definite often makes finding a closed form for the probability density function (PDF) difficult, even if the characteristic function remains relatively simple; estimating model parameters then requires an approach based on the characteristic function rather than the PDF. In this contribution, the effect of changing the amplitude distribution on the moments, PDF and characteristic function of the process is investigated, and a parameter estimation method using the empirical characteristic function is presented and tested on synthetically generated data. This proves valuable for describing intermittent fluctuations of all plasma parameters in the boundary region of magnetized plasmas.
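A sketch of parameter estimation from the empirical characteristic function, in the spirit of the method the abstract proposes. It assumes a Gamma-distributed synthetic signal (the stationary amplitude distribution of the shot-noise model with exponentially distributed pulse amplitudes) and fits the Gamma characteristic function by least squares; the frequency grid and starting values are arbitrary.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(6)
# Synthetic stand-in signal samples; a real analysis would use measurement data.
k_true, theta_true = 1.8, 0.7
x = rng.gamma(k_true, theta_true, size=50000)

u = np.linspace(0.1, 5.0, 40)                      # frequencies at which to evaluate
ecf = np.exp(1j * np.outer(u, x)).mean(axis=1)     # empirical characteristic function

def resid(params):
    k, theta = params
    model = (1.0 - 1j * u * theta) ** (-k)         # Gamma characteristic function
    d = model - ecf
    return np.concatenate([d.real, d.imag])        # stack real and imaginary parts

fit = least_squares(resid, x0=(1.0, 1.0), bounds=([0.01, 0.01], [20, 20]))
print("estimated k, theta =", fit.x)
```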
A mathematical model for evolution and SETI.
Maccone, Claudio
2011-12-01
Darwinian evolution theory may be regarded as a part of SETI theory in that the factor f(l) in the Drake equation represents the fraction of planets suitable for life on which life actually arose. In this paper we first provide a statistical generalization of the Drake equation where the factor f(l) is shown to follow the lognormal probability distribution. This lognormal distribution is a consequence of the Central Limit Theorem (CLT) of Statistics, stating that the product of a number of independent random variables whose probability densities are unknown and independent of each other approaches the lognormal distribution as the number of factors increases to infinity. In addition we show that the exponential growth of the number of species typical of Darwinian Evolution may be regarded as the geometric locus of the peaks of a one-parameter family of lognormal distributions (b-lognormals) constrained between the time axis and the exponential growth curve. Finally, since each b-lognormal distribution in the family may in turn be regarded as the product of a large number (actually "an infinity") of independent lognormal probability distributions, the mathematical way is paved to further cast Darwinian Evolution into a mathematical theory in agreement with both its typical exponential growth in the number of living species and the Statistical Drake Equation.
Modelling Evolution and SETI Mathematically
NASA Astrophysics Data System (ADS)
Maccone, Claudio
2012-05-01
Darwinian evolution theory may be regarded as a part of SETI theory in that the factor fl in the Drake equation represents the fraction of planets suitable for life on which life actually arose. In this paper we first provide a statistical generalization of the Drake equation where the factor fl is shown to follow the lognormal probability distribution. This lognormal distribution is a consequence of the Central Limit Theorem (CLT) of Statistics, stating that the product of a number of independent random variables whose probability densities are unknown and independent of each other approaches the lognormal distribution as the number of factors increases to infinity. In addition we show that the exponential growth of the number of species typical of Darwinian Evolution may be regarded as the geometric locus of the peaks of a one-parameter family of lognormal distributions constrained between the time axis and the exponential growth curve. Finally, since each lognormal distribution in the family may in turn be regarded as the product of a large number (actually "an infinity") of independent lognormal probability distributions, the mathematical way is paved to further cast Darwinian Evolution into a mathematical theory in agreement with both its typical exponential growth in the number of living species and the Statistical Drake Equation.
Accounting for inherent variability of growth in microbial risk assessment.
Marks, H M; Coleman, M E
2005-04-15
Risk assessments of pathogens need to account for the growth of small numbers of cells under varying conditions. In order to determine the possible risks that arise when cell numbers are small, stochastic models of growth are needed that capture the distribution of the number of cells over replicate trials of the same scenario or environmental conditions. This paper provides a simple stochastic growth model, accounting only for inherent cell-growth variability and assuming constant growth kinetic parameters, for an initial small number of cells assumed to be transforming from a stationary to an exponential phase. Two basic sets of microbial assumptions are considered: serial, where cells are assumed to pass through a lag phase before entering the exponential phase of growth; and parallel, where the lag and exponential phases are assumed to develop in parallel. The model is based on first determining the distribution of the time when growth commences, and then modelling the conditional distribution of the number of cells. For the latter distribution, it is found that a Weibull distribution provides a simple approximation to the conditional distribution of the relative growth, so that the model developed in this paper can be easily implemented in risk assessments using commercial software packages.
Microfracture spacing distributions and the evolution of fracture patterns in sandstones
NASA Astrophysics Data System (ADS)
Hooker, J. N.; Laubach, S. E.; Marrett, R.
2018-03-01
Natural fracture patterns in sandstone were sampled using scanning electron microscope-based cathodoluminescence (SEM-CL) imaging. All fractures are opening-mode and are fully or partially sealed by quartz cement. Most sampled fractures are too small to be height-restricted by sedimentary layers. At very low strains (<∼0.001), fracture spatial distributions are indistinguishable from random, whereas at higher strains, fractures are generally statistically clustered. All 12 large (N > 100) datasets show spacings that are best fit by log-normal size distributions, compared to exponential, power law, or normal distributions. The clustering of fractures suggests that the locations of natural fractures are not determined by a random process. To investigate natural fracture localization, we reconstructed the opening history of a cluster of fractures within the Huizachal Group in northeastern Mexico, using fluid inclusions from synkinematic cements and thermal-history constraints. The largest fracture, which is the only fracture in the cluster visible to the naked eye, among 101 present, opened relatively late in the sequence. This result suggests that the growth of sets of fractures is a self-organized process, in which small, initially isolated fractures grow and progressively interact, with preferential growth of a subset of fractures developing at the expense of growth of the rest. Size-dependent sealing of fractures within sets suggests that synkinematic cementation may contribute to fracture clustering.
Physical properties of the ionized gas and brightness distribution in NGC4736
NASA Astrophysics Data System (ADS)
Rodrigues, I.; Dottori, H.; Cepa, J.; Vilchez, J.
1998-03-01
In this work we study the galaxy NGC4736, using narrow-band interference filter imaging centered on the emission lines [O II] λλ3727,3729, Hβ, [O III] λ5007, Hα, [S II] λλ6716,6730 and [S III] λ9070, and on nearby continua. We have obtained sizes, positions, absolute emission-line fluxes, and continuum intensities for 90 H II regions, mainly distributed in a ring-like structure 3.2 kpc in diameter. The Hα luminosities are in the range 37.3 <= log L_Hα <= 39.4 erg s^-1. The H II region size distribution presents a characteristic diameter D_0 = 115 pc and verifies the relation L_Hα ~ D^3. The temperature of the ionizing sources and the metallicity of the H II regions are in the ranges 3.4×10^4 ≲ T_⋆ ≲ 4.0×10^4 K and 8.5 ≲ 12 + log(O/H) ≲ 9.3, respectively. The masses of the ionizing clusters are in the range 5×10^3 ≲ M_T/M_sun ≲ 2×10^5. The radial surface brightness distribution of the continua is best fitted by the superposition of a de Vaucouleurs law with thin and thick exponential disk laws. The monochromatic colors show that, outside the star-forming ring, the disk presents a younger stellar population than inside it. Tables 3 and 4 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html
NASA Astrophysics Data System (ADS)
Hashimoto, Chihiro; Panizza, Pascal; Rouch, Jacques; Ushiki, Hideharu
2005-10-01
A new analytical concept is applied to the kinetics of the shrinking process of poly(N-isopropylacrylamide) (PNIPA) gels. When PNIPA gels are put into hot water above the critical temperature, two-step shrinking is observed, and the secondary shrinking of the gels is fitted well by a stretched exponential function. The exponent β characterizing the stretched exponential is always higher than one, although few analytical interpretations exist for the stretched exponential function with β>1. As a new interpretation of this function, we propose a superposition of step (Heaviside) functions, from which a new distribution function of characteristic times is deduced.
Optical solver of combinatorial problems: nanotechnological approach.
Cohen, Eyal; Dolev, Shlomi; Frenkel, Sergey; Kryzhanovsky, Boris; Palagushkin, Alexandr; Rosenblit, Michael; Zakharov, Victor
2013-09-01
We present an optical computing system to solve NP-hard problems. As nano-optical computing is a promising venue for the next generation of computers performing parallel computations, we investigate the application of submicron, or even subwavelength, computing device designs. The system utilizes a setup of exponential-sized masks with exponential space complexity, produced in a polynomial-time preprocessing stage. The masks are later used to solve the problem in polynomial time. The size of the masks is reduced to nanoscale density. Simulations were performed to choose a proper design, and actual implementations show the feasibility of such a system.
Computational modeling of magnetic nanoparticle targeting to stent surface under high gradient field
Wang, Shunqiang; Zhou, Yihua; Tan, Jifu; Xu, Jiang; Yang, Jie; Liu, Yaling
2014-01-01
A multi-physics model was developed to study the delivery of magnetic nanoparticles (MNPs) to a stent-implanted region under an external magnetic field. The model is first validated against experimental work in the literature. Effects of external magnetic field strength, magnetic particle size, and flow velocity on MNP targeting and binding are then analyzed through a parametric study. Two new dimensionless numbers are introduced to characterize the relative effects of Brownian motion (BM), magnetic-force-induced particle motion, and convective blood flow on MNP motion. It was found that larger magnetic field strength, bigger MNP size, and slower flow velocity increase the capture efficiency of MNPs. The distribution of captured MNPs on the vessel wall along the axial and azimuthal directions is also discussed. Results showed that the MNP density decreased exponentially along the axial direction after a one-dose injection, while it was uniform along the azimuthal direction over the whole stented region (averaged over all sections). For the beginning section of the stented region, the density ratio distribution of captured MNPs along the azimuthal direction is center-symmetric, corresponding to the center-symmetric distribution of the magnetic force in that section. Two different generation mechanisms are revealed to form four main attraction regions. These results could serve as guidelines for designing a better magnetic drug delivery system.
Nathenson, Manuel; Donnelly-Nolan, Julie M.; Champion, Duane E.; Lowenstern, Jacob B.
2007-01-01
Medicine Lake volcano has had 4 eruptive episodes in its postglacial history (since 13,000 years ago) comprising 16 eruptions. Time intervals between events within the episodes are relatively short, whereas time intervals between the episodes are much longer. An updated radiocarbon chronology for these eruptions is presented that uses paleomagnetic data to constrain the choice of calibrated ages. This chronology is used with exponential, Weibull, and mixed-exponential probability distributions to model the data for time intervals between eruptions. The mixed-exponential distribution is the best match to the data and provides estimates for the conditional probability of a future eruption given the time since the last eruption. The probability of an eruption at Medicine Lake volcano within the next year is 0.00028.
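The conditional-probability calculation described above follows directly from the survival function of a mixed-exponential model. A minimal sketch; the weights and time constants below are placeholders chosen to mimic short intra-episode and long inter-episode intervals, not the fitted Medicine Lake values.

```python
import numpy as np

def conditional_eruption_prob(t, dt, w, tau1, tau2):
    """P(eruption before t+dt | repose has already lasted t) for a
    two-component mixed-exponential repose-interval model.

    Survival function: S(t) = w*exp(-t/tau1) + (1-w)*exp(-t/tau2).
    """
    S = lambda s: w * np.exp(-s / tau1) + (1 - w) * np.exp(-s / tau2)
    return 1.0 - S(t + dt) / S(t)

# Example: ~100-yr intra-episode intervals mixed with ~4000-yr inter-episode
# intervals, evaluated for the next year given ~950 yr since the last eruption.
print(conditional_eruption_prob(t=950.0, dt=1.0, w=0.7, tau1=100.0, tau2=4000.0))
```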
Universality in survivor distributions: Characterizing the winners of competitive dynamics
NASA Astrophysics Data System (ADS)
Luck, J. M.; Mehta, A.
2015-11-01
We investigate the survivor distributions of a spatially extended model of competitive dynamics in different geometries. The model consists of a deterministic dynamical system of individual agents at specified nodes, which might or might not survive the predatory dynamics: all stochasticity is brought in by the initial state. Every such initial state leads to a unique and extended pattern of survivors and nonsurvivors, which is known as an attractor of the dynamics. We show that the number of such attractors grows exponentially with system size, so that their exact characterization is limited to only very small systems. Given this, we construct an analytical approach based on inhomogeneous mean-field theory to calculate survival probabilities for arbitrary networks. This powerful (albeit approximate) approach shows how universality arises in survivor distributions via a key concept—the dynamical fugacity. Remarkably, in the large-mass limit, the survivor probability of a node becomes independent of network geometry and assumes a simple form which depends only on its mass and degree.
Heterogeneous characters modeling of instant message services users' online behavior.
Cui, Hongyan; Li, Ruibing; Fang, Yajun; Horn, Berthold; Welsch, Roy E
2018-01-01
Research on the temporal characteristics of human dynamics has attracted much attention for its contribution to various areas such as communication, medical treatment, finance, etc. Existing studies show that the time intervals between two consecutive events present different non-Poisson characteristics, such as power-law, Pareto, bimodal power-law, exponential, and piecewise power-law distributions. With the occurrence of new services, new types of distributions may arise. In this paper, we study the distributions of the time intervals between two consecutive visits to QQ and WeChat, the two most popular instant-messaging (IM) services in China, and present a new finding: when the value of the statistical unit T is set to 0.001 s, the inter-event time distribution follows a piecewise distribution of exponential and power-law forms, indicating the heterogeneous character of IM services users' online behavior on different time scales. We infer that this heterogeneous character is related to the communication mechanism of IM and the habits of users. We then develop a combination of an exponential model and an interest model to characterize the heterogeneity. Furthermore, we find that the exponent of the inter-event time distribution of the same service differs between two cities, which is correlated with the popularity of the services. Our research is useful for applications in information diffusion, prediction of the economic development of cities, and so on. PMID:29734327
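A piecewise exponential/power-law form like the one reported can be fitted with simple closed-form MLEs once a breakpoint is chosen. The following is a crude sketch under that assumption; the breakpoint `t_break`, the synthetic data, and the single-breakpoint form are illustrative, not the paper's estimation procedure.

```python
import numpy as np

def fit_piecewise(intervals, t_break):
    """Crude piecewise fit: exponential MLE below t_break, power-law (Hill) MLE above."""
    x = np.asarray(intervals, dtype=float)
    head, tail = x[x <= t_break], x[x > t_break]
    lam = 1.0 / head.mean()                                   # exponential rate, MLE
    alpha = 1.0 + len(tail) / np.sum(np.log(tail / t_break))  # Hill estimator
    return lam, alpha

# Synthetic demo: a mixture of exponential and Pareto inter-event times.
rng = np.random.default_rng(0)
data = np.concatenate([rng.exponential(2.0, 5000),
                       3.0 * (1 + rng.pareto(1.8, 2000))])
print(fit_piecewise(data, t_break=3.0))
```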
Stochastic Model of Vesicular Sorting in Cellular Organelles
NASA Astrophysics Data System (ADS)
Vagne, Quentin; Sens, Pierre
2018-02-01
The proper sorting of membrane components by regulated exchange between cellular organelles is crucial to intracellular organization. This process relies on the budding and fusion of transport vesicles, and should be strongly influenced by stochastic fluctuations, considering the relatively small size of many organelles. We identify the perfect sorting of two membrane components initially mixed in a single compartment as a first passage process, and we show that the mean sorting time exhibits two distinct regimes as a function of the ratio of vesicle fusion to budding rates. Low ratio values lead to fast sorting but result in a broad size distribution of sorted compartments dominated by small entities. High ratio values result in two well-defined sorted compartments but sorting is exponentially slow. Our results suggest an optimal balance between vesicle budding and fusion for the rapid and efficient sorting of membrane components and highlight the importance of stochastic effects for the steady-state organization of intracellular compartments.
Coevolutionary dynamics with clustering behaviors on cyclic competition
NASA Astrophysics Data System (ADS)
Dong, Linrong; Yang, Guangcan
2012-05-01
We propose a dynamic model for describing clustering behaviors in a cyclic game, in which individuals of the same species form a cluster to compete. The rates of consuming the prey depend not only on the individual competing ability v, but also on the sizes of the two interacting clusters. The fragmentation and coagulation rates of the clusters are related to the cohesive strength among the individuals. A new parameter u is introduced to indicate the degree of uniting. We find that the probability distribution of cluster sizes is almost a power law in a large regime specified by the two parameters, which reflects the scale-free behavior of complex systems. In addition, the exponent magnitudes are mostly in the range found for real social systems. Our simulation shows that clustering promotes biodiversity. At steady state, the abundances of the three species evolve tempestuously with asymmetric periods; the aggregation of large clusters for competition is evident and shows on-off intermittency.
Weak signal transmission in complex networks and its application in detecting connectivity.
Liang, Xiaoming; Liu, Zonghua; Li, Baowen
2009-10-01
We present a network model of coupled oscillators to study how a weak signal is transmitted in complex networks. Through both theoretical analysis and numerical simulations, we find that the response of other nodes to the weak signal decays exponentially with their topological distance to the signal source, and that the coupling strength between two neighboring nodes can be figured out from the responses. This finding can be conveniently used to detect the topology of an unknown network, such as its degree distribution, clustering coefficient, community structure, etc., by repeatedly choosing different nodes as the signal source. Through four typical networks, i.e., the regular one-dimensional, small-world, random, and scale-free networks, we show that the features of a network can be approximately obtained by investigating far fewer nodes than the network size; thus our approach to detecting the topology of unknown networks may be efficient in practical situations with large network sizes.
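Since the response is predicted to decay exponentially with topological distance to the source, the per-hop attenuation, and hence the coupling it encodes, can be read off a linear fit in log space. A minimal sketch, with synthetic responses standing in for measured ones:

```python
import numpy as np

def attenuation_per_hop(distances, responses):
    """Fit log(response) = a + b*distance; exp(b) is the per-hop attenuation,
    which in this kind of model encodes the neighbor coupling strength."""
    d = np.asarray(distances, dtype=float)
    logr = np.log(np.asarray(responses, dtype=float))
    b, a = np.polyfit(d, logr, 1)  # slope, intercept
    return np.exp(b)               # attenuation factor per topological hop

# Synthetic demo: the response halves with each hop from the source.
d = np.arange(1, 8)
r = 0.9 * 0.5 ** d
print(attenuation_per_hop(d, r))   # ~0.5
```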
A self-adapting herding model: The agent judge-abilities influence the dynamic behaviors
NASA Astrophysics Data System (ADS)
Dong, Linrong
2008-10-01
We propose a self-adapting herding model, in which the financial market consists of agent clusters with different sizes and market desires. The ratio of successful exchange and merger depends on the volatility of the market and the market desires of the agent clusters. The desires are assigned in terms of the wealth of the agent clusters when they merge. After an exchange, the winning cluster's desire stays the same, while the losing cluster's desire is altered in a manner correlated with the agents' judge-ability. A parameter R is given to all agents to denote the judge-ability. The numerical calculation shows that the dynamic behaviors of the market are influenced distinctly by R, including the exponent magnitudes of the probability distribution of agent-cluster sizes, the volatility autocorrelation of the returns, and the intensity and frequency of the volatility.
NASA Astrophysics Data System (ADS)
Zhang, Xufang; Okamoto, Dai; Hatakeyama, Tetsuo; Sometani, Mitsuru; Harada, Shinsuke; Iwamuro, Noriyuki; Yano, Hiroshi
2018-06-01
The impact of oxide thickness on the density distribution of near-interface traps (NITs) in SiO2/4H-SiC structure was investigated. We used the distributed circuit model that had successfully explained the frequency-dependent characteristics of both capacitance and conductance under strong accumulation conditions for SiO2/4H-SiC MOS capacitors with thick oxides by assuming an exponentially decaying distribution of NITs. In this work, it was found that the exponentially decaying distribution is the most plausible approximation of the true NIT distribution because it successfully explained the frequency dependences of capacitance and conductance under strong accumulation conditions for various oxide thicknesses. The thickness dependence of the NIT density distribution was also characterized. It was found that the NIT density increases with increasing oxide thickness, and a possible physical reason was discussed.
The Coherent Interlayer Resistance of a Single, Misoriented Interface between Two Graphite Stacks
NASA Astrophysics Data System (ADS)
Lake, Roger K.; Habib, K. M. Masum; Sylvia, Somaia; Ge, Supeng; Neupane, Mahesh
2014-03-01
The coherent, interlayer resistance of a misoriented, rotated interface between two stacks of AB graphite is determined for a variety of misorientation angles ranging from 0° to 27.29°. The quantum resistance of the ideal AB stack is on the order of 1 to 10 mΩ μm² depending on the Fermi energy. For small rotation angles ≤ 7.34°, the coherent interlayer resistance exponentially approaches the ideal quantum resistance at energies away from the charge neutrality point. Over a range of intermediate angles, the resistance increases exponentially with primitive cell size for minimum size cells. A change of misorientation angle by one degree can increase the primitive cell size by three orders of magnitude. These large cell sizes may not follow the exponential trend of the minimal cells, especially at energies a few hundred meV away from the charge neutrality point. At such energies, their coherent interlayer resistance is likely to coincide with that of a nearby rotation angle with a much smaller primitive cell. The energy dependence of the interlayer transmission is described and analyzed. This work was supported in part by FAME, one of six centers of STARnet, a Semiconductor Research Corporation program sponsored by MARCO and DARPA.
The Impact of Accelerating Faster than Exponential Population Growth on Genetic Variation
Reppell, Mark; Boehnke, Michael; Zöllner, Sebastian
2014-01-01
Current human sequencing projects observe an abundance of extremely rare genetic variation, suggesting recent acceleration of population growth. To better understand the impact of such accelerating growth on the quantity and nature of genetic variation, we present a new class of models capable of incorporating faster than exponential growth in a coalescent framework. Our work shows that such accelerated growth affects only the population size in the recent past and thus large samples are required to detect the models’ effects on patterns of variation. When we compare models with fixed initial growth rate, models with accelerating growth achieve very large current population sizes and large samples from these populations contain more variation than samples from populations with constant growth. This increase is driven almost entirely by an increase in singleton variation. Moreover, linkage disequilibrium decays faster in populations with accelerating growth. When we instead condition on current population size, models with accelerating growth result in less overall variation and slower linkage disequilibrium decay compared to models with exponential growth. We also find that pairwise linkage disequilibrium of very rare variants contains information about growth rates in the recent past. Finally, we demonstrate that models of accelerating growth may substantially change estimates of present-day effective population sizes and growth times. PMID:24381333
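One convenient way to realize faster-than-exponential growth is to let the instantaneous growth rate itself increase exponentially toward the present; the parametrization below is illustrative and not necessarily the one used in the paper.

```python
import numpy as np

def pop_size(t, N0=1e4, r0=1e-3, a=1e-3):
    """N(t) = N0 * exp((r0/a) * (exp(a*t) - 1)): growth whose instantaneous
    rate r(t) = r0*exp(a*t) accelerates with forward time t (in generations).
    Illustrative parametrization of faster-than-exponential growth."""
    return N0 * np.exp((r0 / a) * np.expm1(a * t))

t = np.array([0, 500, 1000, 1500, 2000])
print(pop_size(t))   # growth is mild early on, explosive near the present
```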
Ferrario, Mariana I; Guerrero, Sandra N
The purpose of this study was to analyze the response of different initial contamination levels of Alicyclobacillus acidoterrestris ATCC 49025 spores in apple juice as affected by pulsed light treatment (PL, batch mode, xenon lamp, 3 pulses/s, 0-71.6 J/cm²). Biphasic and Weibull frequency distribution models were used to characterize the relationship between inoculum size and treatment time and the reductions achieved after PL exposure. Additionally, a second-order polynomial model was computed to relate the required PL processing time to inoculum size and requested log reductions. PL treatment caused up to 3.0-3.5 log reductions, depending on the initial inoculum size. Inactivation curves corresponding to PL-treated samples were adequately characterized by both the Weibull and biphasic models (adjusted R² of 94-96%), and revealed that lower initial inoculum sizes were associated with higher inactivation rates. According to the polynomial model, the predicted time for PL treatment increased exponentially with inoculum size. Copyright © 2017 Asociación Argentina de Microbiología. Publicado por Elsevier España, S.L.U. All rights reserved.
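The Weibull inactivation component can be fitted directly to log-survivor data. The sketch below uses the common Mafart parametrization log10(N/N0) = -(t/δ)^p with synthetic data points, since the study's raw counts are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_logsurvival(t, delta, p):
    """Mafart-style Weibull inactivation: log10(N/N0) = -(t/delta)**p."""
    return -(t / delta) ** p

# Synthetic log10 reductions vs treatment time (illustrative values only).
t = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 60.0])
logS = np.array([0.0, -0.8, -1.5, -2.1, -2.6, -3.2])
popt, pcov = curve_fit(weibull_logsurvival, t, logS, p0=(15.0, 1.0))
print(dict(delta=popt[0], p=popt[1]))
```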
NASA Astrophysics Data System (ADS)
Zhang, Dong-Hai; Chen, Yan-Ling; Wang, Guo-Rong; Li, Wang-Dong; Wang, Qing; Yao, Ji-Jie; Zhou, Jian-Guo; Li, Rong; Li, Jun-Sheng; Li, Hui-Ling
2015-01-01
The forward-backward multiplicity and correlations of target evaporated fragments (black track particles) and target recoiled protons (grey track particles) emitted from 150 A MeV 4He, 290 A MeV 12C, 400 A MeV 12C, 400 A MeV 20Ne and 500 A MeV 56Fe induced interactions with different types of nuclear emulsion targets are investigated. It is found that the forward and backward averaged multiplicities of grey, black and heavily ionizing track particles increase with increasing target size. The averaged multiplicities of forward black track particles, backward black track particles, and backward grey track particles do not depend on the projectile size and energy, but the averaged multiplicity of forward grey track particles increases with increasing projectile size and energy. The backward grey track particle multiplicity distribution follows an exponential decay law, and the decay constant decreases with increasing target size. The backward-forward multiplicity correlations follow a linear law which is independent of the projectile size and energy, and a saturation effect is observed in some heavy target data sets.
NASA Astrophysics Data System (ADS)
Starn, J. J.; Belitz, K.; Carlson, C.
2017-12-01
Groundwater residence-time distributions (RTDs) are critical for assessing the susceptibility of water resources to contamination. In this novel approach for estimating regional RTDs, groundwater flow was first simulated using existing regional digital data sets in 13 intermediate-size watersheds (each an average of 7,000 square kilometers) that are representative of a wide range of glacial systems. RTDs were simulated with particle tracking. We refer to these models as "general models" because they are based on regional, as opposed to site-specific, digital data. Parametric RTDs were created from particle RTDs by fitting 1- and 2-component Weibull, gamma, and inverse Gaussian distributions, thus reducing a large number of particle travel times to 3 to 7 parameters (shape, location, and scale for each component plus a mixing fraction) for each modeled area. The scale parameter of these distributions is related to the mean exponential age; the shape parameter controls departure from the ideal exponential distribution and is partly a function of interaction with bedrock and with drainage density. Given the flexible shape and mathematical similarity of these distributions, any of them is potentially a good fit to particle RTDs. The 1-component gamma distribution provided a good fit to basin-wide particle RTDs. RTDs at monitoring wells and streams often have more complicated shapes than basin-wide RTDs, caused in part by heterogeneity in the model, and generally require 2-component distributions. A machine learning model was trained on the RTD parameters using features derived from regionally available watershed characteristics such as recharge rate, material thickness, and stream density. RTDs appeared to vary systematically across the landscape in relation to watershed features. This relation was used to produce maps of useful metrics with respect to risk-based thresholds, such as the time to first exceedance, time to maximum concentration, time above the threshold (exposure time), and the time until last exceedance; thus, the parameters of groundwater residence time are measures of the intrinsic susceptibility of groundwater to contamination.
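Fitting a parametric RTD to particle travel times and then evaluating risk-based metrics reduces to a few lines with standard tools. A minimal single-component gamma version, using synthetic ages where the real input would be the particle-tracking output:

```python
import numpy as np
from scipy import stats

# Synthetic particle travel times (years) standing in for model output.
rng = np.random.default_rng(1)
ages = rng.gamma(shape=0.9, scale=40.0, size=20000)

# Fit a 1-component gamma RTD (location fixed at zero).
shape, loc, scale = stats.gamma.fit(ages, floc=0.0)
rtd = stats.gamma(shape, loc=loc, scale=scale)
print(f"shape={shape:.2f} scale={scale:.1f} mean age={rtd.mean():.1f} yr")

# Example risk metric: fraction of flow younger than a threshold age,
# i.e. water that could already carry a recently introduced contaminant.
print("fraction younger than 10 yr:", rtd.cdf(10.0))
```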
Scaling behavior of sleep-wake transitions across species
NASA Astrophysics Data System (ADS)
Lo, Chung-Chuan; Chou, Thomas; Ivanov, Plamen Ch.; Penzel, Thomas; Mochizuki, Takatoshi; Scammell, Thomas; Saper, Clifford B.; Stanley, H. Eugene
2003-03-01
Uncovering the mechanisms controlling sleep is a fascinating scientific challenge. It can be viewed as transitions of states of a very complex system, the brain. We study the time dynamics of short awakenings during sleep for three species: humans, rats and mice. We find, for all three species, that wake durations follow a power-law distribution, and sleep durations follow exponential distributions. Surprisingly, all three species have the same power-law exponent for the distribution of wake durations, but the exponential time scale of the distributions of sleep durations varies across species. We suggest that the dynamics of short awakenings are related to species-independent fluctuations of the system, while the dynamics of sleep is related to system-dependent mechanisms which change with species.
Counting Raindrops and the Distribution of Intervals Between Them.
NASA Astrophysics Data System (ADS)
Van De Giesen, N.; Ten Veldhuis, M. C.; Hut, R.; Pape, J. J.
2017-12-01
Drop size distributions are often assumed to follow a generalized gamma function, characterized by one parameter, Λ [1]. In principle, this Λ can be estimated by measuring the arrival rate of raindrops. The arrival rate should follow a Poisson distribution. By measuring the distribution of the time intervals between drops arriving at a certain surface area, one should not only be able to estimate the arrival rate but also the robustness of the underlying assumption concerning steady state. It is important to note that many rainfall radar systems also assume fixed drop size distributions, and associated arrival rates, to derive rainfall rates. By testing these relationships with a simple device, we will be able to improve both land-based and space-based radar rainfall estimates. Here, an open-hardware sensor design is presented, consisting of a 3D printed housing for a piezoelectric element, some simple electronics and an Arduino. The target audience for this device is citizen scientists who want to contribute to collecting rainfall information beyond the standard rain gauge. The core of the sensor is a simple piezo-buzzer, as found in many devices such as watches and fire alarms. When a raindrop falls on a piezo-buzzer, a small voltage is generated, which can be used to register the drop's arrival time. By registering the intervals between raindrops, the associated Poisson distribution can be estimated. In addition to the hardware, we will present the first results of a measuring campaign in Myanmar that will have run from August to October 2017. All design files and descriptions are available through GitHub: https://github.com/nvandegiesen/Intervalometer. This research is partially supported through the TWIGA project, funded by the European Commission's H2020 program under call SC5-18-2017 `Novel in-situ observation systems'. Reference [1]: Uijlenhoet, R., and J. N. M. Stricker. "A consistent rainfall parameterization based on the exponential raindrop size distribution." Journal of Hydrology 218, no. 3 (1999): 101-127.
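Under the steady-state Poisson assumption, the arrival rate is just the reciprocal mean inter-arrival time, and the exponential-interval assumption itself can be checked on the same data. A minimal sketch, with synthetic timestamps standing in for the Arduino log:

```python
import numpy as np
from scipy import stats

def arrival_rate_from_intervals(intervals):
    """MLE of the Poisson arrival rate from drop inter-arrival times, plus a
    KS check of the exponential (steady-state Poisson) assumption."""
    x = np.asarray(intervals, dtype=float)
    rate = 1.0 / x.mean()                         # drops per second
    ks = stats.kstest(x, 'expon', args=(0, x.mean()))
    return rate, ks.pvalue

# Synthetic piezo timestamps (seconds); real ones would come from the logger.
rng = np.random.default_rng(2)
timestamps = np.cumsum(rng.exponential(0.05, 2000))   # ~20 drops/s
rate, p = arrival_rate_from_intervals(np.diff(timestamps))
print(f"estimated arrival rate: {rate:.1f} drops/s (KS p={p:.2f})")
```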
Complex Dynamics in Information Sharing Networks
NASA Astrophysics Data System (ADS)
Cronin, Bruce
This study examines the roll-out of an electronic knowledge base in a medium-sized professional services firm over a six year period. The efficiency of such implementation is a key business problem in IT systems of this type. Data from usage logs provides the basis for analysis of the dynamic evolution of social networks around the depository during this time. The adoption pattern follows an "s-curve" and usage exhibits something of a power law distribution, both attributable to network effects, and network position is associated with organisational performance on a number of indicators. But periodicity in usage is evident and the usage distribution displays an exponential cut-off. Further analysis provides some evidence of mathematical complexity in the periodicity. Some implications of complex patterns in social network data for research and management are discussed. The study provides a case study demonstrating the utility of the broad methodological approach.
Individuality and universality in the growth-division laws of single E. coli cells
NASA Astrophysics Data System (ADS)
Kennard, Andrew S.; Osella, Matteo; Javer, Avelino; Grilli, Jacopo; Nghe, Philippe; Tans, Sander J.; Cicuta, Pietro; Cosentino Lagomarsino, Marco
2016-01-01
The mean size of exponentially dividing Escherichia coli cells in different nutrient conditions is known to depend on the mean growth rate only. However, the joint fluctuations relating cell size, doubling time, and individual growth rate are only starting to be characterized. Recent studies in bacteria reported a universal trend where the spread in both size and doubling times is a linear function of the population means of these variables. Here we combine experiments and theory and use scaling concepts to elucidate the constraints posed by the second observation on the division control mechanism and on the joint fluctuations of sizes and doubling times. We found that scaling relations based on the means collapse both size and doubling-time distributions across different conditions and explain how the shape of their joint fluctuations deviates from the means. Our data on these joint fluctuations highlight the importance of cell individuality: Single cells do not follow the dependence observed for the means between size and either growth rate or inverse doubling time. Our calculations show that these results emerge from a broad class of division control mechanisms requiring a certain scaling form of the "division hazard rate function," which defines the probability rate of dividing as a function of measurable parameters. This "model free" approach gives a rationale for the universal body-size distributions observed in microbial ecosystems across many microbial species, presumably dividing with multiple mechanisms. Additionally, our experiments show a crossover between fast and slow growth in the relation between individual-cell growth rate and division time, which can be understood in terms of different regimes of genome replication control.
Kokalj, Anita Jemec; Kunej, Urban; Skalar, Tina
2018-06-08
This study investigated four different environmentally relevant microplastic (MP) pollutants which were derived from two facial cleansers, a plastic bag and polyethylene textile fleece. The mean size range of the particles (according to number distribution) was 20-250 μm when measured as a powder and 0.02-200 μm in suspension. In all MP exposures, plastic particles were found inside the guts of D. magna and A. franciscana, but only in the case of daphnids was a clear exponential correlation between MP uptake in the gut and MP size identified. Exposure tests in which the majority of the MP particles were below 100 μm in size also had higher numbers of daphnids displaying evidence of MP ingestion. As the average MP particle size increased, the percentage of daphnids which had MP in their gut decreased. Using a number distribution value to measure particle size when in a suspension is more experimentally relevant, as it provides a more realistic particle size than when samples are measured as a powder. Generally, artemias had fewer MP particles in the gut than the daphnids, which could be explained by their different food size preferences. No acute effects on D. magna were found, but the growth of A. franciscana was affected. We conclude that zooplankton crustaceans can ingest various MPs but none of the exposures tested were highly acutely hazardous to the test species. In addition, no delayed lethal effects in a 24 h post-exposure period were found. Copyright © 2018 Elsevier Ltd. All rights reserved.
Eye-Size Variability in Deep-Sea Lanternfishes (Myctophidae): An Ecological and Phylogenetic Study
de Busserolles, Fanny; Fitzpatrick, John L.; Paxton, John R.; Marshall, N. Justin; Collin, Shaun P.
2013-01-01
One of the most common visual adaptations seen in the mesopelagic zone (200–1000 m), where the amount of light diminishes exponentially with depth and where bioluminescent organisms predominate, is the enlargement of the eye and pupil area. However, it remains unclear how eye size is influenced by depth, other environmental conditions and phylogeny. In this study, we determine the factors influencing variability in eye size and assess whether this variability is explained by ecological differences in habitat and lifestyle within a family of mesopelagic fishes characterized by broad intra- and interspecific variance in depth range and luminous patterns. We focus our study on the lanternfish family (Myctophidae) and hypothesise that lanternfishes with a deeper distribution and/or a reduction of bioluminescent emissions have smaller eyes and that ecological factors rather than phylogenetic relationships will drive the evolution of the visual system. Eye diameter and standard length were measured in 237 individuals from 61 species of lanternfishes representing all the recognised tribes within the family in addition to compiling an ecological dataset including depth distribution during night and day and the location and sexual dimorphism of luminous organs. Hypotheses were tested by investigating the relationship between the relative size of the eye (corrected for body size) and variations in depth and/or patterns of luminous-organs using phylogenetic comparative analyses. Results show a great variability in relative eye size within the Myctophidae at all taxonomic levels (from subfamily to genus), suggesting that this character may have evolved several times. However, variability in eye size within the family could not be explained by any of our ecological variables (bioluminescence and depth patterns), and appears to be driven solely by phylogenetic relationships. PMID:23472203
Estimating regional centile curves from mixed data sources and countries.
van Buuren, Stef; Hayes, Daniel J; Stasinopoulos, D Mikis; Rigby, Robert A; ter Kuile, Feiko O; Terlouw, Dianne J
2009-10-15
Regional or national growth distributions can provide vital information on the health status of populations. In most resource-poor countries, however, the required anthropometric data from purpose-designed growth surveys are not readily available. We propose a practical method for estimating regional (multi-country) age-conditional weight distributions based on existing survey data from different countries. We developed a two-step method by which one is able to model data with widely different age ranges and sample sizes. The method produces references both at the country level and at the regional (multi-country) level. The first step models country-specific centile curves by Box-Cox t and Box-Cox power exponential distributions implemented in the generalized additive model for location, scale and shape through a common model. Individual countries may vary in location and spread. The second step defines the regional reference from a finite mixture of the country distributions, weighted by population size. To demonstrate the method we fitted the weight-for-age distribution of 12 countries in South East Asia and the Western Pacific, based on 273,270 observations. We modeled both the raw body weight and the corresponding Z score, and obtained a good fit between the final models and the original data for both solutions. We briefly discuss an application of the generated regional references to obtain appropriate, region-specific, age-based dosing regimens of drugs used in the tropics. The method is an affordable and efficient strategy to estimate regional growth distributions where the standard costly alternatives are not an option. Copyright (c) 2009 John Wiley & Sons, Ltd.
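The second step, a population-weighted finite mixture, is straightforward to evaluate numerically once the country-level fits are in hand. The sketch below substitutes normal distributions for the Box-Cox t / Box-Cox power exponential fits purely to stay self-contained; the weights and parameters are invented.

```python
import numpy as np
from scipy import stats
from scipy.optimize import brentq

# Regional reference as a population-weighted finite mixture of country-level
# distributions (normal stand-ins for the fitted centile models, at one age).
countries = [  # (population weight, fitted weight-for-age distribution)
    (0.5, stats.norm(loc=12.0, scale=1.6)),
    (0.3, stats.norm(loc=11.2, scale=1.4)),
    (0.2, stats.norm(loc=12.8, scale=1.9)),
]

def mixture_cdf(x):
    return sum(w * dist.cdf(x) for w, dist in countries)

def mixture_quantile(q):
    # Invert the mixture CDF numerically on a generous bracket.
    return brentq(lambda x: mixture_cdf(x) - q, 2.0, 30.0)

# Regional 3rd, 50th and 97th centiles at this age:
print([round(mixture_quantile(q), 2) for q in (0.03, 0.50, 0.97)])
```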
Application of Krylov exponential propagation to fluid dynamics equations
NASA Technical Reports Server (NTRS)
Saad, Youcef; Semeraro, David
1991-01-01
An application of matrix exponentiation via Krylov subspace projection to the solution of fluid dynamics problems is presented. The main idea is to approximate the operation exp(A)v by means of a projection-like process onto a Krylov subspace. This reduces the computation to an exponential matrix-vector product of much smaller size. Time integration schemes can then be devised to exploit this basic computational kernel. The motivation of this approach is to provide time-integration schemes that are essentially explicit in nature but have good stability properties.
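The basic kernel, approximating exp(A)v from an m-dimensional Krylov subspace built by Arnoldi iteration, can be sketched as follows (a textbook version, not the authors' code):

```python
import numpy as np
from scipy.linalg import expm

def krylov_expm_v(A, v, m=20):
    """Approximate exp(A) @ v by projecting onto an m-dimensional Krylov
    subspace (Arnoldi), then exponentiating the small Hessenberg matrix:
    exp(A) v ~ beta * V_m @ expm(H_m) @ e_1."""
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:           # happy breakdown: invariant subspace
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m); e1[0] = 1.0
    return beta * V[:, :m] @ (expm(H[:m, :m]) @ e1)

# Check against the dense exponential on a small random operator.
rng = np.random.default_rng(3)
A = -np.eye(50) + 0.1 * rng.standard_normal((50, 50))
v = rng.standard_normal(50)
print(np.linalg.norm(krylov_expm_v(A, v, m=30) - expm(A) @ v))
```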
NASA Astrophysics Data System (ADS)
Rigby, J. R.; Johnson, T. L.; Sharon, K.; Whitaker, K.; Gladders, M. D.; Florian, M.; Lotz, J.; Bayliss, M.; Wuyts, E.
2017-07-01
For lensed galaxy SGAS J111020.0+645950.8 at redshift z = 2.481, which is magnified by a factor of 28 ± 8, we analyze the morphology of star formation, as traced by rest-frame ultraviolet emission, in both the highly magnified source plane and simulations of how this galaxy would appear without lensing magnification. Were this galaxy not lensed, but rather drawn from a Hubble Space Telescope deep field, we would conclude that almost all its star formation arises from an exponential disk (Sérsic index of 1.0 ± 0.4) with an effective radius of r_e = 2.7 ± 0.3 kpc measured from two-dimensional fitting to F606W using Galfit, and r_e = 1.9 ± 0.1 kpc measured by fitting a radial profile to F606W elliptical isophotes. At the normal spatial resolution of the deep fields, there is no sign of clumpy star formation within SGAS J111020.0+645950.8. However, the enhanced spatial resolution enabled by gravitational lensing tells a very different story; much of the star formation arises in two dozen clumps with sizes of r = 30-50 pc spread across the 7 kpc length of the galaxy. The color and spatial distribution of the diffuse component suggests that still-smaller clumps are unresolved. Despite this clumpy, messy morphology, the radial profile is still well-characterized by an exponential profile. In this lensed galaxy, stars are forming in complexes with sizes well below 100 pc; such sizes are wholly unexplored by surveys of galaxy evolution at 1 < z < 3.
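For reference, the Sérsic profile underlying such fits reduces to an exponential disk at n = 1. A small sketch using the standard asymptotic approximation for b_n:

```python
import numpy as np

def sersic_profile(r, I_e, r_e, n):
    """Sersic surface-brightness profile I(r) = I_e * exp(-b_n*((r/r_e)**(1/n) - 1)).
    b_n uses the standard asymptotic (Ciotti & Bertin) approximation; n = 1 is
    the exponential disk found for this galaxy."""
    b_n = 2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n)
    return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

r = np.linspace(0.1, 10.0, 5)  # kpc
print(sersic_profile(r, I_e=1.0, r_e=2.7, n=1.0))
```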
A quantitative analysis of the state of knowledge of turtles of the United States and Canada
Lovich, Jeffrey E.; Ennen, Joshua R.
2013-01-01
The “information age” ushered in an explosion of knowledge and access to knowledge that continues to revolutionize society. Knowledge about turtles, as measured by the number of published papers, has been growing at an exponential rate since the early 1970s, a phenomenon mirrored in all scientific disciplines. Although knowledge about turtles, as measured by the number of citations for papers in scientific journals, has been growing rapidly, this taxonomic group remains highly imperiled, suggesting that knowledge is not always successfully translated into effective conservation of turtles. We reviewed the body of literature on turtles of the United States and Canada and found that: 1) the number of citations is biased toward large-bodied species, 2) the number of citations is biased toward wide-ranging species, and 3) conservation status has little effect on the accumulation of knowledge for a species, especially after removing the effects of body size or range size. The dispersion of knowledge, measured by Shannon-Wiener diversity and evenness indices across species, was identical from 1994 to 2009, suggesting that poorly studied species remained poorly studied while well-studied species remained well studied. Several species listed as threatened or endangered under the U.S. Endangered Species Act (e.g., Pseudemys alabamensis, Sternotherus depressus, and Graptemys oculifera) remain poorly studied, with the estimated number of citations for each ranging from only 13 to 24. The low number of citations for these species could best be explained by their restricted distribution and/or their smaller size. Despite the exponential increase in knowledge of turtles in the United States and Canada, no species of turtle listed under the Endangered Species Act has ever been delisted for reason of recovery. Therefore, increased knowledge does not necessarily contribute appreciably to recovery of threatened turtles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennink, Ryan S.; Ferragut, Erik M.; Humble, Travis S.
Modeling and simulation are essential for predicting and verifying the behavior of fabricated quantum circuits, but existing simulation methods are either impractically costly or require an unrealistic simplification of error processes. In this paper, we present a method of simulating noisy Clifford circuits that is both accurate and practical in experimentally relevant regimes. In particular, the cost is weakly exponential in the size and the degree of non-Cliffordness of the circuit. Our approach is based on the construction of exact representations of quantum channels as quasiprobability distributions over stabilizer operations, which are then sampled, simulated, and weighted to yield unbiased statistical estimates of circuit outputs and other observables. As a demonstration of these techniques, we simulate a Steane [[7,1,3]] code.
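The sampling scheme described, drawing stabilizer operations from a quasiprobability decomposition and reweighting by its sign and 1-norm, has a generic Monte Carlo skeleton. The toy below illustrates only that estimator; the "terms" are stand-in numbers, not actual stabilizer-circuit simulations.

```python
import numpy as np

def quasiprob_estimate(q, simulate_term, n_samples, rng):
    """Generic quasiprobability Monte Carlo: estimate E = sum_i q_i * E_i,
    where q may have negative entries, by sampling index i with probability
    |q_i|/||q||_1 and weighting each outcome by sign(q_i)*||q||_1. The
    estimate is unbiased; its variance grows with the 1-norm, i.e. with the
    'negativity' of the decomposition."""
    q = np.asarray(q, dtype=float)
    norm1 = np.abs(q).sum()
    probs = np.abs(q) / norm1
    idx = rng.choice(len(q), size=n_samples, p=probs)
    outcomes = np.array([simulate_term(i) for i in idx])
    return norm1 * np.mean(np.sign(q[idx]) * outcomes)

# Toy demo: each 'term' i just returns a fixed expectation value.
rng = np.random.default_rng(4)
term_values = np.array([0.9, 0.4, -0.2])
q = np.array([1.2, -0.5, 0.3])          # quasiprobability decomposition
est = quasiprob_estimate(q, lambda i: term_values[i], 100000, rng)
print(est, "vs exact", q @ term_values)
```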
Linear prediction and single-channel recording.
Carter, A A; Oswald, R E
1995-08-01
The measurement of individual single-channel events arising from the gating of ion channels provides a detailed data set from which the kinetic mechanism of a channel can be deduced. In many cases, the pattern of dwells in the open and closed states is very complex, and the kinetic mechanism and parameters are not easily determined. Assuming a Markov model for channel kinetics, the probability density function for the open and closed dwell times should consist of a sum of decaying exponentials. One method of approaching the kinetic analysis of such a system is to determine the number of exponentials and the corresponding parameters which comprise the open and closed dwell time distributions. These can then be compared to the relaxations predicted from the kinetic model to determine, where possible, the kinetic constants. We report here the use of a linear technique, linear prediction/singular value decomposition, to determine the number of exponentials and the exponential parameters. Using simulated distributions and comparing with standard maximum-likelihood analysis, the singular value decomposition techniques provide advantages in some situations and are a useful adjunct to other single-channel analysis techniques.
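The linear prediction/SVD idea can be sketched compactly: solve the linear-prediction equations by an SVD-based least squares and recover the exponential time constants from the roots of the prediction polynomial. A minimal noiseless illustration, not the authors' implementation:

```python
import numpy as np

def lp_svd_rates(y, dt, p):
    """Estimate time constants of y(t) = sum_i A_i exp(-t/tau_i), sampled at
    spacing dt, via linear prediction: solve y_k ~ sum_j a_j y_{k-j} in the
    least-squares sense using the SVD (pinv), then read the rates off the
    roots of the prediction polynomial."""
    y = np.asarray(y, dtype=float)
    # Prediction matrix: row for sample k holds [y_{k-1}, ..., y_{k-p}].
    M = np.column_stack([y[p - j - 1: len(y) - j - 1] for j in range(p)])
    a = np.linalg.pinv(M) @ y[p:]          # SVD-based least squares
    roots = np.roots(np.concatenate(([1.0], -a)))
    taus = -dt / np.log(np.abs(roots))     # since z_i = exp(-dt/tau_i)
    return np.sort(taus)

# Two-exponential demo, akin to an open dwell-time distribution.
dt = 0.1
t = np.arange(0, 30, dt)
y = 2.0 * np.exp(-t / 1.5) + 0.5 * np.exp(-t / 8.0)
print(lp_svd_rates(y, dt, p=2))   # ~[1.5, 8.0]
```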
Guo, Zhenyuan; Yang, Shaofu; Wang, Jun
2016-12-01
This paper presents theoretical results on global exponential synchronization of multiple memristive neural networks in the presence of external noise by means of two types of distributed pinning control. The multiple memristive neural networks are coupled in a general structure via a nonlinear function, which consists of a linear diffusive term and a discontinuous sign term. A pinning impulsive control law is introduced in the coupled system to synchronize all neural networks. Sufficient conditions are derived for ascertaining global exponential synchronization in mean square. In addition, a pinning adaptive control law is developed to achieve global exponential synchronization in mean square. Both pinning control laws utilize only partial state information received from the neighborhood of the controlled neural network. Simulation results are presented to substantiate the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.
Design and implementation of the NaI(Tl)/CsI(Na) detectors output signal generator
NASA Astrophysics Data System (ADS)
Zhou, Xu; Liu, Cong-Zhan; Zhao, Jian-Ling; Zhang, Fei; Zhang, Yi-Fei; Li, Zheng-Wei; Zhang, Shuo; Li, Xu-Fang; Lu, Xue-Feng; Xu, Zhen-Ling; Lu, Fang-Jun
2014-02-01
We designed and implemented a signal generator that can simulate the output of the NaI(Tl)/CsI(Na) detectors' pre-amplifier onboard the Hard X-ray Modulation Telescope (HXMT). Using FPGA (Field Programmable Gate Array) development in the VHDL language and adding a random component, we produced a double-exponential random pulse signal generator. The statistical distribution of the signal amplitude is programmable. The time intervals between adjacent signals statistically follow a negative exponential distribution.
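A software analogue of such a generator, double-exponential pulse shapes arriving with negative-exponential intervals, is easy to prototype before committing to VHDL. A sketch with illustrative time constants and a placeholder amplitude law:

```python
import numpy as np

def pulse_train(n_pulses, rate, tau_rise, tau_fall, fs, rng):
    """Pulses with double-exponential shape A*(exp(-t/tau_fall) - exp(-t/tau_rise))
    arriving with negative-exponentially distributed intervals (Poisson
    arrivals), sampled at frequency fs."""
    arrivals = np.cumsum(rng.exponential(1.0 / rate, n_pulses))
    duration = arrivals[-1] + 10 * tau_fall
    t = np.arange(0, duration, 1.0 / fs)
    signal = np.zeros_like(t)
    amplitudes = rng.uniform(0.5, 1.0, n_pulses)   # placeholder amplitude law
    for t0, A in zip(arrivals, amplitudes):
        s = t >= t0
        dt = t[s] - t0
        signal[s] += A * (np.exp(-dt / tau_fall) - np.exp(-dt / tau_rise))
    return t, signal

rng = np.random.default_rng(5)
t, sig = pulse_train(100, rate=1e3, tau_rise=5e-7, tau_fall=5e-6, fs=1e7, rng=rng)
print(len(t), sig.max())
```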
Size Effect of the 2-D Bodies on the Geothermal Gradient and Q-A Plot
NASA Astrophysics Data System (ADS)
Thakur, M.; Blackwell, D. D.
2009-12-01
Using numerical models we have investigated some of the criticisms of the Q-A plot related to the effect of the size of the body on the slope and the reduced heat flow. The effects of horizontal conduction depend on the relative difference in radioactivity between the body and the country rock (assuming constant thermal conductivity). Horizontal heat transfer due to different 2-D bodies was numerically studied in order to quantify the resulting temperature differences at the Moho and the errors in the prediction of Qr (reduced heat flow). Using the two end-member distributions of radioactivity, the step model (thickness 10 km) and the exponential model, different 2-D models of horizontal scale (width) ranging from 10-500 km were investigated. Increasing the horizontal size of the body tends to move observations closer towards the 1-D solution. A temperature difference of 50 °C is produced (for the step model) at the Moho between models of width 10 km versus 500 km. In other words, the 1-D solution effectively provides large-scale averaging in terms of heat flow and the temperature field in the lithosphere. For bodies ≤ 100 km wide the geotherms at shallower levels are affected, but at depth they converge and are 50 °C lower than the infinite-plate model temperature. In the case of 2-D bodies, surface heat flow is decreased due to horizontal transfer of heat, which shifts the Q-A point vertically downward on the Q-A plot. The smaller the size of the body, the greater the deviation from the 1-D solution and the greater the downward movement of the Q-A point on a Q-A plot. On the Q-A plot, a limited number of points for bodies of different sizes with different radioactivity contrasts (for the step and exponential models) exactly reproduces the reduced heat flow Qr. Thus the size of the body can affect the slope on a Q-A plot, but Qr is not changed. Therefore, Qr ~ 32 mW m⁻² obtained from the global terrain-average Q-A plot represents the best estimate of stable continental mantle heat flow.
Statistics of opinion domains of the majority-vote model on a square lattice
NASA Astrophysics Data System (ADS)
Peres, Lucas R.; Fontanari, José F.
2010-10-01
The existence of juxtaposed regions of distinct cultures, in spite of the fact that people's beliefs have a tendency to become more similar to each other's as the individuals interact repeatedly, is a puzzling phenomenon in the social sciences. Here we study an extreme version of the frequency-dependent bias model of social influence in which an individual adopts the opinion shared by the majority of the members of its extended neighborhood, which includes the individual itself. This is a variant of the majority-vote model in which the individual retains its opinion in case there is a tie among the neighbors' opinions. We assume that the individuals are fixed in the sites of a square lattice of linear size L and that they interact with their nearest neighbors only. Within a mean-field framework, we derive the equations of motion for the density of individuals adopting a particular opinion in the single-site and pair approximations. Although the single-site approximation predicts a single opinion domain that takes over the entire lattice, the pair approximation yields a qualitatively correct picture with the coexistence of different opinion domains and a strong dependence on the initial conditions. Extensive Monte Carlo simulations indicate the existence of a rich distribution of opinion domains or clusters, the number of which grows with L^2, whereas the size of the largest cluster grows with ln L^2. The analysis of the sizes of the opinion domains shows that they obey a power-law distribution for not-too-large sizes but are exponentially distributed in the limit of very large clusters. In addition, similarly to another well-known social influence model, Axelrod's model, we found that these opinion domains are unstable to the effect of a thermal-like noise.
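The model itself is compact to simulate: with a binary opinion and a five-member extended neighborhood (site plus four nearest neighbors), the vote count is always odd, so the tie rule is never invoked. A minimal sketch with an added noise parameter; the cluster labeling ignores the periodic boundary, and all parameters are illustrative.

```python
import numpy as np
from scipy import ndimage

def majority_vote_step(spins, rng, noise=0.0):
    """One synchronous sweep: each site adopts the majority opinion of its
    extended von Neumann neighborhood (itself + 4 neighbors); `noise` is the
    probability of flipping the chosen opinion."""
    total = (spins
             + np.roll(spins, 1, 0) + np.roll(spins, -1, 0)
             + np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
    new = np.where(total > 0, 1, -1)
    flip = rng.random(spins.shape) < noise
    return np.where(flip, -new, new)

L = 128
rng = np.random.default_rng(6)
spins = rng.choice([-1, 1], size=(L, L))
for _ in range(200):
    spins = majority_vote_step(spins, rng, noise=0.05)

# Opinion-domain statistics: label connected clusters of the +1 opinion.
labels, n_clusters = ndimage.label(spins == 1)
sizes = np.bincount(labels.ravel())[1:]
print(n_clusters, sizes.max())
```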
Labra, Fabio A; Hernández-Miranda, Eduardo; Quiñones, Renato A
2015-01-01
We study the temporal variation in the empirical relationships among body size (S), species richness (R), and abundance (A) in a shallow marine epibenthic faunal community in Coliumo Bay, Chile. We also extend previous analyses by calculating individual energy use (E) and test whether its bivariate and trivariate relationships with S and R are in agreement with expectations derived from the energetic equivalence rule. Carnivorous and scavenger species representing over 95% of sample abundance and biomass were studied. For each individual, body size (g) was measured and E was estimated following published allometric relationships. Data for each sample were tabulated into exponential body size bins, comparing species-averaged values with individual-based estimates which allow species to potentially occupy multiple size classes. For individual-based data, both the number of individuals and species across body size classes are fit by a Weibull function rather than by a power law scaling. Species richness is also a power law of the number of individuals. Energy use shows a piecewise scaling relationship with body size, with energetic equivalence holding true only for size classes above the modal abundance class. Species-based data showed either weak linear or no significant patterns, likely due to the decrease in the number of data points across body size classes. Hence, for individual-based size spectra, the SRA relationship seems to be general despite seasonal forcing and strong disturbances in Coliumo Bay. The unimodal abundance distribution results in a piecewise energy scaling relationship, with small individuals showing a positive scaling and large individuals showing energetic equivalence. Hence, strict energetic equivalence should not be expected for unimodal abundance distributions. On the other hand, while species-based data do not show unimodal SRA relationships, energy use across body size classes did not show significant trends, supporting energetic equivalence. PMID:25691966
Single-arm phase II trial design under parametric cure models.
Wu, Jianrong
2015-01-01
The current practice of designing single-arm phase II survival trials is limited to the exponential model. Trial design under the exponential model may not be appropriate when a portion of the patients are cured. There is no literature available on designing single-arm phase II trials under parametric cure models. In this paper, a test statistic is proposed, and a sample size formula is derived for designing single-arm phase II trials under a class of parametric cure models. Extensive simulations showed that the proposed test and sample size formula perform very well under different scenarios. Copyright © 2015 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Santi, D. N.; Purnaba, I. G. P.; Mangku, I. W.
2016-01-01
A Bonus-Malus system is said to be optimal if it is financially balanced for the insurance company and fair for policyholders. Previous research on Bonus-Malus systems has concerned the determination of the risk premium applied to all of the severities guaranteed by the insurance company. In fact, not all of the severity claimed by a policyholder may be covered by the insurance company. When the insurance company sets a maximum bound on the severity incurred, it becomes necessary to modify the severity distribution into a bounded severity distribution. In this paper, an optimal Bonus-Malus system whose claim-frequency component has a geometric distribution and whose severity component has a truncated Weibull distribution is discussed. The number of claims is considered to follow a Poisson distribution, and its expected number λ is exponentially distributed, so the number of claims has a geometric distribution. The severity, for a given parameter θ, is considered to have a truncated exponential distribution; θ is modelled using the Levy distribution, so the severity has a truncated Weibull distribution.
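The claim-frequency layer is the classical fact that an exponentially mixed Poisson is geometric; a quick simulation check (μ is an arbitrary illustrative mean):

```python
import numpy as np

# N | lam ~ Poisson(lam) with lam ~ Exponential(mean=mu) yields a geometric
# claim-count distribution with success probability p = 1/(1+mu).
rng = np.random.default_rng(7)
mu = 0.4
lam = rng.exponential(mu, 1_000_000)
claims = rng.poisson(lam)

p = 1.0 / (1.0 + mu)
for n in range(4):
    # empirical frequency vs geometric pmf P(N=n) = p * (1-p)**n
    print(n, (claims == n).mean(), p * (1 - p) ** n)
```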
Geometrical effects on the electron residence time in semiconductor nano-particles.
Koochi, Hakimeh; Ebrahimi, Fatemeh
2014-09-07
We have used random walk (RW) numerical simulations to investigate the influence of geometry on the statistics of the electron residence time τ(r) in a trap-limited diffusion process through semiconductor nano-particles. This is an important parameter in coarse-grained modeling of charge carrier transport in nano-structured semiconductor films. The traps have been distributed randomly on the surface (r^2 model) or throughout the whole particle (r^3 model) with a specified density. The trap energies have been taken from an exponential distribution and the traps' release time is assumed to be a stochastic variable. We have carried out RW simulations to study the effect of the coordination number, the spatial arrangement of the neighbors and the size of the nano-particles on the statistics of τ(r). It has been observed that by increasing the coordination number n, the average value of the electron residence time, τ̄(r), rapidly decreases to an asymptotic value. For a fixed coordination number n, the electron's mean residence time does not depend on the neighbors' spatial arrangement. In other words, τ̄(r) is a porosity-dependent, local parameter which generally varies remarkably from site to site, unless we are dealing with highly ordered structures. We have also examined the effect of the nano-particle size d on the statistical behavior of τ̄(r). Our simulations indicate that for a volume distribution of traps, τ̄(r) scales as d^2. For a surface distribution of traps, τ̄(r) increases almost linearly with d. This leads to the prediction of a linear dependence of the diffusion coefficient D on the particle size d in ordered structures, or in random structures above the critical concentration, which is in accordance with experimental observations.
Exponential model for option prices: Application to the Brazilian market
NASA Astrophysics Data System (ADS)
Ramos, Antônio M. T.; Carvalho, J. A.; Vasconcelos, G. L.
2016-03-01
In this paper we report an empirical analysis of the Ibovespa index of the São Paulo Stock Exchange and its respective option contracts. We compare the empirical data on the Ibovespa options with two option pricing models, namely the standard Black-Scholes model and an empirical model that assumes that the returns are exponentially distributed. It is found that at times near the option expiration date the exponential model fits the empirical data better than does the Black-Scholes model.
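For context, the Black-Scholes benchmark used in the comparison prices a European call as follows; the inputs below are illustrative, not the paper's data.

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Standard Black-Scholes European call price, the benchmark against
    which the exponential-returns model is compared."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Illustrative Ibovespa-like inputs (index points, fraction of a year).
print(bs_call(S=50_000, K=52_000, T=30 / 252, r=0.12, sigma=0.25))
```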
Apparent Anomalous Diffusion in the Cytoplasm of Human Cells: The Effect of Probes' Polydispersity.
Kalwarczyk, Tomasz; Kwapiszewska, Karina; Szczepanski, Krzysztof; Sozanski, Krzysztof; Szymanski, Jedrzej; Michalska, Bernadeta; Patalas-Krawczyk, Paulina; Duszynski, Jerzy; Holyst, Robert
2017-10-26
This work, based on in vivo and in vitro measurements, as well as in silico simulations, provides a consistent analysis of diffusion of polydisperse nanoparticles in the cytoplasm of living cells. Using the example of fluorescence correlation spectroscopy (FCS), we show the effect of polydispersity of probes on the experimental results. Although individual probes undergo normal diffusion, in the ensemble of probes an effective broadening of the distribution of diffusion times occurs, similar to anomalous diffusion. We introduced fluorescently labeled dextrans into the cytoplasm of HeLa cells and found that cytoplasmic hydrodynamic drag, exponentially dependent on probe size, extraordinarily broadens the distribution of diffusion times across the focal volume. As a result, the in vivo FCS data were effectively fitted with the anomalous subdiffusion model, while for a monodisperse probe the normal diffusion model was most suitable. The diffusion time obtained from the anomalous diffusion model corresponds to a probe whose size is determined by the weight-average molecular weight of the polymer. The apparent anomaly exponent decreases with increasing polydispersity of the probes. Our results and methodology can be applied in intracellular studies of the mobility of nanoparticles, polymers, or oligomerizing proteins.
HPC-Microgels: New Look at Structure and Dynamics
NASA Astrophysics Data System (ADS)
McKenna, John; Streletzky, Kiril; Mohieddine, Rami
2006-10-01
Issues remain unresolved in targeted chemotherapy, including an inability to effectively target cancerous tissue, the loss of low-molecular-weight medicines to the RES system, the high cytotoxicity of currently used drug carriers, and the inability to control the release of medicines upon arrival at the target. Hydroxypropyl cellulose (HPC) microgels may be able to surmount these obstacles. HPC is a high-molecular-weight polymer with low cytotoxicity and a critical temperature around 41 °C. We cross-linked HPC polymer chains to produce microgel nanoparticles and studied their structure and dynamics using Dynamic Light Scattering spectroscopy. The complex nature of the fluid and the large size distribution of the particles render the typical characterization algorithm, CONTIN, ineffective and inconsistent. Instead, the particles' spectra have been fit to a sum of stretched exponentials. Each term offers three parameters for analysis and represents a single mode. The results of this analysis show that the microgels undergo a multimodal-to-unimodal transition around 41 °C. The CONTIN size distribution analysis shows similar results, but with much less consistency and resolution. During the phase transition the microgel particles actually shrink. This property might be particularly useful for controlled drug delivery and release.
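A sum-of-stretched-exponentials fit of the kind described can be set up directly with nonlinear least squares; the two-mode sketch below uses synthetic correlation data and invented time constants.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_stretched(t, A1, tau1, b1, A2, tau2, b2):
    """Sum of two stretched exponentials for the DLS correlation decay:
    each mode contributes A * exp(-(t/tau)**beta)."""
    return A1 * np.exp(-(t / tau1) ** b1) + A2 * np.exp(-(t / tau2) ** b2)

# Synthetic correlation decay with a fast and a slow microgel mode.
t = np.logspace(-6, 0, 200)   # seconds
g1 = two_stretched(t, 0.6, 1e-4, 0.9, 0.4, 1e-2, 0.7)
g1_noisy = g1 + np.random.default_rng(8).normal(0, 0.005, t.size)

popt, _ = curve_fit(two_stretched, t, g1_noisy,
                    p0=(0.5, 1e-4, 0.9, 0.5, 1e-2, 0.8),
                    bounds=([0, 1e-7, 0.1, 0, 1e-7, 0.1],
                            [1, 1, 1, 1, 1, 1]))
print(popt)   # amplitude, tau, beta for each mode
```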
Sample size calculation for studies with grouped survival data.
Li, Zhiguo; Wang, Xiaofei; Wu, Yuan; Owzar, Kouros
2018-06-10
Grouped survival data arise often in studies where the disease status is assessed at regular visits to clinic. The time to the event of interest can only be determined to be between two adjacent visits or is right censored at one visit. In data analysis, replacing the survival time with the endpoint or midpoint of the grouping interval leads to biased estimators of the effect size in group comparisons. Prentice and Gloeckler developed a maximum likelihood estimator for the proportional hazards model with grouped survival data and the method has been widely applied. Previous work on sample size calculation for designing studies with grouped data is based on either the exponential distribution assumption or the approximation of variance under the alternative with variance under the null. Motivated by studies in HIV trials, cancer trials and in vitro experiments to study drug toxicity, we develop a sample size formula for studies with grouped survival endpoints that use the method of Prentice and Gloeckler for comparing two arms under the proportional hazards assumption. We do not impose any distributional assumptions, nor do we use any approximation of variance of the test statistic. The sample size formula only requires estimates of the hazard ratio and survival probabilities of the event time of interest and the censoring time at the endpoints of the grouping intervals for one of the two arms. The formula is shown to perform well in a simulation study and its application is illustrated in the three motivating examples. Copyright © 2018 John Wiley & Sons, Ltd.
Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv
2012-12-11
Illumina BeadArray technology includes non-specific negative control features that allow a precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio, which leads to an important loss of information by generating negative values, a background correction method modeling the observed intensities as the sum of an exponentially distributed signal and normally distributed noise has been developed. Nevertheless, Wang and Ye (2012) present a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggest that a gamma distribution would represent a better modeling of the signal density. Hence, the normal-exponential modeling may not be appropriate for Illumina data, and background corrections derived from this model may lead to wrong estimation. We propose a more flexible modeling based on a gamma distributed signal and a normally distributed background noise, and develop the associated background correction, implemented in the R package NormalGamma. Our model proves to be markedly more accurate for Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a more correct fit of the observed intensities. On the other hand, the comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarities, reinforcing the validation of the normal-gamma modeling. The performance of the background corrections based on the normal-gamma and normal-exponential models is compared on two dilution data sets, through testing procedures which represent various experimental designs. Surprisingly, we observe that the implementation of a more accurate parametrisation in the model-based background correction does not increase the sensitivity. These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement in terms of bias, but at the cost of a loss in precision. This paper addresses the lack of fit of the usual normal-exponential model by proposing a more flexible parametrisation of the signal distribution as well as the associated background correction. This new model proves to be considerably more accurate for Illumina microarrays, but the improvement in terms of modeling does not lead to a higher sensitivity in differential analysis. Nevertheless, this realistic modeling makes way for future investigations, in particular to examine the characteristics of pre-processing strategies.
Haiduc, Adrian Marius; van Duynhoven, John
2005-02-01
The porous properties of food materials are known to determine important macroscopic parameters such as water-holding capacity and texture. In conventional approaches, understanding is built from a long process of establishing macrostructure-property relations in a rational manner. Only recently were multivariate approaches introduced for the same purpose. The model systems used here are oil-in-water emulsions, stabilised by protein, which form complex structures consisting of fat droplets dispersed in a porous protein phase. NMR time-domain decay curves were recorded for emulsions with varied levels of fat, protein and water. Hardness, dry matter content and water drainage were determined by classical means and analysed for correlation with the NMR data using multivariate techniques. Partial least squares can calibrate and predict these properties directly from the continuous NMR exponential decays and yields regression coefficients higher than 82%. However, the calibration coefficients themselves belong to the continuous exponential domain and do little to explain the connection between NMR data and emulsion properties. Transformation of the NMR decays into a discrete domain with non-negative least squares permits the use of multilinear regression (MLR) on the resulting amplitudes as predictors and hardness or water drainage as responses. The MLR coefficients show that hardness is highly correlated with the components that have T2 distributions of about 20 and 200 ms, whereas water drainage is correlated with components that have T2 distributions around 400 and 1800 ms. These T2 distributions very likely correlate with water populations present in pores with different sizes and/or wall mobility. The results for the emulsions studied demonstrate that NMR time-domain decays can be employed to predict properties and to provide insight into the underlying microstructural features.
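The discrete-domain transformation described here can be sketched as a non-negative least-squares (NNLS) decomposition of the decay onto a fixed grid of T2 values, whose amplitudes then serve as MLR predictors. A minimal illustration, assuming a simple mono-exponential kernel; the grid and toy decay are invented, not the study's processing chain.

```python
import numpy as np
from scipy.optimize import nnls

def t2_amplitudes(t, decay, T2_grid):
    """Non-negative least-squares projection of a time-domain decay onto a
    fixed grid of T2 values: decay ≈ K @ amplitudes, K[i, j] = exp(-t_i/T2_j)."""
    K = np.exp(-t[:, None] / T2_grid[None, :])
    amplitudes, residual = nnls(K, decay)
    return amplitudes, residual

# toy decay with components near 20 ms and 200 ms (the T2 range the study
# associates with hardness); all numbers invented for illustration
t = np.linspace(0.001, 1.0, 400)                  # seconds
decay = 0.6 * np.exp(-t / 0.020) + 0.4 * np.exp(-t / 0.200)
amps, res = t2_amplitudes(t, decay, np.logspace(-3, 1, 100))
```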
NASA Astrophysics Data System (ADS)
Fermi, Enrico
The Patent contains an extremely detailed description of an atomic pile employing natural uranium as fissile material and graphite as moderator. It starts with a discussion of the theory of the intervening phenomena, in particular the evaluation of the reproduction or multiplication factor, K, that is, the ratio of the number of fast neutrons produced in one generation by the fissions to the original number of fast neutrons, in a system of infinite size. The possibility of having a self-maintaining chain reaction in a system of finite size depends both on K being greater than unity and on the overall size of the system being sufficiently large to minimize the percentage of neutrons escaping from the system. After the description of a possible realization of such a pile (with many detailed drawings), the various kinds of neutron losses in a pile are depicted. Particularly relevant is the reported "invention" of the exponential experiment: since theoretical calculations can determine whether or not a chain reaction will occur in a given system, but can be invalidated by uncertainties in the parameters of the problem, an experimental test of the pile is proposed, aimed at ascertaining whether the pile under construction would be divergent (i.e. with a neutron multiplication factor K greater than 1) by making measurements on a smaller pile. The idea is to measure, by a detector containing an indium foil, the exponential decrease of the neutron density along the length of a column of uranium-graphite lattice, with a neutron source placed near its base. Such an exponential decrease is greater or less than that expected from leakage alone, according to whether the K factor is less or greater than 1, so that this experiment is able to test the criticality of the pile, its accuracy increasing with the size of the column. In order to perform this measurement, a mathematical description of the effect of neutron production, diffusion, and absorption on the neutron density in the structure is given. In particular, a mathematical formula is given for the neutron density distribution at various points throughout a pile with a parallelepiped geometry. This expression is discussed in detail, with regard to its possible approximate form and the corrections needed when the approximations are not valid. From the discussion a definition of the so-called "critical size" emerges, that is, the size for which the total surface-to-volume ratio of the entire pile is such that the rate of neutron loss is reduced to a value less than the rate of neutron production; expressions for the critical radius of various geometrical structures are given. Finally, a measurement of the neutron density in a pile built with a rectangular geometrical arrangement of uranium boxes in graphite blocks is reported in detail. As a result of this "invention", a structure with an actual K factor of 1.054 was proposed; this structure was built and operated at various rates of power production in the form of heat. For the present Patent there is no "reference" published article, although some material also appears in [Anderson (1942b)] of March 26, 1942. More generally, some results are also present in several papers of Volume II of the Fermi Collected Papers [Fermi (1962)], but many details (including several figures) are reported only in the present Patent.
Work and heat fluctuations in two-state systems: a trajectory thermodynamics formalism
NASA Astrophysics Data System (ADS)
Ritort, F.
2004-10-01
Two-state models provide phenomenological descriptions of many different systems, ranging from physics to chemistry and biology. We investigate work fluctuations in an ensemble of two-state systems driven out of equilibrium under the action of an external perturbation. We calculate the probability density P_N(W) that work equal to W is exerted upon the system (of size N) along a given non-equilibrium trajectory and introduce a trajectory thermodynamics formalism to quantify work fluctuations in the large-N limit. We then define a trajectory entropy S_N(W) that counts the number of non-equilibrium trajectories, P_N(W) = exp(S_N(W)/k_B T), with work equal to W and characterizes fluctuations of work trajectories around the most probable value W_mp. A trajectory free energy F_N(W) can also be defined, which has a minimum at W = W†, this being the value of the work that has to be efficiently sampled to quantitatively test the Jarzynski equality. Within this formalism a Lagrange multiplier is also introduced, the inverse of which plays the role of a trajectory temperature. Our general solution for P_N(W) exactly satisfies the fluctuation theorem by Crooks and allows us to investigate heat fluctuations for a protocol that is invariant under time reversal. The heat distribution is then characterized by a Gaussian component (describing small and frequent heat exchange events) and exponential tails (describing the statistics of large deviations and rare events). For the latter, the width of the exponential tails is related to the aforementioned trajectory temperature. Finite-size corrections to the large-N theory and the recovery of work distributions for finite N are also discussed. Finally, we pay particular attention to the case of magnetic nanoparticle systems under the action of a magnetic field H, where work and heat fluctuations are predicted to be observable in ramping experiments in micro-SQUIDs.
Analysis of two production inventory systems with buffer, retrials and different production rates
NASA Astrophysics Data System (ADS)
Jose, K. P.; Nair, Salini S.
2017-09-01
This paper compares two (s, S) production inventory systems with retrials of unsatisfied customers. The time for producing and adding each item to the inventory is exponentially distributed with rate β; however, a higher production rate αβ (with α > 1) is used at the beginning of production. The higher rate reduces customer loss when the inventory level approaches zero. Customer demand follows a Poisson process, and service times are exponentially distributed. Upon arrival, customers enter a buffer of finite capacity. An arriving customer who finds the buffer full moves to an orbit, from which they can retry; inter-retrial times are exponentially distributed. The two models differ in the capacity of the buffer. The aim is to find the minimum value of the total cost by varying different parameters and to compare the efficiency of the models; the optimum value of α corresponding to the minimum total cost is a key quantity in this evaluation. The matrix-analytic method is used to find an algorithmic solution to the problem. We also provide several numerical and graphical illustrations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leo, Mario, E-mail: mario.leo@le.infn.it; Leo, Rosario Antonio, E-mail: leora@le.infn.it; Tempesta, Piergiulio, E-mail: p.tempesta@fis.ucm.es
2013-06-15
In a recent paper [M. Leo, R.A. Leo, P. Tempesta, C. Tsallis, Phys. Rev. E 85 (2012) 031149], the existence of quasi-stationary states for the Fermi–Pasta–Ulam β system was shown numerically by analyzing the stability properties of the N/4-mode exact nonlinear solution. Here we study the energy distribution of the modes N/4, N/3 and N/2, when they are unstable, as a function of N and of the initial excitation energy. We observe that the classical Boltzmann weight is replaced by a different weight, expressed by a q-exponential function. Highlights: • New statistical properties of the Fermi–Pasta–Ulam β system are found. • The energy distributions of specific observables are studied: a deviation from the standard Boltzmann behavior is found. • A q-exponential weight should be used instead. • The classical exponential weight is restored in the large-particle limit (mesoscopic nature of the phenomenon).
NASA Astrophysics Data System (ADS)
Shankman, C.; Kavelaars, JJ.; Gladman, B. J.; Alexandersen, M.; Kaib, N.; Petit, J.-M.; Bannister, M. T.; Chen, Y.-T.; Gwyn, S.; Jakubik, M.; Volk, K.
2016-02-01
We measure the absolute magnitude, H, distribution, dN(H) ∝ 10^(αH), of the scattering Trans-Neptunian Objects (TNOs) as a proxy for their size-frequency distribution. We show that the H-distribution of the scattering TNOs is not consistent with a single-slope distribution, but must transition around H_g ≈ 9 to either a knee with a shallow slope or to a divot, which is a differential drop followed by a second exponential distribution. Our analysis is based on a sample of 22 scattering TNOs drawn from three different TNO surveys—the Canada-France Ecliptic Plane Survey, Alexandersen et al., and the Outer Solar System Origins Survey, all of which provide well-characterized detection thresholds—combined with a cosmogonic model for the formation of the scattering TNO population. Our measured absolute magnitude distribution result is independent of the choice of cosmogonic model. Based on our analysis, we estimate that the number of scattering TNOs is (2.4-8.3) × 10^5 for H_r < 12. A divot H-distribution is seen in a variety of formation scenarios and may explain several puzzles in Kuiper Belt science. We find that a divot H-distribution simultaneously explains the observed scattering TNO, Neptune Trojan, Plutino, and Centaur H-distributions while predicting a large enough scattering TNO population to act as the sole supply of the Jupiter-Family Comets.
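A divot H-distribution of the kind described can be written as a piecewise differential form: a bright-end exponential dN/dH ∝ 10^(αH), an abrupt drop by some contrast factor at the break, then a second exponential with a shallower slope. A sketch follows; all parameter values are illustrative, not the paper's fitted values.

```python
import numpy as np

def divot_dNdH(H, alpha_bright=0.9, alpha_faint=0.5, H_break=9.0, contrast=6.0):
    """Piecewise differential H-distribution: dN/dH ∝ 10**(alpha*H) on the
    bright side, a drop by `contrast` at H_break (the divot), then a second,
    shallower exponential. Parameter values here are placeholders."""
    H = np.asarray(H, dtype=float)
    bright = 10.0 ** (alpha_bright * H)
    faint = (10.0 ** (alpha_bright * H_break) / contrast) \
            * 10.0 ** (alpha_faint * (H - H_break))
    return np.where(H <= H_break, bright, faint)

# contrast = 1 with alpha_faint < alpha_bright reduces to a knee distribution
```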
Spatial analysis of soil organic carbon in Zhifanggou catchment of the Loess Plateau.
Li, Mingming; Zhang, Xingchang; Zhen, Qing; Han, Fengpeng
2013-01-01
Soil organic carbon (SOC) reflects soil quality and plays a critical role in soil protection, food safety, and global climate change. This study involved grid sampling at different depths (6 layers) between 0 and 100 cm in a catchment. A total of 1282 soil samples were collected from 215 plots over 8.27 km². A combination of conventional analytical methods and geostatistical methods was used to analyze the data for spatial variability and soil carbon content patterns. The mean SOC content in the 1282 samples from the study field was 3.08 g·kg⁻¹. The SOC content of each layer decreased with increasing soil depth following a power function. The SOC content of each layer was moderately variable and followed a lognormal distribution. The semivariograms of the SOC contents of the six layers were fit with exponential, spherical, exponential, Gaussian, exponential, and exponential models, respectively. A moderate spatial dependence was observed in the 0-10 and 10-20 cm layers, resulting from both stochastic and structural factors. The spatial distribution of SOC content in the four layers between 20 and 100 cm was mainly restricted by structural factors. Correlations within each layer were observed between 234 and 562 m. A classical kriging interpolation was used to visualize the spatial distribution of SOC in the catchment directly. The variability in spatial distribution was related to topography, land use type, and human activity. Finally, SOC content decreased with depth. Our results suggest that ordinary kriging interpolation can directly reveal the spatial distribution of SOC and that the sampling distance used in this study is sufficient for interpolation and plotting. More research is needed, however, to clarify the spatial variability at larger scales and to better understand the factors controlling the spatial variability of soil carbon in the Loess Plateau region.
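A typical way to fit the exponential semivariogram model used for four of the six layers is nonlinear least squares on the empirical semivariogram, with the nugget-to-sill ratio then classifying the spatial dependence. A minimal sketch; the lag distances and semivariances below are invented for illustration, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_semivariogram(h, nugget, psill, a):
    """Exponential model gamma(h) = nugget + psill * (1 - exp(-h/a));
    the effective range is about 3a."""
    return nugget + psill * (1.0 - np.exp(-h / a))

# invented empirical semivariogram (lag in metres, semivariance)
lags = np.array([30.0, 60.0, 120.0, 240.0, 360.0, 480.0, 600.0])
gamma_hat = np.array([0.10, 0.16, 0.24, 0.30, 0.33, 0.34, 0.35])
(nugget, psill, a), _ = curve_fit(exp_semivariogram, lags, gamma_hat,
                                  p0=[0.05, 0.30, 150.0])
# nugget/(nugget+psill) < 0.25 indicates strong spatial dependence,
# 0.25-0.75 moderate (the usual classification in such studies)
ratio = nugget / (nugget + psill)
```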
Non-extensive quantum statistics with particle-hole symmetry
NASA Astrophysics Data System (ADS)
Biró, T. S.; Shen, K. M.; Zhang, B. W.
2015-06-01
Based on the Tsallis entropy (1988) and the corresponding deformed exponential function, generalized distribution functions for bosons and fermions have been in use for some time (Teweldeberhan et al. (2003); Silva et al. (2010)). However, aiming at a non-extensive quantum statistics, further requirements arise from the symmetric handling of particles and holes (excitations above and below the Fermi level). Naive replacements of the exponential function or "cut and paste" solutions fail to satisfy this symmetry and to be smooth at the Fermi level at the same time. We solve this problem by a general ansatz dividing the deformed exponential into odd and even terms, and we demonstrate how earlier suggestions, like the κ- and q-exponentials, behave in this respect.
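For reference, the q-exponential named here is exp_q(x) = [1 + (1−q)x]₊^(1/(1−q)), which reduces to eˣ as q → 1. The sketch below defines it and shows the naive Fermi-function replacement that the abstract criticizes; everything is illustrative.

```python
import numpy as np

def q_exponential(x, q):
    """Tsallis deformed exponential exp_q(x) = [1 + (1-q)*x]_+ ** (1/(1-q));
    reduces to exp(x) in the limit q -> 1."""
    x = np.asarray(x, dtype=float)
    if np.isclose(q, 1.0):
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    safe = np.where(base > 0.0, base, 1.0)   # avoid invalid powers off-support
    return np.where(base > 0.0, safe ** (1.0 / (1.0 - q)), 0.0)

def naive_q_fermi(eps, mu, T, q):
    """Naive q-deformed occupation n(eps) = 1/(exp_q((eps-mu)/T) + 1).
    As the abstract notes, such direct replacements generally violate the
    particle-hole symmetry n(mu + d) + n(mu - d) = 1 for q != 1."""
    return 1.0 / (q_exponential((eps - mu) / T, q) + 1.0)

# check the asymmetry: these two occupations do not sum to 1 for q = 1.2
d = 0.5
print(naive_q_fermi(1.0 + d, 1.0, 0.3, 1.2) + naive_q_fermi(1.0 - d, 1.0, 0.3, 1.2))
```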
NASA Astrophysics Data System (ADS)
Aydiner, Ekrem; Cherstvy, Andrey G.; Metzler, Ralf
2018-01-01
We study by Monte Carlo simulations a kinetic exchange trading model for both fixed and distributed saving propensities of the agents and rationalize the person and wealth distributions. We show that the newly introduced wealth distribution - which may be more amenable in certain situations - features a different power-law exponent, particularly for distributed saving propensities of the agents. For open agent-based systems, we analyze the person and wealth distributions and find that the presence of trap agents alters their amplitude, leaving however the scaling exponents nearly unaffected. For an open system, we show that the total wealth - for different trap agent densities and saving propensities of the agents - decreases in time according to the classical Kohlrausch-Williams-Watts stretched exponential law. Interestingly, this decay does not depend on the trap agent density, but rather on the saving propensities. The system relaxations for fixed and distributed saving schemes are found to differ.
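A minimal version of the underlying kinetic exchange dynamics with saving propensities (Chakraborti-Chakrabarti type) is sketched below; the paper's trap agents and open-system features are omitted, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def kinetic_exchange(n_agents=1000, n_steps=200_000, distributed_saving=True):
    """Closed-system kinetic wealth-exchange model: at each step a random pair
    pools the non-saved fractions of their wealth and splits the pool randomly.
    Total wealth is conserved exactly."""
    w = np.ones(n_agents)
    lam = rng.random(n_agents) if distributed_saving else np.full(n_agents, 0.5)
    for _ in range(n_steps):
        i, j = rng.integers(n_agents, size=2)
        if i == j:
            continue
        eps = rng.random()
        pool = (1.0 - lam[i]) * w[i] + (1.0 - lam[j]) * w[j]   # wealth at stake
        w[i] = lam[i] * w[i] + eps * pool
        w[j] = lam[j] * w[j] + (1.0 - eps) * pool
    return w   # distributed saving yields a power-law-like tail in w

wealth = kinetic_exchange()
```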
Crack problem in superconducting cylinder with exponential distribution of critical-current density
NASA Astrophysics Data System (ADS)
Zhao, Yufeng; Xu, Chi; Shi, Liang
2018-04-01
The general problem of a center crack in a long cylindrical superconductor with an inhomogeneous critical-current distribution is studied based on the extended Bean model for zero-field-cooling (ZFC) and field-cooling (FC) magnetization processes, in which an inhomogeneity parameter η is introduced to characterize the critical-current density distribution in the inhomogeneous superconductor. The effect of the parameter η on both the magnetic field distribution and the variations of the normalized stress intensity factors is also obtained based on the plane strain approach and J-integral theory. The numerical results indicate that the exponential distribution of critical-current density leads to a larger trapped field inside the inhomogeneous superconductor and causes the center of the cylinder to fracture more easily. In addition, it is worth pointing out that comparison of the magnetization-loop curve shapes for homogeneous and inhomogeneous critical-current distributions shows that the nonlinear field distribution is unique to the Bean model.
NASA Astrophysics Data System (ADS)
Cucchi, Marco; Petitta, Marcello; Calmanti, Sandro
2016-04-01
High temperatures have an impact on the energy balance of any living organism and on the operational capabilities of critical infrastructures. Heat-wave indicators have mainly been developed with the aim of capturing the potential impacts on specific sectors (agriculture, health, wildfires, transport, power generation and distribution). However, the ability to capture the occurrence of extreme temperature events is an essential property of a multi-hazard extreme climate indicator. The aim of this study is to develop a standardized heat-wave indicator that can be combined with other indices in order to describe multiple hazards in a single indicator. The proposed approach can be used to obtain a quantitative indicator of the strength of a given extreme. Extremes are usually distributed according to exponential or exponential-exponential functions, and it is difficult to assess quickly how strong an extreme event was from its magnitude alone. The proposed approach simplifies the quantitative and qualitative communication of extreme-event magnitudes.
The Physical State of Emulsified Edible Oil Modulates Its in Vitro Digestion.
Guo, Qing; Bellissimo, Nick; Rousseau, Dérick
2017-10-18
Emulsified lipid digestion was tailored by manipulating the physical state of dispersed oil droplets in whey-protein-stabilized oil-in-water (O/W) emulsions, where the oil phase consisted of one of five ratios of soybean oil (SO) and fully hydrogenated soybean oil (FHSO). The evolution in particle size distribution, structural changes during oral, gastric, and intestinal digestion, and free fatty acid release during intestinal digestion were all investigated. Irrespective of the physical state and structure of the dispersed oil/fat, all emulsions were stable against droplet size increases during oral digestion. During gastric digestion, the 50:50 SO:FHSO emulsion was more stable against physical breakdown than any other emulsion. All emulsions underwent flocculation and coalescence or partial coalescence upon intestinal digestion, with the SO emulsion being hydrolyzed the most rapidly. The melting point of all emulsions containing FHSO was above 37 °C, with the presence of solid fat within the dispersed oil droplets greatly limiting lipolysis. Fat crystal polymorphism and nanoplatelet size did not play an important role in the rate and extent of lipid digestion. Free fatty acid release modeled by the Weibull distribution function showed that the rate of lipid digestion (κ) decreased with increasing solid fat content and followed an exponential relationship (R² = 0.95). Overall, lipid digestion was heavily altered by the physical state of the dispersed oil phase within O/W emulsions.
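The Weibull release model referred to here is commonly written F(t) = F_max(1 − exp(−(κt)^β)). A minimal fitting sketch follows; the time course below is invented for illustration, not the paper's data, and the exact parameterization used in the paper may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_release(t, F_max, kappa, beta):
    """Weibull model for cumulative free fatty acid release:
    F(t) = F_max * (1 - exp(-(kappa * t)**beta))."""
    return F_max * (1.0 - np.exp(-(kappa * t) ** beta))

# invented digestion time course (min, % FFA released), for illustration only
t = np.array([0.0, 5.0, 10.0, 20.0, 30.0, 60.0, 90.0, 120.0])
ffa = np.array([0.0, 18.0, 31.0, 48.0, 58.0, 72.0, 77.0, 79.0])
params, _ = curve_fit(weibull_release, t, ffa, p0=[80.0, 0.05, 1.0])
F_max, kappa, beta = params   # kappa plays the role of the digestion-rate constant
```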
NASA Astrophysics Data System (ADS)
Allen, Linda J. S.
2016-09-01
Dr. Chowell and colleagues emphasize the importance of considering a variety of modeling approaches to characterize the growth of an epidemic during its early stages [1]. A fit of data from the 2009 H1N1 influenza pandemic and the 2014-2015 Ebola outbreak to models indicates sub-exponential growth, in contrast to the classic, homogeneous-mixing SIR model with exponential growth. With incidence rate βSI/N and S approximately equal to the total population size N, the number of new infections in an SIR epidemic model grows exponentially, as in the differential equation dI/dt ≈ (β − γ)I.
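One common formulation of the sub-exponential alternative discussed by Chowell and colleagues is the generalized-growth model dC/dt = rC^p, where p = 1 recovers exponential growth and 0 < p < 1 gives polynomial-like growth. A minimal sketch, with illustrative parameter values:

```python
import numpy as np
from scipy.integrate import solve_ivp

def generalized_growth(p, r=0.3, C0=1.0, t_max=50.0):
    """Generalized-growth model dC/dt = r * C**p. p = 1 matches the early
    exponential phase of the SIR model (dI/dt ≈ (beta - gamma) * I), while
    0 < p < 1 gives sub-exponential growth."""
    return solve_ivp(lambda t, C: r * C ** p, (0.0, t_max), [C0],
                     dense_output=True)

t = np.linspace(0.0, 50.0, 200)
exponential = generalized_growth(p=1.0).sol(t)[0]      # C(t) = C0 * exp(r t)
sub_exponential = generalized_growth(p=0.7).sol(t)[0]  # grows much more slowly
```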
Survival Bayesian Estimation of Exponential-Gamma Under Linex Loss Function
NASA Astrophysics Data System (ADS)
Rizki, S. W.; Mara, M. N.; Sulistianingsih, E.
2017-06-01
This paper presents a study of censored data on cancer patients after treatment, using Bayesian estimation under the Linex loss function for a survival model assumed to follow an exponential distribution. Taking a gamma distribution as the prior, the likelihood yields a gamma posterior distribution. The posterior distribution is used to find the estimator λ̂_BL via the Linex approximation. From λ̂_BL, the estimators of the hazard function ĥ_BL and the survival function Ŝ_BL can be found. Finally, we compare the results of maximum likelihood estimation (MLE) and the Linex approximation, identifying the better method for this data set as the one with the smaller MSE. The results show that the MSEs of the hazard and survival estimates under MLE are 2.91728E-07 and 0.000309004, while under Bayesian Linex they are 2.8727E-07 and 0.000304131, respectively. We conclude that Bayesian Linex estimation is better than MLE here.
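For the conjugate gamma-exponential setting described here, the Linex-Bayes estimator has a closed form: with posterior Gamma(a, b) in the rate parameterization, E[exp(−cλ)] = (b/(b+c))^a, so λ̂_BL = (a/c)·ln(1 + c/b). The sketch below works this out for uncensored data (the paper's censored analysis modifies the likelihood); prior values and data are placeholders.

```python
import numpy as np

def linex_bayes_exponential_gamma(data, a0, b0, c):
    """Bayes estimator of the exponential rate lambda under Linex loss, with a
    Gamma(a0, b0) prior (rate parameterization). Posterior: Gamma(a0 + n,
    b0 + sum(x)). The Linex-Bayes estimator is -(1/c)*log E[exp(-c*lambda)],
    which for a gamma posterior reduces to the closed form below."""
    x = np.asarray(data, dtype=float)
    a_post = a0 + x.size            # posterior shape
    b_post = b0 + x.sum()           # posterior rate
    lam_linex = (a_post / c) * np.log(1.0 + c / b_post)
    lam_mle = x.size / x.sum()      # MLE for comparison
    return lam_linex, lam_mle

# hazard and survival under the exponential model: h(t) = lam, S(t) = exp(-lam*t)
lam_bl, lam_ml = linex_bayes_exponential_gamma([12.0, 5.5, 8.1, 20.3], 2.0, 10.0, 1.0)
```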
Effect of the state of internal boundaries on granite fracture nature under quasi-static compression
NASA Astrophysics Data System (ADS)
Damaskinskaya, E. E.; Panteleev, I. A.; Kadomtsev, A. G.; Naimark, O. B.
2017-05-01
Based on an analysis of the spatial distribution of hypocenters of acoustic emission signal sources and an analysis of the energy distributions of acoustic emission signals, the effect of the liquid phase and of a weak electric field on the spatiotemporal nature of granite sample fracture is studied. Experiments on uniaxial compression of granite samples of natural moisture showed that the damage accumulation process is two-stage: dispersed accumulation of damage is followed by localized accumulation of damage in the region of the forming macrofracture nucleus. In the energy distributions of acoustic emission signals, this transition is accompanied by a change in the distribution shape from exponential to power-law. Water saturation of the granite qualitatively changes the damage accumulation: the process remains delocalized until macrofracture, with an exponential energy distribution of acoustic emission signals. Exposure to a weak electric field results in a selective change in damage accumulation within the sample volume.
Turbulence hierarchy in a random fibre laser
González, Iván R. Roa; Lima, Bismarck C.; Pincheira, Pablo I. R.; Brum, Arthur A.; Macêdo, Antônio M. S.; Vasconcelos, Giovani L.; de S. Menezes, Leonardo; Raposo, Ernesto P.; Gomes, Anderson S. L.; Kashyap, Raman
2017-01-01
Turbulence is a challenging feature common to a wide range of complex phenomena. Random fibre lasers are a special class of lasers in which the feedback arises from multiple scattering in a one-dimensional disordered cavity-less medium. Here we report on statistical signatures of turbulence in the distribution of intensity fluctuations in a continuous-wave-pumped erbium-based random fibre laser, with random Bragg grating scatterers. The distribution of intensity fluctuations in an extensive data set exhibits three qualitatively distinct behaviours: a Gaussian regime below threshold, a mixture of two distributions with exponentially decaying tails near the threshold and a mixture of distributions with stretched-exponential tails above threshold. All distributions are well described by a hierarchical stochastic model that incorporates Kolmogorov’s theory of turbulence, which includes energy cascade and the intermittence phenomenon. Our findings have implications for explaining the remarkably challenging turbulent behaviour in photonics, using a random fibre laser as the experimental platform. PMID:28561064
Global exponential stability analysis on impulsive BAM neural networks with distributed delays
NASA Astrophysics Data System (ADS)
Li, Yao-Tang; Yang, Chang-Bo
2006-12-01
Using M-matrix theory and the topological degree tool, sufficient conditions are obtained for the existence, uniqueness and global exponential stability of the equilibrium point of bidirectional associative memory (BAM) neural networks with distributed delays subjected to impulsive state displacements at fixed instants of time, by constructing a suitable Lyapunov functional. The results remove the usual assumptions of boundedness, monotonicity, and differentiability of the activation functions. It is shown that in some cases the stability criteria can be easily checked. Finally, an illustrative example is given to show the effectiveness of the presented criteria.
NASA Astrophysics Data System (ADS)
Zhou, T.; Chen, A.; Zhou, Y.
2005-08-01
By using the continuation theorem of coincidence degree theory and Liapunov function, we obtain some sufficient criteria to ensure the existence and global exponential stability of periodic solution to the bidirectional associative memory (BAM) neural networks with periodic coefficients and continuously distributed delays. These results improve and generalize the works of papers [J. Cao, L. Wang, Phys. Rev. E 61 (2000) 1825] and [Z. Liu, A. Chen, J. Cao, L. Huang, IEEE Trans. Circuits Systems I 50 (2003) 1162]. An example is given to illustrate that the criteria are feasible.
On the minimum of independent geometrically distributed random variables
NASA Technical Reports Server (NTRS)
Ciardo, Gianfranco; Leemis, Lawrence M.; Nicol, David
1994-01-01
The expectations E(X_1), E(Z_1), and E(Y_1) of the minimum of n independent geometric, modified geometric, or exponential random variables with matching expectations differ. We show how this is accounted for by stochastic variability and how E(X_1)/E(Y_1) equals the expected number of ties at the minimum for the geometric random variables. We then introduce the 'shifted geometric distribution' and show that there is a unique value of the shift for which the individual shifted geometric and exponential random variables match expectations both individually and in the minimum.
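A quick simulation makes the discrepancy concrete. The sketch below (parameter values arbitrary) draws minima of n geometric and n mean-matched exponential variables and counts ties at the geometric minimum.

```python
import numpy as np

rng = np.random.default_rng(1)

def min_comparison(n=5, p=0.2, reps=100_000):
    """Simulate minima of n i.i.d. geometric variables (support 1, 2, ...)
    and of n i.i.d. exponentials matched to the same mean 1/p, and count
    ties at the geometric minimum. Closed forms for comparison:
    E[min geometric] = 1/(1-(1-p)**n), E[min exponential] = 1/(n*p); the
    paper's result is that their ratio equals the mean number of ties."""
    geo = rng.geometric(p, size=(reps, n))
    expo = rng.exponential(1.0 / p, size=(reps, n))
    geo_min = geo.min(axis=1)
    ties = (geo == geo_min[:, None]).sum(axis=1)
    return geo_min.mean(), expo.min(axis=1).mean(), ties.mean()

print(min_comparison())   # ratio of the first two ≈ the third
```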
NASA Astrophysics Data System (ADS)
Bakoban, Rana A.
2017-08-01
The coefficient of variation (CV) has several applications in applied statistics. In this paper, we adopt Bayesian and non-Bayesian approaches for the estimation of the CV under type-II censored data from the extension exponential distribution (EED). Point and interval estimates of the CV are obtained by maximum likelihood and by parametric bootstrap techniques. A Bayesian approach with the help of an MCMC method is also presented. A real data set is presented and analyzed, and the results are used to assess the theoretical findings.
Brittle-to-ductile transition in a fiber bundle with strong heterogeneity.
Kovács, Kornél; Hidalgo, Raul Cruz; Pagonabarraga, Ignacio; Kun, Ferenc
2013-04-01
We analyze the failure process of a two-component system with widely different fracture strengths in the framework of a fiber bundle model with localized load sharing. A fraction 0 ≤ α ≤ 1 of the bundle is strong and is represented by unbreakable fibers, while fibers of the weak component have randomly distributed failure strengths. Computer simulations revealed that there exists a critical composition α_c which separates two qualitatively different behaviors: below the critical point, the failure of the bundle is brittle, characterized by an abrupt damage growth within the breakable part of the system. Above α_c, however, the macroscopic response becomes ductile, providing stability during the entire breaking process. The transition occurs at an astonishingly low fraction of strong fibers, which can have importance for applications. We show that in the ductile phase, the size distribution of breaking bursts has a power-law functional form with an exponent μ = 2 followed by an exponential cutoff. In the brittle phase, the power law also prevails but with a higher exponent μ = 9/2. The transition between the two phases shows analogies to continuous phase transitions. Analyzing the microstructure of the damage, it was found that at the beginning of the fracture process cracks nucleate randomly, while later on growth and coalescence of cracks dominate, giving rise to power-law-distributed crack sizes.
Zeldovich, Konstantin B; Chen, Peiqiu; Shakhnovich, Boris E; Shakhnovich, Eugene I
2007-01-01
In this work we develop a microscopic physical model of early evolution where phenotype—organism life expectancy—is directly related to genotype—the stability of its proteins in their native conformations—which can be determined exactly in the model. Simulating the model on a computer, we consistently observe the “Big Bang” scenario whereby exponential population growth ensues as soon as favorable sequence–structure combinations (precursors of stable proteins) are discovered. Upon that, random diversity of the structural space abruptly collapses into a small set of preferred proteins. We observe that protein folds remain stable and abundant in the population at timescales much greater than mutation or organism lifetime, and the distribution of the lifetimes of dominant folds in a population approximately follows a power law. The separation of evolutionary timescales between discovery of new folds and generation of new sequences gives rise to emergence of protein families and superfamilies whose sizes are power-law distributed, closely matching the same distributions for real proteins. On the population level we observe emergence of species—subpopulations that carry similar genomes. Further, we present a simple theory that relates stability of evolving proteins to the sizes of emerging genomes. Together, these results provide a microscopic first-principles picture of how first-gene families developed in the course of early evolution. PMID:17630830
Stoch, G; Ylinen, E E; Birczynski, A; Lalowicz, Z T; Góra-Marek, K; Punkkinen, M
2013-02-01
A new method is introduced for analyzing deuteron spin-lattice relaxation in molecular systems with a broad distribution of activation energies and correlation times. In such samples the magnetization recovery is strongly non-exponential but can be fitted quite accurately by three exponentials. The considered system may consist of molecular groups with different mobility. For each group a Gaussian distribution of the activation energy is introduced. By assuming for every subsystem three parameters: the mean activation energy E(0), the distribution width σ and the pre-exponential factor τ(0) for the Arrhenius equation defining the correlation time, the relaxation rate is calculated for every part of the distribution. Experiment-based limiting values allow the grouping of the rates into three classes. For each class the relaxation rate and weight is calculated and compared with experiment. The parameters E(0), σ and τ(0) are determined iteratively by repeating the whole cycle many times. The temperature dependence of the deuteron relaxation was observed in three samples containing CD(3)OH (200% and 100% loading) and CD(3)OD (200%) in NaX zeolite and analyzed by the described method between 20K and 170K. The obtained parameters, equal for all the three samples, characterize the methyl and hydroxyl mobilities of the methanol molecules at two different locations. Copyright © 2012 Elsevier Inc. All rights reserved.
Apparent power-law distributions in animal movements can arise from intraspecific interactions
Breed, Greg A.; Severns, Paul M.; Edwards, Andrew M.
2015-01-01
Lévy flights have gained prominence for analysis of animal movement. In a Lévy flight, step-lengths are drawn from a heavy-tailed distribution such as a power law (PL), and a large number of empirical demonstrations have been published. Others, however, have suggested that animal movement is ill fit by PL distributions or contend a state-switching process better explains apparent Lévy flight movement patterns. We used a mix of direct behavioural observations and GPS tracking to understand step-length patterns in females of two related butterflies. We initially found movement in one species (Euphydryas editha taylori) was best fit by a bounded PL, evidence of a Lévy flight, while the other (Euphydryas phaeton) was best fit by an exponential distribution. Subsequent analyses introduced additional candidate models and used behavioural observations to sort steps based on intraspecific interactions (interactions were rare in E. phaeton but common in E. e. taylori). These analyses showed a mixed-exponential is favoured over the bounded PL for E. e. taylori and that when step-lengths were sorted into states based on the influence of harassing conspecific males, both states were best fit by simple exponential distributions. The direct behavioural observations allowed us to infer the underlying behavioural mechanism is a state-switching process driven by intraspecific interactions rather than a Lévy flight. PMID:25519992
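Model choice of this kind is typically made by comparing maximized log-likelihoods, e.g. via AIC. The sketch below compares a shifted exponential against a pure, unbounded power law with the standard MLE exponent; the paper's bounded power law adds an upper truncation term that must be handled numerically, so this is a generic illustration, not the authors' exact analysis.

```python
import numpy as np

def loglik_exponential(x, xmin):
    """Maximized log-likelihood of a shifted exponential for steps x >= xmin."""
    lam = 1.0 / np.mean(x - xmin)
    return np.sum(np.log(lam) - lam * (x - xmin))

def loglik_power_law(x, xmin):
    """Maximized log-likelihood of a pure power law for x >= xmin, with the
    standard MLE exponent alpha = 1 + n / sum(log(x/xmin))."""
    alpha = 1.0 + x.size / np.sum(np.log(x / xmin))
    return np.sum(np.log((alpha - 1.0) / xmin) - alpha * np.log(x / xmin))

def aic_comparison(x, xmin):
    """AIC = 2k - 2*loglik, with k = 1 parameter for each model; smaller wins."""
    x = np.asarray(x, dtype=float)
    return {"exponential": 2.0 - 2.0 * loglik_exponential(x, xmin),
            "power_law": 2.0 - 2.0 * loglik_power_law(x, xmin)}
```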
Number Partitioning via Quantum Adiabatic Computation
NASA Technical Reports Server (NTRS)
Smelyanskiy, Vadim N.; Toussaint, Udo
2002-01-01
We study both analytically and numerically the complexity of the adiabatic quantum evolution algorithm applied to random instances of combinatorial optimization problems. We use as an example the NP-complete set partition problem and obtain an asymptotic expression for the minimal gap separating the ground and excited states of a system during the execution of the algorithm. We show that for computationally hard problem instances the size of the minimal gap scales exponentially with the problem size. This result is in qualitative agreement with direct numerical simulation of the algorithm for small instances of the set partition problem. We describe the statistical properties of the optimization problem that are responsible for the exponential behavior of the algorithm.
Performance of mixed RF/FSO systems in exponentiated Weibull distributed channels
NASA Astrophysics Data System (ADS)
Zhao, Jing; Zhao, Shang-Hong; Zhao, Wei-Hu; Liu, Yun; Li, Xuan
2017-12-01
This paper presents the performance of an asymmetric mixed radio frequency (RF)/free-space optical (FSO) system with the amplify-and-forward relaying scheme. The RF link undergoes Nakagami-m fading, and the Exponentiated Weibull distribution is adopted for the FSO component. Mathematical formulas for the cumulative distribution function (CDF), probability density function (PDF) and moment generating function (MGF) of the equivalent signal-to-noise ratio (SNR) are derived. From the end-to-end statistical characteristics, new analytical expressions for the outage probability are obtained. Under various modulation techniques, we derive the average bit-error rate (BER) based on the Meijer G-function. Evaluations and simulations of the system performance are provided, and the aperture-averaging effect is discussed as well.
Race, gender and the econophysics of income distribution in the USA
NASA Astrophysics Data System (ADS)
Shaikh, Anwar; Papanikolaou, Nikolaos; Wiener, Noe
2014-12-01
The econophysics "two-class" theory of Yakovenko and his co-authors shows that the distribution of labor incomes is roughly exponential. This paper extends this result to US subgroups categorized by gender and race. It is well known that males have higher average incomes than females, and Whites have higher average incomes than African-Americans. It is also evident that social policies can affect these income gaps. Our surprising finding is that, nonetheless, intra-group distributions of pre-tax labor incomes are remarkably similar and remain close to exponential. This suggests that income inequality can be usefully addressed by taxation policies, and overall income inequality can be modified by also shifting the balance between labor and property incomes.
Diversity of individual mobility patterns and emergence of aggregated scaling laws
Yan, Xiao-Yong; Han, Xiao-Pu; Wang, Bing-Hong; Zhou, Tao
2013-01-01
Uncovering human mobility patterns is of fundamental importance to the understanding of epidemic spreading, urban transportation and other socioeconomic dynamics embodying spatiality and human travel. Using the direct travel diaries of volunteers, we show the absence of scaling properties in the displacement distribution at the individual level, while the aggregated displacement distribution follows a power law with an exponential cutoff. Given the constraint on total travelling cost, this aggregated scaling law can be analytically predicted from the mixed nature of human travel under the principle of maximum entropy. A direct corollary of this theory is that the displacement distribution of a single mode of transportation should follow an exponential law, which also finds supporting evidence in known data. We thus conclude that travelling cost shapes the displacement distribution at the aggregated level. PMID:24045416
Porto, Markus; Roman, H Eduardo
2002-04-01
We consider autoregressive conditional heteroskedasticity (ARCH) processes in which the variance σ²(y) depends linearly on the absolute value of the random variable y, σ²(y) = a + b|y|. While for the standard model, where σ²(y) = a + by², the corresponding probability distribution function (PDF) P(y) decays as a power law for |y| → ∞, in the linear case it decays exponentially, P(y) ~ exp(−α|y|), with α = 2/b. We extend these results to the more general case σ²(y) = a + b|y|^q, with 0 < q < 2. We find stretched exponential decay for 1 < q < 2 and stretched Gaussian behavior for 0 < q < 1. As an application, we consider the case q = 1 as our starting scheme for modeling the PDF of daily (logarithmic) variations of the Dow Jones stock market index. When the history of the ARCH process is taken into account, the resulting PDF becomes a stretched exponential even for q = 1, with a stretched exponent β = 2/3, in much better agreement with the empirical data.
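The linear-variance process is straightforward to simulate. The sketch below (parameters a, b illustrative) iterates y_t = σ_t ξ_t with σ_t² = a + b|y_{t−1}|^q, so the histogram of a long run can be checked against the predicted exponential tails for q = 1.

```python
import numpy as np

rng = np.random.default_rng(2)

def arch_linear_variance(n=200_000, a=0.1, b=0.5, q=1.0):
    """Simulate y_t = sigma_t * xi_t, xi_t ~ N(0,1), with
    sigma_t**2 = a + b * |y_{t-1}|**q. For q = 1 the stationary PDF should
    exhibit exponential tails P(y) ~ exp(-2|y|/b); q = 2 is the standard
    (power-law-tailed) ARCH."""
    y = np.empty(n)
    y_prev = 0.0
    for t in range(n):
        sigma = np.sqrt(a + b * np.abs(y_prev) ** q)
        y_prev = sigma * rng.standard_normal()
        y[t] = y_prev
    return y

series = arch_linear_variance(q=1.0)
# tail check: log-counts of np.histogram(np.abs(series), bins=100) ~ linear in |y|
```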
Statistical modeling of storm-level Kp occurrences
Remick, K.J.; Love, J.J.
2006-01-01
We consider the statistical modeling of the occurrence in time of large Kp magnetic storms as a Poisson process, testing whether or not relatively rare, large Kp events can be considered to arise from a stochastic, sequential, and memoryless process. For a Poisson process, the wait times between successive events occur statistically with an exponential density function. Fitting an exponential function to the durations between successive large Kp events forms the basis of our analysis. Defining these wait times by calculating the differences between times when Kp exceeds a certain value, such as Kp ≥ 5, we find the wait-time distribution is not exponential. Because large storms often have several periods with large Kp values, their occurrence in time is not memoryless; short duration wait times are not independent of each other and are often clumped together in time. If we remove same-storm large Kp occurrences, the resulting wait times are very nearly exponentially distributed and the storm arrival process can be characterized as Poisson. Fittings are performed on wait time data for Kp ≥ 5, 6, 7, and 8. The mean wait times between storms exceeding such Kp thresholds are 7.12, 16.55, 42.22, and 121.40 days respectively.
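A minimal sketch of this analysis follows; the `decluster_window` device stands in for the authors' removal of same-storm occurrences and is an assumption, not their exact procedure.

```python
import numpy as np

def storm_wait_times(event_times, decluster_window=0.0):
    """Wait times between successive threshold-exceeding events, optionally
    dropping waits shorter than `decluster_window` (days) to mimic removing
    same-storm Kp occurrences. For a Poisson process the remaining waits are
    exponential, so the mean (the MLE scale) should roughly equal the
    standard deviation."""
    times = np.sort(np.asarray(event_times, dtype=float))
    waits = np.diff(times)
    waits = waits[waits > decluster_window]
    return waits.mean(), waits.std()
```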
NASA Technical Reports Server (NTRS)
Lindner, Bernhard Lee; Ackerman, Thomas P.; Pollack, James B.
1990-01-01
CO2 comprises 95% of the composition of the Martian atmosphere. However, the Martian atmosphere also has a high aerosol content. Dust particle sizes vary from less than 0.2 μm to greater than 3.0 μm. CO2 is an active absorber and emitter at near-IR and IR wavelengths; the near-IR absorption bands of CO2 provide significant heating of the atmosphere, and the 15-micron band provides rapid cooling. Including both CO2 and aerosol radiative transfer simultaneously in a model is difficult. Aerosol radiative transfer requires a multiple-scattering code, while CO2 radiative transfer must deal with complex wavelength structure. As an alternative to the pure-atmosphere treatment in most models, which causes inaccuracies, a treatment called the exponential-sum or k-distribution approximation was developed. The chief advantage of the exponential-sum approach is that the integration of f(k) over k space can be computed more quickly than the integration of k_ν over frequency. The exponential-sum approach is superior to the photon-path-distribution and emissivity techniques for dusty conditions. This study was the first application of the exponential-sum approach to Martian conditions.
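To make the k-distribution idea concrete, the toy sketch below replaces the frequency integral of the band-mean transmission with a short exponential sum over the sorted absorption-coefficient distribution. The eight-term midpoint quadrature and the synthetic band are illustrative assumptions, not the paper's scheme; a real model uses proper quadrature weights and handles inhomogeneous paths.

```python
import numpy as np

def k_distribution_transmission(k_nu, u_values, n_terms=8):
    """Band-mean transmission via the exponential-sum (k-distribution)
    approximation: sort monochromatic absorption coefficients k_nu into a
    cumulative distribution g(k), then evaluate T(u) ≈ sum_i w_i exp(-k_i u)
    over a few g-space nodes instead of integrating over frequency."""
    k_sorted = np.sort(np.asarray(k_nu, dtype=float))
    g_nodes = (np.arange(n_terms) + 0.5) / n_terms        # midpoints in g-space
    k_nodes = k_sorted[(g_nodes * k_sorted.size).astype(int)]
    w = np.full(n_terms, 1.0 / n_terms)
    u = np.asarray(u_values, dtype=float)
    return np.exp(-np.outer(u, k_nodes)) @ w              # T(u) for each path u

# line-by-line reference on the same synthetic band of absorption coefficients
k_nu = 10.0 ** np.random.default_rng(3).uniform(-3, 2, 10_000)
u = np.array([0.01, 0.1, 1.0])
exact = np.exp(-np.outer(u, k_nu)).mean(axis=1)
approx = k_distribution_transmission(k_nu, u)
```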
Confinement Correction to Mercury Intrusion Capillary Pressure of Shale Nanopores
Wang, Sen; Javadpour, Farzam; Feng, Qihong
2016-01-01
We optimized potential parameters in a molecular dynamics model to reproduce the experimental contact angle of a macroscopic mercury droplet on graphite. With the tuned potential, we studied the effects of pore size, geometry, and temperature on the wetting of mercury droplets confined in organic-rich shale nanopores. The contact angle of mercury in a circular pore increases exponentially as pore size decreases. In conjunction with the curvature-dependent surface tension of liquid droplets predicted from a theoretical model, we proposed a technique to correct the common interpretation procedure of mercury intrusion capillary pressure (MICP) measurement for nanoporous material such as shale. Considering the variation of contact angle and surface tension with pore size improves the agreement between MICP and adsorption-derived pore size distribution, especially for pores having a radius smaller than 5 nm. The relative error produced in ignoring these effects could be as high as 44%—samples that contain smaller pores deviate more. We also explored the impacts of pore size and temperature on the surface tension and contact angle of water/vapor and oil/gas systems, by which the capillary pressure of water/oil/gas in shale can be obtained from MICP. This information is fundamental to understanding multiphase flow behavior in shale systems. PMID:26832445
Ding, Qian; Wang, Yong; Zhuang, Dafang
2018-04-15
The appropriate spatial interpolation methods must be selected to analyze the spatial distributions of Potentially Toxic Elements (PTEs), which is a precondition for evaluating PTE pollution. The accuracy and effect of different spatial interpolation methods, which include inverse distance weighting interpolation (IDW) (power = 1, 2, 3), radial basis function interpolation (RBF) (basis function: thin-plate spline (TPS), spline with tension (ST), completely regularized spline (CRS), multiquadric (MQ) and inverse multiquadric (IMQ)) and ordinary kriging interpolation (OK) (semivariogram model: spherical, exponential, gaussian and linear), were compared using 166 unevenly distributed soil PTE samples (As, Pb, Cu and Zn) in the Suxian District, Chenzhou City, Hunan Province as the study subject. The reasons for the accuracy differences of the interpolation methods and the uncertainties of the interpolation results are discussed, then several suggestions for improving the interpolation accuracy are proposed, and the direction of pollution control is determined. The results of this study are as follows: (i) RBF-ST and OK (exponential) are the optimal interpolation methods for As and Cu, and the optimal interpolation method for Pb and Zn is RBF-IMQ. (ii) The interpolation uncertainty is positively correlated with the PTE concentration, and higher uncertainties are primarily distributed around mines, which is related to the strong spatial variability of PTE concentrations caused by human interference. (iii) The interpolation accuracy can be improved by increasing the sample size around the mines, introducing auxiliary variables in the case of incomplete sampling and adopting the partition prediction method. (iv) It is necessary to strengthen the prevention and control of As and Pb pollution, particularly in the central and northern areas. The results of this study can provide an effective reference for the optimization of interpolation methods and parameters for unevenly distributed soil PTE data in mining areas. Copyright © 2018 Elsevier Ltd. All rights reserved.
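As an illustration of the simplest of the compared methods, a minimal IDW interpolator can be sketched as below; coordinates are assumed Euclidean (e.g. projected metres), `power` corresponds to the study's 1, 2, 3 variants, and the example values are invented.

```python
import numpy as np

def idw(xy_obs, z_obs, xy_query, power=2):
    """Inverse distance weighting: z(x0) = sum(w_i * z_i) / sum(w_i) with
    w_i = d_i**(-power); the prediction is exact at observation points."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_obs[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)        # guard against zero distance
    w = d ** (-float(power))
    return (w @ z_obs) / w.sum(axis=1)

# example: 4 observed PTE concentrations interpolated to 2 query points
obs = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
z = np.array([12.0, 30.0, 18.0, 55.0])
grid = np.array([[50.0, 50.0], [10.0, 90.0]])
print(idw(obs, z, grid, power=2))
```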
NASA Astrophysics Data System (ADS)
Obuchi, Tomoyuki; Cocco, Simona; Monasson, Rémi
2015-11-01
We consider the problem of learning a target probability distribution over a set of N binary variables from the knowledge of the expectation values (with this target distribution) of M observables, drawn uniformly at random. The space of all probability distributions compatible with these M expectation values within some fixed accuracy, called the version space, is studied. We introduce a biased measure over the version space, which gives a boost increasing exponentially with the entropy of the distributions and with an arbitrary inverse 'temperature' Γ. The choice of Γ allows us to interpolate smoothly between the unbiased measure over all distributions in the version space (Γ = 0) and the pointwise measure concentrated at the maximum entropy distribution (Γ → ∞). Using the replica method we compute the volume of the version space and other quantities of interest, such as the distance R between the target distribution and the center-of-mass distribution over the version space, as functions of α = (log M)/N and Γ for large N. Phase transitions at critical values of α are found, corresponding to qualitative improvements in the learning of the target distribution and to the decrease of the distance R. However, for fixed α the distance R does not vary with Γ, which means that the maximum entropy distribution is not closer to the target distribution than any other distribution compatible with the observable values. Our results are confirmed by Monte Carlo sampling of the version space for small system sizes (N ≤ 10).
Unbiased simulation of near-Clifford quantum circuits
Bennink, Ryan S.; Ferragut, Erik M.; Humble, Travis S.; ...
2017-06-28
Modeling and simulation are essential for predicting and verifying the behavior of fabricated quantum circuits, but existing simulation methods are either impractically costly or require an unrealistic simplification of error processes. In this paper, we present a method of simulating noisy Clifford circuits that is both accurate and practical in experimentally relevant regimes. In particular, the cost is weakly exponential in the size and the degree of non-Cliffordness of the circuit. Our approach is based on the construction of exact representations of quantum channels as quasiprobability distributions over stabilizer operations, which are then sampled, simulated, and weighted to yield unbiased statistical estimates of circuit outputs and other observables. As a demonstration of these techniques, we simulate a Steane [[7,1,3]] code.
Early stages of Ostwald ripening
NASA Astrophysics Data System (ADS)
Shneidman, Vitaly A.
2013-07-01
The Becker-Döring (BD) nucleation equation is known to predict a narrow double-exponential front (DEF) in the distribution of growing particles over sizes, which is due to early transient effects. When mass conservation is included, nucleation is eventually exhausted while independent growth is replaced by ripening. Despite the enormous difference in the associated time scales, and the resulting demand on numerics, within the generalized BD model the early DEF is shown to be crucial for the selection of the unique self-similar Lifshitz-Slyozov-Wagner asymptotic regime. Being preserved till the latest stages of growth, the DEF provides a universal part of the initial conditions for the ripening problem, regardless of the mass exchange mechanism between the nucleus and the matrix.
Advanced Unstructured Grid Generation for Complex Aerodynamic Applications
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
2008-01-01
A new approach for distribution of grid points on the surface and in the volume has been developed and implemented in the NASA unstructured grid generation code VGRID. In addition to the point and line sources of prior work, the new approach utilizes surface and volume sources for automatic curvature-based grid sizing and convenient point distribution in the volume. A new exponential growth function produces smoother and more efficient grids and provides superior control over distribution of grid points in the field. All types of sources support anisotropic grid stretching which not only improves the grid economy but also provides more accurate solutions for certain aerodynamic applications. The new approach does not require a three-dimensional background grid as in the previous methods. Instead, it makes use of an efficient bounding-box auxiliary medium for storing grid parameters defined by surface sources. The new approach is less memory-intensive and more efficient computationally. The grids generated with the new method either eliminate the need for adaptive grid refinement for certain class of problems or provide high quality initial grids that would enhance the performance of many adaptation methods.
Bonny, Jean Marie; Boespflug-Tanguly, Odile; Zanca, Michel; Renou, Jean Pierre
2003-03-01
A solution for discrete multi-exponential analysis of T2 relaxation decay curves obtained under current multi-echo imaging protocol conditions is described. We propose a preprocessing step to improve the signal-to-noise ratio and thus lower the signal-to-noise threshold above which a high percentage of true multi-exponential decays is detected. It consists of a multispectral nonlinear edge-preserving filter that takes into account the signal-dependent Rician distribution of the noise affecting magnitude MR images. Discrete multi-exponential decomposition, which requires no a priori knowledge, is performed by a nonlinear least-squares procedure initialized with estimates obtained from a total least-squares linear prediction algorithm. This approach was validated and optimized experimentally on simulated data sets of normal human brains.
Choice of time-scale in Cox's model analysis of epidemiologic cohort data: a simulation study.
Thiébaut, Anne C M; Bénichou, Jacques
2004-12-30
Cox's regression model is widely used for assessing associations between potential risk factors and disease occurrence in epidemiologic cohort studies. Although age is often a strong determinant of disease risk, authors have frequently used time-on-study instead of age as the time-scale, as for clinical trials. Unless the baseline hazard is an exponential function of age, this approach can yield different estimates of relative hazards than using age as the time-scale, even when age is adjusted for. We performed a simulation study in order to investigate the existence and magnitude of bias for different degrees of association between age and the covariate of interest. Age to disease onset was generated from exponential, Weibull or piecewise Weibull distributions, and both fixed and time-dependent dichotomous covariates were considered. We observed no bias upon using age as the time-scale. Upon using time-on-study, we verified the absence of bias for exponentially distributed age to disease onset. For non-exponential distributions, we found that bias could occur even when the covariate of interest was independent from age. It could be severe in case of substantial association with age, especially with time-dependent covariates. These findings were illustrated on data from a cohort of 84,329 French women followed prospectively for breast cancer occurrence. In view of our results, we strongly recommend not using time-on-study as the time-scale for analysing epidemiologic cohort data. 2004 John Wiley & Sons, Ltd.
Evaluation of Mean and Variance Integrals without Integration
ERIC Educational Resources Information Center
Joarder, A. H.; Omar, M. H.
2007-01-01
The mean and variance of some continuous distributions, in particular the exponentially decreasing probability distribution and the normal distribution, are considered. Since their derivation involves integration by parts, many students do not feel comfortable with it. In this note, a technique is demonstrated for deriving the mean and variance through differential…
Exponential Sensitivity and its Cost in Quantum Physics
Gilyén, András; Kiss, Tamás; Jex, Igor
2016-01-01
State selective protocols, like entanglement purification, lead to an essentially non-linear quantum evolution, unusual in naturally occurring quantum processes. Sensitivity to initial states in quantum systems, stemming from such non-linear dynamics, is a promising perspective for applications. Here we demonstrate that chaotic behaviour is a rather generic feature in state selective protocols: exponential sensitivity can exist for all initial states in an experimentally realisable optical scheme. Moreover, any complex rational polynomial map, including the example of the Mandelbrot set, can be directly realised. In state selective protocols, one needs an ensemble of initial states, the size of which decreases with each iteration. We prove that exponential sensitivity to initial states in any quantum system has to be related to downsizing the initial ensemble also exponentially. Our results show that magnifying initial differences of quantum states (a Schrödinger microscope) is possible; however, there is a strict bound on the number of copies needed. PMID:26861076
Biased phylodynamic inferences from analysing clusters of viral sequences
Xiang, Fei; Frost, Simon D. W.
2017-01-01
Phylogenetic methods are being increasingly used to help understand the transmission dynamics of measurably evolving viruses, including HIV. Clusters of highly similar sequences are often observed, which appear to follow a ‘power law’ behaviour, with a small number of very large clusters. These clusters may help to identify subpopulations in an epidemic, and inform where intervention strategies should be implemented. However, clustering of samples does not necessarily imply the presence of a subpopulation with high transmission rates, as groups of closely related viruses can also occur due to non-epidemiological effects such as over-sampling. It is important to ensure that observed phylogenetic clustering reflects true heterogeneity in the transmitting population, and is not being driven by non-epidemiological effects. We quantify the effect of using a falsely identified ‘transmission cluster’ of sequences to estimate phylodynamic parameters, including the effective population size and exponential growth rate, under several demographic scenarios. Our simulation studies show that taking the maximum-size cluster to re-estimate parameters from trees simulated under a randomly mixing, constant population size coalescent process systematically underestimates the overall effective population size. In addition, the transmission cluster wrongly resembles an exponential or logistic growth model 99% of the time. We also illustrate the consequences of false clusters in exponentially growing coalescent and birth-death trees, where again, the growth rate is skewed upwards. This has clear implications for identifying clusters in large viral databases, where a false cluster could result in wasted intervention resources. PMID:28852573
Tuned critical avalanche scaling in bulk metallic glasses
Antonaglia, James; Xie, Xie; Schwarz, Gregory; ...
2014-03-17
In this study, ingots of the bulk metallic glass (BMG) Zr64.13Cu15.75Ni10.12Al10 in atomic percent (at.%) are compressed at slow strain rates. The deformation behavior is characterized by discrete, jerky stress-drop bursts (serrations). Here we present a quantitative theory for the serration behavior of BMGs, which is a critical issue for the understanding of the deformation characteristics of BMGs. The mean-field interaction model predicts the scaling behavior of the distribution, D(S), of avalanche sizes, S, in the experiments. D(S) follows a power law multiplied by an exponentially-decaying scaling function. The size of the largest observed avalanche depends on experimental tuning-parameters, such as either imposed strain rate or stress. Similar to crystalline materials, the plasticity of BMGs reflects tuned criticality showing remarkable quantitative agreement with the slip statistics of slowly-compressed nanocrystals. The results imply that material-evaluation methods based on slip statistics apply to both crystalline and BMG materials.
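The scaling form reported here, a power law multiplied by an exponential cutoff, is easy to fit directly. Below is a hedged sketch on synthetic data; the function name D, the parameter values and the binning are illustrative assumptions, not the study's:

```python
# Fit D(S) = A * S**(-tau) * exp(-S/S_max) to synthetic binned avalanche data.
import numpy as np
from scipy.optimize import curve_fit

def D(S, A, tau, S_max):
    return A * S**(-tau) * np.exp(-S / S_max)

rng = np.random.default_rng(0)
S = np.logspace(0, 3, 30)                        # avalanche-size bins
counts = D(S, 1.0, 1.5, 200.0) * rng.lognormal(0.0, 0.1, S.size)  # noisy data

# Fit in log space so small and large avalanches carry comparable weight.
popt, _ = curve_fit(lambda s, A, tau, s_max: np.log(D(s, A, tau, s_max)),
                    S, np.log(counts), p0=(1.0, 1.2, 100.0))
A_fit, tau_fit, Smax_fit = popt
print(f"tau ~ {tau_fit:.2f}, cutoff S_max ~ {Smax_fit:.0f}")
```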
Cast aluminium single crystals cross the threshold from bulk to size-dependent stochastic plasticity
NASA Astrophysics Data System (ADS)
Krebs, J.; Rao, S. I.; Verheyden, S.; Miko, C.; Goodall, R.; Curtin, W. A.; Mortensen, A.
2017-07-01
Metals are known to exhibit mechanical behaviour at the nanoscale different to bulk samples. This transition typically initiates at the micrometre scale, yet existing techniques to produce micrometre-sized samples often introduce artefacts that can influence deformation mechanisms. Here, we demonstrate the casting of micrometre-scale aluminium single-crystal wires by infiltration of a salt mould. Samples have millimetre lengths, smooth surfaces, a range of crystallographic orientations, and a diameter D as small as 6 μm. The wires deform in bursts, at a stress that increases with decreasing D. Bursts greater than 200 nm account for roughly 50% of wire deformation and have exponentially distributed intensities. Dislocation dynamics simulations show that single-arm sources that produce large displacement bursts halted by stochastic cross-slip and lock formation explain microcast wire behaviour. This microcasting technique may be extended to several other metals or alloys and offers the possibility of exploring mechanical behaviour spanning the micrometre scale.
Henríquez-Henríquez, Marcela Patricia; Billeke, Pablo; Henríquez, Hugo; Zamorano, Francisco Javier; Rothhammer, Francisco; Aboitiz, Francisco
2014-01-01
Intra-individual variability of response times (RTisv) is considered a potential endophenotype for attention-deficit/hyperactivity disorder (ADHD). Traditional methods for estimating RTisv lose information regarding the distribution of response times (RTs) along the task, with eventual effects on statistical power. Ex-Gaussian analysis captures the dynamic nature of RTisv, estimating normal and exponential components for the RT distribution, with specific phenomenological correlates. Here, we applied ex-Gaussian analysis to explore whether intra-individual variability of RTs agrees with criteria proposed by Gottesman and Gould for endophenotypes. Specifically, we evaluated if normal and/or exponential components of RTs may (a) present the stair-like distribution expected for endophenotypes (ADHD > siblings > typically developing children (TD) without familial history of ADHD) and (b) represent a phenotypic correlate for previously described genetic risk variants. This is a pilot study including 55 subjects (20 ADHD-discordant sibling-pairs and 15 TD children), all aged between 8 and 13 years. Participants performed a visual Go/Nogo task with 10% Nogo probability. Ex-Gaussian distributions were fitted to individual RT data and compared among the three samples. In order to test whether intra-individual variability may represent a correlate for previously described genetic risk variants, VNTRs at DRD4 and SLC6A3 were identified in all sibling-pairs following standard protocols. Groups were compared adjusting independent general linear models for the exponential and normal components from the ex-Gaussian analysis. Identified trends were confirmed by the non-parametric Jonckheere-Terpstra test. Stair-like distributions were observed for μ (p = 0.036) and σ (p = 0.009). An additional "DRD4-genotype" × "clinical status" interaction was present for τ (p = 0.014), reflecting a possible severity factor. Thus, normal and exponential RTisv components are suitable as ADHD endophenotypes.
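For readers unfamiliar with the ex-Gaussian decomposition used here, the following minimal sketch (synthetic response times; all parameter values assumed) recovers the normal (mu, sigma) and exponential (tau) components with scipy's exponnorm, which parameterizes the same distribution:

```python
# Fit an ex-Gaussian (normal + exponential) to synthetic response times.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic RTs in ms: normal component plus exponential tail (values assumed).
rts = rng.normal(420.0, 45.0, 2000) + rng.exponential(110.0, 2000)

K, loc, scale = stats.exponnorm.fit(rts)   # scipy's ex-Gaussian distribution
mu, sigma, tau = loc, scale, K * scale     # map to the usual RT parameters
print(f"mu={mu:.0f} ms, sigma={sigma:.0f} ms, tau={tau:.0f} ms")
```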
Weighted Scaling in Non-growth Random Networks
NASA Astrophysics Data System (ADS)
Chen, Guang; Yang, Xu-Hua; Xu, Xin-Li
2012-09-01
We propose a weighted model to explain the self-organizing formation of the scale-free phenomenon in non-growth random networks. In this model, we use multiple-edges to represent the connections between vertices and define the weight of a multiple-edge as the total weight of all single-edges within it and the strength of a vertex as the sum of weights for those multiple-edges attached to it. The network evolves according to a vertex strength preferential selection mechanism. During the evolution process, the network always holds its total number of vertices and its total number of single-edges constant. We show analytically and numerically that a network will form steady scale-free distributions under our model. The results show that a weighted non-growth random network can evolve into a scale-free state. Interestingly, the network also develops an exponential edge-weight distribution; namely, a scale-free distribution and an exponential distribution coexist.
Contact Time in Random Walk and Random Waypoint: Dichotomy in Tail Distribution
NASA Astrophysics Data System (ADS)
Zhao, Chen; Sichitiu, Mihail L.
Contact time (or link duration) is a fundamental factor that affects performance in Mobile Ad Hoc Networks. Previous research on the theoretical analysis of contact time distribution for random walk models (RW) assumes that contact events can be modeled as either consecutive random walks or direct traversals, which are two extreme cases of random walk and lead to two different conclusions. In this paper we conduct comprehensive research on this topic in the hope of bridging the gap between the two extremes. The conclusions from the two extreme cases result in a power-law or exponential tail in the contact time distribution, respectively. However, we show that the actual distribution will vary between the two extremes: a power-law-sub-exponential dichotomy, whose transition point depends on the average flight duration. Through simulation results we show that this conclusion also applies to random waypoint.
McNair, James N; Newbold, J Denis
2012-05-07
Most ecological studies of particle transport in streams that focus on fine particulate organic matter or benthic invertebrates use the Exponential Settling Model (ESM) to characterize the longitudinal pattern of particle settling on the bed. The ESM predicts that if particles are released into a stream, the proportion that have not yet settled will decline exponentially with transport time or distance and will be independent of the release elevation above the bed. To date, no credible basis in fluid mechanics has been established for this model, nor has it been rigorously tested against more-mechanistic alternative models. One alternative is the Local Exchange Model (LEM), which is a stochastic advection-diffusion model that includes both longitudinal and vertical spatial dimensions and is based on classical fluid mechanics. The LEM predicts that particle settling will be non-exponential in the near field but will become exponential in the far field, providing a new theoretical justification for far-field exponential settling that is based on plausible fluid mechanics. We review properties of the ESM and LEM and compare these with available empirical evidence. Most evidence supports the prediction of both models that settling will be exponential in the far field but contradicts the ESM's prediction that a single exponential distribution will hold for all transport times and distances. Copyright © 2012 Elsevier Ltd. All rights reserved.
Furbish, David; Schmeeckle, Mark; Schumer, Rina; Fathel, Siobhan
2016-01-01
We describe the most likely forms of the probability distributions of bed load particle velocities, accelerations, hop distances, and travel times, in a manner that formally appeals to inferential statistics while honoring mechanical and kinematic constraints imposed by equilibrium transport conditions. The analysis is based on E. Jaynes's elaboration of the implications of the similarity between the Gibbs entropy in statistical mechanics and the Shannon entropy in information theory. By maximizing the information entropy of a distribution subject to known constraints on its moments, our choice of the form of the distribution is unbiased. The analysis suggests that particle velocities and travel times are exponentially distributed and that particle accelerations follow a Laplace distribution with zero mean. Particle hop distances, viewed alone, ought to be distributed exponentially. However, the covariance between hop distances and travel times precludes this result. Instead, the covariance structure suggests that hop distances follow a Weibull distribution. These distributions are consistent with high-resolution measurements obtained from high-speed imaging of bed load particle motions. The analysis brings us closer to choosing distributions based on our mechanical insight.
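The maximum-entropy step behind the exponential result can be made explicit. What follows is the standard textbook derivation, sketched under the single constraint of a fixed mean; it is not the authors' full treatment with covariance constraints:

```latex
\max_{p}\; -\int_0^\infty p(x)\ln p(x)\,dx
\quad\text{subject to}\quad
\int_0^\infty p(x)\,dx = 1,
\qquad
\int_0^\infty x\,p(x)\,dx = \mu .
```

Stationarity of the Lagrangian gives -ln p(x) - 1 - lambda_0 - lambda_1 x = 0, so p(x) is proportional to e^(-lambda_1 x); the two constraints then fix p(x) = (1/mu) e^(-x/mu), the exponential distribution with mean mu.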
Time Correlations in Mode Hopping of Coupled Oscillators
NASA Astrophysics Data System (ADS)
Heltberg, Mathias L.; Krishna, Sandeep; Jensen, Mogens H.
2017-05-01
We study the dynamics in a system of coupled oscillators when Arnold tongues overlap. By varying the initial conditions, the deterministic system can be attracted to different limit cycles. Adding noise, the mode hopping between different states becomes a dominant part of the dynamics. We simplify the system through a Poincaré section, and derive a 1D model to describe the dynamics. We explain that for some parameter values of the external oscillator, the time distribution of occupancy in a state is exponential and thus memoryless. In the general case, on the other hand, it is a sum of exponential distributions, characteristic of a system with time correlations.
Xu, Changjin; Li, Peiluan; Pang, Yicheng
2016-12-01
In this letter, we deal with a class of memristor-based neural networks with distributed leakage delays. By applying a new Lyapunov function method, we obtain some sufficient conditions that ensure the existence, uniqueness, and global exponential stability of almost periodic solutions of neural networks. We apply these results to prove the existence and stability of periodic solutions for this delayed neural network with periodic coefficients. We then provide an example to illustrate the effectiveness of the theoretical results. Our results are completely new and complement the previous studies Chen, Zeng, and Jiang (2014) and Jiang, Zeng, and Chen (2015).
Holder, J P; Benedetti, L R; Bradley, D K
2016-11-01
Single-hit pulse-height analysis is applied to National Ignition Facility x-ray framing cameras to quantify gain and gain variation in a single micro-channel plate-based instrument. This method allows the separation of gain from detectability in these photon-detecting devices. While pulse heights measured by standard DC calibration methods follow the expected exponential distribution in the limit of a compound-Poisson process, gain-gated pulse heights follow a more complex distribution that may be approximated as a weighted sum of a few exponentials. We can reproduce this behavior with a simple statistical-sampling model.
The diffusion of a Ga atom on GaAs(001)β2(2 × 4): Local superbasin kinetic Monte Carlo
NASA Astrophysics Data System (ADS)
Lin, Yangzheng; Fichthorn, Kristen A.
2017-10-01
We use first-principles density-functional theory to characterize the binding sites and diffusion mechanisms for a Ga adatom on the GaAs(001)β2(2 × 4) surface. Diffusion in this system is a complex process involving eleven unique binding sites and sixteen different hops between neighboring binding sites. Among the binding sites, we can identify four different superbasins such that the motion between binding sites within a superbasin is much faster than hops exiting the superbasin. To describe diffusion, we use a recently developed local superbasin kinetic Monte Carlo (LSKMC) method, which accelerates a conventional kinetic Monte Carlo (KMC) simulation by describing the superbasins as absorbing Markov chains. We find that LSKMC is up to 4300 times faster than KMC for the conditions probed in this study. We characterize the distribution of exit times from the superbasins, find that these are sometimes, but not always, exponential, and characterize the conditions under which the superbasin exit-time distribution should be exponential. We demonstrate that LSKMC simulations assuming an exponential superbasin exit-time distribution yield the same diffusion coefficients as conventional KMC.
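The superbasin picture, fast internal mixing with one slow escape path, can be mimicked with a toy continuous-time Markov chain. This sketch (all rates invented for illustration; it is not the LSKMC algorithm itself) samples exit times with a Gillespie loop and checks the exponential signature that the mean equals the standard deviation:

```python
import numpy as np

rng = np.random.default_rng(2)
# rates[i, j]: hop rate from state i to state j; state 3 is the "outside".
# Intra-basin hops (rate 5.0) are much faster than the escape hop (rate 0.2).
rates = np.array([[0.0, 5.0, 5.0, 0.2],
                  [5.0, 0.0, 5.0, 0.0],
                  [5.0, 5.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0, 0.0]])

def exit_time():
    state, t = 0, 0.0
    while state != 3:                      # Gillespie loop until escape
        out = rates[state]
        total = out.sum()
        t += rng.exponential(1.0 / total)  # waiting time in current state
        state = rng.choice(4, p=out / total)
    return t

samples = np.array([exit_time() for _ in range(2000)])
# For an exponential distribution the mean equals the standard deviation.
print(f"mean = {samples.mean():.1f}, std = {samples.std():.1f}")
```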
NASA Astrophysics Data System (ADS)
Shih, Hong-Yan; Goldenfeld, Nigel
Experiments on transitional turbulence in pipe flow seem to show that turbulence is a transient metastable state since the measured mean lifetime of turbulence puffs does not diverge asymptotically at a critical Reynolds number. Yet measurements reveal that the lifetime scales with Reynolds number in a super-exponential way reminiscent of extreme value statistics, and simulations and experiments in Couette and channel flow exhibit directed percolation type scaling phenomena near a well-defined transition. This universality class arises from the interplay between small-scale turbulence and a large-scale collective zonal flow, which exhibit predator-prey behavior. Why is asymptotically divergent behavior not observed? Using directed percolation and a stochastic individual level model of predator-prey dynamics related to transitional turbulence, we investigate the relation between extreme value statistics and power law critical behavior, and show that the paradox is resolved by carefully defining what is measured in the experiments. We theoretically derive the super-exponential scaling law, and using finite-size scaling, show how the same data can give both super-exponential behavior and power-law critical scaling.
Univariate and Bivariate Loglinear Models for Discrete Test Score Distributions.
ERIC Educational Resources Information Center
Holland, Paul W.; Thayer, Dorothy T.
2000-01-01
Applied the theory of exponential families of distributions to the problem of fitting the univariate histograms and discrete bivariate frequency distributions that often arise in the analysis of test scores. Considers efficient computation of the maximum likelihood estimates of the parameters using Newton's Method and computationally efficient…
Roccato, Anna; Uyttendaele, Mieke; Membré, Jeanne-Marie
2017-06-01
In the framework of food safety, when mimicking the consumer phase, the storage time and temperature used are mainly considered as single-point estimates instead of probability distributions. This single-point approach does not take into account the variability within a population and could lead to an overestimation of the parameters. Therefore, the aim of this study was to analyse data on domestic refrigerator temperatures and storage times of chilled food in European countries in order to draw general rules which could be used either in shelf-life testing or risk assessment. In relation to domestic refrigerator temperatures, 15 studies provided pertinent data. Twelve studies presented normal distributions, according to the authors or from the data fitted into distributions. Analysis of temperature distributions revealed that the countries were separated into two groups: northern European countries and southern European countries. The overall variability of European domestic refrigerators is described by a normal distribution: N(7.0, 2.7)°C for the southern countries and N(6.1, 2.8)°C for the northern countries. Concerning storage times, seven papers were pertinent. Analysis indicated that the storage time was likely to end in the first days or weeks (depending on the product use-by date) after purchase. Data fitting showed the exponential distribution was the most appropriate distribution to describe the time that food spent at the consumer's place. The storage time was described by an exponential distribution corresponding to the use-by date period divided by 4. In conclusion, knowing that collecting data is time- and money-consuming, in the absence of data, and at least for the European market and for refrigerated products, building a domestic refrigerator temperature distribution using a Normal law and a time-to-consumption distribution using an Exponential law would be appropriate. Copyright © 2017 Elsevier Ltd. All rights reserved.
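The paper's closing recommendation translates directly into a Monte Carlo consumer phase. A hedged sketch follows, with the product's use-by period and the 8 °C threshold chosen purely for illustration:

```python
# Monte Carlo consumer phase: Normal storage temperature, Exponential
# time-to-consumption with mean = use-by period / 4.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
use_by_days = 10.0                                    # hypothetical product

temp_south = rng.normal(7.0, 2.7, n)                  # N(7.0, 2.7) degC
temp_north = rng.normal(6.1, 2.8, n)                  # N(6.1, 2.8) degC
storage_days = rng.exponential(use_by_days / 4.0, n)  # Exp(mean = use-by/4)

print(f"P(T > 8 degC, south) = {(temp_south > 8).mean():.2f}")
print(f"P(T > 8 degC, north) = {(temp_north > 8).mean():.2f}")
print(f"P(stored past use-by) = {(storage_days > use_by_days).mean():.3f}")
```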
Network topology of an experimental futures exchange
NASA Astrophysics Data System (ADS)
Wang, S. C.; Tseng, J. J.; Tai, C. C.; Lai, K. H.; Wu, W. S.; Chen, S. H.; Li, S. P.
2008-03-01
Many systems of different natures exhibit scale-free behaviors. Economic systems with a power law distribution in the wealth are one of the examples. To better understand the working behind the complexity, we undertook an experiment recording the interactions between market participants. A Web server was set up to administer the exchange of futures contracts whose liquidation prices were coupled to event outcomes. After free registration, participants started trading to compete for the money prizes upon maturity of the futures contracts at the end of the experiment. The evolving 'cash' flow network was reconstructed from the transactions between players. We show that the network topology is hierarchical, disassortative and small-world with a power law exponent of 1.02±0.09 in the degree distribution after an exponential decay correction. The small-world property emerged early in the experiment while the number of participants was still small. We also show power law-like distributions of the net incomes and inter-transaction time intervals. Big winners and losers are associated with high degree, high betweenness centrality, low clustering coefficient and low degree-correlation. We identify communities in the network as groups of the like-minded. The distribution of the community sizes is shown to be power-law distributed with an exponent of 1.19±0.16.
Déjardin, P
2013-08-30
The flow conditions in normal mode asymmetric flow field-flow fractionation are determined to approach the high retention limit with the requirement d≪l≪w, where d is the particle diameter, l the characteristic length of the sample exponential distribution and w the channel height. The optimal entrance velocity is determined from the solute characteristics, the channel geometry (exponential to rectangular) and the membrane properties, according to a model providing the velocity fields all over the cell length. In addition, a method is proposed for in situ determination of the channel height. Copyright © 2013 Elsevier B.V. All rights reserved.
Long-term Study of the Light Curve of PKS 1510-089 in GeV Energies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prince, Raj; Gupta, Nayantara; Majumdar, Pratik, E-mail: rajprince@rri.res.in
2017-07-20
We have analyzed data from the flat-spectrum radio quasar PKS 1510-089 collected over a period of eight years from 2008 August to 2016 December with the Fermi-LAT. We have identified several flares of this highly variable source, studied their temporal and spectral properties in detail, and compared them with previous works on flares of PKS 1510-089. Five major flares and a few subflares or substructures have been identified in our study. The fastest variability time is found to be 1.30 ± 0.18 hr between MJD 55852.063 and 55852.188, where we estimate the minimum size of the emission region to be 4.85 × 10¹⁵ cm. In most of the flares, the spectral energy distributions are better fitted with a log-parabolic distribution than with a simple power law or a power law with an exponential cutoff. This has strong physics implications regarding the nature of the high-energy gamma-ray emission region.
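For reference, the three spectral shapes compared in such Fermi-LAT analyses can be written down compactly. The parameter values below are illustrative assumptions, not the paper's fitted values:

```python
# The three candidate spectral models dN/dE commonly compared for blazar SEDs.
import numpy as np

def power_law(E, N0, gamma, E0=1.0):
    return N0 * (E / E0) ** (-gamma)

def plec(E, N0, gamma, E_c, E0=1.0):
    # power law with exponential cutoff at energy E_c
    return N0 * (E / E0) ** (-gamma) * np.exp(-E / E_c)

def log_parabola(E, N0, alpha, beta, E0=1.0):
    # spectral index steepens logarithmically with energy
    return N0 * (E / E0) ** (-(alpha + beta * np.log(E / E0)))

E = np.logspace(-1, 2, 50)                 # illustrative energy grid (GeV)
dnde = log_parabola(E, 1e-9, 2.2, 0.08)    # assumed parameters
```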
Modeling Women's Menstrual Cycles using PICI Gates in Bayesian Network.
Zagorecki, Adam; Łupińska-Dubicka, Anna; Voortman, Mark; Druzdzel, Marek J
2016-03-01
A major difficulty in building Bayesian network (BN) models is the size of conditional probability tables, which grow exponentially in the number of parents. One way of dealing with this problem is through parametric conditional probability distributions that usually require only a number of parameters that is linear in the number of parents. In this paper, we introduce a new class of parametric models, the Probabilistic Independence of Causal Influences (PICI) models, that aim at lowering the number of parameters required to specify local probability distributions, but are still capable of efficiently modeling a variety of interactions. A subset of PICI models is decomposable and this leads to significantly faster inference as compared to models that cannot be decomposed. We present an application of the proposed method to learning dynamic BNs for modeling a woman's menstrual cycle. We show that PICI models are especially useful for parameter learning from small data sets and lead to higher parameter accuracy than when learning CPTs.
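A noisy-OR gate, the classic independence-of-causal-influence model to which the PICI family is closely related, shows the payoff described here: the number of parameters grows linearly with the number of parents while the full CPT still has 2^n rows. A hedged sketch (names and probabilities are assumptions):

```python
# Build a full CPT from a leaky noisy-OR gate: n link probabilities + 1 leak
# generate all 2**n conditional probabilities.
import itertools

def noisy_or_cpt(link_probs, leak=0.0):
    """Return P(effect=1 | parents) for every parent configuration."""
    n = len(link_probs)
    cpt = {}
    for config in itertools.product([0, 1], repeat=n):
        p_off = 1.0 - leak                    # chance effect stays off
        for on, p in zip(config, link_probs):
            if on:
                p_off *= (1.0 - p)            # each active cause acts independently
        cpt[config] = 1.0 - p_off
    return cpt

cpt = noisy_or_cpt([0.8, 0.6, 0.3], leak=0.05)
print(len(cpt), "CPT rows from only 4 parameters")   # 8 rows, 3 links + leak
```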
A self-similar hierarchy of the Korean stock market
NASA Astrophysics Data System (ADS)
Lim, Gyuchang; Min, Seungsik; Yoo, Kun-Woo
2013-01-01
A scaling analysis is performed on market values of stocks listed on Korean stock exchanges such as the KOSPI and the KOSDAQ. Unlike previous studies on price fluctuations, market capitalizations are dealt with in this work. First, we show that the sum of the two stock exchanges shows a clear rank-size distribution, i.e., Zipf's law, just as each separate one does. Second, by abstracting Zipf's law as a γ-sequence, we define a self-similar hierarchy consisting of many levels, with the numbers of firms at each level forming a geometric sequence. We also use two exponential functions to describe the hierarchy and derive a scaling law from them. Lastly, we propose a self-similar hierarchical process and perform an empirical analysis on our data set. Based on our findings, we argue that all money invested in the stock market is distributed in a hierarchical way and that a slight difference exists between the two exchanges.
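A rank-size (Zipf) check of the kind described here amounts to a two-line regression. A minimal sketch on synthetic heavy-tailed "market caps" (the data and the recovered exponent are assumptions for illustration):

```python
# Rank-size check: regress log(size) on log(rank); slope ~ -1 for Zipf's law.
import numpy as np

rng = np.random.default_rng(3)
sizes = np.sort(rng.pareto(1.0, 1000) + 1.0)[::-1]  # Pareto(alpha=1) "caps"
ranks = np.arange(1, sizes.size + 1)

slope, intercept = np.polyfit(np.log(ranks), np.log(sizes), 1)
print(f"Zipf exponent ~ {-slope:.2f}")              # ~1 for a pure Zipf law
```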
NASA Astrophysics Data System (ADS)
Carvalho, Russell; Wei, Chih-Lin; Rowe, Gilbert; Schulze, Anja
2013-10-01
Patterns of taxonomic and functional diversity in polychaete assemblages were examined in the deep northern Gulf of Mexico, including the first analysis of polychaete feeding guild distribution. An analysis of samples from a total of 51 stations located along 7 transects plus additional nearby sites indicated that density decreased exponentially with depth, with the central locations having higher densities than the eastern and western regions. Alpha diversity was also highest at the central stations associated with the Mississippi trough. The samples can be grouped into three significant clusters based on thirty percent similarity of species composition. BIO-ENV indicated that depth, sediment particle size, and export POC were the most important variables explaining distributions. The diversity of polychaete feeding guilds was high in the Mississippi trough, upper and mid-slope regions but declined to a few guilds on the Sigsbee abyssal plain. Combining feeding guild analysis with traditional analysis of species diversity contributes to a clearer understanding of trophic diversity in deep-sea benthic assemblages.
Resource acquisition, distribution and end-use efficiencies and the growth of industrial society
NASA Astrophysics Data System (ADS)
Jarvis, A.; Jarvis, S.; Hewitt, N.
2015-01-01
A key feature of the growth of industrial society is the acquisition of increasing quantities of resources from the environment and their distribution for end use. With respect to energy, growth has been near exponential for the last 160 years. We attempt to show that the global distribution of resources that underpins this growth may be facilitated by the continual development and expansion of near-optimal directed networks. If so, the distribution efficiencies of these networks must decline as they expand due to path lengths becoming longer and more tortuous. To maintain long-term exponential growth the physical limits placed on the distribution networks appear to be counteracted by innovations deployed elsewhere in the system: namely at the points of acquisition and end use. We postulate that the maintenance of growth at the specific rate of ~2.4% yr⁻¹ stems from an implicit desire to optimise patterns of energy use over human working lifetimes.
Plume characteristics of MPD thrusters: A preliminary examination
NASA Technical Reports Server (NTRS)
Myers, Roger M.
1989-01-01
A diagnostics facility for MPD thruster plume measurements was built and is currently undergoing testing. The facility includes electrostatic probes for electron temperature and density measurements, Hall probes for magnetic field and current distribution mapping, and an imaging system to establish the global distribution of plasma species. Preliminary results for MPD thrusters operated at power levels between 30 and 60 kW with solenoidal applied magnetic fields show that the electron density decreases exponentially from 1x10(2) to 2x10(18)/cu m over the first 30 cm of the expansion, while the electron temperature distribution is relatively uniform, decreasing from approximately 2.5 eV to 1.5 eV over the same distance. The radiant intensity of the Ar II 4879 Å line emission also decays exponentially. Current distribution measurements indicate that a significant fraction of the discharge current is blown into the plume region, and that its distribution depends on the magnitudes of both the discharge current and the applied magnetic field.
Chernov, V; Paz-Moreno, F; Piters, T M; Barboza-Flores, M
2006-01-01
The paper presents the first results of an investigation on optical absorption (OA) and thermally and infrared stimulated luminescence (TL and IRSL) of the Pinacate plagioclase (labradorite). The OA spectra reveal two bands with maxima at 1.0 and 3.2 eV connected with absorption by Fe3+ and Fe2+, and IR absorption at wavelengths longer than 2700 nm. The ultraviolet absorption varies exponentially with the photon energy following the 'vitreous' empirical Urbach rule, indicating an exponential distribution of localised states in the forbidden band. The natural TL is peaked at 700 K. Laboratory beta irradiation creates a very broad TL peak with maximum at 430 K. The change of the 430 K TL peak shape under the thermal cleaning procedure and dark storage after irradiation reveals a monotonic increase of the activation energy that can be explained by the exponential distribution of traps. The IRSL response is weak and exhibits a typical decay behaviour.
A stochastic evolutionary model generating a mixture of exponential distributions
NASA Astrophysics Data System (ADS)
Fenner, Trevor; Levene, Mark; Loizou, George
2016-02-01
Recent interest in human dynamics has stimulated the investigation of the stochastic processes that explain human behaviour in various contexts, such as mobile phone networks and social media. In this paper, we extend the stochastic urn-based model proposed in [T. Fenner, M. Levene, G. Loizou, J. Stat. Mech. 2015, P08015 (2015)] so that it can generate mixture models, in particular, a mixture of exponential distributions. The model is designed to capture the dynamics of survival analysis, traditionally employed in clinical trials, reliability analysis in engineering, and more recently in the analysis of large data sets recording human dynamics. The mixture modelling approach, which is relatively simple and well understood, is very effective in capturing heterogeneity in data. We provide empirical evidence for the validity of the model, using a data set of popular search engine queries collected over a period of 114 months. We show that the survival function of these queries is closely matched by the exponential mixture solution for our model.
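A two-component mixture of exponentials, the kind of survival model this urn process generates, can be fitted with a plain EM loop. A hedged sketch on synthetic lifetimes (the data, the starting values and the two-component choice are assumptions):

```python
# EM for a two-component exponential mixture: w*Exp(m1) + (1-w)*Exp(m2).
import numpy as np

rng = np.random.default_rng(7)
x = np.concatenate([rng.exponential(1.0, 3000),
                    rng.exponential(10.0, 1000)])   # mixed synthetic lifetimes

w, m1, m2 = 0.5, 0.5, 5.0                           # initial weight and means
for _ in range(200):
    p1 = w * np.exp(-x / m1) / m1                   # component densities
    p2 = (1 - w) * np.exp(-x / m2) / m2
    r = p1 / (p1 + p2)                              # E-step: responsibilities
    w = r.mean()                                    # M-step: reweight and
    m1 = (r * x).sum() / r.sum()                    # refit each mean
    m2 = ((1 - r) * x).sum() / (1 - r).sum()

print(f"weights {w:.2f}/{1 - w:.2f}, means {m1:.2f}, {m2:.2f}")
```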
NASA Astrophysics Data System (ADS)
Wei, Xixiong; Deng, Wanling; Fang, Jielin; Ma, Xiaoyu; Huang, Junkai
2017-10-01
A physics-based, straightforward extraction technique for interface and bulk density of states in metal oxide semiconductor thin film transistors (TFTs) is proposed, using the capacitance-voltage (C-V) characteristics. The interface trap density distribution with energy has been extracted from the analysis of capacitance-voltage characteristics. Using the obtained interface state distribution, the bulk trap density has been determined. With this method, it is found that for the interface trap density, the deep-state density near the mid-gap is approximately constant and the tail-state density increases exponentially with energy; the bulk trap density is a superposition of exponential deep states and exponential tail states. The validity of the extraction is verified by comparisons with the measured current-voltage (I-V) characteristics and the simulation results by the technology computer-aided design (TCAD) model. This extraction method requires no numerical iteration and is simple, fast and accurate. Therefore, it is very useful for TFT device characterization.
Steinhauser, Marco; Hübner, Ronald
2009-10-01
It has been suggested that performance in the Stroop task is influenced by response conflict as well as task conflict. The present study investigated the idea that both conflict types can be isolated by applying ex-Gaussian distribution analysis which decomposes response time into a Gaussian and an exponential component. Two experiments were conducted in which manual versions of a standard Stroop task (Experiment 1) and a separated Stroop task (Experiment 2) were performed under task-switching conditions. Effects of response congruency and stimulus bivalency were used to measure response conflict and task conflict, respectively. Ex-Gaussian analysis revealed that response conflict was mainly observed in the Gaussian component, whereas task conflict was stronger in the exponential component. Moreover, task conflict in the exponential component was selectively enhanced under task-switching conditions. The results suggest that ex-Gaussian analysis can be used as a tool to isolate different conflict types in the Stroop task. PsycINFO Database Record (c) 2009 APA, all rights reserved.
Bian, Junjie; Yuan, Zhixiang; Chen, Xiaoliang; Gao, Yuan; Xu, Chaoqun; Shi, Jianyou
2016-01-01
To prepare a mixture of multiple-coated aniracetam nasal polylactic-acid nanoparticles (M-C-PLA-NP) and evaluate its stability preliminarily in vitro and its brain-targeting efficiency in vivo. The solvent diffusion-evaporation combined with magnetic stirring method was chosen for the entrapment of aniracetam. The M-C-PLA-NP was characterized with respect to its morphology, particle size, size distribution and aniracetam entrapment efficiency. The in vivo distribution was studied in male SD rats after an intranasal administration. In vitro release of M-C-PLA-NP showed two components, with an initial rapid release due to the surface-associated drug followed by a slower exponential release of the aniracetam dissolved in the core. The AUC(0-30 min) of M-C-PLA-NP in brain tissues showed a 5.19-fold increase compared with aniracetam solution. The ratios of AUC in brain to that in other tissues obtained after nasal application of M-C-PLA-NP were significantly higher than those of aniracetam solution. Therefore, it can be concluded that M-C-PLA-NP has potential for increasing the brain-targeting efficiency of drugs and may serve as a novel brain-targeting agent for nasal drug delivery.
Model selection for identifying power-law scaling.
Ton, Robert; Daffertshofer, Andreas
2016-08-01
Long-range temporal and spatial correlations have been reported in a remarkable number of studies. In particular power-law scaling in neural activity raised considerable interest. We here provide a straightforward algorithm not only to quantify power-law scaling but to test it against alternatives using (Bayesian) model comparison. Our algorithm builds on the well-established detrended fluctuation analysis (DFA). After removing trends of a signal, we determine its mean squared fluctuations in consecutive intervals. In contrast to DFA we use the values per interval to approximate the distribution of these mean squared fluctuations. This allows for estimating the corresponding log-likelihood as a function of interval size without presuming the fluctuations to be normally distributed, as is the case in conventional DFA. We demonstrate the validity and robustness of our algorithm using a variety of simulated signals, ranging from scale-free fluctuations with known Hurst exponents, via more conventional dynamical systems resembling exponentially correlated fluctuations, to a toy model of neural mass activity. We also illustrate its use for encephalographic signals. We further discuss confounding factors like the finite signal size. Our model comparison provides a proper means to identify power-law scaling including the range over which it is present. Copyright © 2016 Elsevier Inc. All rights reserved.
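A bare-bones DFA, the first stage of the algorithm described here, computes RMS fluctuations of the detrended, integrated signal per window size and reads the scaling exponent from a log-log fit. This sketch omits the authors' Bayesian model comparison; the window sizes and the white-noise test signal are assumptions:

```python
# Minimal detrended fluctuation analysis (DFA) with a log-log slope estimate.
import numpy as np

def dfa(signal, window_sizes, order=1):
    y = np.cumsum(signal - np.mean(signal))         # integrated profile
    F = []
    for s in window_sizes:
        n = len(y) // s
        ms = []
        for i in range(n):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, order), t)
            ms.append(np.mean((seg - trend) ** 2))  # mean squared fluctuation
        F.append(np.sqrt(np.mean(ms)))
    return np.asarray(F)

rng = np.random.default_rng(0)
x = rng.standard_normal(2 ** 14)                    # white noise: alpha ~ 0.5
sizes = (2 ** np.arange(4, 11)).astype(int)
F = dfa(x, sizes)
alpha = np.polyfit(np.log(sizes), np.log(F), 1)[0]
print(f"DFA exponent alpha ~ {alpha:.2f}")
```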
NASA Astrophysics Data System (ADS)
Sanford, W. E.
2015-12-01
Age distributions of base flow to streams are important to estimate for predicting the timing of water-quality responses to changes in distributed inputs of nutrients or pollutants at the land surface. Simple models of shallow aquifers will predict exponential age distributions, but more realistic 3-D stream-aquifer geometries will cause deviations from an exponential curve. In addition, in fractured rock terrains the dual nature of the effective and total porosity of the system complicates the age distribution further. In this study shallow groundwater flow and advective transport were simulated in two regions in the Eastern United States—the Delmarva Peninsula and the upper Potomac River basin. The former is underlain by layers of unconsolidated sediment, while the latter consists of folded and fractured sedimentary rocks. Transport of groundwater to streams was simulated using the USGS code MODPATH within 175 and 275 watersheds, respectively. For the fractured rock terrain, calculations were also performed along flow pathlines to account for exchange between mobile and immobile flow zones. Porosities at both sites were calibrated using environmental tracer data (3H, 3He, CFCs and SF6) in wells and springs, and with a 30-year tritium record from the Potomac River. Carbonate and siliciclastic rocks were calibrated to have mobile porosity values of one and six percent, and immobile porosity values of 18 and 12 percent, respectively. The age distributions were fitted to Weibull functions. Whereas an exponential function has one parameter that controls the median age of the distribution, a Weibull function has an extra parameter that controls the slope of the curve. A weighted Weibull function was also developed that potentially allows for four parameters, two that control the median age and two that control the slope, one of each weighted toward early or late arrival times. For both systems the two-parameter Weibull function nearly always produced a substantially better fit to the data than the one-parameter exponential function. For the single porosity system it was found that the use of three parameters was often optimal for accurately describing the base-flow age distribution, whereas for the dual porosity system the fourth parameter was often required to fit the more complicated response curves.
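The exponential-versus-Weibull comparison at the heart of this study is straightforward to reproduce in miniature. A hedged sketch on synthetic ages (the exponential is the Weibull with shape 1, so the extra shape parameter can only improve the log-likelihood):

```python
# Compare one-parameter exponential and two-parameter Weibull fits to ages.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
ages = rng.weibull(0.7, 5000) * 25.0           # synthetic groundwater ages (yr)

lam = ages.mean()                               # MLE scale of the exponential
ll_exp = stats.expon.logpdf(ages, scale=lam).sum()

shape, loc, scale = stats.weibull_min.fit(ages, floc=0.0)
ll_wei = stats.weibull_min.logpdf(ages, shape, loc, scale).sum()

print(f"log-likelihood: exponential {ll_exp:.0f}, Weibull {ll_wei:.0f}")
```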
Exploiting the Adaptation Dynamics to Predict the Distribution of Beneficial Fitness Effects
2016-01-01
Adaptation of asexual populations is driven by beneficial mutations and therefore the dynamics of this process, besides other factors, depends on the distribution of beneficial fitness effects. It is known that on uncorrelated fitness landscapes, this distribution can only be of three types: truncated, exponential and power law. We performed extensive stochastic simulations to study the adaptation dynamics on rugged fitness landscapes, and identified two quantities that can be used to distinguish the underlying distribution of beneficial fitness effects. The first quantity studied here is the fitness difference between successive mutations that spread in the population, which is found to decrease in the case of truncated distributions, remains nearly a constant for exponentially decaying distributions and increases when the fitness distribution decays as a power law. The second quantity of interest, namely, the rate of change of fitness with time also shows quantitatively different behaviour for different beneficial fitness distributions. The patterns displayed by the two aforementioned quantities are found to hold good for both low and high mutation rates. We discuss how these patterns can be exploited to determine the distribution of beneficial fitness effects in microbial experiments. PMID:26990188
NASA Astrophysics Data System (ADS)
Ying, G.; Sansalone, J.
2010-03-01
With respect to hydrologic processes, the impervious pavement interface significantly alters relationships between rainfall and runoff. Commensurate with alteration of hydrologic processes the pavement also facilitates transport and solubility of dry deposition particulate matter (PM) in runoff. This study examines dry depositional flux rates, granulometric modification by runoff transport, as well as generation of total dissolved solids (TDS), alkalinity and conductivity in source area runoff resulting from PM solubility. PM is collected from a paved source area transportation corridor (I-10) in Baton Rouge, Louisiana encompassing 17 dry deposition and 8 runoff events. The mass-based granulometric particle size distribution (PSD) is measured and modeled through a cumulative gamma function, while PM surface area distributions across the PSD follow a log-normal distribution. Dry deposition flux rates are modeled as separate first-order exponential functions of previous dry hours (PDH) for PM and suspended, settleable and sediment fractions. When trans-located from dry deposition into runoff, PSDs are modified, with a d50m decreasing from 331 to 14 μm after transport and 60 min of settling. Solubility experiments as a function of pH, contact time and particle size using source area rainfall generate constitutive models to reproduce pH, alkalinity, TDS and alkalinity for historical events. Equilibrium pH, alkalinity and TDS are strongly influenced by particle size and contact times. The constitutive leaching models are combined with measured PSDs from a series of rainfall-runoff events to demonstrate that the model results replicate alkalinity and TDS in runoff from the subject watershed. Results illustrate the granulometry of dry deposition PM, modification of PSDs along the drainage pathway, and the role of PM solubility for generation of TDS, alkalinity and conductivity in urban source area rainfall-runoff.
Agus, Emily L; Young, David T; Lingard, Justin J N; Smalley, Robert J; Tate, James E; Goodman, Paul S; Tomlin, Alison S
2007-11-01
Measurements of urban particle number concentrations and size distributions in the range 5-1000 nm were taken at elevated (roof-level) and roadside sampling sites on Narborough Road in Leicester, UK, along with simultaneous measurements of traffic, NO(x), CO and 1,3-butadiene concentrations and meteorological parameters. A fitting program was used to determine the characteristics of up to five modal groups present in the particle size distributions. All particle modal concentrations peaked during the morning and evening rush hours. Additional events associated with the smallest mode, which were not observed to be connected to primary emissions, were also present, suggesting that this mode consisted of newly formed secondary particles. These events included peaks in concentration which coincided with peaks in solar radiation, and lower concentrations of the larger modes. Investigation into the relationships between traffic flow and occupancy indicated three flow regimes: free-flow, unstable and congested. During free-flow conditions, positive linear relationships existed between traffic flow and particle modal number concentrations. However, during unstable and congested periods, this relationship was shown to break down. Similar trends were observed for concentrations of the gas phase pollutants NO(x), CO and 1,3-butadiene. Strong linear relationships existed between NO(x), CO, 1,3-butadiene concentrations, nucleation and Aitken mode concentrations at both sampling locations, indicating a local traffic-related emission source. At the roadside, both nucleation and Aitken mode are best represented by a decreasing exponential function with wind speed, whereas at the roof-level this relationship only occurred for Aitken mode particles. The differing relationships at the two sampling locations are most likely due to a combination of meteorological factors and distance from the local emission source.
NASA Technical Reports Server (NTRS)
Molthan, Andrew L.; Petersen, Walter A.; Case, Jonathan L.; Dembek, Scott R.; Jedlovec, Gary J.
2009-01-01
Increases in computational resources have allowed operational forecast centers to pursue experimental, high resolution simulations that resolve the microphysical characteristics of clouds and precipitation. These experiments are motivated by a desire to improve the representation of weather and climate, but will also benefit current and future satellite campaigns, which often use forecast model output to guide the retrieval process. Aircraft, surface and radar data from the Canadian CloudSat/CALIPSO Validation Project are used to check the validity of size distribution and density characteristics for snowfall simulated by the NASA Goddard six-class, single-moment bulk water microphysics scheme, currently available within the Weather Research and Forecasting (WRF) Model. Widespread snowfall developed across the region on January 22, 2007, forced by the passing of a midlatitude cyclone, and was observed by the dual-polarimetric C-band radar at King City, Ontario, as well as the NASA 94 GHz CloudSat Cloud Profiling Radar. Combined, these data sets provide key metrics for validating model output: estimates of size distribution parameters fit to the inverse-exponential equations prescribed within the model, bulk density and crystal habit characteristics sampled by the aircraft, and representation of size characteristics as inferred by the radar reflectivity at C- and W-band. Specified constants for distribution intercept and density differ significantly from observations throughout much of the cloud depth. Alternate parameterizations are explored, using column-integrated values of vapor excess to avoid problems encountered with temperature-based parameterizations in an environment where inversions and isothermal layers are present. Simulation of CloudSat reflectivity is performed by adopting the discrete-dipole parameterizations and databases provided in the literature, and demonstrates an improved capability in simulating radar reflectivity at W-band versus Mie scattering assumptions.
A non-Gaussian option pricing model based on Kaniadakis exponential deformation
NASA Astrophysics Data System (ADS)
Moretto, Enrico; Pasquali, Sara; Trivellato, Barbara
2017-09-01
A way to make financial models effective is by letting them represent the so-called "fat tails", i.e., extreme changes in stock prices that are regarded as almost impossible by the standard Gaussian distribution. In this article, the Kaniadakis deformation of the usual exponential function is used to define a random noise source in the dynamics of price processes capable of capturing such real market phenomena.
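The Kaniadakis deformed exponential referred to here has the closed form exp_k(x) = (sqrt(1 + k^2 x^2) + k x)^(1/k), which tends to the ordinary exponential as k -> 0 and develops power-law ("fat") tails for k > 0. A minimal numerical sketch, with the values of k assumed for illustration:

```python
# Kaniadakis kappa-exponential and its fat-tail behaviour.
import numpy as np

def exp_kappa(x, k):
    if k == 0.0:
        return np.exp(x)                  # ordinary exponential limit
    return (np.sqrt(1.0 + (k * x) ** 2) + k * x) ** (1.0 / k)

x = np.linspace(-5, 5, 11)
print(np.allclose(exp_kappa(x, 1e-8), np.exp(x)))   # True: recovers exp(x)
print(exp_kappa(-5.0, 0.5) / np.exp(-5.0))          # >1: heavier tail at k=0.5
```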
Human mammary epithelial cells exhibit a bimodal correlated random walk pattern.
Potdar, Alka A; Jeon, Junhwan; Weaver, Alissa M; Quaranta, Vito; Cummings, Peter T
2010-03-10
Organisms, at scales ranging from unicellular to mammals, have been known to exhibit foraging behavior described by random walks whose segments conform to Lévy or exponential distributions. For the first time, we present evidence that single cells (mammary epithelial cells) that exist in multi-cellular organisms (humans) follow a bimodal correlated random walk (BCRW). Cellular tracks of MCF-10A pBabe, neuN and neuT random migration on 2-D plastic substrates, analyzed using bimodal analysis, were found to reveal the BCRW pattern. We find two types of exponentially distributed correlated flights (corresponding to what we refer to as the directional and re-orientation phases), each having its own correlation between move step-lengths within flights. The exponential distribution of flight lengths was confirmed using different analysis methods (logarithmic binning with normalization, survival frequency plots and maximum likelihood estimation). Because of the presence of a non-uniform turn angle distribution of move step-lengths within a flight and two different types of flights, we propose that the epithelial random walk is a BCRW comprising two alternating modes with varying degrees of correlation, rather than a simple persistent random walk. A BCRW model rather than a simple persistent random walk correctly matches the super-diffusivity in the cell migration paths, as indicated by simulations based on the BCRW model.
NASA Astrophysics Data System (ADS)
Molina Garcia, Victor; Sasi, Sruthy; Efremenko, Dmitry; Doicu, Adrian; Loyola, Diego
2017-04-01
In this work, the requirements for the retrieval of cloud properties in the back-scattering region are described, and their application to the measurements taken by the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) is shown. Various radiative transfer models and their linearizations are implemented, and their advantages and issues are analyzed. As radiative transfer calculations in the back-scattering region are computationally time-consuming, several acceleration techniques are also studied. The radiative transfer models analyzed include the exact Discrete Ordinate method with Matrix Exponential (DOME), the Matrix Operator method with Matrix Exponential (MOME), and the approximate asymptotic and equivalent Lambertian cloud models. To reduce the computational cost of the line-by-line (LBL) calculations, the k-distribution method, the Principal Component Analysis (PCA) and a combination of the k-distribution method plus PCA are used. The linearized radiative transfer models for retrieval of cloud properties include the Linearized Discrete Ordinate method with Matrix Exponential (LDOME), the Linearized Matrix Operator method with Matrix Exponential (LMOME) and the Forward-Adjoint Discrete Ordinate method with Matrix Exponential (FADOME). These models were applied to the EPIC oxygen-A band absorption channel at 764 nm. It is shown that the approximate asymptotic and equivalent Lambertian cloud models give inaccurate results, so an offline processor for the retrieval of cloud properties in the back-scattering region requires the use of exact models such as DOME and MOME, which behave similarly. The combination of the k-distribution method plus PCA presents similar accuracy to the LBL calculations, but it is up to 360 times faster, and the relative errors for the computed radiances are less than 1.5% compared to the results when the exact phase function is used. Finally, the linearized models studied show similar behavior, with relative errors less than 1% for the radiance derivatives, but FADOME is 2 times faster than LDOME and 2.5 times faster than LMOME.
Land cover mapping at sub-pixel scales
NASA Astrophysics Data System (ADS)
Makido, Yasuyo Kato
One of the biggest drawbacks of land cover mapping from remotely sensed images relates to spatial resolution, which determines the level of spatial detail depicted in an image. Fine spatial resolution images from satellite sensors such as IKONOS and QuickBird are now available. However, these images are not suitable for large-area studies, since a single image covers only a small area, making large-area applications costly. Much research has focused on attempting to extract land cover types at sub-pixel scale, and little research has been conducted concerning the spatial allocation of land cover types within a pixel. This study is devoted to the development of new algorithms for predicting land cover distribution using remotely sensed imagery at the sub-pixel level. The "pixel-swapping" optimization algorithm, which was proposed by Atkinson for predicting sub-pixel land cover distribution, is investigated in this study. Two limitations of this method, the arbitrary spatial range value and the arbitrary exponential model of spatial autocorrelation, are assessed. Various weighting functions, as alternatives to the exponential model, are evaluated in order to derive the optimum weighting function. Two different simulation models were employed to develop spatially autocorrelated binary class maps. In all tested models, Gaussian, Exponential, and IDW, the pixel swapping method improved classification accuracy compared with the initial random allocation of sub-pixels. However, the results suggested that equal weight could be used to increase accuracy and sub-pixel spatial autocorrelation instead of using these more complex models of spatial structure. New algorithms for modeling the spatial distribution of multiple land cover classes at sub-pixel scales are developed and evaluated. Three methods are examined: sequential categorical swapping, simultaneous categorical swapping, and simulated annealing. These three methods are applied to classified Landsat ETM+ data that have been resampled to 210 meters. The result suggested that the simultaneous method can be considered the optimum method in terms of accuracy performance and computation time. The case study employs remote sensing imagery at the following sites: tropical forests in Brazil and a temperate multiple land mosaic in East China. Sub-areas for both sites are used to examine how the characteristics of the landscape affect the ability of the optimum technique. Three types of measurement, Moran's I, mean patch size (MPS), and patch size standard deviation (STDEV), are used to characterize the landscape. All results suggested that this technique could increase the classification accuracy more than traditional hard classification. The methods developed in this study can benefit researchers who employ coarse remote sensing imagery but are interested in detailed landscape information. In many cases, the satellite sensor that provides large spatial coverage has insufficient spatial detail to identify landscape patterns. Application of the super-resolution technique described in this dissertation could potentially solve this problem by providing detailed land cover predictions from the coarse resolution satellite sensor imagery.
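For concreteness, the pixel-swapping idea investigated above can be sketched in a few lines. This is a simplified single-region version with an inverse-distance weighting and periodic boundaries; the grid size, class proportion, neighbourhood radius and iteration count are all assumptions, not Atkinson's published settings:

```python
# Pixel-swapping sketch: swap sub-pixel class labels so that class proportions
# are preserved while spatial clustering (attractiveness) increases.
import numpy as np

rng = np.random.default_rng(5)
grid = (rng.random((20, 20)) < 0.3).astype(float)   # random initial allocation

def attractiveness(grid, radius=3):
    """Distance-weighted sum of neighbouring sub-pixel values."""
    A = np.zeros_like(grid)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == 0 and dy == 0:
                continue
            w = 1.0 / np.hypot(dx, dy)              # inverse-distance weight
            A += w * np.roll(np.roll(grid, dy, 0), dx, 1)  # periodic edges
    return A

for _ in range(500):
    A = attractiveness(grid)
    ones, zeros = np.argwhere(grid == 1), np.argwhere(grid == 0)
    worst_one = ones[np.argmin(A[grid == 1])]       # least attracted 1
    best_zero = zeros[np.argmax(A[grid == 0])]      # most attracted 0
    if A[tuple(best_zero)] <= A[tuple(worst_one)]:
        break                                       # no improving swap left
    grid[tuple(worst_one)], grid[tuple(best_zero)] = 0.0, 1.0
```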
A multidimensional stability model for predicting shallow landslide size and shape across landscapes
Milledge, David G; Bellugi, Dino; McKean, Jim A; Densmore, Alexander L; Dietrich, William E
2014-01-01
The size of a shallow landslide is a fundamental control on both its hazard and geomorphic importance. Existing models are either unable to predict landslide size or are computationally intensive such that they cannot practically be applied across landscapes. We derive a model appropriate for natural slopes that is capable of predicting shallow landslide size but simple enough to be applied over entire watersheds. It accounts for lateral resistance by representing the forces acting on each margin of potential landslides using earth pressure theory and by representing root reinforcement as an exponential function of soil depth. We test our model's ability to predict failure of an observed landslide where the relevant parameters are well constrained by field data. The model predicts failure for the observed scar geometry and finds that larger or smaller conformal shapes are more stable. Numerical experiments demonstrate that friction on the boundaries of a potential landslide increases considerably the magnitude of lateral reinforcement, relative to that due to root cohesion alone. We find that there is a critical depth in both cohesive and cohesionless soils, resulting in a minimum size for failure, which is consistent with observed size-frequency distributions. Furthermore, the differential resistance on the boundaries of a potential landslide is responsible for a critical landslide shape which is longer than it is wide, consistent with observed aspect ratios. Finally, our results show that minimum size increases as approximately the square of failure surface depth, consistent with observed landslide depth-area data. PMID:26213663
DOE Office of Scientific and Technical Information (OSTI.GOV)
Werner, G. R.; Uzdensky, D. A.; Cerutti, B.
Using two-dimensional particle-in-cell simulations, we characterize the energy spectra of particles accelerated by relativistic magnetic reconnection (without guide field) in collisionless electron–positron plasmas, for a wide range of upstream magnetizations σ and system sizes L. The particle spectra are well represented by a power law γ^(-α), with a combination of exponential and super-exponential high-energy cutoffs, proportional to σ and L, respectively. As a result, for large L and σ, the power-law index α approaches about 1.2.
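To make the quoted spectral form concrete, here is a minimal Python sketch (my own illustration, not the authors' code) of a power law with combined exponential and super-exponential cutoffs; the parameter values and cutoff scales are assumptions chosen only for illustration.

import numpy as np

# Power-law spectrum gamma^(-alpha) with an exponential cutoff at g1
# (which the abstract says grows with sigma) and a super-exponential
# cutoff at g2 (growing with L). All numbers are illustrative.
def spectrum(gamma, alpha=1.2, g1=1.0e3, g2=4.0e3):
    return gamma ** (-alpha) * np.exp(-gamma / g1 - (gamma / g2) ** 2)

gamma = np.logspace(0.0, 4.0, 200)  # particle Lorentz factors
f = spectrum(gamma)                 # un-normalized spectrum values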
NASA Astrophysics Data System (ADS)
Clarage, James Braun, II
1990-01-01
Methods have been developed for analyzing the diffuse x-ray scattering in the halos about a crystal's Bragg reflections as a means of determining correlations in atomic displacements in protein crystals. The diffuse intensity distribution for rhombohedral insulin, tetragonal lysozyme, and triclinic lysozyme crystals was best simulated in terms of exponential displacement correlation functions. About 90% of the disorder can be accounted for by internal movements correlated with a decay distance of about 6 Å; the remaining 10% corresponds to intermolecular movements that decay over a distance on the order of the size of the protein molecule. The results demonstrate that protein crystals fit into neither the Einstein nor the Debye paradigms for thermally fluctuating crystalline solids. Unlike the Einstein model, there are correlations in the atomic displacements, but these correlations decay more steeply with distance than predicted by the Debye-Waller model for an elastic solid. The observed displacement correlations are liquid-like in the sense that they decay exponentially with the distance between atoms, just as positional correlations do in a liquid. This liquid-like disorder is similar to the disorder observed in 2-D crystals of polystyrene latex spheres and similar systems where repulsive interactions dominate; hence, these colloidal crystals appear to provide a better analogy for the dynamics of protein crystals than perfectly elastic lattices.
Wang, Ping; Liu, Xiaoxia; Cao, Tian; Fu, Huihua; Wang, Ranran; Guo, Lixin
2016-09-20
The impact of nonzero boresight pointing errors on the system performance of decode-and-forward protocol-based multihop parallel optical wireless communication systems is studied. For the aggregated fading channel, the atmospheric turbulence is simulated by an exponentiated Weibull model, and pointing errors are described by a recently proposed statistical model including both boresight and jitter. The binary phase-shift keying subcarrier intensity modulation-based analytical average bit error rate (ABER) and outage probability expressions are derived for a nonidentically and independently distributed system. The ABER and outage probability are then analyzed with different turbulence strengths, receiving aperture sizes, structure parameters (P and Q), jitter variances, and boresight displacements. The results show that aperture averaging offers almost the same system performance improvement whether boresight is included or not, regardless of the values of P and Q. The performance enhancement owing to an increase in the number of cooperative paths (P) is more evident with nonzero boresight than with zero boresight (jitter only), whereas the performance deterioration due to an increasing number of hops (Q) with nonzero boresight is almost the same as that with zero boresight. Monte Carlo simulation is offered to verify the validity of the ABER and outage probability expressions.
Ait Kaci Azzou, S; Larribe, F; Froda, S
2016-10-01
In Ait Kaci Azzou et al. (2015) we introduced an Importance Sampling (IS) approach for estimating the demographic history of a sample of DNA sequences, the skywis plot. More precisely, we proposed a new nonparametric estimate of a population size that changes over time. We showed on simulated data that the skywis plot can work well in typical situations where the effective population size does not undergo very steep changes. In this paper, we introduce an iterative procedure which extends the previous method and gives good estimates under such rapid variations. In the iterative calibrated skywis plot we approximate the effective population size by a piecewise constant function, whose values are re-estimated at each step. These piecewise constant functions are used to generate the waiting times of non-homogeneous Poisson processes related to a coalescent process with mutation under a variable population size model. Moreover, the present IS procedure is based on a modified version of the Stephens and Donnelly (2000) proposal distribution. Finally, we apply the iterative calibrated skywis plot method to a simulated data set from a rapidly expanding exponential model, and we show that the method based on this new IS strategy correctly reconstructs the demographic history. Copyright © 2016. Published by Elsevier Inc.
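The waiting-time generation step mentioned above can be sketched in a few lines. The following Python fragment is my own illustration, assuming a piecewise-constant rate with hypothetical segment edges and rates; it inverts the integrated rate of a non-homogeneous Poisson process.

import numpy as np

rng = np.random.default_rng(42)

# First event time after t0 for a piecewise-constant rate function.
# edges: increasing right endpoints of the rate segments (last one inf);
# rates: the constant rate on each segment. Values below are hypothetical.
def waiting_time(t0, edges, rates, rng):
    t, e = t0, rng.exponential(1.0)   # e ~ Exp(1), spent across segments
    for right, lam in zip(edges, rates):
        if t >= right:
            continue                  # start time is already past this segment
        if lam * (right - t) >= e:    # event falls inside this segment
            return t + e / lam
        e -= lam * (right - t)        # consume the segment's integrated rate
        t = right

edges = [1.0, 2.5, float("inf")]      # hypothetical breakpoints
rates = [0.5, 2.0, 1.0]               # hypothetical piecewise-constant rates
t_next = waiting_time(0.0, edges, rates, rng)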
Estimating equivalence with quantile regression
Cade, B.S.
2011-01-01
Equivalence testing and corresponding confidence interval estimates are used to provide more enlightened statistical statements about parameter estimates by relating them to intervals of effect sizes deemed to be of scientific or practical importance rather than just to an effect size of zero. Equivalence tests and confidence interval estimates are based on a null hypothesis that a parameter estimate is either outside (inequivalence hypothesis) or inside (equivalence hypothesis) an equivalence region, depending on the question of interest and assignment of risk. The former approach, often referred to as bioequivalence testing, is often used in regulatory settings because it reverses the burden of proof compared to a standard test of significance, following a precautionary principle for environmental protection. Unfortunately, many applications of equivalence testing focus on establishing average equivalence by estimating differences in means of distributions that do not have homogeneous variances. I discuss how to compare equivalence across quantiles of distributions using confidence intervals on quantile regression estimates that detect differences in heterogeneous distributions missed by focusing on means. I used one-tailed confidence intervals based on inequivalence hypotheses in a two-group treatment-control design for estimating bioequivalence of arsenic concentrations in soils at an old ammunition testing site and bioequivalence of vegetation biomass at a reclaimed mining site. Two-tailed confidence intervals based both on inequivalence and equivalence hypotheses were used to examine quantile equivalence for negligible trends over time for a continuous exponential model of amphibian abundance. © 2011 by the Ecological Society of America.
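The quantile-based equivalence check described above can be sketched with off-the-shelf quantile regression; the following Python fragment is a hedged illustration (the data, column names, and equivalence bounds are invented, and statsmodels is used here in place of the author's workflow).

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

# Hypothetical two-group design: y in control (g = 0) and treatment (g = 1).
df = pd.DataFrame({"g": np.repeat([0, 1], 200),
                   "y": np.concatenate([rng.gamma(2.0, 1.0, 200),
                                        rng.gamma(2.0, 1.2, 200)])})

# The coefficient on g is the treatment-control difference at the 0.9 quantile.
res = smf.quantreg("y ~ g", df).fit(q=0.9)
lo, hi = res.conf_int(alpha=0.10).loc["g"]   # 90% two-sided CI

# Declare equivalence at this quantile if the CI lies inside a pre-specified
# equivalence region (placeholder bounds).
equivalent = (-0.5 < lo) and (hi < 0.5)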
NASA Technical Reports Server (NTRS)
Liu, J.; Tiwari, Surendra N.
1994-01-01
The two-dimensional spatially elliptic Navier-Stokes equations have been used to investigate the radiative interactions in chemically reacting compressible flows of premixed hydrogen and air in an expanding nozzle. The radiative heat transfer term in the energy equation is simulated using the Monte Carlo method (MCM). The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. The spectral correlation has been considered in the Monte Carlo formulations. Results obtained demonstrate that the effect of radiation on the flow field is minimal but its effect on the wall heat transfer is significant. Extensive parametric studies are conducted to investigate the effects of equivalence ratio, wall temperature, inlet flow temperature, and the nozzle size on the radiative and conductive wall fluxes.
NASA Astrophysics Data System (ADS)
Sangappa, Asha, S.; Sanjeev, Ganesh; Subramanya, G.; Parameswara, P.; Somashekar, R.
2010-01-01
The present work looks into the microstructural modification of electron-irradiated Bombyx mori P31 silk fibers. The irradiation was performed in air at room temperature using an 8 MeV electron accelerator at different doses: 0, 25, 50 and 100 kGy. Irradiation of polymers is used to cross-link or degrade the desired component or to fix the polymer morphology. The changes in microstructural parameters in these natural polymer fibers have been computed using wide angle X-ray scattering (WAXS) data and employing line profile analysis (LPA) using the Fourier transform technique of Warren. Exponential, Lognormal and Reinhold functions for the column length distributions have been used for the determination of crystal size, lattice strain and enthalpy parameters.
NASA Astrophysics Data System (ADS)
Favalli, A.; Furetta, C.; Zaragoza, E. Cruz; Reyes, A.
The aim of this work is to study the main thermoluminescence (TL) characteristics of the inorganic polyminerals extracted from dehydrated Jamaica flower or roselle (Hibiscus sabdariffa L.), belonging to the Malvaceae family, of Mexican origin. TL emission properties of the polymineral fraction in powder were studied using the initial rise (IR) method. The complex structure and kinetic parameters of the glow curves have been analysed accurately using computerized glow curve deconvolution (CGCD) assuming an exponential distribution of trapping levels. The extension of the IR method to the case of a continuous and exponential distribution of traps is reported, as is the derivation of the TL glow curve deconvolution functions for a continuous trap distribution. CGCD is performed both in the case of a temperature-independent frequency factor, s, and in the case where s is a function of temperature.
Empirical analysis of individual popularity and activity on an online music service system
NASA Astrophysics Data System (ADS)
Hu, Hai-Bo; Han, Ding-Yi
2008-10-01
Quantitative understanding of human behaviors supplies basic comprehension of the dynamics of many socio-economic systems. Based on the log data of an online music service system, we investigate the statistical characteristics of individual activity and popularity, and find that the distributions of both of them follow a stretched exponential form which interpolates between exponential and power law distribution. We also study the human dynamics on the online system and find that the distribution of interevent time between two consecutive listenings of music shows the fat tail feature. Besides, with the reduction of user activity the fat tail becomes more and more irregular, indicating different behavior patterns for users with diverse activities. The research results may shed some light on the in-depth understanding of collective behaviors in socio-economic systems.
NASA Astrophysics Data System (ADS)
George, Freya; Gaidies, Fred
2016-04-01
Analysis of porphyroblast distribution in metamorphic rocks yields insight into the processes controlling metamorphic reaction rates. By coupling this textural record with microprobe analysis and phase-equilibria and diffusion modelling, a detailed view of the nucleation and growth history of metamorphic minerals can be obtained. In this study, we comprehensively characterise the 3D distribution and compositional variation of a garnet population in a garnet-grade pelitic schist of the Lesser Himalayan Sequence (Sikkim), in order to investigate both the rates and kinetic controls of porphyroblastic crystallisation. Quantification of the size, shape and spatial distribution of garnet using high-resolution μ-computed X-ray tomography and statistical analysis reveals a log-normal crystal size distribution, systematic variation of aspect ratio with crystal size, and a significantly clustered garnet texture in the study sample. The latter is indicative of interface-controlled nucleation and growth, with nucleation sites controlled principally by a heterogeneous precursor assemblage. At length-scales less than 0.7 mm, there is evidence for adjacent grains that are on average smaller than the mean size of the population; this minor ordering is attributed to secondary redistribution of porphyroblast centers and reduction of crystal sizes due to syn-kinematic growth and resorption, respectively. Geochemical traverses through centrally sectioned garnet crystals of variable size highlight several features: (1) core compositions of even the smallest crystals preserve primary prograde growth zonation, with little evidence for diffusional modification in any crystal size; (2) rim compositions are within error between grains, suggestive of sample-scale equilibration of the growth medium at the time of cessation of crystallisation; (3) different grains of equal radii display equivalent compositional zoning; and (4) gradients of compositional profiles display a steepening trend in progressively smaller grain sizes, converse to anticipated trends based on classic kinetic crystallisation theory. The observed systematic behaviour is interpreted to reflect interface-controlled rates of crystallisation, with a decrease in the rate of crystal growth of newly nucleated grains as the crystallisation interval proceeds. Numerical simulations of garnet growth successfully reproduce observed core and rim compositions, and simulations of intracrystalline diffusion yield rapid heating/cooling rates along the P-T path, in excess of 100 °C/Ma. Radial garnet crystallisation is correspondingly rapid, with minimum growth rates of 1.5 mm/Ma in the smallest crystals. Simulations suggest progressive nucleation of new generations of garnet occurred with an exponentially decreasing frequency along the prograde path; however, measured gradients indicate that core compositions developed more slowly than predicted by the model, potentially resulting in a more evenly distributed pattern of nucleation.
Mathematical modelling of the growth of human fetus anatomical structures.
Dudek, Krzysztof; Kędzia, Wojciech; Kędzia, Emilia; Kędzia, Alicja; Derkowski, Wojciech
2017-09-01
The goal of this study was to present a procedure that would enable mathematical analysis of the increase of linear sizes of human anatomical structures, estimate mathematical model parameters and evaluate their adequacy. Section material consisted of 67 foetuses (rectus abdominis muscle) and 75 foetuses (biceps femoris muscle). The following methods were incorporated into the study: preparation and anthropologic methods, digital image acquisition, Image J computer system measurements and statistical analysis. We used an anthropologic method based on age determination with the use of crown-rump length (CRL, V-TUB) by Scammon and Calkins. The choice of mathematical function should be based on the real course of the curve presenting growth of an anatomical structure's linear size y in subsequent weeks t of pregnancy. Size changes can be described with a segmental-linear model or a one-function model with accuracy adequate for clinical purposes. The interdependence of size and age is described by many functions; however, the following are most often considered: linear, polynomial, spline, logarithmic, power, exponential, power-exponential, log-logistic I and II, Gompertz's I and II and von Bertalanffy's function. With the use of the procedures described above, mathematical model parameters were assessed for V-PL (total body length) and CRL body length increases, rectus abdominis total length h, its segments hI, hII, hIII, hIV, as well as biceps femoris length and width of the long head (LHL and LHW) and of the short head (SHL and SHW). The best fits to the measurement results were observed for the exponential and Gompertz's models.
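As a sketch of the model-fitting step described above, the fragment below fits two of the named candidate functions (exponential and Gompertz) to a synthetic size-versus-age series with scipy; the data and starting values are placeholders, not the study's measurements.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

def exponential(t, a, b):           # y = a * exp(b * t)
    return a * np.exp(b * t)

def gompertz(t, A, b, c):           # y = A * exp(-b * exp(-c * t))
    return A * np.exp(-b * np.exp(-c * t))

t = np.arange(12.0, 29.0)                            # gestational age, weeks
y = gompertz(t, 5.0, 2.0, 0.12) + 0.05 * rng.standard_normal(t.size)

p_exp, _ = curve_fit(exponential, t, y, p0=(1.0, 0.05))
p_gom, _ = curve_fit(gompertz, t, y, p0=(5.0, 2.0, 0.1))
# Adequacy of the two models can then be compared via residual sums of squares.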
Nanotechnology: toxicologic pathology.
Hubbs, Ann F; Sargent, Linda M; Porter, Dale W; Sager, Tina M; Chen, Bean T; Frazer, David G; Castranova, Vincent; Sriram, Krishnan; Nurkiewicz, Timothy R; Reynolds, Steven H; Battelli, Lori A; Schwegler-Berry, Diane; McKinney, Walter; Fluharty, Kara L; Mercer, Robert R
2013-02-01
Nanotechnology involves technology, science, and engineering in dimensions less than 100 nm. A virtually infinite number of potential nanoscale products can be produced from many different molecules and their combinations. The exponentially increasing number of nanoscale products will solve critical needs in engineering, science, and medicine. However, the virtually infinite number of potential nanotechnology products is a challenge for toxicologic pathologists. Because of their size, nanoparticulates can have therapeutic and toxic effects distinct from micron-sized particulates of the same composition. In the nanoscale, distinct intercellular and intracellular translocation pathways may provide a different distribution than that obtained by micron-sized particulates. Nanoparticulates interact with subcellular structures including microtubules, actin filaments, centrosomes, and chromatin; interactions that may be facilitated in the nanoscale. Features that distinguish nanoparticulates from fine particulates include increased surface area per unit mass and quantum effects. In addition, some nanotechnology products, including the fullerenes, have a novel and reactive surface. Augmented microscopic procedures including enhanced dark-field imaging, immunofluorescence, field-emission scanning electron microscopy, transmission electron microscopy, and confocal microscopy are useful when evaluating nanoparticulate toxicologic pathology. Thus, the pathology assessment is facilitated by understanding the unique features at the nanoscale and the tools that can assist in evaluating nanotoxicology studies.
Efficient computation of the joint sample frequency spectra for multiple populations.
Kamm, John A; Terhorst, Jonathan; Song, Yun S
2017-01-01
A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity.
Jurgens, Bryant; Böhlke, John Karl; Kauffman, Leon J.; Belitz, Kenneth; Esser, Bradley K.
2016-01-01
A partial exponential lumped parameter model (PEM) was derived to determine age distributions and nitrate trends in long-screened production wells. The PEM can simulate age distributions for wells screened over any finite interval of an aquifer that has an exponential distribution of age with depth. The PEM has three parameters (the ratios of saturated thickness to the depths of the top and bottom of the screen, and the mean age), but these can be reduced to one parameter (mean age) by using well construction information and estimates of the saturated thickness. The PEM was tested with data from 30 production wells in a heterogeneous alluvial fan aquifer in California, USA. Well construction data were used to guide parameterization of a PEM for each well, and mean age was calibrated to measured environmental tracer data (3H, 3He, CFC-113, and 14C). Results were compared to age distributions generated for individual wells using advective particle tracking models (PTMs). Age distributions from PTMs were more complex than PEM distributions, but PEMs provided better fits to tracer data, partly because the PTMs did not simulate 14C accurately in wells that captured varying amounts of old groundwater recharged at lower rates prior to groundwater development and irrigation. Nitrate trends were simulated independently of the calibration process, and the PEM provided good fits for at least 11 of 24 wells. This work shows that the PEM, and lumped parameter models (LPMs) in general, can often identify critical features of the age distributions in wells that are needed to explain observed tracer data and nonpoint source contaminant trends, even in systems where aquifer heterogeneity and water use complicate distributions of age. While accurate PTMs are preferable for understanding and predicting aquifer-scale responses to water use and contaminant transport, LPMs can be sensitive to local conditions near individual wells that may be inaccurately represented or missing in an aquifer-scale flow model.
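The "partial exponential" idea lends itself to a short numerical sketch. The fragment below is my own illustration, assuming the classical exponential (Vogel-type) age-depth relation and uniform sampling across the screened interval; the mean age and screen depths are placeholders, and this is not the published PEM code.

import numpy as np

# Under the exponential age-depth model, water at fractional depth z below
# the water table has age t(z) = tau * ln(1 / (1 - z)), where tau is the
# mean age of the full aquifer. A well screened from z_top to z_bot samples
# only that slice of the exponential age distribution.
tau = 30.0                        # mean age of the full aquifer, years (assumed)
z_top, z_bot = 0.4, 0.8           # fractional depths of screen top and bottom

z = np.linspace(z_top, z_bot, 100_001)
ages = tau * np.log(1.0 / (1.0 - z))
mean_age = ages.mean()            # mean age sampled by the partial interval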
Improving Bed Management at Wright-Patterson Medical Center
1989-09-01
arrival distributions are Poisson, as in Sim2, then interarrival times are distributed exponentially (Budnick, McLeavey, and Mojena, 1988:770). While... McLeavey, D. and Mojena, R., Principles of Operations Research for Management (second edition). Homewood, IL: Irwin, 1988. Cannoodt, L. J. and
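The fact quoted in this fragment (Poisson arrivals imply exponentially distributed interarrival times) is easy to check numerically; a minimal sketch:

import numpy as np

rng = np.random.default_rng(3)

# Simulate Poisson arrivals with rate lam on [0, T]: given the count, the
# arrival times are uniform, and the gaps between sorted arrivals are Exp(lam).
lam, T = 4.0, 10_000.0
n = rng.poisson(lam * T)
times = np.sort(rng.uniform(0.0, T, n))
gaps = np.diff(times)
print(gaps.mean(), 1.0 / lam)     # both close to 0.25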
Directionality theory and the evolution of body size.
Demetrius, L
2000-12-07
Directionality theory, a dynamic theory of evolution that integrates population genetics with demography, is based on the concept of evolutionary entropy, a measure of the variability in the age of reproducing individuals in a population. The main tenets of the theory are three principles relating the response to the ecological constraints a population experiences, with trends in entropy as the population evolves under mutation and natural selection. (i) Stationary size or fluctuations around a stationary size (bounded growth): a unidirectional increase in entropy; (ii) prolonged episodes of exponential growth (unbounded growth), large population size: a unidirectional decrease in entropy; and (iii) prolonged episodes of exponential growth (unbounded growth), small population size: random, non-directional change in entropy. We invoke these principles, together with an allometric relationship between entropy, and the morphometric variable body size, to provide evolutionary explanations of three empirical patterns pertaining to trends in body size, namely (i) Cope's rule, the tendency towards size increase within phyletic lineages; (ii) the island rule, which pertains to changes in body size that occur as species migrate from mainland populations to colonize island habitats; and (iii) Bergmann's rule, the tendency towards size increase with increasing latitude. The observation that these ecotypic patterns can be explained in terms of the directionality principles for entropy underscores the significance of evolutionary entropy as a unifying concept in forging a link between micro-evolution, the dynamics of gene frequency change, and macro-evolution, dynamic changes in morphometric variables.
Compact continuous-variable entanglement distillation.
Datta, Animesh; Zhang, Lijian; Nunn, Joshua; Langford, Nathan K; Feito, Alvaro; Plenio, Martin B; Walmsley, Ian A
2012-02-10
We introduce a new scheme for continuous-variable entanglement distillation that requires only linear temporal and constant physical or spatial resources. Distillation is the process by which high-quality entanglement may be distributed between distant nodes of a network in the unavoidable presence of decoherence. The known versions of this protocol scale exponentially in space and doubly exponentially in time. Our optimal scheme therefore provides exponential improvements over existing protocols. It uses a fixed-resource module, an entanglement distillery, comprising only four quantum memories of at most 50% storage efficiency and allowing a feasible experimental implementation. Tangible quantum advantages are obtainable by using existing off-resonant Raman quantum memories outside their conventional role of storage.
Geometrical effects on the electron residence time in semiconductor nano-particles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koochi, Hakimeh; Ebrahimi, Fatemeh, E-mail: f-ebrahimi@birjand.ac.ir; Solar Energy Research Group, University of Birjand, Birjand
2014-09-07
We have used random walk (RW) numerical simulations to investigate the influence of geometry on the statistics of the electron residence time τ_r in a trap-limited diffusion process through semiconductor nano-particles. This is an important parameter in coarse-grained modeling of charge carrier transport in nano-structured semiconductor films. The traps have been distributed randomly on the surface (r² model) or through the whole particle (r³ model) with a specified density. The trap energies have been taken from an exponential distribution and the trap release time is assumed to be a stochastic variable. We have carried out RW simulations to study the effect of coordination number, the spatial arrangement of the neighbors and the size of nano-particles on the statistics of τ_r. It has been observed that by increasing the coordination number n, the average electron residence time τ̄_r rapidly decreases to an asymptotic value. For a fixed coordination number n, the electron's mean residence time does not depend on the neighbors' spatial arrangement. In other words, τ̄_r is a porosity-dependent, local parameter which generally varies remarkably from site to site, unless we are dealing with highly ordered structures. We have also examined the effect of nano-particle size d on the statistical behavior of τ̄_r. Our simulations indicate that for a volume distribution of traps, τ̄_r scales as d². For a surface distribution of traps, τ̄_r increases almost linearly with d. This leads to the prediction of a linear dependence of the diffusion coefficient D on the particle size d in ordered structures, or in random structures above the critical concentration, which is in accordance with experimental observations.
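The exponential-energy trap assumption above has a compact numerical reading: release times drawn this way are power-law distributed. A sketch under assumed (illustrative) energy scales, my own construction rather than the authors' simulation:

import numpy as np

rng = np.random.default_rng(5)

# Multiple-trapping picture: a trap of depth E releases after an exponential
# time with mean t0 * exp(E / kT). Exponentially distributed depths E then
# give power-law distributed release times. All values are illustrative.
kT, E0, t0 = 0.025, 0.1, 1e-12          # eV, eV, s
E = rng.exponential(E0, size=100_000)   # random trap depths
release = rng.exponential(t0 * np.exp(E / kT))   # stochastic release times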
Shallow cumuli ensemble statistics for development of a stochastic parameterization
NASA Astrophysics Data System (ADS)
Sakradzija, Mirjana; Seifert, Axel; Heus, Thijs
2014-05-01
According to the conventional deterministic approach to the parameterization of moist convection in numerical atmospheric models, a given large-scale forcing produces a unique response from the unresolved convective processes. This representation leaves out the small-scale variability of convection: as is known from empirical studies of deep and shallow convective cloud ensembles, there is a whole distribution of sub-grid states corresponding to a given large-scale forcing. Moreover, this distribution gets broader with increasing model resolution. This behavior is also consistent with our theoretical understanding of a coarse-grained nonlinear system. We propose an approach to represent the variability of the unresolved shallow-convective states, including the dependence of the spread and shape of the sub-grid state distribution on the model horizontal resolution. Starting from the Gibbs canonical ensemble theory, Craig and Cohen (2006) developed a theory for the fluctuations in a deep convective ensemble. The micro-states of a deep convective cloud ensemble are characterized by the cloud-base mass flux, which, according to the theory, is exponentially distributed (Boltzmann distribution). Following their work, we study the shallow cumulus ensemble statistics and the distribution of the cloud-base mass flux. We employ a Large-Eddy Simulation model (LES) and a cloud tracking algorithm, followed by conditional sampling of clouds at the cloud base level, to retrieve information about the individual cloud life cycles and the cloud ensemble as a whole. In the case of a shallow cumulus cloud ensemble, the distribution of micro-states is a generalized exponential distribution. Based on the empirical and theoretical findings, a stochastic model has been developed to simulate the shallow convective cloud ensemble and to test the convective ensemble theory. The stochastic model simulates a compound random process, with the number of convective elements drawn from a Poisson distribution and cloud properties sub-sampled from a generalized ensemble distribution. We study the role of the different cloud subtypes in a shallow convective ensemble and how the diverse cloud properties and cloud lifetimes affect the system macro-state. To what extent does the cloud-base mass flux distribution deviate from the simple Boltzmann distribution, and how does it affect the results from the stochastic model? Is the memory, provided by the finite lifetime of individual clouds, of importance for the ensemble statistics? We also test for the minimal information given as input to the stochastic model that is able to reproduce the ensemble mean statistics and the variability in a convective ensemble. An important property of the resulting distribution of the sub-grid convective states is its scale-adaptivity: the smaller the grid size, the broader the compound distribution of the sub-grid states.
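The compound random process described above is simple to prototype. The sketch below is my own minimal version (Poisson number of clouds, exponential cloud-base mass fluxes); the parameter values are placeholders, and the paper's generalized ensemble distribution is replaced by a plain exponential for brevity.

import numpy as np

rng = np.random.default_rng(8)

# Sub-grid total mass flux = sum over a Poisson number of clouds of
# exponentially distributed per-cloud mass fluxes.
def subgrid_mass_flux(mean_n, mean_m, samples, rng):
    n = rng.poisson(mean_n, size=samples)
    return np.array([rng.exponential(mean_m, k).sum() for k in n])

coarse = subgrid_mass_flux(mean_n=500, mean_m=1e7, samples=2000, rng=rng)
fine = subgrid_mass_flux(mean_n=5, mean_m=1e7, samples=2000, rng=rng)
# Relative fluctuations grow as <N> shrinks, i.e. as the grid gets finer:
print(coarse.std() / coarse.mean(), fine.std() / fine.mean())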
Transient hydrodynamic finite-size effects in simulations under periodic boundary conditions
NASA Astrophysics Data System (ADS)
Asta, Adelchi J.; Levesque, Maximilien; Vuilleumier, Rodolphe; Rotenberg, Benjamin
2017-06-01
We use lattice-Boltzmann and analytical calculations to investigate transient hydrodynamic finite-size effects induced by the use of periodic boundary conditions. These effects are inevitable in simulations at the molecular, mesoscopic, or continuum levels of description. We analyze the transient response to a local perturbation in the fluid and obtain the local velocity correlation function via linear response theory. This approach is validated by comparing the finite-size effects on the steady-state velocity with the known results for the diffusion coefficient. We next investigate the full time dependence of the local velocity autocorrelation function. We find at long times a crossover between the expected t^(-3/2) hydrodynamic tail and an oscillatory exponential decay, and study the scaling with the system size of the crossover time, exponential rate and amplitude, and oscillation frequency. We interpret these results from the analytic solution of the compressible Navier-Stokes equation for the slowest modes, which are set by the system size. The present work not only provides a comprehensive analysis of hydrodynamic finite-size effects in bulk fluids, which arise regardless of the level of description and simulation algorithm, but also establishes the lattice-Boltzmann method as a suitable tool to investigate such effects in general.
Seo, Nieun; Chung, Yong Eun; Park, Yung Nyun; Kim, Eunju; Hwang, Jinwoo; Kim, Myeong-Jin
2018-07-01
To compare the ability of diffusion-weighted imaging (DWI) parameters acquired from three different models for the diagnosis of hepatic fibrosis (HF). Ninety-five patients underwent DWI using nine b values at 3 T magnetic resonance. The hepatic apparent diffusion coefficient (ADC) from a mono-exponential model, the true diffusion coefficient (D_t), pseudo-diffusion coefficient (D_p) and perfusion fraction (f) from a bi-exponential model, and the distributed diffusion coefficient (DDC) and intravoxel heterogeneity index (α) from a stretched exponential model were compared with the pathological HF stage. For the stretched exponential model, parameters were also obtained using a dataset of six b values (DDC#, α#). The diagnostic performances of the parameters for HF staging were evaluated with Obuchowski measures and receiver operating characteristic (ROC) analysis. The measurement variability of DWI parameters was evaluated using the coefficient of variation (CoV). Diagnostic accuracy for HF staging was highest for DDC# (Obuchowski measures, 0.770 ± 0.03), and it was significantly higher than that of ADC (0.597 ± 0.05, p < 0.001), D_t (0.575 ± 0.05, p < 0.001) and f (0.669 ± 0.04, p = 0.035). The parameters from stretched exponential DWI and D_p showed higher areas under the ROC curve (AUCs) for determining significant fibrosis (≥F2) and cirrhosis (F = 4) than other parameters. However, D_p showed significantly higher measurement variability (CoV, 74.6%) than DDC# (16.1%, p < 0.001) and α# (15.1%, p < 0.001). Stretched exponential DWI is a promising method for HF staging with good diagnostic performance and fewer b-value acquisitions, allowing shorter acquisition time. • Stretched exponential DWI provides a precise and accurate model for HF staging. • Stretched exponential DWI parameters are more reliable than D_p from the bi-exponential DWI model. • Acquisition of six b values is sufficient to obtain accurate DDC and α.
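For readers unfamiliar with the three signal models compared above, here is a hedged Python sketch (S0 normalized to 1; the b values and parameters are illustrative, not the study's):

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

def mono(b, adc):                      # mono-exponential: S = exp(-b * ADC)
    return np.exp(-b * adc)

def biexp(b, f, dp, dt):               # bi-exponential (perfusion + diffusion)
    return f * np.exp(-b * dp) + (1.0 - f) * np.exp(-b * dt)

def stretched(b, ddc, alpha):          # stretched exponential
    return np.exp(-(b * ddc) ** alpha)

b = np.array([0.0, 25, 50, 100, 200, 400, 800, 1000, 1500])   # s/mm^2
s = stretched(b, 1.2e-3, 0.75) + 0.005 * rng.standard_normal(b.size)
(ddc, alpha), _ = curve_fit(stretched, b, s, p0=(1e-3, 0.8),
                            bounds=(0.0, [0.1, 1.0]))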
NASA Astrophysics Data System (ADS)
Hardiyanti, Y.; Haekal, M.; Waris, A.; Haryanto, F.
2016-08-01
This research compares quadratic optimization programs for Intensity Modulated Radiation Therapy Treatment Planning (IMRTP) using the Computational Environment for Radiotherapy Research (CERR) software. The number of beams used by the treatment planner was assumed to be 9 or 13. The case used an energy of 6 MV with a Source-Skin Distance (SSD) of 100 cm from the target volume. Dose calculations used the Quadratic Infinite Beam (QIB) method from CERR. CERR was used to compare the Gauss primary threshold method and the Gauss primary exponential method. In the lung cancer case, threshold values of 0.01 and 0.004 were used. The resulting dose distributions were analysed in the form of DVHs from CERR. With the exponential dose calculation method and 9 beams, the maximum dose distributions were obtained on the Planning Target Volume (PTV), Clinical Target Volume (CTV), Gross Tumor Volume (GTV), liver, and skin. With the threshold method and 13 beams, the maximum dose distributions were obtained on the PTV, GTV, heart, and skin.
A Hierarchical Bayesian Model for Calibrating Estimates of Species Divergence Times
Heath, Tracy A.
2012-01-01
In Bayesian divergence time estimation methods, incorporating calibrating information from the fossil record is commonly done by assigning prior densities to ancestral nodes in the tree. Calibration prior densities are typically parametric distributions offset by minimum age estimates provided by the fossil record. Specification of the parameters of calibration densities requires the user to quantify his or her prior knowledge of the age of the ancestral node relative to the age of its calibrating fossil. The values of these parameters can, potentially, result in biased estimates of node ages if they lead to overly informative prior distributions. Accordingly, determining parameter values that lead to adequate prior densities is not straightforward. In this study, I present a hierarchical Bayesian model for calibrating divergence time analyses with multiple fossil age constraints. This approach applies a Dirichlet process prior as a hyperprior on the parameters of calibration prior densities. Specifically, this model assumes that the rate parameters of exponential prior distributions on calibrated nodes are distributed according to a Dirichlet process, whereby the rate parameters are clustered into distinct parameter categories. Both simulated and biological data are analyzed to evaluate the performance of the Dirichlet process hyperprior. Compared with fixed exponential prior densities, the hierarchical Bayesian approach results in more accurate and precise estimates of internal node ages. When this hyperprior is applied using Markov chain Monte Carlo methods, the ages of calibrated nodes are sampled from mixtures of exponential distributions and uncertainty in the values of calibration density parameters is taken into account. PMID:22334343
NASA Astrophysics Data System (ADS)
Abas, Norzaida; Daud, Zalina M.; Yusof, Fadhilah
2014-11-01
A stochastic rainfall model is presented for the generation of hourly rainfall data in an urban area in Malaysia. In view of the high temporal and spatial variability of rainfall within the tropical rain belt, the Spatial-Temporal Neyman-Scott Rectangular Pulse model was used. The model, which is governed by the Neyman-Scott process, employs a reasonable number of parameters to represent the physical attributes of rainfall. A common approach is to attach each attribute to a mathematical distribution. With respect to rain cell intensity, this study proposes the use of a mixed exponential distribution. The performance of the proposed model was compared to a model that employs the Weibull distribution. Hourly and daily rainfall data from four stations in the Damansara River basin in Malaysia were used as input to the models, and simulations of hourly series were performed for an independent site within the basin. The performance of the models was assessed based on how closely the statistical characteristics of the simulated series resembled the statistics of the observed series. The findings obtained based on graphical representation revealed that the statistical characteristics of the simulated series for both models compared reasonably well with the observed series. However, a further assessment using the AIC, BIC and RMSE showed that the proposed model yields better results. The results of this study indicate that for tropical climates, the proposed model, using a mixed exponential distribution, is the best choice for generation of synthetic data for ungauged sites or for sites with insufficient data within the limit of the fitted region.
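A mixed (two-component) exponential is straightforward to sample, which is the core of the rain cell intensity assumption above; the mixing weight and means below are placeholders, not the fitted basin values.

import numpy as np

rng = np.random.default_rng(4)

# With probability p an intensity comes from Exp(mean mu1), else Exp(mean mu2).
def mixed_exponential(p, mu1, mu2, size, rng):
    pick = rng.random(size) < p
    return np.where(pick, rng.exponential(mu1, size), rng.exponential(mu2, size))

intensities = mixed_exponential(p=0.7, mu1=1.0, mu2=8.0, size=10_000, rng=rng)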
Huntsman, Brock M.; Petty, J. Todd
2014-01-01
Spatial population models predict strong density-dependence and relatively stable population dynamics near the core of a species' distribution with increasing variance and importance of density-independent processes operating towards the population periphery. Using a 10-year data set and an information-theoretic approach, we tested a series of candidate models considering density-dependent and density-independent controls on brook trout population dynamics across a core-periphery distribution gradient within a central Appalachian watershed. We sampled seven sub-populations with study sites ranging in drainage area from 1.3–60 km² and long-term average densities ranging from 0.335–0.006 trout/m. Modeled response variables included per capita population growth rate of young-of-the-year, adult, and total brook trout. We also quantified a stock-recruitment relationship for the headwater population and coefficients of variability in mean trout density for all sub-populations over time. Density-dependent regulation was prevalent throughout the study area regardless of stream size. However, density-independent temperature models carried substantial weight and likely reflect the effect of year-to-year variability in water temperature on trout dispersal between cold tributaries and warm main stems. Estimated adult carrying capacities decreased exponentially with increasing stream size from 0.24 trout/m in headwaters to 0.005 trout/m in the main stem. Finally, temporal variance in brook trout population size was lowest in the high-density headwater population, tended to peak in mid-sized streams and declined slightly in the largest streams with the lowest densities. Our results provide support for the hypothesis that local density-dependent processes have a strong control on brook trout dynamics across the entire distribution gradient. However, the mechanisms of regulation likely shift from competition for limited food and space in headwater streams to competition for thermal refugia in larger main stems. It also is likely that source-sink dynamics and dispersal from small headwater habitats may partially influence brook trout population dynamics in the main stem. PMID:24618602
In Situ Balloon-Borne Ice Particle Imaging in High-Latitude Cirrus
NASA Astrophysics Data System (ADS)
Kuhn, Thomas; Heymsfield, Andrew J.
2016-09-01
Cirrus clouds reflect incoming solar radiation, creating a cooling effect. At the same time, these clouds absorb the infrared radiation from the Earth, creating a greenhouse effect. The net effect, crucial for radiative transfer, depends on the cirrus microphysical properties, such as particle size distributions and particle shapes. Knowledge of these cloud properties is also needed for calibrating and validating passive and active remote sensors. Ice particles of sizes below 100 µm are inherently difficult to measure with aircraft-mounted probes due to issues with resolution, sizing, and size-dependent sampling volume. Furthermore, artefacts are produced by shattering of particles on the leading surfaces of the aircraft probes when particles several hundred microns or larger are present. Here, we report on a series of balloon-borne in situ measurements that were carried out at a high-latitude location, Kiruna in northern Sweden (68°N, 21°E). The method used here avoids the issues experienced with aircraft probes. Furthermore, with a balloon-borne instrument, data are collected as vertical profiles, more useful for calibrating or evaluating remote sensing measurements than data collected along horizontal traverses. Particles are collected on an oil-coated film at a sampling speed given directly by the ascent rate of the balloon, 4 m s⁻¹. The collecting film is advanced uniformly inside the instrument so that an always unused section of the film is exposed to ice particles, which are measured by imaging shortly after sampling. The high optical resolution of about 4 µm together with a pixel resolution of 1.65 µm allows particle detection at sizes of 10 µm and larger. For particles that are 20 µm (12 pixels) in size or larger, the shape can be recognized. The sampling volume, 130 cm³ s⁻¹, is well defined and independent of particle size. With the encountered number concentrations of between 4 and 400 L⁻¹, this required sampling times of about 90 s down to 4 s to determine particle size distributions of cloud layers. Depending on how ice particles vary through the cloud, several layers per cloud with relatively uniform properties have been analysed. Preliminary results of the balloon campaign, targeting upper tropospheric, cold cirrus clouds, are presented here. Ice particles in these clouds were predominantly very small, with a median size of measured particles of around 50 µm and about 80% of all particles below 100 µm in size. The properties of the particle size distributions at temperatures between -36 and -67 °C have been studied, as well as particle areas, extinction coefficients, and their shapes (area ratios). Gamma and log-normal distribution functions could be fitted to all measured particle size distributions, achieving very good correlation with coefficients R of up to 0.95. Each distribution features one distinct mode. With decreasing temperature, the mode diameter decreases exponentially, whereas the total number concentration increases by two orders of magnitude over the same range. The high concentrations at cold temperatures also caused larger extinction coefficients, determined directly from cross-sectional areas of single ice particles, than at warmer temperatures. The mass of particles has been estimated from area and size. Ice water content (IWC) and effective diameters are then determined from the data. IWC varied only between 1 × 10⁻³ and 5 × 10⁻³ g m⁻³ at temperatures below -40 °C and did not show a clear temperature trend.
These measurements are part of an ongoing study.
NASA Astrophysics Data System (ADS)
Ikeda, Nobutoshi
2017-12-01
In network models that take into account growth properties, deletion of old nodes has a serious impact on degree distributions, because old nodes tend to become hub nodes. In this study, we aim to provide a simple explanation for why hubs can exist even in conditions where the number of nodes is stationary due to the deletion of old nodes. We show that an exponential increase in the degree of nodes is a natural consequence of the balance between the deletion and addition of nodes as long as a preferential attachment mechanism holds. As a result, the largest degree is determined by the magnitude relationship between the time scale of the exponential growth of degrees and the lifetime of old nodes. The degree distribution exhibits a power-law form ∼ k^(-γ) with exponent γ = 1 when the lifetime of nodes is constant. However, various values of γ can be realized by introducing a distributed lifetime of nodes.
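The abstract's argument can be reproduced in a few lines: exponentially growing degrees plus a constant node lifetime (hence a uniform age distribution at stationarity) give a ∼k^(-1) degree distribution. A sketch with assumed parameters:

import numpy as np

rng = np.random.default_rng(6)

# k(a) = k0 * exp(a / tau) for a node of age a; ages are uniform on [0, A]
# when nodes are deleted after a constant lifetime A. Values are illustrative.
k0, tau, A = 2.0, 10.0, 60.0
ages = rng.uniform(0.0, A, 100_000)
k = k0 * np.exp(ages / tau)

bins = np.logspace(np.log10(k0), np.log10(k.max()), 30)
hist, _ = np.histogram(k, bins=bins, density=True)
# hist falls off as ~1/k between k0 and the cutoff k0 * exp(A / tau).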
Level crossings and excess times due to a superposition of uncorrelated exponential pulses
NASA Astrophysics Data System (ADS)
Theodorsen, A.; Garcia, O. E.
2018-01-01
A well-known stochastic model for intermittent fluctuations in physical systems is investigated. The model is given by a superposition of uncorrelated exponential pulses, and the degree of pulse overlap is interpreted as an intermittency parameter. Expressions for excess time statistics, that is, the rate of level crossings above a given threshold and the average time spent above the threshold, are derived from the joint distribution of the process and its derivative. Limits of both high and low intermittency are investigated and compared to previously known results. In the case of a strongly intermittent process, the distribution of times spent above threshold is obtained analytically. This expression is verified numerically, and the distribution of times above threshold is explored for other intermittency regimes. The numerical simulations compare favorably to known results for the distribution of times above the mean threshold for an Ornstein-Uhlenbeck process. This contribution generalizes the excess time statistics for the stochastic model, which find applications in a wide diversity of natural and technological systems.
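A direct simulation of this process makes the excess-time statistics tangible. The sketch below is my own construction with illustrative parameters: it superposes one-sided exponential pulses with Poisson arrival times and exponentially distributed amplitudes, then counts threshold up-crossings and the time spent above the threshold.

import numpy as np

rng = np.random.default_rng(9)

td, gamma = 1.0, 5.0            # pulse duration; intermittency (overlap) parameter
T, dt = 200.0, 0.01             # record length and time step
t = np.arange(0.0, T, dt)

n_pulses = rng.poisson(gamma * T / td)
arrivals = rng.uniform(0.0, T, n_pulses)
amps = rng.exponential(1.0, n_pulses)

x = np.zeros_like(t)
for s, a in zip(arrivals, amps):
    m = t >= s
    x[m] += a * np.exp(-(t[m] - s) / td)   # one-sided exponential pulse

thr = x.mean() + 2.0 * x.std()
above = x > thr
n_up = np.count_nonzero(~above[:-1] & above[1:])   # number of up-crossings
avg_time_above = above.mean() * T / max(n_up, 1)   # mean time spent above threshold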
Improved Results for Route Planning in Stochastic Transportation Networks
NASA Technical Reports Server (NTRS)
Boyan, Justin; Mitzenmacher, Michael
2000-01-01
In the bus network problem, the goal is to generate a plan for getting from point X to point Y within a city using buses in the smallest expected time. Because bus arrival times are not determined by a fixed schedule but instead may be random, the problem requires more than standard shortest path techniques. In recent work, Datar and Ranade provide algorithms for the case where bus arrivals are assumed to be independent and exponentially distributed. We offer solutions to two important generalizations of the problem, answering open questions posed by Datar and Ranade. First, we provide a polynomial time algorithm for a much wider class of arrival distributions, namely those with increasing failure rate. This class includes not only exponential distributions but also uniform, normal, and gamma distributions. Second, in the case where bus arrival times are independent geometric discrete random variables, we provide an algorithm for transportation networks of buses and trains, where trains run according to a fixed schedule.
Using phenomenological models for forecasting the 2015 Ebola challenge.
Pell, Bruce; Kuang, Yang; Viboud, Cecile; Chowell, Gerardo
2018-03-01
The rising number of novel pathogens threatening the human population has motivated the application of mathematical modeling for forecasting the trajectory and size of epidemics. We summarize the real-time forecasting results of the logistic equation during the 2015 Ebola challenge, which focused on predicting synthetic data derived from a detailed individual-based model of Ebola transmission dynamics and control. We also carry out a post-challenge comparison of two simple phenomenological models. In particular, we systematically compare the logistic growth model and a recently introduced generalized Richards model (GRM) that captures a range of early epidemic growth profiles ranging from sub-exponential to exponential growth. Specifically, we assess the performance of each model for estimating the reproduction number, generate short-term forecasts of the epidemic trajectory, and predict the final epidemic size. During the challenge the logistic equation consistently underestimated the final epidemic size, peak timing and the number of cases at peak timing, with average mean absolute percentage errors (MAPE) of 0.49, 0.36 and 0.40, respectively. Post-challenge, the GRM, which has the flexibility to reproduce a range of epidemic growth profiles from early sub-exponential to exponential growth dynamics, outperformed the logistic growth model in ascertaining the final epidemic size as more incidence data became available, while the logistic model underestimated the final epidemic size even with an increasing amount of data on the evolving epidemic. Incidence forecasts provided by the generalized Richards model performed better across all scenarios and time points than the logistic growth model, with mean RMS decreasing from 78.00 (logistic) to 60.80 (GRM). Both models provided reasonable predictions of the effective reproduction number, but the GRM slightly outperformed the logistic growth model with a MAPE of 0.08 compared to 0.10, averaged across all scenarios and time points. Our findings further support the consideration of transmission models that incorporate flexible early epidemic growth profiles in the forecasting toolkit. Such models are particularly useful for quickly evaluating a developing infectious disease outbreak using only case incidence time series from the early phase of an outbreak. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
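The two phenomenological models compared above are one-line ODEs, so a post-hoc sketch is short; the parameter values here are illustrative, not the challenge estimates.

import numpy as np
from scipy.integrate import solve_ivp

def logistic(t, c, r, K):
    return r * c * (1.0 - c / K)

# Generalized Richards model: p < 1 gives sub-exponential early growth,
# and a controls how growth saturates near the final size K.
def grm(t, c, r, K, p, a):
    return r * c**p * (1.0 - (c / K)**a)

t_eval = np.linspace(0.0, 100.0, 201)
log_sol = solve_ivp(logistic, (0.0, 100.0), [5.0], args=(0.2, 1e4),
                    t_eval=t_eval)
grm_sol = solve_ivp(grm, (0.0, 100.0), [5.0], args=(0.2, 1e4, 0.8, 1.0),
                    t_eval=t_eval)
# Cumulative-incidence trajectories: log_sol.y[0] and grm_sol.y[0].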
NASA Astrophysics Data System (ADS)
Tsuji, T.; Nishizaka, N.; Onishi, K.
2017-12-01
Sedimentation processes during explosive volcanic eruptions can be constrained based on detailed analysis of grain-size variation in tephra deposits. An accurate description of the amount of fine particles, in particular, has significant implications for the assessment of specific tephra hazards. Grain-size studies of single short-term eruptions are advantageous for understanding sedimentation processes because such eruptions are simple compared to long-lasting ones. The 2016 Aso Nakadake eruption, Japan, represents an ideal case for the study of short-term eruptions thanks to accurate documentation. We investigate the grain-size variation with distance from the vent and the sedimentological features of the deposit to discuss the sedimentation processes of the tephra fragments. The eruption produced a pyroclastic flow deposit and fallout tephra distributed NE to ENE of the vent. The deposits between 4 and 20 km from the vent consist of fine-coated lapilli to coarse ash, ash pellets and mud droplets in ascending degree. The samples are lapilli-bearing within 20 km of the vent, and those beyond 20 km mainly consist of ash particles. Detailed analyses of individual samples highlight a rapid decay of maximum and mean grain size from proximal to distal. The decay trend of maximum grain size is approximated by three segments of exponential curves with two breaks in slope at 10 and 40 km from the vent. Most of the sampled deposits are characterized by bimodal grain-size distributions, with the modes of the coarse subpopulation decreasing with distance from the vent and those of the fine subpopulation being mostly stable. The fine subpopulation is interpreted as being mostly associated with size-selective sedimentation processes (e.g., particle aggregation), confirmed by the existence of fine-coated particles, ash pellets and mud droplets. As the fine-coated particles generally have a higher terminal velocity than the individual constituent particles, they could be related to the rapid decrease of maximum grain size with distance from the vent in the proximal area. Further detailed grain-size analyses and theoretical studies can contribute to a quantitative understanding of the effect of fine ash aggregation on sedimentation processes.
Roy, Debjit; Mandal, Saptarshi; De, Chayan K; Kumar, Kaushalendra; Mandal, Prasun K
2018-04-18
CdSe-based core/gradient alloy shell/shell semiconductor quantum dots (CGASS QDs) have been shown to be optically quite superior compared to core-shell QDs. However, very little is known about CGASS QDs at the single particle level. Photoluminescence blinking dynamics of four differently emitting (blue (λem = 510 nm), green (λem = 532 nm), orange (λem = 591 nm), and red (λem = 619 nm)) single CGASS QDs having average sizes <∼7 nm have been probed in our home-built total internal reflection fluorescence (TIRF) microscope. All four samples possess an average ON-fraction of 0.70-0.85, which hints towards nearly suppressed PL blinking in these gradiently alloyed systems. Suppression of blinking has so far been achieved with QDs having sizes greater than 10 nm and mostly emitting in the red region (λem > 600 nm). In this manuscript, we report nearly suppressed PL blinking behaviour of CGASS QDs with average sizes <∼7 nm and emitting in the entire range of the visible spectrum, i.e. from blue to green to orange to red. The probability density distribution of both ON- and OFF-event durations for all of these CGASS QDs could be fitted well with a modified inverse truncated power law with an additional exponential model equation. It has been found that, unlike most of the literature reports, the power law exponent for OFF-event durations is greater than the power law exponent for ON-event durations for all four samples. This suggests that relatively large ON-event durations are interrupted by comparatively small OFF-event durations. This in turn is indicative of a suppressed non-radiative Auger recombination process for these CGASS systems. However, in these four different samples the ON-event truncation time varies inversely with the OFF-event truncation time, which hints that both the ON- and OFF-event truncation processes are dictated by some common factor. We have employed 2D joint probability distribution analysis to probe the correlation between the event durations and found that residual memory exists in both the ON- and OFF-event durations. Positively correlated successive ON-ON and OFF-OFF event durations and negatively correlated (anti-correlated) ON-OFF event durations perhaps suggest the involvement of more than one type of trapping process within the blinking framework. The timescale corresponding to the additional exponential term has been assigned to hole trapping for ON-event duration statistics. Similarly, for OFF-event duration statistics, this component suggests hole detrapping. We found that the average duration of the exponential process for the ON-event durations is an order of magnitude higher than that of the OFF-event durations. This indicates that the holes are trapped for a significantly long time. When electron trapping is followed by such hole trapping, long ON-event durations result. We have observed long ON-event durations, as high as 50 s. The competing charge tunnelling model has been used to account for the observed blinking behaviour in these CGASS QDs. Quite interestingly, the PLQY of all of these differently emitting QDs (an ensemble level property) could be correlated with the truncation time (a property at the single particle level). A respective concomitant increase-decrease of ON-OFF event truncation times with increasing PLQY is also indicative of a varying degree of suppression of the Auger recombination processes in these four different CGASS QDs.
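The truncated power law with an additional exponential term used above, P(t) ∝ t^(-m) exp(-t/τ), can be fitted as follows; the data here are synthetic placeholders, and fitting is done in log space so all decades weigh equally.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(10)

def truncated_power_law(t, A, m, tau):
    return A * t ** (-m) * np.exp(-t / tau)

t = np.logspace(-2, 2, 40)                 # event durations, s
p = truncated_power_law(t, 1.0, 1.5, 10.0) * np.exp(0.05 * rng.standard_normal(t.size))

def log_model(t, A, m, tau):
    return np.log(truncated_power_law(t, A, m, tau))

(A, m, tau), _ = curve_fit(log_model, t, np.log(p), p0=(1.0, 1.2, 5.0))
# tau is the truncation time; m is the power-law exponent.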
Asquith, William H.
2014-01-01
The implementation characteristics of two method of L-moments (MLM) algorithms for parameter estimation of the 4-parameter Asymmetric Exponential Power (AEP4) distribution are studied using the R environment for statistical computing. The objective is to validate the algorithms for general application of the AEP4 using R. An algorithm was introduced in the original study of the L-moments for the AEP4. A second or alternative algorithm is shown to have a larger L-moment-parameter domain than the original. The alternative algorithm is shown to provide reliable parameter production and recovery of L-moments from fitted parameters. A proposal is made for AEP4 implementation in conjunction with the 4-parameter Kappa distribution to create a mixed-distribution framework encompassing the joint L-skew and L-kurtosis domains. The example application provides a demonstration of pertinent algorithms with L-moment statistics and two 4-parameter distributions (AEP4 and the Generalized Lambda) for MLM fitting to a modestly asymmetric and heavy-tailed dataset using R.
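The study's algorithms are implemented in R and are not reproduced here; for orientation, the sample L-moments that underpin any MLM fit can be computed from probability-weighted moments in a few lines. A plain-Python sketch:

```python
import numpy as np

def sample_lmoments(x):
    """First four sample L-moments via unbiased probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    j = np.arange(n)  # zero-based ranks of the order statistics
    b0 = x.mean()
    b1 = np.sum(j * x) / (n * (n - 1))
    b2 = np.sum(j * (j - 1) * x) / (n * (n - 1) * (n - 2))
    b3 = np.sum(j * (j - 1) * (j - 2) * x) / (n * (n - 1) * (n - 2) * (n - 3))
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3 / l2, l4 / l2  # mean, L-scale, L-skew (tau3), L-kurtosis (tau4)

rng = np.random.default_rng(1)
print(sample_lmoments(rng.exponential(scale=2.0, size=1000)))
```

For an exponential sample, the L-skew and L-kurtosis should approach 1/3 and 1/6, which makes a handy sanity check before moving to 4-parameter families like the AEP4.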
A mathematical model for generating bipartite graphs and its application to protein networks
NASA Astrophysics Data System (ADS)
Nacher, J. C.; Ochiai, T.; Hayashida, M.; Akutsu, T.
2009-12-01
Complex systems arise in many different contexts, from large communication systems and transportation infrastructures to molecular biology. Most of these systems can be organized into networks composed of nodes and interacting edges. Here, we present a theoretical model that constructs bipartite networks with the particular feature that the degree distribution can be tuned depending on the probability rates of the fundamental processes. We then use this model to investigate protein-domain networks. A protein can be composed of up to hundreds of domains. Each domain represents a conserved sequence segment with specific functional tasks. We analyze the distribution of domains in Homo sapiens and Arabidopsis thaliana organisms, and the statistical analysis shows that while (a) the number of domain types shared by k proteins exhibits a power-law distribution, (b) the number of proteins composed of k types of domains decays as an exponential distribution. The proposed mathematical model generates bipartite graphs and predicts the emergence of this mixing of (a) power-law and (b) exponential distributions. Our theoretical and computational results show that this model requires (1) a growth process and (2) a copy mechanism.
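A toy rendition of a growth-plus-copy bipartite process (the branching probability and update rules below are invented placeholders, not the authors' model): new "protein" nodes either copy an existing protein's domain set or introduce a brand-new domain type, with existing domains reused preferentially by current usage.

```python
import random
from collections import Counter

random.seed(42)
proteins = [[0]]                 # each protein is a list of domain-type ids
domain_count = Counter({0: 1})   # number of proteins carrying each domain type
next_domain = 1
P_COPY = 0.5                     # placeholder copy-vs-growth rate

for _ in range(5000):
    pool = list(domain_count.elements())  # domains weighted by current usage
    if random.random() < P_COPY:
        # Copy mechanism: duplicate a random protein's domain composition,
        # then append one domain chosen preferentially by usage.
        new = list(random.choice(proteins)) + [random.choice(pool)]
    else:
        # Growth process: a fresh protein carrying one brand-new domain type.
        new = [next_domain]
        next_domain += 1
    proteins.append(new)
    domain_count.update(set(new))

shared_by = Counter(domain_count.values())           # (a) types shared by k proteins
types_per = Counter(len(set(p)) for p in proteins)   # (b) domain types per protein
print(sorted(shared_by.items())[:8])
print(sorted(types_per.items())[:8])
```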
Analysis and modeling of optical crosstalk in InP-based Geiger-mode avalanche photodiode FPAs
NASA Astrophysics Data System (ADS)
Chau, Quan; Jiang, Xudong; Itzler, Mark A.; Entwistle, Mark; Piccione, Brian; Owens, Mark; Slomkowski, Krystyna
2015-05-01
Optical crosstalk is a major factor limiting the performance of Geiger-mode avalanche photodiode (GmAPD) focal plane arrays (FPAs). This is especially true for arrays with increased pixel density and broader spectral operation. We have performed extensive experimental and theoretical investigations of the crosstalk effects in InP-based GmAPD FPAs for both 1.06-μm and 1.55-μm applications. The mechanisms responsible for intrinsic dark counts are Poisson processes, and their inter-arrival time distribution is an exponential function. In FPAs, intrinsic dark counts and crosstalk events coexist, and the inter-arrival time distribution deviates from purely exponential behavior. From both experimental data and computer simulations, we show the dependence of this deviation on the crosstalk probability. The spatial characteristics of crosstalk are also demonstrated. From the temporal and spatial distribution of crosstalk, an efficient algorithm to identify and quantify crosstalk is introduced.
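The deviation they describe can be illustrated with a few lines of simulation: pure dark counts form a Poisson process with exponential inter-arrival times, and adding a crosstalk event shortly after a fraction of counts produces an excess of very short gaps. The rates and delay model below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
RATE = 1e3           # assumed intrinsic dark count rate (counts/s)
P_XTALK = 0.1        # assumed crosstalk probability per primary count
XTALK_DELAY = 50e-9  # assumed characteristic crosstalk delay (s)

# Intrinsic dark counts: a Poisson process, so exponential inter-arrival times.
primaries = np.cumsum(rng.exponential(1 / RATE, size=100_000))
mask = rng.random(primaries.size) < P_XTALK
secondaries = primaries[mask] + rng.exponential(XTALK_DELAY, size=mask.sum())
events = np.sort(np.concatenate([primaries, secondaries]))

dt = np.diff(events)
print("mean gap %.2e s vs Poisson-only 1/rate = %.2e s" % (dt.mean(), 1 / RATE))
print("fraction of gaps < 1 us: %.4f" % (dt < 1e-6).mean())  # crosstalk excess
```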
Science and Facebook: The same popularity law!
Néda, Zoltán; Varga, Levente; Biró, Tamás S
2017-01-01
The distribution of scientific citations for publications selected with different rules (author, topic, institution, country, journal, etc.) collapses onto a single curve if one plots the citations relative to their mean value. We find that the distribution of "shares" for Facebook posts rescales in the same manner onto the very same curve as the scientific citations. This finding suggests that citations are subject to the same growth mechanism as Facebook popularity measures, being influenced by a statistically similar social environment and selection mechanism. In a simple master-equation approach, the exponential growth of the number of publications together with a preferential selection mechanism leads to a Tsallis-Pareto distribution offering an excellent description of the observed statistics. Based on our model and on data derived from PubMed, we predict that according to the present trend the average number of citations per scientific publication relaxes exponentially to about 4.
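The rescaling step is easy to reproduce in outline: divide each sample by its own mean and the curves should collapse. In the sketch below the "citation" and "share" samples are synthetic, and the Tsallis-Pareto form is fitted via scipy's lomax distribution (my identification of the functional form, not the authors' code):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Two synthetic 'popularity' samples with very different means.
citations = stats.lomax.rvs(c=3.0, scale=20.0, size=20_000, random_state=rng)
shares = stats.lomax.rvs(c=3.0, scale=400.0, size=20_000, random_state=rng)

for name, x in [("citations", citations), ("shares", shares)]:
    z = x / x.mean()  # rescale by the sample mean -> the curves should collapse
    c, loc, scale = stats.lomax.fit(z, floc=0.0)
    print(f"{name}: fitted Tsallis-Pareto (Lomax) shape = {c:.2f}, scale = {scale:.2f}")
```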
Distributed Consensus of Stochastic Delayed Multi-agent Systems Under Asynchronous Switching.
Wu, Xiaotai; Tang, Yang; Cao, Jinde; Zhang, Wenbing
2016-08-01
In this paper, the distributed exponential consensus of stochastic delayed multi-agent systems with nonlinear dynamics is investigated under asynchronous switching. The asynchronous switching considered here accounts for the time needed to identify the active modes of the multi-agent systems. Only after a mode switch is confirmed can the matched controller be applied, which means that the switching time of the matched controller in each node usually lags behind that of the system switching. In order to handle the coexistence of switched signals and stochastic disturbances, a comparison principle for stochastic switched delayed systems is first proved. By means of this extended comparison principle, several easily verified conditions for the existence of an asynchronously switched distributed controller are derived such that stochastic delayed multi-agent systems with asynchronous switching and nonlinear dynamics can achieve global exponential consensus. Two examples are given to illustrate the effectiveness of the proposed method.
Software reliability: Additional investigations into modeling with replicated experiments
NASA Technical Reports Server (NTRS)
Nagel, P. M.; Schotz, F. M.; Skirvan, J. A.
1984-01-01
The effects of programmer experience level, different program usage distributions, and programming languages are explored. All these factors affect performance, and some tentative relational hypotheses are presented. An analytic framework for replicated and non-replicated (traditional) software experiments is presented. A method of obtaining an upper bound on the error rate of the next error is proposed. The method was validated empirically by comparing forecasts with actual data. In all 14 cases the bound exceeded the observed parameter, albeit somewhat conservatively. Two other forecasting methods are proposed and compared to observed results. Although it is demonstrated within this framework that stages are neither independent nor exponentially distributed, empirical estimates show that the exponential assumption is nearly valid for all but the extreme tails of the distribution. Except for the dependence in the stage probabilities, Cox's model approximates to a degree what is being observed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shankman, C.; Kavelaars, JJ.; Bannister, M. T.
We measure the absolute magnitude, H, distribution, dN(H) ∝ 10^(αH), of the scattering Trans-Neptunian Objects (TNOs) as a proxy for their size-frequency distribution. We show that the H-distribution of the scattering TNOs is not consistent with a single-slope distribution, but must transition around H_g ∼ 9 to either a knee with a shallow slope or to a divot, which is a differential drop followed by a second exponential distribution. Our analysis is based on a sample of 22 scattering TNOs drawn from three different TNO surveys—the Canada–France Ecliptic Plane Survey, Alexandersen et al., and the Outer Solar System Origins Survey, all of which provide well-characterized detection thresholds—combined with a cosmogonic model for the formation of the scattering TNO population. Our measured absolute magnitude distribution result is independent of the choice of cosmogonic model. Based on our analysis, we estimate that the number of scattering TNOs is (2.4-8.3) × 10^5 for H_r < 12. A divot H-distribution is seen in a variety of formation scenarios and may explain several puzzles in Kuiper Belt science. We find that a divot H-distribution simultaneously explains the observed scattering TNO, Neptune Trojan, Plutino, and Centaur H-distributions while predicting a large enough scattering TNO population to act as the sole supply of the Jupiter-Family Comets.
Exponential Family Functional data analysis via a low-rank model.
Li, Gen; Huang, Jianhua Z; Shen, Haipeng
2018-05-08
In many applications, non-Gaussian data such as binary or count data are observed over a continuous domain, and there exists a smooth underlying structure describing such data. We develop a new functional data method to deal with this kind of data when the data are regularly spaced on the continuous domain. Our method, referred to as Exponential Family Functional Principal Component Analysis (EFPCA), assumes the data are generated from an exponential family distribution and that the matrix of the canonical parameters has a low-rank structure. The proposed method flexibly accommodates not only standard one-way functional data, but also two-way (or bivariate) functional data. In addition, we introduce a new cross-validation method for estimating the latent rank of a generalized data matrix. We demonstrate the efficacy of the proposed methods using a comprehensive simulation study. The proposed method is also applied to a real application of the UK mortality study, where data are binomially distributed and two-way functional across age groups and calendar years. The results offer novel insights into the underlying mortality pattern. © 2018, The International Biometric Society.
Statistical inference on censored data for targeted clinical trials under enrichment design.
Chen, Chen-Fang; Lin, Jr-Rung; Liu, Jen-Pei
2013-01-01
For traditional clinical trials, inclusion and exclusion criteria are usually based on clinical endpoints; the genetic or genomic variability of the trial participants is not fully utilized in the criteria. After completion of the Human Genome Project, disease targets at the molecular level can be identified and utilized for the treatment of diseases. However, the accuracy of diagnostic devices for identification of such molecular targets is usually not perfect. Some of the patients enrolled in targeted clinical trials with a positive result for the molecular target might not actually have the specific molecular target. As a result, the treatment effect may be underestimated in the patient population truly having the molecular target. To resolve this issue, under the exponential distribution, we develop inferential procedures for the treatment effects of the targeted drug based on the censored endpoints in the patients truly having the molecular targets. Under an enrichment design, we propose using the expectation-maximization algorithm in conjunction with the bootstrap technique to incorporate the inaccuracy of the diagnostic device for detection of the molecular targets into the inference of the treatment effects. A simulation study was conducted to empirically investigate the performance of the proposed methods. Simulation results demonstrate that under the exponential distribution, the proposed estimator is nearly unbiased with adequate precision, and the confidence interval can provide adequate coverage probability. In addition, the proposed testing procedure can adequately control the size with sufficient power. On the other hand, when the proportional hazard assumption is violated, additional simulation studies show that the type I error rate is not controlled at the nominal level and is an increasing function of the positive predictive value. A numerical example illustrates the proposed procedures. Copyright © 2013 John Wiley & Sons, Ltd.
Quasiprobability behind the out-of-time-ordered correlator
NASA Astrophysics Data System (ADS)
Yunger Halpern, Nicole; Swingle, Brian; Dressel, Justin
2018-04-01
Two topics, evolving rapidly in separate fields, were combined recently: the out-of-time-ordered correlator (OTOC) signals quantum-information scrambling in many-body systems. The Kirkwood-Dirac (KD) quasiprobability represents operators in quantum optics. The OTOC was shown to equal a moment of a summed quasiprobability [Yunger Halpern, Phys. Rev. A 95, 012120 (2017), 10.1103/PhysRevA.95.012120]. That quasiprobability, we argue, is an extension of the KD distribution. We explore the quasiprobability's structure from experimental, numerical, and theoretical perspectives. First, we simplify and analyze Yunger Halpern's weak-measurement and interference protocols for measuring the OTOC and its quasiprobability. We decrease, exponentially in system size, the number of trials required to infer the OTOC from weak measurements. We also construct a circuit for implementing the weak-measurement scheme. Next, we calculate the quasiprobability (after coarse graining) numerically and analytically: we simulate a transverse-field Ising model first. Then, we calculate the quasiprobability averaged over random circuits, which model chaotic dynamics. The quasiprobability, we find, distinguishes chaotic from integrable regimes. We observe nonclassical behaviors: the quasiprobability typically has negative components. It becomes nonreal in some regimes. The onset of scrambling breaks a symmetry that bifurcates the quasiprobability, as in classical-chaos pitchforks. Finally, we present mathematical properties. We define an extended KD quasiprobability that generalizes the KD distribution. The quasiprobability obeys a Bayes-type theorem, for example, that exponentially decreases the memory required to calculate weak values, in certain cases. A time-ordered correlator analogous to the OTOC, insensitive to quantum-information scrambling, depends on a quasiprobability closer to a classical probability. This work not only illuminates the OTOC's underpinnings, but also generalizes quasiprobability theory and motivates immediate-future weak-measurement challenges.
Quantum Support Vector Machine for Big Data Classification
NASA Astrophysics Data System (ADS)
Rebentrost, Patrick; Mohseni, Masoud; Lloyd, Seth
2014-09-01
Supervised machine learning is the classification of new data based on already classified training examples. In this work, we show that the support vector machine, an optimized binary classifier, can be implemented on a quantum computer, with complexity logarithmic in the size of the vectors and the number of training examples. In cases where classical sampling algorithms require polynomial time, an exponential speedup is obtained. At the core of this quantum big data algorithm is a nonsparse matrix exponentiation technique for efficiently performing a matrix inversion of the training data inner-product (kernel) matrix.
Adsorption of finite semiflexible polymers and their loop and tail distributions
NASA Astrophysics Data System (ADS)
Kampmann, Tobias A.; Kierfeld, Jan
2017-07-01
We discuss the adsorption of semiflexible polymers to a planar attractive wall and focus on the questions of the adsorption threshold for polymers of finite length and their loop and tail distributions using both Monte Carlo simulations and analytical arguments. For the adsorption threshold, we find three regimes: (i) a flexible or Gaussian regime if the persistence length is smaller than the adsorption potential range, (ii) a semiflexible regime if the persistence length is larger than the potential range, and (iii) for finite polymers, a novel crossover to a rigid rod regime if the deflection length exceeds the contour length. In the flexible and semiflexible regimes, finite size corrections arise because the correlation length exceeds the contour length. In the rigid rod regime, however, it is essential how the global orientational or translational degrees of freedom are restricted by grafting or confinement. We discuss finite size corrections for polymers grafted to the adsorbing surface and for polymers confined by a second (parallel) hard wall. Based on these results, we obtain a method to analyze adsorption data for finite semiflexible polymers such as filamentous actin. For the loop and tail distributions, we find power laws with an exponential decay on length scales exceeding the correlation length. We derive and confirm the loop and tail power law exponents for flexible and semiflexible polymers. This allows us to explain that, close to the transition, semiflexible polymers have significantly smaller loops and both flexible and semiflexible polymers desorb by expanding their tail length. The tail distribution allows us to extract the free energy per length of adsorption for actin filaments from experimental data [D. Welch et al., Soft Matter 11, 7507 (2015)].
Chen, Hua
2013-03-01
Tracing back to a specific time T in the past, the genealogy of a sample of haplotypes may not have reached its common ancestor and may leave m lineages extant. For such an incomplete genealogy truncated at a specific time T in the past, the distribution and expectation of the intercoalescence times conditional on T are derived in exact form in this paper for populations of deterministically time-varying size, specifically for populations growing exponentially. The derived intercoalescence time distribution can be integrated into the coalescent-based joint allele frequency spectrum (JAFS) theory, and is useful for population genetic inference from large-scale genomic data without relying on computationally intensive approaches, such as importance sampling and Markov chain Monte Carlo (MCMC) methods. The inference of several important parameters relying on this derived conditional distribution is demonstrated: quantifying the population growth rate and onset time, and estimating the number of ancestral lineages at a specific ancient time. Simulation studies confirm the validity of the derivation and the statistical efficiency of the methods using the derived intercoalescence time distribution. Two examples with real data illustrate inference of the population growth rate of a European sample from the NIEHS Environmental Genome Project, and of the number of ancient lineages of 31 mitochondrial genomes from Tibetan populations. © 2013 Blackwell Publishing Ltd/University College London.
NASA Astrophysics Data System (ADS)
Carrel, M.; Morales, V. L.; Dentz, M.; Derlon, N.; Morgenroth, E.; Holzner, M.
2018-03-01
Biofilms are ubiquitous bacterial communities that grow in various porous media including soils, trickling, and sand filters. In these environments, they play a central role in services ranging from degradation of pollutants to water purification. Biofilms dynamically change the pore structure of the medium through selective clogging of pores, a process known as bioclogging. This affects how solutes are transported and spread through the porous matrix, but the temporal changes to transport behavior during bioclogging are not well understood. To address this uncertainty, we experimentally study the hydrodynamic changes of a transparent 3-D porous medium as it experiences progressive bioclogging. Statistical analyses of the system's hydrodynamics at four time points of bioclogging (0, 24, 36, and 48 h in the exponential growth phase) reveal exponential increases in both average and variance of the flow velocity, as well as its correlation length. Measurements for spreading, as mean-squared displacements, are found to be non-Fickian and more intensely superdiffusive with progressive bioclogging, indicating the formation of preferential flow pathways and stagnation zones. A gamma distribution describes well the Lagrangian velocity distributions and provides parameters that quantify changes to the flow, which evolves from a parallel pore arrangement under unclogged conditions, toward a more serial arrangement with increasing clogging. Exponentially evolving hydrodynamic metrics agree with an exponential bacterial growth phase and are used to parameterize a correlated continuous time random walk model with a stochastic velocity relaxation. The model accurately reproduces transport observations and can be used to resolve transport behavior at intermediate time points within the exponential growth phase considered.
NASA Astrophysics Data System (ADS)
Zaigham Zia, Q. M.; Ullah, Ikram; Waqas, M.; Alsaedi, A.; Hayat, T.
2018-03-01
This research elaborates the Soret-Dufour characteristics of mixed convective radiated Casson liquid flow over an exponentially heated surface. Novel features of an exponential space-dependent heat source are introduced. Appropriate variables are implemented to convert the partial differential systems into sets of ordinary differential expressions. A homotopic scheme is employed to construct analytic solutions. The behavior of various embedded variables on the velocity, temperature and concentration distributions is plotted graphically and analyzed in detail. Besides, skin friction coefficients and heat and mass transfer rates are also computed and interpreted. The results signify the pronounced characteristics of temperature corresponding to the convective and radiation variables. Concentration exhibits opposite responses to the Soret and Dufour variables.
Yang, Wengui; Yu, Wenwu; Cao, Jinde; Alsaadi, Fuad E; Hayat, Tasawar
2018-02-01
This paper investigates the stability and lag synchronization of memristor-based fuzzy Cohen-Grossberg bidirectional associative memory (BAM) neural networks with mixed delays (asynchronous time delays and continuously distributed delays) and impulses. By applying the inequality analysis technique, homeomorphism theory and some suitable Lyapunov-Krasovskii functionals, some new sufficient conditions for the uniqueness and global exponential stability of the equilibrium point are established. Furthermore, we obtain several sufficient criteria concerning globally exponential lag synchronization for the proposed system based on the framework of the Filippov solution, differential inclusion theory and control theory. In addition, some examples with numerical simulations are given to illustrate the feasibility and validity of the obtained results. Copyright © 2017 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samir, U.; Kaufman, Y.J.; Brace, L.H.
Measurements of electron temperature, satellite potential, ion density and ion composition from the cylindrical electrostatic probe and the Bennett ion mass spectrometer on board the AE-C satellite were used to investigate the influence of the body size parameter R_D = R_0/λ_D (where R_0 is the satellite radius and λ_D is the ambient Debye length) on ion distribution in the very near wake. The investigation focused on O⁺ plasmas. It was found that the ratio (β) of density in the wake to ambient density varies with R_D and that the variation can be described by a simple exponential relationship of the form β = a_0 exp(a_1 R_D) for 37 ≤ R_D ≤ 247, with a_0 = 0.006 and a_1 = -0.009. The present study extends that of Samir et al. (1979a).
Continuous-Time Finance and the Waiting Time Distribution: Multiple Characteristic Times
NASA Astrophysics Data System (ADS)
Fa, Kwok Sau
2012-09-01
In this paper, we model the tick-by-tick dynamics of markets by using the continuous-time random walk (CTRW) model. We employ a sum of products of power law and stretched exponential functions for the waiting time probability distribution function; this function can fit well the waiting time distribution for BUND futures traded at LIFFE in 1997.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajshekhar, G.; Gorthi, Sai Siva; Rastogi, Pramod
2009-09-15
Measurement of strain, curvature, and twist of a deformed object plays an important role in deformation analysis. Strain depends on the first-order displacement derivative, whereas curvature and twist are determined by second-order displacement derivatives. This paper proposes a pseudo-Wigner-Ville distribution based method for measurement of strain, curvature, and twist in digital holographic interferometry, where the object deformation or displacement is encoded as interference phase. In the proposed method, the phase derivative is estimated by peak detection of the pseudo-Wigner-Ville distribution evaluated along each row/column of the reconstructed interference field. A complex exponential signal with unit amplitude and the phase derivative estimate as the argument is then generated, and the pseudo-Wigner-Ville distribution along each row/column of this signal is evaluated. The curvature is estimated by using a peak tracking strategy for the new distribution. For estimation of twist, the pseudo-Wigner-Ville distribution is evaluated along each column/row (i.e., in the alternate direction with respect to the previous one) for the generated complex exponential signal, and the corresponding peak detection gives the twist estimate.
Two-state Markov-chain Poisson nature of individual cellphone call statistics
NASA Astrophysics Data System (ADS)
Jiang, Zhi-Qiang; Xie, Wen-Jie; Li, Ming-Xia; Zhou, Wei-Xing; Sornette, Didier
2016-07-01
Unfolding the burst patterns in human activities and social interactions is a very important issue, especially for understanding the spreading of disease and information and the formation of groups and organizations. Here, we conduct an in-depth study of the temporal patterns of the cellphone conversation activities of 73,339 anonymous cellphone users, whose inter-call durations are Weibull distributed. We find that the individual call events exhibit a pattern of bursts, in which high-activity periods alternate with low-activity periods. In both periods, the numbers of calls are exponentially distributed for individuals, but power-law distributed for the population. Together with the exponential distributions of inter-call durations within bursts and of the intervals between consecutive bursts, we demonstrate that the individual call activities are driven by two independent Poisson processes, which can be combined within a minimal model in terms of a two-state first-order Markov chain, giving significant fits for nearly half of the individuals. By measuring directly the distributions of call rates across the population, which exhibit power-law tails, we attribute the population-level power-law distributions to the 'superposition of distributions' mechanism. Our findings shed light on the origins of bursty patterns in other human activities.
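A minimal simulation of such a two-state Markov-chain Poisson (Markov-modulated Poisson) process; the rates and switching probabilities are placeholders, and for simplicity the chain is updated per call event rather than in continuous time:

```python
import numpy as np

rng = np.random.default_rng(11)
RATES = {"high": 2.0, "low": 0.1}   # assumed calls per hour in each state
STAY = {"high": 0.9, "low": 0.97}   # assumed per-step probability of staying put

state, t, calls = "high", 0.0, []
for _ in range(20_000):
    # Within a state, calls form a Poisson process: exponential waiting times.
    t += rng.exponential(1.0 / RATES[state])
    calls.append(t)
    if rng.random() > STAY[state]:  # first-order Markov switching between states
        state = "low" if state == "high" else "high"

gaps = np.diff(calls)
print("mean inter-call time: %.2f h" % gaps.mean())
print("CV of inter-call times: %.2f (1.0 for a single Poisson process)"
      % (gaps.std() / gaps.mean()))
```

A coefficient of variation above 1 for the inter-call times is the burstiness signature that a single Poisson process cannot produce.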
Rigorous Proof of the Boltzmann-Gibbs Distribution of Money on Connected Graphs
NASA Astrophysics Data System (ADS)
Lanchier, Nicolas
2017-04-01
Models in econophysics, i.e., the emerging field of statistical physics that applies the main concepts of traditional physics to economics, typically consist of large systems of economic agents who are characterized by the amount of money they have. In the simplest model, at each time step, one agent gives one dollar to another agent, with both agents being chosen independently and uniformly at random from the system. Numerical simulations of this model suggest that, at least when the number of agents and the average amount of money per agent are large, the distribution of money converges to an exponential distribution reminiscent of the Boltzmann-Gibbs distribution of energy in physics. The main objective of this paper is to give a rigorous proof of this result and show that the convergence to the exponential distribution holds more generally when the economic agents are located on the vertices of a connected graph and interact locally with their neighbors rather than globally with all the other agents. We also study a closely related model where, at each time step, agents buy with a probability proportional to the amount of money they have, and prove that in this case the limiting distribution of money is Poissonian.
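The simplest version of the model takes only a few lines to simulate, and the money histogram indeed approaches the Boltzmann-Gibbs form P(m) = (1/T) exp(-m/T) with temperature T equal to the average money per agent. The agent count, initial endowment, and step count below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(5)
N_AGENTS, N_STEPS = 1_000, 1_000_000
money = np.full(N_AGENTS, 10.0)  # everyone starts with the same amount

for _ in range(N_STEPS):
    i, j = rng.integers(N_AGENTS, size=2)
    if i != j and money[i] >= 1.0:  # the giver must have a dollar to give
        money[i] -= 1.0
        money[j] += 1.0

T = money.mean()  # Boltzmann-Gibbs 'temperature' = average money per agent
hist, edges = np.histogram(money, bins=30, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
for c, h in list(zip(centers, hist))[:5]:
    print(f"m = {c:5.1f}  empirical = {h:.4f}  exp(-m/T)/T = {np.exp(-c / T) / T:.4f}")
```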
Heterogeneous Link Weight Promotes the Cooperation in Spatial Prisoner's Dilemma
NASA Astrophysics Data System (ADS)
Ma, Zhi-Qin; Xia, Cheng-Yi; Sun, Shi-Wen; Wang, Li; Wang, Huai-Bin; Wang, Juan
The spatial structure has often been identified as a prominent mechanism that substantially promotes the level of cooperation in the prisoner's dilemma game. In this paper we introduce a weighting mechanism into the spatial prisoner's dilemma game to explore cooperative behaviors on the square lattice. Three types of weight distributions, exponential, power-law and uniform, are considered, and the weights are assigned to the links between players. Through large-scale numerical simulations we find that, compared with the traditional spatial game, this mechanism can greatly enhance the frequency of cooperators. For most ranges of b, we find that the power-law distribution enables the highest promotion of cooperation and the uniform one leads to the lowest enhancement, whereas the exponential one often lies between them. The large improvement in cooperation can be attributed to the fact that the distributed link weights yield inhomogeneous interaction strengths among individuals, which can facilitate the formation of cooperative clusters that resist invasion by defectors. In addition, the impact of the amplitude of the undulation of the weight distribution and of the noise strength on cooperation is also investigated for the three kinds of weight distribution. This research can aid the further understanding of evolutionary cooperation in biological and social science.
A Random Variable Transformation Process.
ERIC Educational Resources Information Center
Scheuermann, Larry
1989-01-01
Provides a short BASIC program, RANVAR, which generates random variates for various theoretical probability distributions. The seven variates include: uniform, exponential, normal, binomial, Poisson, Pascal, and triangular. (MVL)
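RANVAR itself is a BASIC program and is not reproduced here; the underlying idea for several of the variates is inverse-transform sampling. A Python sketch for two of the seven distributions:

```python
import math
import random

def exponential_variate(rate):
    """Inverse transform: if U ~ Uniform(0,1), then -ln(1-U)/rate is Exponential(rate)."""
    return -math.log(1.0 - random.random()) / rate

def triangular_variate(a, c, b):
    """Inverse transform for a triangular distribution with min a, mode c, max b."""
    u = random.random()
    cut = (c - a) / (b - a)  # CDF value at the mode
    if u < cut:
        return a + math.sqrt(u * (b - a) * (c - a))
    return b - math.sqrt((1.0 - u) * (b - a) * (b - c))

random.seed(0)
sample = [exponential_variate(2.0) for _ in range(100_000)]
print(sum(sample) / len(sample))  # should be near 1/rate = 0.5
```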
Wong, Wing-Cheong; Ng, Hong-Kiat; Tantoso, Erwin; Soong, Richie; Eisenhaber, Frank
2018-02-12
Though earlier works on modelling transcript abundance from vertebrates to lower eukaryotes have specifically singled out Zipf's law, the observed distributions often deviate from a single power-law slope. In hindsight, while the power laws of critical phenomena are derived asymptotically under the condition of infinite observations, real-world observations are finite, where finite-size effects set in to force a power-law distribution into an exponential decay that consequently manifests as a curvature (i.e., varying exponent values) in a log-log plot. If transcript abundance is truly power-law distributed, the varying exponent signifies changing mathematical moments (e.g., mean, variance) and creates heteroskedasticity, which compromises statistical rigor in analysis. The impact of this deviation from the asymptotic power law on sequencing count data has never truly been examined and quantified. The anecdotal description of transcript abundance being almost Zipf's law-like distributed can be conceptualized as the imperfect mathematical rendition of the Pareto power-law distribution when subjected to finite-size effects in the real world; this is regardless of the advancement in sequencing technology, since sampling is finite in practice. Our conceptualization agrees well with our empirical analysis of two modern-day NGS (next-generation sequencing) datasets: an in-house generated dilution miRNA study of two gastric cancer cell lines (NUGC3 and AGS) and a publicly available spike-in miRNA dataset. Firstly, the finite-size effects cause the deviations of sequencing count data from Zipf's law and issues of reproducibility in sequencing experiments. Secondly, they manifest as heteroskedasticity among experimental replicates to bring about statistical woes. Surprisingly, a straightforward power-law correction that restores the distribution distortion to a single exponent value can dramatically reduce data heteroskedasticity, invoking an instant increase in signal-to-noise ratio by 50% and in statistical/detection sensitivity by as much as 30%, regardless of the downstream mapping and normalization methods. Most importantly, the power-law correction improves concordance in significant calls among different normalization methods of a data series by 22% on average. When presented with a higher sequencing depth (4 times difference), the improvement in concordance is asymmetrical (32% for the higher sequencing depth instance versus 13% for the lower instance) and demonstrates that the simple power-law correction can increase significant detection at higher sequencing depths. Finally, the correction dramatically enhances the statistical conclusions and elucidates the metastasis potential of the NUGC3 cell line relative to AGS in our dilution analysis. The finite-size effects due to undersampling generally plague transcript count data with reproducibility issues, but can be minimized through a simple power-law correction of the count distribution. This distribution correction has direct implications for the biological interpretation of the study and the rigor of the scientific findings. This article was reviewed by Oliviero Carugo, Thomas Dandekar and Sandor Pongor.
Homoepitaxial nonpolar (10-10) ZnO/ZnMgO monolithic microcavities: Towards reduced photonic disorder
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zuniga-Perez, J., E-mail: jzp@crhea.cnrs.fr; Kappei, L.; Deparis, C.
2016-06-20
Nonpolar ZnO/ZnMgO-based optical microcavities have been grown on (10-10) m-plane ZnO substrates by plasma-assisted molecular beam epitaxy. Reflectivity measurements indicate an exponential increase of the cavity quality factor with the number of layers in the distributed Bragg reflectors. Most importantly, microreflectivity spectra recorded with a spot size in the order of 2 μm show a negligible photonic disorder (well below 1 meV), leading to local quality factors equivalent to those obtained by macroreflectivity. The anisotropic character of the nonpolar heterostructures manifests itself both in the surface features, elongated parallel to the in-plane c direction, and in the optical spectra, with two cavity modes being observed at different energies for orthogonal polarizations.
NASA Technical Reports Server (NTRS)
Ovchinnikov, Mikhail; Ackerman, Andrew S.; Avramov, Alexander; Cheng, Anning; Fan, Jiwen; Fridlind, Ann M.; Ghan, Steven; Harrington, Jerry; Hoose, Corinna; Korolev, Alexei;
2014-01-01
Large-eddy simulations of mixed-phase Arctic clouds by 11 different models are analyzed with the goal of improving understanding and model representation of the processes controlling the evolution of these clouds. In a case based on observations from the Indirect and Semi-Direct Aerosol Campaign (ISDAC), it is found that the ice number concentration, Ni, exerts significant influence on the cloud structure. Increasing Ni leads to a substantial reduction in liquid water path (LWP), in agreement with earlier studies. In contrast to previous intercomparison studies, all models here use the same ice particle properties (i.e., mass-size, mass-fall speed, and mass-capacitance relationships) and a common radiation parameterization. The constrained setup exposes the importance of ice particle size distributions (PSDs) in influencing cloud evolution. A clear separation in LWP and IWP predicted by models with bin and bulk microphysical treatments is documented and attributed primarily to the assumed shape of the ice PSD used in bulk schemes. Compared to the bin schemes that explicitly predict the PSD, schemes assuming an exponential ice PSD underestimate ice growth by vapor deposition and overestimate mass-weighted fall speed, leading to an underprediction of IWP by a factor of two in the considered case. Sensitivity tests indicate that LWP and IWP are much closer to the bin model simulations when a modified shape factor, similar to that predicted by the bin simulations, is used in the bulk scheme. These results demonstrate the importance of the representation of the ice PSD in determining the partitioning of liquid and ice and the longevity of mixed-phase clouds.
NASA Astrophysics Data System (ADS)
Abdelsalam, A.; El–Nagdy, M. S.; Badawy, B. M.; Osman, W.; Fayed, M.
2016-06-01
The grey particle production following 60A and 200A GeV ¹⁶O interactions with emulsion nuclei is investigated at different centralities. The evaporated target fragment multiplicity is adopted as a centrality parameter. The target size effect is examined over a wide range, where the C, N and O nuclei represent the light target group and the Br and Ag nuclei the heavy group. In the framework of the nuclear limiting fragmentation hypothesis, the grey particle multiplicity characteristics depend only on the target size and centrality, while the projectile size and energy have no effect. Grey particle production is suggested to arise from a multisource system, with the emission direction in the 4π space depending on the production source. The grey particle multiplicity distributions characteristically take either an exponential-decay or a Poisson peaked shape. The decay shape is suggested to be a characteristic feature of a single source, while the peaked shape reflects a superposition of several sources. The sensitivity to centrality varies from one source to another. The distribution shape is identified at each centrality region according to the associated source contribution. In general, the multiplicity characteristics appear limited with respect to the collision system centrality for light target nuclei. The selection of the black particle multiplicity as a centrality parameter is successful for collisions with heavy target nuclei; for collisions with light target nuclei it may be qualitatively better to adopt another centrality parameter.
Filtration and clogging of permeable pavement loaded by urban drainage.
Sansalone, J; Kuang, X; Ying, G; Ranieri, V
2012-12-15
Permeable pavement, as a sustainable infrastructure material, can promote hydrologic restoration, particulate matter (PM) and solute control. However, filtration and commensurate clogging are two aspects of continued interest and discussion. This study quantifies filtration and clogging of cementitious permeable pavement (CPP) for loadings from 50 to 200 mg/L of hetero-disperse sandy-silt PM. The CPP mix design provides a hetero-disperse pore size distribution (PSD_pore), an effective porosity (φ_e) of 24% and a median pore size of 658 μm with a standard deviation of 457 μm. The PM mass separation across the entire particle size distribution (PSD_PM) exceeds 80%, with complete separation for PM greater than 300 μm and 50% separation for suspended PM. Turbidity is reduced (42-95%), and effluent is below 10 NTU in the first quartile of a loading period. Permeable pavement illustrates reductions in initial (clean-bed) hydraulic conductivity (k_0) with loading time. For all PM loadings, k_0 (3.1 × 10⁻¹ mm/s) was reduced to 10⁻⁴ mm/s for runoff loading durations from 100 to 250 h, respectively. Temporal hydraulic conductivity (k) follows exponential profiles. Maintenance by vacuuming and sonication illustrates that 96-99% of k_0 is recovered. Permeable pavement constitutive properties integrated with measured PM loads and a year of continuous rainfall-runoff simulation illustrate k reduction with historical loadings. Study results measure and model filtration and hydraulic conductivity phenomena as well as maintenance requirements of permeable pavement directly loaded by urban drainage. Copyright © 2011. Published by Elsevier Ltd.
System Lifetimes, The Memoryless Property, Euler's Constant, and Pi
ERIC Educational Resources Information Center
Agarwal, Anurag; Marengo, James E.; Romero, Likin Simon
2013-01-01
A "k"-out-of-"n" system functions as long as at least "k" of its "n" components remain operational. Assuming that component failure times are independent and identically distributed exponential random variables, we find the distribution of system failure time. After some examples, we find the limiting…
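Under the stated assumptions the system fails at the (n−k+1)-th component failure, and the memoryless property gives the closed-form mean E[T] = (1/λ) Σ_{i=k}^{n} 1/i. A quick numerical check (parameters arbitrary):

```python
import numpy as np

def k_out_of_n_mean(n, k, lam):
    """E[system life] = (1/lam) * sum_{i=k}^{n} 1/i (exponential spacings are memoryless)."""
    return sum(1.0 / i for i in range(k, n + 1)) / lam

rng = np.random.default_rng(2)
n, k, lam = 5, 3, 1.0
samples = rng.exponential(1.0 / lam, size=(200_000, n))
# The system works while at least k components work: it fails at the
# (n-k+1)-th failure, i.e. the (n-k+1)-th order statistic of the lifetimes.
system_life = np.sort(samples, axis=1)[:, n - k]
print(system_life.mean(), k_out_of_n_mean(n, k, lam))  # both near 0.783
```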
Universal rule for the symmetric division of plant cells
Besson, Sébastien; Dumais, Jacques
2011-01-01
The division of eukaryotic cells involves the assembly of complex cytoskeletal structures to exert the forces required for chromosome segregation and cytokinesis. In plants, empirical evidence suggests that tensional forces within the cytoskeleton cause cells to divide along the plane that minimizes the surface area of the cell plate (Errera’s rule) while creating daughter cells of equal size. However, exceptions to Errera’s rule cast doubt on whether a broadly applicable rule can be formulated for plant cell division. Here, we show that the selection of the plane of division involves a competition between alternative configurations whose geometries represent local area minima. We find that the probability of observing a particular division configuration increases inversely with its relative area according to an exponential probability distribution known as the Gibbs measure. Moreover, a comparison across land plants and their most recent algal ancestors confirms that the probability distribution is widely conserved and independent of cell shape and size. Using a maximum entropy formulation, we show that this empirical division rule is predicted by the dynamics of the tense cytoskeletal elements that lead to the positioning of the preprophase band. Based on the fact that the division plane is selected from the sole interaction of the cytoskeleton with cell shape, we posit that the new rule represents the default mechanism for plant cell division when internal or external cues are absent. PMID:21383128
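The rule itself is compact: each candidate division plane, a local area minimum, is selected with probability given by a Gibbs measure over the plane areas. A sketch with invented areas and an illustrative β (the paper's fitted constant is not reproduced here):

```python
import numpy as np

def division_probabilities(areas, beta):
    """Gibbs measure: P_i proportional to exp(-beta * A_i / mean(A))."""
    areas = np.asarray(areas, dtype=float)
    weights = np.exp(-beta * areas / areas.mean())
    return weights / weights.sum()

# Hypothetical local-minimum plane areas for one cell (arbitrary units).
print(division_probabilities([1.0, 1.15, 1.4], beta=20.0))
```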
Tomitaka, Shinichiro; Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A; Ono, Yutaka
2016-01-01
Previously, we proposed a model for ordinal scale scoring in which the individual thresholds for each item constitute a distribution for that item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. For negative affect items, the gap between the total score curve and the boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern.
CUMPOIS- CUMULATIVE POISSON DISTRIBUTION PROGRAM
NASA Technical Reports Server (NTRS)
Bowerman, P. N.
1994-01-01
The Cumulative Poisson Distribution Program, CUMPOIS, is one of two programs which make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), can be used independently of one another. CUMPOIS determines the approximate cumulative binomial distribution, evaluates the cumulative distribution function (cdf) for gamma distributions with integer shape parameters, and evaluates the cdf for chi-square distributions with even degrees of freedom. It can be used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. CUMPOIS calculates the probability that n or fewer events (i.e., cumulative) will occur within any unit when the expected number of events is given as lambda. Normally, this probability is calculated by a direct summation, from i=0 to n, of terms involving the exponential function, lambda, and inverse factorials. This approach, however, eventually fails due to underflow for sufficiently large values of n. Additionally, when the exponential term is moved outside of the summation for simplification purposes, there is a risk that the terms remaining within the summation, and the summation itself, will overflow for certain values of i and lambda. CUMPOIS eliminates these possibilities by multiplying an additional exponential factor into the summation terms and the partial sum whenever overflow/underflow situations threaten. The reciprocal of this term is then multiplied into the completed sum, giving the cumulative probability. The CUMPOIS program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly on most C compilers. The program format is interactive, accepting lambda and n as inputs. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CUMPOIS was developed in 1988.
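CUMPOIS's rescaling trick has a compact modern equivalent: carry the summation in log space and factor out the largest term (log-sum-exp). A Python sketch of the same idea, not the original C source:

```python
import math

def cumulative_poisson(n, lam):
    """P(X <= n) for X ~ Poisson(lam), summed in log space so that neither the
    exponential factor nor the individual terms overflow or underflow."""
    log_terms = [i * math.log(lam) - lam - math.lgamma(i + 1) for i in range(n + 1)]
    m = max(log_terms)  # log-sum-exp rescaling plays the role of CUMPOIS's extra factor
    return math.exp(m + math.log(sum(math.exp(t - m) for t in log_terms)))

print(cumulative_poisson(5, 3.0))      # ~0.9161
print(cumulative_poisson(1200, 1000))  # large lambda: a naive summation would fail
```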
NASA Astrophysics Data System (ADS)
Zhang, F. H.; Zhou, G. D.; Ma, K.; Ma, W. J.; Cui, W. Y.; Zhang, B.
2015-11-01
Present studies have shown that, in the main stages of the development and evolution of asymptotic giant branch (AGB) star s-process models, the distributions of neutron exposures in the nucleosynthesis regions can all be expressed by an exponential function ρ_AGB(τ) = (C/τ₀) exp(−τ/τ₀) over the effective range of values. However, the specific expressions for the proportionality coefficient C and the mean neutron exposure τ₀ in this formula differ between models and are not completely determined in the related literature. By dissecting the basic method for solving the exponential distribution of neutron exposures, and systematically combing through the solution procedure for the exposure distribution in different stellar models, general formulas, together with their auxiliary equations, for calculating C and τ₀ are derived. Given the discrete distribution of neutron exposures P_k, i.e. the mass ratio of the materials which have been exposed to neutrons k (k = 0, 1, 2, ...) times when reaching the final distribution, relative to the materials of the He intershell, one obtains C = −P₁/ln R and τ₀ = −Δτ/ln R. Here R expresses the probability that the materials successively experience neutron irradiation twice in the He intershell. For the convective nucleosynthesis models (including the Ulrich model and the ¹³C-pocket convective burning model), R is just the overlap factor r, namely the mass ratio of the materials which undergo two successive thermal pulses in the He intershell; for the ¹³C-pocket radiative burning model, R = Σ_{k=1}^{∞} P_k. This set of formulas gives the corresponding relationship between C or τ₀ and the model parameters. The results of this study effectively solve the problem of analytically calculating the distribution of neutron exposures in the low-mass AGB star s-process nucleosynthesis model with ¹³C-pocket radiative burning.
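The closed forms are directly computable once the discrete exposure distribution is known. A small sketch, with invented values for the overlap factor, P₁, and the per-pulse exposure Δτ:

```python
import math

def exposure_parameters(p1, r, delta_tau):
    """C = -P1/ln(R) and tau0 = -delta_tau/ln(R); for convective models R is the
    overlap factor r (radiative 13C-pocket models use R = sum_k P_k instead)."""
    c = -p1 / math.log(r)
    tau0 = -delta_tau / math.log(r)
    return c, tau0

# Hypothetical values: overlap factor 0.6, P1 = 0.4, pulse exposure 0.3 mbarn^-1.
print(exposure_parameters(p1=0.4, r=0.6, delta_tau=0.3))
```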
NASA Astrophysics Data System (ADS)
Dalkilic, Turkan Erbay; Apaydin, Aysen
2009-11-01
In a regression analysis, it is assumed that the observations come from a single class in a data cluster and that the simple functional relationship between the dependent and independent variables can be expressed using the general model Y = f(X) + ε. However, a data cluster may consist of a combination of observations that have different distributions derived from different clusters. When estimating a regression model for fuzzy inputs that have been derived from different distributions, the model is termed a 'switching regression model'. Here l_i indicates the class number of each independent variable and p the number of independent variables [J.R. Jang, ANFIS: Adaptive-network-based fuzzy inference system, IEEE Transactions on Systems, Man and Cybernetics 23 (3) (1993) 665-685; M. Michel, Fuzzy clustering and switching regression models using ambiguity and distance rejects, Fuzzy Sets and Systems 122 (2001) 363-399; E.Q. Richard, A new approach to estimating switching regressions, Journal of the American Statistical Association 67 (338) (1972) 306-310]. In this study, adaptive networks have been used to construct a model formed by gathering the obtained models. There are methods that suggest the class numbers of independent variables heuristically. Alternatively, the suggested validity criterion for fuzzy clustering is used here to define the optimal class number of the independent variables. For the case in which the independent variables have an exponential distribution, an algorithm is suggested for defining the unknown parameters of the switching regression model and for obtaining the estimated values after obtaining an optimal membership function suitable for the exponential distribution.
Sub-surface mechanical damage distributions during grinding of fused silica
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suratwala, T I; Wong, L L; Miller, P E
2005-11-28
The distribution and characteristics of surface cracking (i.e. sub-surface damage or SSD) formed during standard grinding processes have been investigated on fused silica glass. The SSD distributions of the ground surfaces were determined by: (1) creating a shallow (18-108 μm) wedge/taper on the surface by magneto-rheological finishing; (2) exposing the SSD by HF acid etching; and (3) performing image analysis of the observed cracks from optical micrographs taken along the surface taper. The observed surface cracks are characterized as near-surface lateral and deeper trailing indent type fractures (i.e., chatter marks). The SSD depth distributions are typically described by a single exponential distribution followed by an asymptotic cutoff in depth (c_max). The length of the trailing indent is strongly correlated with a given process. Using established fracture indentation relationships, it is shown that only a small fraction of the abrasive particles are being mechanically loaded and causing fracture, and it is likely the larger particles in the abrasive particle size distribution that bear the higher loads. The SSD depth was observed to increase with load and with a small amount of larger contaminant particles. Using a simple brittle fracture model for grinding, the SSD depth distribution has been related to the SSD length distribution to gain insight into the 'effective' size distribution of particles participating in the fracture. Both the average crack length and the surface roughness were found to scale linearly with the maximum SSD depth (c_max). These relationships can serve as useful rules-of-thumb for nondestructively estimating SSD depth and for identifying the process that caused the SSD. In certain applications such as high intensity lasers, SSD on glass optics can serve as a reservoir for minute amounts of impurities that absorb the high intensity laser light and lead to subsequent laser-induced surface damage. Hence a more scientific understanding of SSD formation can provide a means to establish recipes to fabricate SSD-free, laser damage resistant optical surfaces.
NASA Technical Reports Server (NTRS)
Zak, Michail; Williams, Colin P.
1997-01-01
The capacity of classical neurocomputers is limited by the number of classical degrees of freedom, which is roughly proportional to the size of the computer. By contrast, a hypothetical quantum neurocomputer can implement an exponentially large number of degrees of freedom within the same size. In this paper an attempt is made to reconcile the linear reversible structure of quantum evolution with the nonlinear irreversible dynamics of neural nets.
NASA Astrophysics Data System (ADS)
Tian, Xiaoyu; Li, Xiang; Segars, W. Paul; Frush, Donald P.; Samei, Ehsan
2012-03-01
The purpose of this work was twofold: (a) to estimate patient- and cohort-specific radiation dose and cancer risk index for abdominopelvic computed tomography (CT) scans; (b) to evaluate the effects of patient anatomical characteristics (size, age, and gender) and CT scanner model on dose and risk conversion coefficients. The study included 100 patient models (42 pediatric models, 58 adult models) and multi-detector-array CT scanners from two commercial manufacturers (LightSpeed VCT, GE Healthcare; SOMATOM Definition Flash, Siemens Healthcare). A previously validated Monte Carlo program was used to simulate organ dose for each patient model and each scanner, from which DLP-normalized effective dose (k factor) and DLP-normalized risk index (q factor) values were derived. The k factor decreased exponentially with increasing patient size. For a given gender, the q factor decreased exponentially with both increasing patient size and increasing patient age. The discrepancies in k and q factors across scanners were on average 8% and 15%, respectively. This study demonstrates the feasibility of estimating patient-specific organ dose and cohort-specific effective dose and risk index in abdominopelvic CT requiring only knowledge of patient size, gender, and age.
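The reported exponential size dependence can be captured with a two-parameter fit of the form k(d) = a·exp(-b·d). The sketch below uses invented diameters and k values purely to illustrate the functional form, not data from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Invented effective patient diameters (cm) and k factors (placeholder values).
diam_cm = np.array([15.0, 18.0, 22.0, 26.0, 30.0, 34.0, 38.0])
k_factor = np.array([0.030, 0.025, 0.019, 0.015, 0.012, 0.009, 0.007])

def model(d, a, b):
    return a * np.exp(-b * d)   # exponential decrease with patient size

(a, b), _ = curve_fit(model, diam_cm, k_factor, p0=(0.05, 0.05))
print(f"k(d) ~ {a:.4f} * exp(-{b:.4f} * d)")
```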
NASA Astrophysics Data System (ADS)
Mascaro, Giuseppe
2018-04-01
This study uses daily rainfall records of a dense network of 240 gauges in central Arizona to gain insight into (i) the variability of the seasonal distributions of rainfall extremes; (ii) how the seasonal distributions affect the shape of the annual distribution; and (iii) the presence of spatial patterns and orographic control for these distributions. To this end, recent methodological advancements in peak-over-threshold analysis and application of the Generalized Pareto Distribution (GPD) were used to assess the suitability of the GPD hypothesis and improve the estimation of its parameters, while limiting the effect of short sample sizes. The distribution of daily rainfall extremes was found to be heavy-tailed (i.e., GPD shape parameter ξ > 0) during the summer season, dominated by convective monsoonal thunderstorms. The exponential distribution (a special case of the GPD with ξ = 0) was instead shown to be appropriate for modeling wintertime daily rainfall extremes, mainly caused by cold fronts transported by westerly flow. The annual distribution exhibited a mixed behavior, with lighter upper tails than those found in summer. A hybrid model mixing the two seasonal distributions was demonstrated to be capable of reproducing the annual distribution. Organized spatial patterns, mainly controlled by elevation, were observed for the GPD scale parameter, while ξ showed no clear dependence on location or orography. The quantiles returned by the GPD were found to be very similar to those provided by the National Oceanic and Atmospheric Administration (NOAA) Atlas 14, which used the Generalized Extreme Value (GEV) distribution. Results of this work are useful to improve statistical modeling of daily rainfall extremes at high spatial resolution and provide diagnostic tools for assessing the ability of climate models to simulate extreme events.
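A minimal peaks-over-threshold sketch along these lines, using scipy's generalized Pareto fit on a synthetic rainfall record; the series and the 95th-percentile threshold are illustrative choices, not the study's.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
daily_rain_mm = rng.gamma(shape=0.4, scale=8.0, size=20000)  # synthetic record

threshold = np.quantile(daily_rain_mm, 0.95)
excesses = daily_rain_mm[daily_rain_mm > threshold] - threshold

# Fix loc = 0 so only the shape (xi) and scale are estimated from the excesses.
xi, _, scale = genpareto.fit(excesses, floc=0)
print(f"GPD shape xi = {xi:.3f}  (xi = 0 recovers the exponential tail)")
```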
Development and growth of fruit bodies and crops of the button mushroom, Agaricus bisporus.
Straatsma, Gerben; Sonnenberg, Anton S M; van Griensven, Leo J L D
2013-10-01
We studied the appearance of fruit body primordia, the growth of individual fruit bodies, and the development of the consecutive flushes of the crop. Relative growth, measured as cap expansion, was not constant. It started extremely rapidly and slowed to an exponential rate with a diameter doubling time of 1.7 d, until fruit bodies showed maturation by veil breaking. Initially many outgrowing primordia were arrested, indicating nutritional competition. After reaching 10 mm diameter, no growth arrest occurred; all growing individuals, whether relatively large or small, showed an exponential increase of both cap diameter and biomass until veil breaking. Biomass doubled in 0.8 d. Exponential growth indicates the absence of competition. Apparently there exist differential nutritional requirements for early growth and for later, continuing growth. Flushing was studied by applying different picking sizes. An ordinary flushing pattern occurred at an immature picking size of 8 mm diameter (picking mushrooms once a day with a diameter above 8 mm). The smallest picking size yielded the highest number of mushrooms picked, confirming the competition and arrested growth of outgrowing primordia: competition seems less if outgrowing primordia are removed early. The flush duration (i.e. between the first and last picking moments) was not affected by picking size. At small picking size, the subsequent flushes were not fully separated in time but overlapped. Within 2 d after picking the first individuals of the first flush, primordia for the second flush started outgrowth. Our work supports the view that the acquisition of nutrients by the mycelium is demand rather than supply driven. For formation and early outgrowth of primordia, indications were found for an alternation of local and global control, at least in the casing layer. Combining all these data, we postulate that flushing is the consequence of the depletion of some unknown specific nutrition required by outgrowing primordia. Copyright © 2013 The British Mycological Society. Published by Elsevier Ltd. All rights reserved.
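For reference, the doubling times quoted in the abstract above translate into specific exponential growth rates via μ = ln 2 / t_d:

```latex
\mu_{\text{diameter}} = \frac{\ln 2}{1.7\ \text{d}} \approx 0.41\ \text{d}^{-1},
\qquad
\mu_{\text{biomass}} = \frac{\ln 2}{0.8\ \text{d}} \approx 0.87\ \text{d}^{-1}.
```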
Bodunov, E N; Antonov, Yu A; Simões Gamboa, A L
2017-03-21
The non-exponential room temperature luminescence decay of colloidal quantum dots is often well described by a stretched exponential function. However, the physical meaning of the parameters of the function is not clear in the majority of cases reported in the literature. In this work, the room temperature stretched exponential luminescence decay of colloidal quantum dots is investigated theoretically in an attempt to identify the underlying physical mechanisms associated with the parameters of the function. Three classes of non-radiative transition processes between the excited and ground states of colloidal quantum dots are discussed: long-range resonance energy transfer, multiphonon relaxation, and contact quenching without diffusion. It is shown that multiphonon relaxation cannot explain a stretched exponential functional form of the luminescence decay while such dynamics of relaxation can be understood in terms of long-range resonance energy transfer to acceptors (molecules, quantum dots, or anharmonic molecular vibrations) in the environment of the quantum dots acting as energy-donors or by contact quenching by acceptors (surface traps or molecules) distributed statistically on the surface of the quantum dots. These non-radiative transition processes are assigned to different ranges of the stretching parameter β.
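For concreteness, the stretched exponential (Kohlrausch) decay referred to throughout is conventionally written as

```latex
I(t) = I_0 \exp\left[-\left(\frac{t}{\tau}\right)^{\beta}\right],
\qquad 0 < \beta \le 1,
```

with τ a characteristic decay time and β the stretching parameter whose physical ranges the abstract associates with the different quenching mechanisms.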
NASA Astrophysics Data System (ADS)
Xin, Jianting; He, Weihua; Chu, Genbai; Gu, Yuqiu
2017-06-01
Dynamic fragmentation of metal under shock pressure is an important issue for both fundamental science and practical applications. In recent decades, lasers have provided a promising shock-loading technique for investigating dynamic fragmentation under extreme conditions of high strain rate. Our group has performed experimental investigations of dynamic fragmentation under laser shock loading by soft recovery and X-ray radiography at the SG-III prototype laser facility. The fragments produced under different loading pressures were recovered in PMP foam and analyzed by X-ray micro-tomography and an improved watershed method. The experimental results showed that a bilinear exponential distribution is more appropriate for representing the fragment size distribution. We also developed an X-ray radiography technique. Owing to its inherent advantages over shadowgraphy, X-ray radiography can potentially determine material densities quantitatively by measuring the X-ray transmission. Our group investigated the dynamic process of microjetting with this technique; the recorded radiographic images show clear microjetting from the triangular grooves in the free surface of the tin sample.
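A bilinear exponential cumulative size distribution of the kind reported can be fitted with two decay scales. The fragment counts below are synthetic placeholders used only to illustrate the two-term functional form.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic cumulative fragment counts N(> size); placeholder values only.
size_um = np.array([10.0, 20.0, 40.0, 60.0, 90.0, 130.0, 180.0, 250.0, 350.0])
n_larger = np.array([950.0, 700.0, 420.0, 270.0, 150.0, 80.0, 35.0, 12.0, 3.0])

def bilinear(s, a1, s1, a2, s2):
    # Sum of two exponentials with distinct characteristic sizes s1 and s2.
    return a1 * np.exp(-s / s1) + a2 * np.exp(-s / s2)

popt, _ = curve_fit(bilinear, size_um, n_larger, p0=(800.0, 30.0, 200.0, 150.0))
print("A1, s1, A2, s2 =", np.round(popt, 2))
```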
Scaling Behavior of Firm Growth
NASA Astrophysics Data System (ADS)
Stanley, Michael H. R.; Nunes Amaral, Luis A.; Buldyrev, Sergey V.; Havlin, Shlomo; Leschhorn, Heiko; Maass, Philipp; Salinger, Michael A.; Stanley, H. Eugene
1996-03-01
The theory of the firm is of considerable interest in economics. The standard microeconomic theory of the firm is largely a static model and has thus proved unsatisfactory for addressing inherently dynamic issues such as the growth of economies. In recent years, many have attempted to develop richer models that provide a more accurate representation of firm dynamics due to learning, innovative effort, and the development of organizational infrastructure. The validity of these new, inherently dynamic theories depends on their consistency with the statistical properties of firm growth, e.g. the relationship between growth rates and firm size. Using the Compustat database over the time period 1975-1991, we find: (i) the distribution of annual growth rates for firms with approximately the same sales is exponential in the logarithm of the growth rate, and (ii) the fluctuations in the growth rates (measured by the width of this distribution) scale as a power law with firm sales. We place these findings of scaling behavior in the context of conventional economics by considering firm growth dynamics with temporal correlations and also by considering a hierarchical organization of the departments of a firm.
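A compact way to write the two findings is a Laplace (double-exponential) form for the conditional distribution of log growth rates r given sales S, with a power-law width; this is a sketch of the standard parameterization, not a quotation from the paper:

```latex
p(r \mid S) = \frac{1}{\sqrt{2}\,\sigma(S)}
\exp\left(-\frac{\sqrt{2}\,\lvert r - \bar{r}(S)\rvert}{\sigma(S)}\right),
\qquad \sigma(S) \propto S^{-\beta}.
```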
Hu, L H; Fong, N K; Yang, L Z; Chow, W K; Li, Y Z; Huo, R
2007-02-09
Smoke and toxic gases, such as carbon monoxide, are the most fatal factors in fires. This paper models fire-induced smoke spread and carbon monoxide transport in an 88 m long channel using the Fire Dynamics Simulator (FDS) with large eddy simulation (LES). FDS is a well-established fire dynamics computational fluid dynamics (CFD) program developed by the National Institute of Standards and Technology (NIST). Two full-scale experiments with fire sizes of 0.75 and 1.6 MW were conducted in this channel to validate the program. The spread of the fire-induced smoke flow, the smoke temperature distribution along the channel, and the carbon monoxide concentration at an assigned position were measured. The FDS simulation results were compared with the experimental data, showing fairly good agreement. The validation work is then extended to numerically study the carbon monoxide concentration distribution, both vertically and longitudinally, in this long channel. Results showed that the carbon monoxide concentration increases linearly with height above the floor and decreases exponentially with distance from the fire source.
Distributed synchronization control of complex networks with communication constraints.
Xu, Zhenhua; Zhang, Dan; Song, Hongbo
2016-11-01
This paper is concerned with the distributed synchronization control of complex networks with communication constraints. In this work, the controllers communicate with each other through a wireless network, acting as a controller network. Because of the constrained transmission power, packet size reduction and transmission rate reduction schemes are proposed to help reduce the communication load of the controller network. The packet dropout problem is also considered in the controller design, since it is often encountered in networked control systems. We show that the closed-loop system can be modeled as a switched system with uncertainties and random variables. By resorting to the switched-system approach and stochastic system analysis methods, a new sufficient condition is first proposed such that exponential synchronization is guaranteed in the mean-square sense. The controller gains are determined using the well-known cone complementarity linearization (CCL) algorithm. Finally, a simulation study is performed, which demonstrates the effectiveness of the proposed design algorithm. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Weigert, Claudia; Steffler, Fabian; Kurz, Tomas; Shellhammer, Thomas H.; Methner, Frank-Jürgen
2009-01-01
The measurement of yeast's intracellular pH (ICP) is a proven method for determining yeast vitality. Vitality describes the condition or health of viable cells, as opposed to viability, which distinguishes living from dead cells. In contrast to fluorescence photometric measurements, which show only average ICP values of a population, flow cytometry allows the presentation of an ICP distribution. By examining six repeated propagations with three separate growth phases (lag, exponential, and stationary), the ICP method previously established for photometry was transferred successfully to flow cytometry using the pH-dependent fluorescent probe 5,6-carboxyfluorescein. The correlation between the two methods was good (r² = 0.898, n = 18). With both methods it is possible to track the course of the growth phases. Although photometry did not yield significant differences between the exponential and stationary phases (P = 0.433), ICP via flow cytometry did (P = 0.012). Yeast in the exponential phase has a unimodal ICP distribution, reflective of a homogeneous population; yeast in the stationary phase, however, displays a broader ICP distribution, and subpopulations could be defined using the flow cytometry method. In conclusion, flow cytometry yielded specific evidence of the heterogeneity in vitality of a yeast population as measured via ICP. In contrast to photometry, flow cytometry provides more information about the yeast population's vitality in a short measurement, which is suitable for routine analysis. PMID:19581482
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hernandez, Andrew M., E-mail: amhern@ucdavis.edu; Seibert, J. Anthony; Boone, John M.
2015-11-15
Purpose: Current dosimetry methods in mammography assume that the breast is comprised of a homogeneous mixture of glandular and adipose tissues. Three-dimensional (3D) dedicated breast CT (bCT) data sets were used previously to assess the complex anatomical structure within the breast, characterizing the statistical distribution of glandular tissue in the breast. The purpose of this work was to investigate the effect of bCT-derived heterogeneous glandular distributions on dosimetry in mammography. Methods: bCT-derived breast diameters, volumes, and 3D fibroglandular distributions were used to design realistic compressed breast models comprised of heterogeneous distributions of glandular tissue. The bCT-derived glandular distributions were fit to biGaussian functions and used as probability density maps to assign the density distributions within compressed breast models. The MCNPX 2.6.0 Monte Carlo code was used to estimate monoenergetic normalized mean glandular dose "DgN(E)" values in mammography geometry. The DgN(E) values were then weighted by typical mammography x-ray spectra to determine polyenergetic DgN (pDgN) coefficients for heterogeneous (pDgN_hetero) and homogeneous (pDgN_homo) cases. The dependence of estimated pDgN values on phantom size, volumetric glandular fraction (VGF), x-ray technique factors, and location of the heterogeneous glandular distributions was investigated. Results: The pDgN_hetero coefficients were on average 35.3% (SD, 4.1) and 24.2% (SD, 3.0) lower than the pDgN_homo coefficients for the Mo–Mo and W–Rh x-ray spectra, respectively, across all phantom sizes and VGFs when the glandular distributions were centered within the breast phantom in the coronal plane. At constant breast size, increasing VGF from 7.3% to 19.1% led to a reduction in pDgN_hetero relative to pDgN_homo of 23.6%–27.4% for a W–Rh spectrum. Displacement of the glandular distribution, at a distance equal to 10% of the compressed breast width in the superior and inferior directions, resulted in a 37.3% and a −26.6% change in the pDgN_hetero coefficient, respectively, relative to the centered distribution for the Mo–Mo spectrum. Lateral displacement of the glandular distribution, at a distance equal to 10% of the compressed breast width, resulted in a 1.5% change in the pDgN_hetero coefficient relative to the centered distribution for the W–Rh spectrum. Conclusions: Introducing bCT-derived heterogeneous glandular distributions into mammography phantom design resulted in decreased glandular dose relative to the widely used homogeneous assumption. A homogeneous distribution overestimates the amount of glandular tissue near the entrant surface of the breast, where dose deposition is exponentially higher. While these findings are based on clinically measured distributions of glandular tissue in a large cohort of women, future work is required to improve the classification of glandular distributions based on breast size and overall glandular fraction.
Initial mass function of planetesimals formed by the streaming instability
NASA Astrophysics Data System (ADS)
Schäfer, Urs; Yang, Chao-Chin; Johansen, Anders
2017-01-01
The streaming instability is a mechanism to concentrate solid particles into overdense filaments that undergo gravitational collapse and form planetesimals. However, it remains unclear how the initial mass function of these planetesimals depends on the box dimensions of numerical simulations. To resolve this, we perform simulations of planetesimal formation with the largest box dimensions to date, allowing planetesimals to form simultaneously in multiple filaments that can only emerge within such large simulation boxes. In our simulations, planetesimals with sizes between 80 km and several hundred kilometers form. We find that a power law with a rather shallow exponential cutoff at the high-mass end represents the cumulative birth mass function better than an integrated power law. The steepness of the exponential cutoff is largely independent of box dimensions and resolution, while the exponent of the power law is not constrained at the resolutions we employ. Moreover, we find that the characteristic mass scale of the exponential cutoff correlates with the mass budget in each filament. Together with previous studies of high-resolution simulations with small box domains, our results therefore imply that the cumulative birth mass function of planetesimals is consistent with an exponentially tapered power law with a power-law exponent of approximately -1.6 and a steepness of the exponential cutoff in the range of 0.3-0.4.
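In formula form, the cumulative birth mass function favored by these simulations is an exponentially tapered power law. The symbols here are generic notation introduced for reference; M_exp denotes the characteristic cutoff mass that the abstract reports as correlated with the filament mass budget:

```latex
N(>M) \propto M^{\alpha}\,
\exp\left[-\left(\frac{M}{M_{\mathrm{exp}}}\right)^{\gamma}\right],
\qquad \alpha \approx -1.6,\quad \gamma \approx 0.3\text{--}0.4.
```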
Theory for Transitions Between Exponential and Stationary Phases: Universal Laws for Lag Time
NASA Astrophysics Data System (ADS)
Himeoka, Yusuke; Kaneko, Kunihiko
2017-04-01
The quantitative characterization of bacterial growth has attracted substantial attention since Monod's pioneering study. Theoretical and experimental works have uncovered several laws for describing the exponential growth phase, in which the number of cells grows exponentially. However, microorganism growth also exhibits lag, stationary, and death phases under starvation conditions, in which cell growth is highly suppressed, and for these phases quantitative laws or theories are markedly underdeveloped. In fact, the models commonly adopted for the exponential phase, which consist of autocatalytic chemical components including ribosomes, can only show exponential growth or decay in a population; thus, phases that halt growth are not realized. Here, we propose a simple, coarse-grained cell model that includes an extra class of macromolecular components in addition to the autocatalytic active components that facilitate cellular growth. These extra components form a complex with the active components to inhibit the catalytic process. Depending on the nutrient condition, the model exhibits typical transitions among the lag, exponential, stationary, and death phases. Furthermore, the lag time needed for growth recovery after starvation follows the square root of the starvation time and is inversely related to the maximal growth rate. This is in agreement with experimental observations, in which the length of time of cell starvation is memorized in the slow accumulation of molecules. Moreover, the distribution of lag times among cells is skewed, with a long tail. If the starvation time is longer, an exponential tail appears, which is also consistent with experimental data. Our theory further predicts a strong dependence of lag time on the speed of substrate depletion, which can be tested experimentally. The present model and theoretical analysis provide universal growth laws beyond the exponential phase, offering insight into how cells halt growth without entering the death phase.
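The two lag-time laws stated above can be summarized as follows, with notation introduced here for convenience (t_starv is the starvation time and μ_max the maximal growth rate):

```latex
t_{\mathrm{lag}} \propto \sqrt{t_{\mathrm{starv}}},
\qquad
t_{\mathrm{lag}} \propto \frac{1}{\mu_{\max}}.
```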
Accumulated distribution of material gain at dislocation crystal growth
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rakin, V. I., E-mail: rakin@geo.komisc.ru
2016-05-15
A model for the slowing of the tangential growth rate of an elementary step during dislocation crystal growth is proposed, based on an exponential law for the distribution of impurity particles over adsorption energy. It is established that the statistical distribution of material gain on structurally equivalent faces obeys the Erlang law. It is proposed that the Erlang distribution be used to calculate the occurrence rates of the morphological combinatorial types of polyhedra representing real simple crystallographic forms.
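For reference, the Erlang density with shape k and rate λ, which the material-gain distribution is stated to follow, is

```latex
f(x;\, k, \lambda) = \frac{\lambda^{k} x^{k-1} e^{-\lambda x}}{(k-1)!},
\qquad x \ge 0,\ k \in \mathbb{N},\ \lambda > 0.
```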
NASA Astrophysics Data System (ADS)
Wang, Yue; Wang, Ping; Liu, Xiaoxia; Cao, Tian
2018-03-01
The performance of a decode-and-forward dual-hop mixed radio frequency/free-space optical (RF/FSO) system in an urban area is studied. The RF link is modeled by the Nakagami-m distribution and the FSO link is described by a composite exponentiated Weibull (EW) fading channel with nonzero boresight pointing errors (NBPE). For comparison, average bit error rate (ABER) results without pointing errors (PE) and with zero boresight pointing errors (ZBPE) are also provided. The closed-form expression for the ABER of the RF link is derived with the help of the hypergeometric function, and that of the FSO link is obtained using Meijer's G-function and generalized Gauss-Laguerre quadrature. Then, the end-to-end ABERs with binary phase shift keying modulation are obtained from the computed ABER results of the RF and FSO links. The end-to-end ABER performance is further analyzed for different Nakagami-m parameters, turbulence strengths, receiver aperture sizes, and boresight displacements. The results show that, with ZBPE and NBPE considered, the FSO link suffers severe ABER degradation and becomes the dominant limitation of the mixed RF/FSO system in urban areas. However, aperture averaging can bring significant ABER improvement to this system. Monte Carlo simulation confirms the validity of the analytical ABER expressions.
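The exponentiated Weibull fading model used for the FSO link has the cumulative distribution below, in its standard form; η is a scale parameter and α, β are shape parameters (setting α = 1 recovers the ordinary Weibull model):

```latex
F_h(h) = \left[1 - \exp\!\left(-\left(\frac{h}{\eta}\right)^{\beta}\right)\right]^{\alpha},
\qquad h > 0.
```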
Macera, Márcia A C; Louzada, Francisco; Cancho, Vicente G; Fontes, Cor J F
2015-03-01
In this paper, we introduce a new model for recurrent event data characterized by a fully parametric baseline rate function based on the exponential-Poisson distribution. The model arises from a latent competing-risk scenario, in the sense that there is no information about which cause was responsible for the event occurrence; the time of each recurrence is then given by the minimum lifetime among all latent causes. The classical homogeneous Poisson process is a particular case of the new model. The properties of the proposed model are discussed, including its hazard rate function, survival function, and ordinary moments. The inferential procedure is based on the maximum likelihood approach. We address the important issue of model selection between the proposed model and its particular case via the likelihood ratio test and the score test. Goodness of fit of the recurrent event models is assessed using Cox-Snell residuals. A simulation study evaluates the performance of the estimation procedure for small and moderate sample sizes. Applications to two real data sets are provided to illustrate the proposed methodology. One of them, first analyzed by our team of researchers, concerns the recurrence of malaria, an infectious disease caused by a protozoan parasite that infects red blood cells. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
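The exponential-Poisson building block mentioned in the abstract above, obtained as the minimum of a zero-truncated Poisson(θ) number of independent exponential(λ) lifetimes, has the density below in a standard parameterization (written here for reference, not quoted from the paper):

```latex
f(x) = \frac{\theta \lambda}{1 - e^{-\theta}}\,
e^{-\theta - \lambda x + \theta e^{-\lambda x}},
\qquad x > 0,\ \theta > 0,\ \lambda > 0.
```

As θ → 0 this density reduces to the exponential density λe^{-λx}, which is consistent with the homogeneous Poisson process arising as the particular case noted above.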