Sample records for double exponential distribution

  1. Design and implementation of the NaI(Tl)/CsI(Na) detectors output signal generator

    NASA Astrophysics Data System (ADS)

    Zhou, Xu; Liu, Cong-Zhan; Zhao, Jian-Ling; Zhang, Fei; Zhang, Yi-Fei; Li, Zheng-Wei; Zhang, Shuo; Li, Xu-Fang; Lu, Xue-Feng; Xu, Zhen-Ling; Lu, Fang-Jun

    2014-02-01

    We designed and implemented a signal generator that can simulate the output of the pre-amplifiers of the NaI(Tl)/CsI(Na) detectors onboard the Hard X-ray Modulation Telescope (HXMT). Implementing the design on an FPGA (Field Programmable Gate Array) in VHDL and adding a random component, we produced a double-exponential random pulse signal generator. The statistical distribution of the signal amplitude is programmable, and the time intervals between adjacent signals statistically follow a negative exponential distribution.
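
    The pulse train described above can be sketched in Python (NumPy); the pulse shape, time constants, rate, and amplitude distribution below are hypothetical illustrations, not the HXMT design values.

```python
import numpy as np

rng = np.random.default_rng(0)

def double_exp_pulse(t, amp, tau_rise=5e-8, tau_fall=5e-7):
    """Double-exponential pulse shape (fast rise, slow fall), zero for t < 0."""
    t = np.asarray(t, dtype=float)
    out = amp * (np.exp(-t / tau_fall) - np.exp(-t / tau_rise))
    return np.where(t >= 0, out, 0.0)

# Exponentially distributed intervals between pulses (Poisson arrivals)
rate = 1e4                                   # hypothetical mean event rate [1/s]
n_pulses = 200
intervals = rng.exponential(1.0 / rate, n_pulses)
arrival_times = np.cumsum(intervals)

# Programmable amplitude distribution (uniform here, as an example)
amps = rng.uniform(0.5, 1.0, n_pulses)

# Sample the summed pulse train on a time grid
t_grid = np.linspace(0.0, arrival_times[-1], 20_000)
signal = np.zeros_like(t_grid)
for t0, a in zip(arrival_times, amps):
    signal += double_exp_pulse(t_grid - t0, a)
```

    The hardware generator realizes the same two ingredients digitally: a random inter-arrival clock and a programmable amplitude lookup feeding a double-exponential shaping stage.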

  2. Difference in Dwarf Galaxy Surface Brightness Profiles as a Function of Environment

    NASA Astrophysics Data System (ADS)

    Lee, Youngdae; Park, Hong Soo; Kim, Sang Chul; Moon, Dae-Sik; Lee, Jae-Joon; Kim, Dong-Jin; Cha, Sang-Mok

    2018-05-01

    We investigate surface brightness profiles (SBPs) of dwarf galaxies in field, group, and cluster environments. With deep BVI images from the Korea Microlensing Telescope Network Supernova Program, SBPs of 38 dwarfs in the NGC 2784 group are fitted by a single-exponential or double-exponential model. We find that 53% of the dwarfs are fitted with single-exponential profiles (“Type I”), while 47% show double-exponential profiles; 37% of all dwarfs have outer parts smaller than their inner parts (“Type II”), while 10% have outer parts larger than their inner parts (“Type III”). We compare these results with those in the field and in the Virgo cluster, where the SBP types of 102 field dwarfs are compiled from a previous study and the SBP types of 375 cluster dwarfs are measured using SDSS r-band images. The distributions of SBP types differ in the three environments: the most common SBP types are Type II in the field, Types I and II in the NGC 2784 group, and Types I and III in the Virgo cluster. After comparing the sizes of dwarfs in different environments, we suggest that environmental effects change the sizes of some dwarfs, so that SBP types can be transformed, which explains the different distributions of SBP types in the three environments. We discuss possible environmental mechanisms for the transformation of SBP types. Based on data collected at KMTNet Telescopes and SDSS.

  3. Nuclear counting filter based on a centered Skellam test and a double exponential smoothing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coulon, Romain; Kondrasovs, Vladimir; Dumazert, Jonathan

    2015-07-01

    Online nuclear counting represents a challenge due to the stochastic nature of radioactivity. The count data have to be filtered in order to provide a precise and accurate estimation of the count rate, with a response time compatible with the application in view. An innovative filter addressing this issue is presented in this paper. It is a nonlinear filter based on a Centered Skellam Test (CST), giving a local maximum likelihood estimation of the signal based on a Poisson distribution assumption. This nonlinear approach smooths the counting signal while maintaining a fast response when abrupt changes in activity occur. The filter has been improved by the implementation of Brown's double Exponential Smoothing (BES). The filter has been validated and compared to other state-of-the-art smoothing filters. The CST-BES filter shows a significant improvement over all tested smoothing filters.
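
    The Skellam-test stage is specific to the paper; the Brown double exponential smoothing component on its own can be sketched as follows, applied to hypothetical Poisson count data with a step change in activity.

```python
import numpy as np

def brown_des(x, alpha=0.2):
    """Brown's double exponential smoothing: returns the level estimate 2*s1 - s2."""
    s1 = np.empty(len(x))
    s2 = np.empty(len(x))
    s1[0] = s2[0] = x[0]
    for t in range(1, len(x)):
        s1[t] = alpha * x[t] + (1 - alpha) * s1[t - 1]   # first smoothing pass
        s2[t] = alpha * s1[t] + (1 - alpha) * s2[t - 1]  # second smoothing pass
    return 2 * s1 - s2

rng = np.random.default_rng(1)
# Hypothetical count rate with an abrupt activity change at t = 200
true_rate = np.concatenate([np.full(200, 50.0), np.full(200, 120.0)])
counts = rng.poisson(true_rate).astype(float)     # Poisson counting noise
smoothed = brown_des(counts, alpha=0.2)
```

    The CST stage in the paper decides when to reset the smoother, which is what preserves the fast response to the step; plain BES alone trades responsiveness against noise via `alpha`.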

  4. The mechanism of double-exponential growth in hyper-inflation

    NASA Astrophysics Data System (ADS)

    Mizuno, T.; Takayasu, M.; Takayasu, H.

    2002-05-01

    Analyzing historical data of price indices, we find an extraordinary growth phenomenon in several examples of hyper-inflation in which price changes are approximated nicely by double-exponential functions of time. In order to explain such behavior we introduce a general coarse-graining technique from physics, the Monte Carlo renormalization group method, to the price dynamics. Starting from a microscopic stochastic equation describing dealers’ actions in open markets, we obtain a macroscopic noiseless equation of price consistent with the observation. The effect of auto-catalytic shortening of the characteristic time caused by mob psychology is shown to be responsible for the double-exponential behavior.

  5. Wide variation of prostate-specific antigen doubling time of untreated, clinically localized, low-to-intermediate grade, prostate carcinoma.

    PubMed

    Choo, Richard; Klotz, Laurence; Deboer, Gerrit; Danjoux, Cyril; Morton, Gerard C

    2004-08-01

    To assess the prostate specific antigen (PSA) doubling time of untreated, clinically localized, low-to-intermediate grade prostate carcinoma. A prospective single-arm cohort study has been in progress since November 1995 to assess the feasibility of a watchful-observation protocol with selective delayed intervention for clinically localized, low-to-intermediate grade prostate adenocarcinoma. The PSA doubling time was estimated from a linear regression of ln(PSA) against time, assuming a simple exponential growth model. As of March 2003, 231 patients had at least 6 months of follow-up (median 45) and at least three PSA measurements (median 8, range 3-21). The distribution of the doubling time was: < 2 years, 26 patients; 2-5 years, 65; 5-10 years, 42; 10-20 years, 26; 20-50 years, 16; >50 years, 56. The median doubling time was 7.0 years; 42% of men had a doubling time of >10 years. The doubling time of untreated clinically localized, low-to-intermediate grade prostate cancer varies widely.
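
    The doubling-time estimate described above (linear regression of ln(PSA) against time under a simple exponential growth model) can be sketched in Python; the PSA values, noise level, and 7-year doubling time below are hypothetical, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical patient: PSA doubling every 7 years, 8 measurements over 4 years,
# with small multiplicative measurement noise
t_years = np.linspace(0.0, 4.0, 8)
true_doubling = 7.0
psa = 5.0 * np.exp(np.log(2.0) / true_doubling * t_years)
psa *= np.exp(rng.normal(0.0, 0.03, t_years.size))

# Simple exponential growth model: the slope of ln(PSA) vs time is the growth
# rate, and the doubling time is ln(2) divided by that slope
slope, intercept = np.polyfit(t_years, np.log(psa), 1)
doubling_time = np.log(2.0) / slope
```

    With only a handful of noisy measurements per patient, the regression slope carries substantial uncertainty, which is consistent with the wide spread of doubling times reported in the study.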

  6. Modeling of single event transients with dual double-exponential current sources: Implications for logic cell characterization

    DOE PAGES

    Black, Dolores Archuleta; Robinson, William H.; Wilcox, Ian Zachary; ...

    2015-08-07

    Single event effects (SEE) are a reliability concern for modern microelectronics. Bit corruptions can be caused by single event upsets (SEUs) in the storage cells or by sampling single event transients (SETs) from a logic path. Accordingly, an accurate prediction of soft error susceptibility from SETs requires good models to convert collected charge into compact descriptions of the current injection process. This paper describes a simple, yet effective, method to model the current waveform resulting from a charge collection event for SET circuit simulations. The model uses two double-exponential current sources in parallel, and the results illustrate why a conventional model based on one double-exponential source can be incomplete. Furthermore, a small set of logic cells with varying input conditions, drive strength, and output loading are simulated to extract the parameters for the dual double-exponential current sources. As a result, the parameters are based upon both the node capacitance and the restoring current (i.e., drive strength) of the logic cell.
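
    A minimal sketch of the dual double-exponential idea in Python: two classic double-exponential current pulses summed in parallel, with entirely hypothetical peak currents and time constants (the paper extracts its parameters from circuit simulation, not from values like these).

```python
import numpy as np

def double_exp_current(t, i_peak, tau_rise, tau_fall):
    """Classic double-exponential SET current pulse, zero for t < 0."""
    t = np.asarray(t, dtype=float)
    i = i_peak * (np.exp(-t / tau_fall) - np.exp(-t / tau_rise))
    return np.where(t >= 0, i, 0.0)

t = np.linspace(0.0, 2e-9, 2001)             # 2 ns simulation window

# Fast "prompt" component plus a slower second component in parallel
i_total = (double_exp_current(t, 1.2e-3, tau_rise=5e-12, tau_fall=5e-11)
           + double_exp_current(t, 0.3e-3, tau_rise=2e-11, tau_fall=4e-10))

# Collected charge is the time integral of the injected current (trapezoid rule)
dt = t[1] - t[0]
q_collected = float(np.sum((i_total[1:] + i_total[:-1]) * 0.5) * dt)
```

    The two components having different time constants is what a single double-exponential source cannot reproduce: the prompt spike and the longer plateau must be fitted separately.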

  7. Global exponential stability of octonion-valued neural networks with leakage delay and mixed delays.

    PubMed

    Popa, Călin-Adrian

    2018-06-08

    This paper discusses octonion-valued neural networks (OVNNs) with leakage delay, time-varying delays, and distributed delays, for which the states, weights, and activation functions belong to the normed division algebra of octonions. The octonion algebra is a nonassociative and noncommutative generalization of the complex and quaternion algebras, but does not belong to the category of Clifford algebras, which are associative. In order to avoid the nonassociativity of the octonion algebra and also the noncommutativity of the quaternion algebra, the Cayley-Dickson construction is used to decompose the OVNNs into 4 complex-valued systems. By using appropriate Lyapunov-Krasovskii functionals, with double and triple integral terms, the free weighting matrix method, and simple and double integral Jensen inequalities, delay-dependent criteria are established for the exponential stability of the considered OVNNs. The criteria are given in terms of complex-valued linear matrix inequalities, for two types of Lipschitz conditions which are assumed to be satisfied by the octonion-valued activation functions. Finally, two numerical examples illustrate the feasibility, effectiveness, and correctness of the theoretical results.

  8. The Secular Evolution Of Disc Galaxies And The Origin Of Exponential And Double Exponential Surface Density Profiles

    NASA Astrophysics Data System (ADS)

    Elmegreen, Bruce G.

    2016-10-01

    Exponential radial profiles are ubiquitous in spiral and dwarf irregular galaxies, but the origin of this structural form is not understood. This talk will review the observations of exponential and double exponential disks, considering both the light and the mass profiles, and the contributions from stars and gas. Several theories for this structure will also be reviewed, including primordial collapse, bar and spiral torques, clump torques, galaxy interactions, disk viscosity and other internal processes of angular momentum exchange, and stellar scattering off clumpy structure. The only process currently known that can account for this structure in the most theoretically difficult case is stellar scattering off disk clumps. Stellar orbit models suggest that such scattering can produce exponentials even in isolated dwarf irregulars that have no bars or spirals, little shear or viscosity, and profiles that extend too far for the classical Mestel case of primordial collapse with specific angular momentum conservation.

  9. φ meson production in Au + Au and p + p collisions at √s_NN = 200 GeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, J.; Adler, C.; Aggarwal, M.M.

    2004-06-01

    We report the STAR measurement of φ meson production in Au + Au and p + p collisions at √s_NN = 200 GeV. Using the event-mixing technique, the φ spectra and yields are obtained at midrapidity for five centrality bins in Au+Au collisions and for non-singly-diffractive p+p collisions. It is found that the φ transverse momentum distributions from Au+Au collisions are better fitted with a single-exponential while the p+p spectrum is better described by a double-exponential distribution. The measured nuclear modification factors indicate that φ production in central Au+Au collisions is suppressed relative to peripheral collisions when scaled by the number of binary collisions. The systematics versus centrality and the constant φ/K⁻ ratio versus beam species, centrality, and collision energy rule out kaon coalescence as the dominant mechanism for φ production.

  10. Transfer potentials shape and equilibrate monetary systems

    NASA Astrophysics Data System (ADS)

    Fischer, Robert; Braun, Dieter

    2003-04-01

    We analyze a monetary system of random money transfer on the basis of double-entry bookkeeping. Without boundary conditions, we do not reach a price equilibrium and violate the textbook formula of the economists' quantity theory of money (MV = PQ). To match the resulting quantity of money with the model assumption of a constant price, we have to impose boundary conditions. They either restrict specific transfers globally or impose transfers locally. Both connect through a general framework of transfer potentials. We show that either restricted or imposed transfers can shape Gaussian, tent-shape exponential, Boltzmann-exponential, Pareto, or periodic equilibrium distributions. We derive the master equation and find its general time-dependent approximate solution. An equivalent of the quantity theory for random money transfer under the boundary conditions of transfer potentials is given.
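
    The paper's transfer-potential framework is more general, but the baseline case can be sketched as a minimal random-transfer simulation: a randomly chosen pair of agents pools its money and splits it at a random fraction, conserving the total. This rule is known to relax to the Boltzmann-exponential equilibrium distribution; all parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

n_agents, n_steps = 500, 100_000
money = np.full(n_agents, 100.0)        # equal initial endowment

for _ in range(n_steps):
    i, j = rng.integers(0, n_agents, 2)
    if i == j:
        continue                        # a transfer needs two distinct agents
    eps = rng.random()
    pool = money[i] + money[j]          # pool and redistribute: total conserved
    money[i] = eps * pool
    money[j] = pool - money[i]
```

    For an exponential distribution the standard deviation equals the mean, which gives a quick numerical check that the system has equilibrated.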

  11. Modeling of magnitude distributions by the generalized truncated exponential distribution

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias

    2015-01-01

    The probability distribution of the magnitude can be modeled by an exponential distribution according to the Gutenberg-Richter relation. Two alternatives are the truncated exponential distribution (TED) and the cutoff exponential distribution (CED). The TED is frequently used in seismic hazard analysis although it has a weak point: when two TEDs with equal parameters except the upper bound magnitude are mixed, then the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters except the upper bound magnitude. This weakness is a principal problem as seismic regions are constructed scientific objects and not natural units. We overcome it by the generalization of the abovementioned exponential distributions: the generalized truncated exponential distribution (GTED). Therein, identical exponential distributions are mixed by the probability distribution of the correct cutoff points. This distribution model is flexible in the vicinity of the upper bound magnitude and is equal to the exponential distribution for smaller magnitudes. Additionally, the exponential distributions TED and CED are special cases of the GTED. We discuss the possible ways of estimating its parameters and introduce the normalized spacing for this purpose. Furthermore, we present methods for geographic aggregation and differentiation of the GTED and demonstrate the potential and universality of our simple approach by applying it to empirical data. The considerable improvement by the GTED in contrast to the TED is indicated by a large difference between the corresponding values of the Akaike information criterion.
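
    A minimal sketch of the baseline TED (the GTED generalization is the paper's contribution and is not reproduced here): inverse-CDF sampling from the truncated exponential Gutenberg-Richter law, with hypothetical parameters β = ln 10 (b-value of 1), lower bound magnitude 4, and upper bound magnitude 8.

```python
import numpy as np

rng = np.random.default_rng(4)

def ted_sample(n, beta, m0, m_max):
    """Inverse-CDF sampling from the truncated exponential distribution (TED).

    F(m) = (1 - exp(-beta*(m - m0))) / (1 - exp(-beta*(m_max - m0)))
    on [m0, m_max]; solving F(m) = u gives the expression below.
    """
    u = rng.random(n)
    c = 1.0 - np.exp(-beta * (m_max - m0))
    return m0 - np.log(1.0 - c * u) / beta

m = ted_sample(100_000, beta=np.log(10.0), m0=4.0, m_max=8.0)
```

    The theoretical mean of this TED is m0 + 1/β - (m_max - m0) e^{-β(m_max - m0)} / (1 - e^{-β(m_max - m0)}) ≈ 4.434 for these parameters, which the sample reproduces.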

  12. Extracting volatility signal using maximum a posteriori estimation

    NASA Astrophysics Data System (ADS)

    Neto, David

    2016-11-01

    This paper outlines a methodology to estimate a denoised volatility signal for foreign exchange rates using a hidden Markov model (HMM). For this purpose a maximum a posteriori (MAP) estimation is performed. A double exponential prior is used for the state variable (the log-volatility) in order to allow sharp jumps in realizations and then log-returns marginal distributions with heavy tails. We consider two routes to choose the regularization and we compare our MAP estimate to realized volatility measure for three exchange rates.
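
    The key property exploited above — that a double exponential (Laplace) prior permits sharp jumps — has a familiar closed form in the scalar Gaussian-likelihood case: the MAP estimate is soft-thresholding of the observation. A minimal sketch (this is the textbook scalar case, not the paper's full HMM):

```python
import numpy as np

def map_laplace(y, sigma2, lam):
    """MAP estimate of x from y = x + N(0, sigma2) noise under a Laplace prior
    with rate lam: argmin (y-x)^2/(2*sigma2) + lam*|x|, i.e. soft-thresholding."""
    thresh = lam * sigma2
    return np.sign(y) * np.maximum(np.abs(y) - thresh, 0.0)

y = np.array([-2.0, -0.3, 0.1, 0.8, 3.5])
x_hat = map_laplace(y, sigma2=1.0, lam=0.5)
```

    Small observations are shrunk exactly to zero while large ones survive almost unchanged, which is why the Laplace prior allows sharp jumps in the reconstructed log-volatility path.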

  13. Dynamic modeling of sludge compaction and consolidation processes in wastewater secondary settling tanks.

    PubMed

    Abusam, A; Keesman, K J

    2009-01-01

    The double exponential settling model is the widely accepted model for wastewater secondary settling tanks. However, this model does not accurately estimate solids concentrations in the settler underflow stream, mainly because sludge compression and consolidation processes are not considered. In activated sludge systems, accurate estimation of the solids in the underflow stream will facilitate the calibration process and can lead to correct estimates of, in particular, kinetic parameters related to biomass growth. Using principles of compaction and consolidation, as in soil mechanics, a dynamic model of the sludge consolidation processes taking place in secondary settling tanks is developed and incorporated into the commonly used double exponential settling model. The modified double exponential model is calibrated and validated using data obtained from a full-scale wastewater treatment plant. Good agreement between predicted and measured data confirmed the validity of the modified model.

  14. Determination of the direction to a source of antineutrinos via inverse beta decay in Double Chooz

    NASA Astrophysics Data System (ADS)

    Nikitenko, Ya.

    2016-11-01

    Determining the direction to a source of neutrinos (and antineutrinos) is an important problem for the physics of supernovae and of the Earth. The direction to a source of antineutrinos can be estimated through the reaction of inverse beta decay. We show that the reactor neutrino experiment Double Chooz has unique capabilities to study the antineutrino signal from point-like sources. Contemporary experimental data on antineutrino directionality are reviewed. A rigorous mathematical approach for neutrino direction studies has been developed. Exact expressions have been obtained for the precision of the simple mean estimator of the neutrino direction, for normal and exponential distributions, both for a finite sample and in the limiting case of many events.

  15. A new look at atmospheric carbon dioxide

    NASA Astrophysics Data System (ADS)

    Hofmann, David J.; Butler, James H.; Tans, Pieter P.

    Carbon dioxide is increasing in the atmosphere and is of considerable concern in global climate change because of its greenhouse gas warming potential. The rate of increase has accelerated since measurements began at Mauna Loa Observatory in 1958 where carbon dioxide increased from less than 1 part per million per year (ppm yr -1) prior to 1970 to more than 2 ppm yr -1 in recent years. Here we show that the anthropogenic component (atmospheric value reduced by the pre-industrial value of 280 ppm) of atmospheric carbon dioxide has been increasing exponentially with a doubling time of about 30 years since the beginning of the industrial revolution (˜1800). Even during the 1970s, when fossil fuel emissions dropped sharply in response to the "oil crisis" of 1973, the anthropogenic atmospheric carbon dioxide level continued increasing exponentially at Mauna Loa Observatory. Since the growth rate (time derivative) of an exponential has the same characteristic lifetime as the function itself, the carbon dioxide growth rate is also doubling at the same rate. This explains the observation that the linear growth rate of carbon dioxide has more than doubled in the past 40 years. The accelerating growth rate is simply the outcome of exponential growth in carbon dioxide with a nearly constant doubling time of about 30 years (about 2%/yr) and appears to have tracked human population since the pre-industrial era.
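
    The doubling-time argument can be sketched numerically: subtract the pre-industrial 280 ppm and regress the logarithm of the remainder against time. The CO2 values below are approximate Mauna Loa annual means, for illustration only.

```python
import numpy as np

# Approximate Mauna Loa annual-mean CO2 values (ppm); illustrative only
years = np.array([1960, 1970, 1980, 1990, 2000, 2008], dtype=float)
co2 = np.array([317.0, 326.0, 339.0, 354.0, 369.0, 385.0])

# Anthropogenic component: observed value minus the pre-industrial 280 ppm
anthro = co2 - 280.0

# Exponential growth: the slope k of ln(anthro) vs time gives doubling time ln(2)/k
k, intercept = np.polyfit(years, np.log(anthro), 1)
doubling_time = np.log(2.0) / k
```

    With these values the fitted doubling time comes out near 30 years, matching the paper's headline figure; a purely linear growth model would leave clear curvature in the log residuals.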

  16. Random phenotypic variation of yeast (Saccharomyces cerevisiae) single-gene knockouts fits a double pareto-lognormal distribution.

    PubMed

    Graham, John H; Robb, Daniel T; Poe, Amy R

    2012-01-01

    Distributed robustness is thought to influence the buffering of random phenotypic variation through the scale-free topology of gene regulatory, metabolic, and protein-protein interaction networks. If this hypothesis is true, then the phenotypic response to the perturbation of particular nodes in such a network should be proportional to the number of links those nodes make with neighboring nodes. This suggests a probability distribution approximating an inverse power-law of random phenotypic variation. Zero phenotypic variation, however, is impossible, because random molecular and cellular processes are essential to normal development. Consequently, a more realistic distribution should have a y-intercept close to zero in the lower tail, a mode greater than zero, and a long (fat) upper tail. The double Pareto-lognormal (DPLN) distribution is an ideal candidate distribution. It consists of a mixture of a lognormal body and upper and lower power-law tails. If our assumptions are true, the DPLN distribution should provide a better fit to random phenotypic variation in a large series of single-gene knockout lines than other skewed or symmetrical distributions. We fit a large published data set of single-gene knockout lines in Saccharomyces cerevisiae to seven different probability distributions: DPLN, right Pareto-lognormal (RPLN), left Pareto-lognormal (LPLN), normal, lognormal, exponential, and Pareto. The best model was judged by the Akaike Information Criterion (AIC). Phenotypic variation among gene knockouts in S. cerevisiae fits a double Pareto-lognormal (DPLN) distribution better than any of the alternative distributions, including the right Pareto-lognormal and lognormal distributions. A DPLN distribution is consistent with the hypothesis that developmental stability is mediated, in part, by distributed robustness, the resilience of gene regulatory, metabolic, and protein-protein interaction networks. 
Alternatively, multiplicative cell growth, and the mixing of lognormal distributions having different variances, may generate a DPLN distribution.
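
    The model-selection procedure (maximum-likelihood fits compared by AIC) can be sketched as follows. SciPy does not ship a DPLN distribution, so this hedged stand-in compares only standard candidates on synthetic right-skewed data that is lognormal by construction; the paper's actual comparison includes the Pareto-lognormal family.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Stand-in data: right-skewed "phenotypic variation" values, lognormal by construction
data = rng.lognormal(mean=0.0, sigma=0.6, size=2000)

def aic(dist, data, **fit_kw):
    """AIC = 2k - 2 ln L, with parameters fitted by maximum likelihood."""
    params = dist.fit(data, **fit_kw)
    loglik = np.sum(dist.logpdf(data, *params))
    return 2 * len(params) - 2 * loglik

candidates = [("lognormal", stats.lognorm, {"floc": 0}),
              ("gamma", stats.gamma, {"floc": 0}),
              ("exponential", stats.expon, {"floc": 0}),
              ("normal", stats.norm, {})]
scores = {name: aic(dist, data, **kw) for name, dist, kw in candidates}
best = min(scores, key=scores.get)
```

    Fixing `floc=0` keeps the fitted supports on the positive axis; with a few thousand observations the true generating family wins the AIC comparison decisively.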

  17. A New Insight into the Earthquake Recurrence Studies from the Three-parameter Generalized Exponential Distributions

    NASA Astrophysics Data System (ADS)

    Pasari, S.; Kundu, D.; Dikshit, O.

    2012-12-01

    Earthquake recurrence interval is one of the important ingredients of probabilistic seismic hazard assessment (PSHA) for any location. Exponential, gamma, Weibull, and lognormal distributions are well-established probability models for recurrence interval estimation, but they have certain shortcomings, so it is worth searching for alternative distributions. In this paper, we introduce a three-parameter (location, scale, and shape) exponentiated exponential distribution and investigate its scope as an alternative to the afore-mentioned distributions in earthquake recurrence studies. This distribution is a particular member of the exponentiated Weibull family. Despite its complicated form, it is widely accepted in medical and biological applications, and it shares many physical properties with the gamma and Weibull families. Unlike the gamma distribution, the hazard function of the generalized exponential distribution can be easily computed even if the shape parameter is not an integer. To assess the plausibility of this model, a complete and homogeneous earthquake catalogue of 20 events (M ≥ 7.0) spanning the period 1846 to 1995 from the North-East Himalayan region (20-32 deg N and 87-100 deg E) has been used. The model parameters are estimated using the maximum likelihood estimator (MLE) and the method of moments estimator (MOME). No geological or geophysical evidence has been considered in this calculation. The estimated conditional probability becomes quite high after about a decade for an elapsed time of 17 years (i.e. 2012). Moreover, this study shows that the generalized exponential distribution fits the above data more closely than the conventional models, and hence it is tentatively concluded that the generalized exponential distribution can be effectively considered in earthquake recurrence studies.
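
    The closed-form CDF and hazard function mentioned above are easy to sketch. For the generalized (exponentiated) exponential distribution, F(x) = (1 - e^{-λ(x-μ)})^α for x > μ; the hazard h = f/(1-F) is computable for any real shape α, and α = 1 recovers the ordinary exponential with constant hazard λ. Parameters below are hypothetical.

```python
import numpy as np

def gexp_cdf(x, alpha, lam, mu=0.0):
    """CDF of the exponentiated (generalized) exponential distribution:
    F(x) = (1 - exp(-lam*(x - mu)))**alpha for x > mu."""
    z = np.clip(np.asarray(x, dtype=float) - mu, 0.0, None)
    return (1.0 - np.exp(-lam * z)) ** alpha

def gexp_hazard(x, alpha, lam, mu=0.0):
    """Hazard h = f / (1 - F), in closed form, valid for non-integer alpha."""
    z = np.clip(np.asarray(x, dtype=float) - mu, 1e-12, None)
    u = 1.0 - np.exp(-lam * z)
    pdf = alpha * lam * np.exp(-lam * z) * u ** (alpha - 1.0)
    return pdf / (1.0 - u ** alpha)

# alpha = 1 reduces to the ordinary exponential: constant hazard lam
x = np.linspace(0.1, 30.0, 100)
h = gexp_hazard(x, alpha=1.0, lam=0.05)
```

    For α > 1 the hazard is increasing (recurrence becomes more likely as elapsed time grows), which is the behavior the paper exploits for conditional probability estimates.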

  18. Impact of inhomogeneity on SH-type wave propagation in an initially stressed composite structure

    NASA Astrophysics Data System (ADS)

    Saha, S.; Chattopadhyay, A.; Singh, A. K.

    2018-02-01

    The present analysis examines the influence of distinct forms of inhomogeneity, in a composite structure composed of double superficial layers lying over a half-space, on the phase velocity of an SH-type wave propagating through it. Propagation of the SH-type wave in the said structure has been examined in four distinct cases of inhomogeneity, viz. when inhomogeneity in the double superficial layers is due to exponential variation in density only (Case I); when it is due to exponential variation in rigidity only (Case II); when it is due to exponential variation in rigidity, density, and initial stress (Case III); and when it is due to linear variation in rigidity, density, and initial stress (Case IV). Closed-form expressions of the dispersion relation have been obtained for all four aforementioned cases through extensive application of Debye asymptotic analysis. The deduced dispersion relations for all the cases are found to be in good agreement with the classical Love-wave equation. Numerical computation has been carried out to graphically demonstrate the effect of the inhomogeneity parameters, initial stress parameters, and the width ratio of the double superficial layers on the dispersion curve for each of the four cases. A meticulous examination of the distinct cases of inhomogeneity and initial stress in the context of the considered problem has been carried out in a comparative approach.

  19. Understanding Exponential Growth: As Simple as a Drop in a Bucket.

    ERIC Educational Resources Information Center

    Goldberg, Fred; Shuman, James

    1984-01-01

    Provides procedures for a simple laboratory activity on exponential growth and its characteristic doubling time. The equipment needed consists of a large plastic bucket, an eyedropper, a stopwatch, an assortment of containers and graduated cylinders, and a supply of water. (JN)

  20. Exponential Sum-Fitting of Dwell-Time Distributions without Specifying Starting Parameters

    PubMed Central

    Landowne, David; Yuan, Bin; Magleby, Karl L.

    2013-01-01

    Fitting dwell-time distributions with sums of exponentials is widely used to characterize histograms of open- and closed-interval durations recorded from single ion channels, as well as for other physical phenomena. However, it can be difficult to identify the contributing exponential components. Here we extend previous methods of exponential sum-fitting to present a maximum-likelihood approach that consistently detects all significant exponentials without the need for user-specified starting parameters. Instead of searching for exponentials, the fitting starts with a very large number of initial exponentials with logarithmically spaced time constants, so that none are missed. Maximum-likelihood fitting then determines the areas of all the initial exponentials keeping the time constants fixed. In an iterative manner, with refitting after each step, the analysis then removes exponentials with negligible area and combines closely spaced adjacent exponentials, until only those exponentials that make significant contributions to the dwell-time distribution remain. There is no limit on the number of significant exponentials and no starting parameters need be specified. We demonstrate fully automated detection for both experimental and simulated data, as well as for classical exponential-sum-fitting problems. PMID:23746510
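
    The core idea — start from a large bank of log-spaced time constants so no component is missed, then keep only those with significant area — can be sketched with a simplified stand-in. The paper fits by maximum likelihood with iterative pruning; here non-negative least squares (`scipy.optimize.nnls`) plays that role on hypothetical two-component dwell-time data.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(6)

# Simulated dwell-time density from two exponential components (tau = 2 and 20 ms)
t = np.linspace(0.05, 100.0, 400)
true = 0.7 / 2.0 * np.exp(-t / 2.0) + 0.3 / 20.0 * np.exp(-t / 20.0)
y = true * (1.0 + rng.normal(0.0, 0.02, t.size))       # 2% multiplicative noise

# Large bank of log-spaced candidate time constants; each column is a
# normalized exponential density, so the fitted weights are component areas
taus = np.logspace(-1, 3, 60)
basis = np.exp(-t[:, None] / taus[None, :]) / taus[None, :]
areas, residual = nnls(basis, y)

significant = taus[areas > 1e-3]       # components with non-negligible area
```

    No starting guesses are needed: the non-negativity constraint makes the solution sparse, concentrating area near the true time constants while zeroing the rest of the bank.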

  1. A Simulation of the ECSS Help Desk with the Erlang a Model

    DTIC Science & Technology

    2011-03-01

    …a popular distribution is the exponential distribution, as shown in Figure 3 (Figure 3: Exponential Distribution; Bourke, 2001). …System Sciences, Vol 8, 235B. Bourke, P. (2001, January). Miscellaneous Functions. Retrieved January 22, 2011, from http://local.wasp.uwa.edu.au

  2. Power law versus exponential state transition dynamics: application to sleep-wake architecture.

    PubMed

    Chu-Shore, Jesse; Westover, M Brandon; Bianchi, Matt T

    2010-12-02

    Despite the common experience that interrupted sleep has a negative impact on waking function, the features of human sleep-wake architecture that best distinguish sleep continuity versus fragmentation remain elusive. In this regard, there is growing interest in characterizing sleep architecture using models of the temporal dynamics of sleep-wake stage transitions. In humans and other mammals, the state transitions defining sleep and wake bout durations have been described with exponential and power law models, respectively. However, sleep-wake stage distributions are often complex, and distinguishing between exponential and power law processes is not always straightforward. Although mono-exponential distributions are distinct from power law distributions, multi-exponential distributions may in fact resemble power laws by appearing linear on a log-log plot. To characterize the parameters that may allow these distributions to mimic one another, we systematically fitted multi-exponential-generated distributions with a power law model, and power law-generated distributions with multi-exponential models. We used the Kolmogorov-Smirnov method to investigate goodness of fit for the "incorrect" model over a range of parameters. The "zone of mimicry" of parameters that increased the risk of mistakenly accepting power law fitting resembled empiric time constants obtained in human sleep and wake bout distributions. Recognizing this uncertainty in model distinction impacts interpretation of transition dynamics (self-organizing versus probabilistic), and the generation of predictive models for clinical classification of normal and pathological sleep architecture.
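
    The "mimicry" at issue can be demonstrated directly: draw dwell times from a two-component exponential mixture, then regress the log survival function against log time. Over an intermediate range the plot looks convincingly linear, i.e. like a power law. All parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

# Mixture of two exponentials spanning different time scales
n = 50_000
fast = rng.random(n) < 0.7
dwell = np.where(fast, rng.exponential(1.0, n), rng.exponential(15.0, n))

# Empirical survival function (complementary CDF) on a log-spaced grid
x = np.logspace(-1, 2, 50)
sf = np.array([(dwell > xi).mean() for xi in x])

# Linear regression of log(sf) vs log(x) over the mid-range: an apparent power law
mask = (x > 0.5) & (x < 20.0) & (sf > 0)
slope, intercept = np.polyfit(np.log(x[mask]), np.log(sf[mask]), 1)
r = np.corrcoef(np.log(x[mask]), np.log(sf[mask]))[0, 1]
```

    The high (negative) correlation on log-log axes is exactly the trap the paper describes: goodness of fit over a limited range cannot by itself distinguish a multi-exponential process from a genuine power law.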

  3. Avalanche Analysis from Multielectrode Ensemble Recordings in Cat, Monkey, and Human Cerebral Cortex during Wakefulness and Sleep

    PubMed Central

    Dehghani, Nima; Hatsopoulos, Nicholas G.; Haga, Zach D.; Parker, Rebecca A.; Greger, Bradley; Halgren, Eric; Cash, Sydney S.; Destexhe, Alain

    2012-01-01

    Self-organized critical states are found in many natural systems, from earthquakes to forest fires, and they have also been observed in neural systems, particularly in neuronal cultures. However, the presence of critical states in the awake brain remains controversial. Here, we compared avalanche analyses performed on different in vivo preparations during wakefulness, slow-wave sleep, and REM sleep, using high-density electrode arrays in cat motor cortex (96 electrodes), monkey motor and premotor cortex, and human temporal cortex (96 electrodes) in epileptic patients. In neuronal avalanches defined from units (up to 160 single units), the size of avalanches never clearly scaled as a power law, but rather scaled exponentially or displayed intermediate scaling. We also analyzed the dynamics of local field potentials (LFPs), and in particular LFP negative peaks (nLFPs), among the different electrodes (up to 96 sites in temporal cortex or up to 128 sites in adjacent motor and premotor cortices). In this case, the avalanches defined from nLFPs displayed power-law scaling in double logarithmic representations, as reported previously in monkey. However, avalanches defined from positive LFP (pLFP) peaks, which are less directly related to neuronal firing, also displayed apparent power-law scaling. Closer examination of this scaling using the more reliable cumulative distribution function (CDF) and other rigorous statistical measures did not confirm power-law scaling. The same pattern was seen for cat, monkey, and human, as well as for different brain states of wakefulness and sleep. We also tested other alternative distributions; multiple exponential fitting yielded optimal fits of the avalanche dynamics with bi-exponential distributions. Collectively, these results show no clear evidence for power-law scaling or self-organized critical states in the awake and sleeping brain of mammals, from cat to man. PMID:22934053

  4. On the gap between an empirical distribution and an exponential distribution of waiting times for price changes in a financial market

    NASA Astrophysics Data System (ADS)

    Sazuka, Naoya

    2007-03-01

    We analyze waiting times for price changes in a foreign currency exchange rate. Recent empirical studies of high-frequency financial data support the view that trades in financial markets do not follow a Poisson process and that the waiting times between trades are not exponentially distributed. Here we show that our data are well approximated by a Weibull distribution rather than an exponential distribution in the non-asymptotic regime. Moreover, we quantitatively evaluate how far the empirical data are from an exponential distribution using a Weibull fit. Finally, we discuss a transition between a Weibull law and a power law in the long-time asymptotic regime.
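
    The Weibull-versus-exponential comparison can be sketched with SciPy: fit both models by maximum likelihood (location fixed at zero) and compare log-likelihoods. The waiting-time data below are simulated with a hypothetical Weibull shape of 0.6; a shape below 1 means short waits cluster more than a Poisson process would allow.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# Simulated waiting times that are Weibull rather than exponential
waits = rng.weibull(0.6, 20_000) * 10.0      # shape < 1: non-Poisson clustering

# Fit both models with location fixed at zero and compare log-likelihoods
wb_shape, _, wb_scale = stats.weibull_min.fit(waits, floc=0)
ex_loc, ex_scale = stats.expon.fit(waits, floc=0)

ll_weibull = stats.weibull_min.logpdf(waits, wb_shape, 0, wb_scale).sum()
ll_expon = stats.expon.logpdf(waits, 0, ex_scale).sum()
```

    The fitted Weibull shape parameter itself then quantifies the distance from the exponential case, which corresponds exactly to shape = 1.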

  5. Mortality table construction

    NASA Astrophysics Data System (ADS)

    Sutawanir

    2015-12-01

    Mortality tables play an important role in actuarial studies such as life annuities, premium determination, premium reserves, pension plan valuation, and pension funding. Some well-known mortality tables are the CSO mortality table, the Indonesian Mortality Table, the Bowers mortality table, and the Japan Mortality Table. For actuarial applications, tables are constructed under different environments such as single decrement, double decrement, and multiple decrement. There are two approaches to mortality table construction: the mathematical approach and the statistical approach. Distribution models and estimation theory are the statistical concepts used in mortality table construction. This article discusses the statistical approach. The distributional assumptions are the uniform death distribution (UDD) and constant force (exponential). Moment estimation and maximum likelihood are used to estimate the mortality parameter. Moment estimation methods are easier to manipulate than maximum likelihood estimation (MLE); however, they do not use the complete mortality data, whereas maximum likelihood exploits all available information. Some MLE equations are complicated and must be solved numerically. The article focuses on single decrement estimation using moment and maximum likelihood estimation. Some extensions to double decrement will be introduced. A simple dataset is used to illustrate the mortality estimation and the resulting mortality table.
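
Under the constant-force (exponential) assumption mentioned above, the maximum-likelihood estimate has a closed form; this tiny example uses hypothetical death and exposure counts, not data from the article.

```python
import math

# hypothetical complete data for one age interval
deaths = 12
exposure_years = 970.5            # total time lived in the interval (central exposure)

# constant-force MLE: mu_hat = deaths / exposure; then q = 1 - exp(-mu)
mu_hat = deaths / exposure_years
q_hat = 1.0 - math.exp(-mu_hat)   # one-year mortality rate for the table
print(round(q_hat, 5))
```

The same counts plugged into the UDD assumption would give a slightly different q, which is exactly the kind of comparison the statistical approach makes.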

  6. Double slip effects of Magnetohydrodynamic (MHD) boundary layer flow over an exponentially stretching sheet with radiation, heat source and chemical reaction

    NASA Astrophysics Data System (ADS)

    Shaharuz Zaman, Azmanira; Aziz, Ahmad Sukri Abd; Ali, Zaileha Md

    2017-09-01

    The double slip effects on magnetohydrodynamic boundary layer flow over an exponentially stretching sheet with suction/blowing, radiation, chemical reaction and a heat source are presented in this analysis. Using a similarity transformation, the governing partial differential equations for momentum, energy and concentration are transformed into non-linear ordinary differential equations. These equations are solved using the Runge-Kutta-Fehlberg method with a shooting technique in the MAPLE software environment. The effects of the various parameters on the velocity, temperature and concentration profiles are graphically presented and discussed.
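
The full MHD system is beyond a snippet, but the numerical recipe named above (Runge-Kutta integration plus a shooting iteration) can be sketched on the classical Blasius boundary-layer equation f''' + 0.5·f·f'' = 0 with f(0) = f'(0) = 0 and f'(∞) = 1, where the unknown curvature f''(0) is found by root finding.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def blasius(eta, y):
    # state y = (f, f', f''); Blasius: f''' = -0.5 * f * f''
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def residual(s):
    # shoot with guessed f''(0) = s, integrate far out, compare f' to 1
    sol = solve_ivp(blasius, (0.0, 10.0), [0.0, 0.0, s], rtol=1e-8, atol=1e-10)
    return sol.y[1, -1] - 1.0

fpp0 = brentq(residual, 0.1, 1.0)  # shooting iteration on the unknown f''(0)
print(round(fpp0, 4))              # classical Blasius value, about 0.3321
```

The paper's solver does the same thing with a larger coupled system (momentum, energy, concentration) and more shooting parameters.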

  7. Cross-Conjugated Nanoarchitectures

    DTIC Science & Technology

    2013-08-23

    compounds were further evaluated by Lippert–Mataga analysis of the fluorescence solvatochromism and measurement of quantum yields and fluorescence lifetimes. [Fragmentary table of photophysical data for A(mP)2A, D(Th)2D, and A(Th)2A in cyclohexane (Cy) and toluene (Tol); slopes were calculated from Lippert–Mataga plots over the Δf′ region, and several decays required double exponential fits, e.g. τ1 = 21.5 ns (73%) with τ2 = 3.7 ns (27%), and τ1 = 0.85 ns.]

  8. The generalized truncated exponential distribution as a model for earthquake magnitudes

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias

    2015-04-01

    The random distribution of small, medium and large earthquake magnitudes follows an exponential distribution (ED) according to the Gutenberg-Richter relation. But a magnitude distribution is truncated in the range of very large magnitudes because the earthquake energy is finite, and the upper tail of the exponential distribution does not fit observations well. Hence, the truncated exponential distribution (TED) is frequently applied for modelling magnitude distributions in seismic hazard and risk analysis. The TED has a weak point: when two TEDs with equal parameters, except for the upper bound magnitude, are mixed, the resulting distribution is not a TED. Conversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters, except for the upper bound magnitude. This weakness is a principal problem because seismic regions are constructed scientific objects, not natural units; it also applies to alternative distribution models. The generalized truncated exponential distribution (GTED) presented here overcomes this weakness: the ED and the TED are special cases of the GTED. Different issues of statistical inference are also discussed, and an example with empirical data is presented.
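
A hedged sketch of fitting the doubly truncated exponential (the TED of the abstract, not the author's GTED inference) to synthetic magnitudes, with hypothetical bounds m_min and m_max:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
m_min, m_max = 4.0, 8.0
# truncated exponential by rejection: sample exponential, keep values below m_max
raw = m_min + rng.exponential(1.0 / 2.0, size=50000)  # true Gutenberg-Richter beta = 2.0
mags = raw[raw <= m_max]

def nll(beta):
    # TED density: beta * exp(-beta*(m - m_min)) / (1 - exp(-beta*(m_max - m_min)))
    z = 1.0 - np.exp(-beta * (m_max - m_min))  # truncation normalizer
    return -(np.log(beta) - beta * (mags - m_min) - np.log(z)).sum()

beta_hat = minimize_scalar(nll, bounds=(0.1, 10.0), method="bounded").x
print(f"beta_hat = {beta_hat:.2f}")  # close to the true value 2.0
```

The truncation normalizer z is what distinguishes this likelihood from the plain exponential fit; omitting it biases beta when m_max is close to the data range.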

  9. Cyberinfrastructure for the NSF Ocean Observatories Initiative

    NASA Astrophysics Data System (ADS)

    Orcutt, J. A.; Vernon, F. L.; Arrott, M.; Chave, A.; Krueger, I.; Schofield, O.; Glenn, S.; Peach, C.; Nayak, A.

    2007-12-01

    The Internet today is vastly different than the Internet that we knew even five years ago, and the changes that will be evident five years from now, when the NSF Ocean Observatories Initiative (OOI) prototype has been installed, are nearly unpredictable. Much of this progress is based on the exponential growth in the capabilities of consumer electronics and information technology; the reality of this exponential behavior is rarely appreciated. For example, the number of transistors on a square cm of silicon will continue to double every 18 months, the density of disk storage will double every year, and network bandwidth will double every eight months. By then, today's desktop 2TB RAID will be 64TB, and the 10Gbps Regional Scale Network fiber optical connection will be running at 1.8Tbps. The same exponential behavior characterizes the future of genome sequencing. The first two sequences of composites of individuals' genes cost tens of millions of dollars in 2001. Dr. Craig Venter just published a more accurate complete human genome (his own) at a cost on the order of $100,000. The J. Craig Venter Institute has provided support for the X Prize for Genomics, offering $10M to the first successful sequencing of a human genome for $1,000. It is anticipated that the prize will be won within five years. Major advances in technology that are broadly viewed as disruptive or revolutionary, rather than evolutionary, often depend upon the exploitation of exponential expansions in capability. Applications of these ideas to the OOI will be discussed. Specifically, the agile ability to scale cyberinfrastructure commensurate with the exponential growth of sensors, networks, and computational capability and demand will be described.
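
The five-year figures quoted above follow from straightforward doubling arithmetic; this sketch reproduces the two numbers in the text.

```python
# five-year projections from the quoted doubling times
storage_doublings = 60 / 12              # disk density doubles every 12 months
raid_tb = 2 * 2 ** storage_doublings     # starting from a 2 TB desktop RAID

network_doublings = 60 / 8               # bandwidth doubles every 8 months
net_gbps = 10 * 2 ** network_doublings   # starting from a 10 Gb/s connection

print(raid_tb, round(net_gbps))          # 64 TB, and roughly 1800 Gb/s (~1.8 Tb/s)
```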

  10. Intermittent Lagrangian velocities and accelerations in three-dimensional porous medium flow.

    PubMed

    Holzner, M; Morales, V L; Willmann, M; Dentz, M

    2015-07-01

    Intermittency of Lagrangian velocity and acceleration is a key to understanding transport in complex systems ranging from fluid turbulence to flow in porous media. High-resolution optical particle tracking in a three-dimensional (3D) porous medium provides detailed 3D information on Lagrangian velocities and accelerations. We find sharp transitions close to pore throats, and low flow variability in the pore bodies, which gives rise to stretched exponential Lagrangian velocity and acceleration distributions characterized by a sharp peak at low velocity, superlinear evolution of particle dispersion, and double-peak behavior in the propagators. The velocity distribution is quantified in terms of pore geometry and flow connectivity, which forms the basis for a continuous-time random-walk model that sheds light on the observed Lagrangian flow and transport behaviors.

  11. On designing a new cumulative sum Wilcoxon signed rank chart for monitoring process location

    PubMed Central

    Nazir, Hafiz Zafar; Tahir, Muhammad; Riaz, Muhammad

    2018-01-01

    In this paper, ranked set sampling is used to develop a non-parametric location chart based on the Wilcoxon signed rank statistic. The average run length and some other run-length characteristics are used as measures to assess the performance of the proposed scheme. Some selected distributions, including the Laplace (or double exponential), logistic, normal, contaminated normal and Student's t-distributions, are considered to examine the performance of the proposed Wilcoxon signed rank control chart. The proposed scheme shows superior shift-detection ability compared with some of the competing schemes covered in this study. Moreover, the proposed control chart is also implemented and illustrated with a real data set. PMID:29664919
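
A minimal illustration of the statistic the chart accumulates (not the proposed CUSUM scheme itself): the Wilcoxon signed rank test applied to ten hypothetical deviations of process observations from their target, all of which happen to be positive, as would occur after an upward shift.

```python
import numpy as np
from scipy import stats

target = 0.0
# hypothetical sample after an upward process shift (deviations all positive)
sample = np.array([0.8, 1.3, 0.4, 2.1, 0.9, 1.7, 0.6, 1.1, 1.9, 0.3])

# signed rank test of H0: the distribution is symmetric about the target
stat, pvalue = stats.wilcoxon(sample - target)
print(pvalue < 0.01)  # a one-sided shift of the whole sample is flagged
```

The chart in the paper cumulates such statistics over time (CUSUM-style) so that small persistent shifts are also detected.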

  12. Joint analysis of air pollution in street canyons in St. Petersburg and Copenhagen

    NASA Astrophysics Data System (ADS)

    Genikhovich, E. L.; Ziv, A. D.; Iakovleva, E. A.; Palmgren, F.; Berkowicz, R.

    The bi-annual data set of concentrations of several traffic-related air pollutants, measured continuously in street canyons in St. Petersburg and Copenhagen, is analysed jointly using different statistical techniques. Annual mean concentrations of NO2, NOx and, especially, benzene are found to be systematically higher in St. Petersburg than in Copenhagen, but for ozone the situation is the opposite. In both cities, probability distribution functions (PDFs) of concentrations and of their daily or weekly extrema are fitted with the Weibull and double exponential distributions, respectively. Sample estimates of bi-variate distributions of concentrations, concentration roses, and the probabilities that the concentration of one pollutant is extreme given that another reaches its extremum are presented in this paper, as well as auto- and co-spectra. It is demonstrated that there is a reasonably high correlation between seasonally averaged concentrations of pollutants in St. Petersburg and Copenhagen.
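
A sketch of the extreme-value step described above, on synthetic data: daily "concentrations" are lognormal draws, and the double exponential (Gumbel) distribution is fitted to their weekly maxima.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# two years of daily concentrations, arranged as 104 weeks of 7 days (synthetic)
daily = rng.lognormal(mean=3.0, sigma=0.4, size=(104, 7))
weekly_max = daily.max(axis=1)

# Gumbel (double exponential) fit to the weekly extrema
mu, beta = stats.gumbel_r.fit(weekly_max)
print(f"Gumbel location = {mu:.1f}, scale = {beta:.1f}")
```

The bulk of the daily values would instead be fitted with a Weibull distribution, matching the split the abstract describes between concentrations and their extrema.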

  13. A mixing evolution model for bidirectional microblog user networks

    NASA Astrophysics Data System (ADS)

    Yuan, Wei-Guo; Liu, Yun

    2015-08-01

    Microblogs have been widely used as a new form of online social networking. Based on user profile data collected from Sina Weibo, we find that the number of bidirectional friends of a microblog user approximately follows a lognormal distribution. We then build two microblog user networks with real bidirectional relationships, both of which exhibit not only small-world and scale-free properties but also some special features, such as a double power-law degree distribution, disassortativity, and hierarchical and rich-club structure. Moreover, by detecting the community structures of the two real networks, we find that both of their community sizes follow an exponential distribution. Based on this empirical analysis, we present a novel evolving network model with mixed connection rules, including lognormal-fitness preferential and random attachment, nearest-neighbor interconnection within the same community, and global random association across different communities. The simulation results show that our model is consistent with the real networks in many topological features.

  14. Count distribution for mixture of two exponentials as renewal process duration with applications

    NASA Astrophysics Data System (ADS)

    Low, Yeh Ching; Ong, Seng Huat

    2016-06-01

    A count distribution is presented by considering a renewal process in which the distribution of the duration is a finite mixture of exponential distributions. This distribution is able to model overdispersion, a feature often found in observed count data. The computation of the probabilities and of the renewal function (expected number of renewals) is examined. Parameter estimation by the method of maximum likelihood is considered, with applications of the count distribution to real frequency count data exhibiting overdispersion. It is shown that the mixture-of-exponentials count distribution fits overdispersed data better than the Poisson process and serves as an alternative to the gamma count distribution.
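
A simulation sketch of the abstract's setup, with illustrative parameter values: renewal durations are drawn from a mixture of two exponentials (hyperexponential), and the resulting counts per window show variance greater than the mean, unlike a Poisson process.

```python
import numpy as np

rng = np.random.default_rng(5)

def count_renewals(t_window, n_windows):
    counts = np.empty(n_windows, dtype=int)
    for i in range(n_windows):
        t, n = 0.0, 0
        while True:
            # mixture duration: short gaps (mean 0.1) w.p. 0.7, long gaps (mean 5.0) w.p. 0.3
            mean = 0.1 if rng.random() < 0.7 else 5.0
            t += rng.exponential(mean)
            if t > t_window:
                break
            n += 1
        counts[i] = n
    return counts

counts = count_renewals(t_window=10.0, n_windows=2000)
print(counts.var() > counts.mean())  # True: overdispersed relative to Poisson
```

For a Poisson process the variance-to-mean ratio of the counts is exactly 1; the heavy mixing of short and long gaps pushes it well above 1 here.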

  15. Stability of Tsallis entropy and instabilities of Rényi and normalized Tsallis entropies: a basis for q-exponential distributions.

    PubMed

    Abe, Sumiyoshi

    2002-10-01

    The q-exponential distributions, which are generalizations of the Zipf-Mandelbrot power-law distribution, are frequently encountered in complex systems at their stationary states. From the viewpoint of the principle of maximum entropy, they can apparently be derived from three different generalized entropies: the Rényi entropy, the Tsallis entropy, and the normalized Tsallis entropy. Accordingly, mere fittings of observed data by the q-exponential distributions do not lead to identification of the correct physical entropy. Here, stabilities of these entropies, i.e., their behaviors under arbitrary small deformation of a distribution, are examined. It is shown that, among the three, the Tsallis entropy is stable and can provide an entropic basis for the q-exponential distributions, whereas the others are unstable and cannot represent any experimentally observable quantities.

  16. A Test of the Exponential Distribution for Stand Structure Definition in Uneven-aged Loblolly-Shortleaf Pine Stands

    Treesearch

    Paul A. Murphy; Robert M. Farrar

    1981-01-01

    In this study, 588 before-cut and 381 after-cut diameter distributions of uneven-aged loblolly-shortleaf pine stands were fitted to two different forms of the exponential probability density function. The left-truncated and doubly truncated forms of the exponential were used.

  17. Probing the stochastic property of endoreduplication in cell size determination of Arabidopsis thaliana leaf epidermal tissue

    PubMed Central

    2017-01-01

    Cell size distribution is highly reproducible, whereas the size of individual cells often varies greatly within a tissue. This is obvious in a population of Arabidopsis thaliana leaf epidermal cells, which range from 1,000 to 10,000 μm2 in size. Endoreduplication is a specialized cell cycle in which nuclear genome size (ploidy) is doubled in the absence of cell division. Although epidermal cells require endoreduplication to enhance cellular expansion, whether this mechanism is sufficient to explain cell size distribution remains unclear due to a lack of quantitative understanding linking the occurrence of endoreduplication with cell size diversity. Here, we addressed this question by quantitatively summarizing the ploidy profile and cell size distribution in a simple theoretical framework. We first found that endoreduplication dynamics is a Poisson process through cellular maturation. This finding allowed us to construct a mathematical model that predicts the time evolution of a ploidy profile with a single rate constant for endoreduplication occurrence in a given time. We reproduced experimentally measured ploidy profiles in both wild-type leaf tissue and endoreduplication-related mutants with this analytical solution, further demonstrating the probabilistic property of endoreduplication. We next extended the mathematical model by incorporating the assumption that cell size is determined by ploidy level in order to examine cell size distribution. This analysis revealed that cell size is exponentially enlarged 1.5-fold with every endoreduplication round. Because this theoretical simulation successfully recapitulated the experimentally observed cell size distributions, we concluded that Poissonian endoreduplication dynamics and exponential size-boosting are the sources of the broad cell size distribution in epidermal tissue. More generally, this study contributes to a quantitative understanding of how stochastic dynamics generate steady-state biological heterogeneity. PMID:28926847
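
A sketch of the paper's two ingredients combined, with hypothetical starting size and rate: endoreduplication rounds drawn from a Poisson distribution, and cell size multiplied by 1.5 per round.

```python
import numpy as np

rng = np.random.default_rng(6)
base_size = 1000.0                         # um^2, hypothetical size before any round
rounds = rng.poisson(lam=2.0, size=10000)  # endoreduplication rounds per cell (Poisson)
sizes = base_size * 1.5 ** rounds          # 1.5-fold size boost per round

# the smallest cells never endoreduplicated; the largest are >10x bigger
print(sizes.min(), sizes.max() / sizes.min() > 10)
```

Even with a single rate constant, the Poisson spread in round numbers plus the multiplicative boost reproduces an order-of-magnitude range of cell sizes, which is the paper's central point.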

  18. Readout models for BaFBr0.85I0.15:Eu image plates

    NASA Astrophysics Data System (ADS)

    Stoeckl, M.; Solodov, A. A.

    2018-06-01

    The linearity of the photostimulated luminescence process makes repeated image-plate scanning a viable technique for extracting more dynamic range. In order to obtain a response estimate, two semi-empirical models for the readout fading of an image plate are introduced; they relate the depth distribution of activated photostimulated luminescence centers within an image plate to the recorded signal. Model parameters are estimated from image-plate scan series with BAS-MS image plates and the Typhoon FLA 7000 scanner for the hard x-ray image-plate diagnostic over a collection of experiments providing x-ray energy spectra whose approximate shape is a double exponential.

  19. Effect of Coulomb friction on orientational correlation and velocity distribution functions in a sheared dilute granular gas.

    PubMed

    Gayen, Bishakhdatta; Alam, Meheboob

    2011-08-01

    From particle simulations of a sheared frictional granular gas, we show that Coulomb friction can have dramatic effects on orientational correlation as well as on both the translational and angular velocity distribution functions, even in the Boltzmann (dilute) limit. The dependence of orientational correlation on the friction coefficient (μ) is found to be nonmonotonic, and Coulomb friction plays a dual role of enhancing or diminishing the orientational correlation, depending on the value of the tangential restitution coefficient (which characterizes the roughness of particles). Starting from the sticking limit (i.e., with no sliding contact) of rough particles, decreasing the Coulomb friction is found to reduce the density and spatial velocity correlations which, together with diminished orientational correlation for small enough μ, are responsible for the transition from non-Gaussian to Gaussian distribution functions in the double limit of small friction (μ→0) and nearly elastic particles (e→1). This double limit in fact corresponds to perfectly smooth particles, and hence the Maxwellian (Gaussian) is indeed a solution of the Boltzmann equation for a frictional granular gas in the limit of elastic collisions and zero Coulomb friction at any roughness. The high-velocity tails of both distribution functions seem to follow stretched exponentials even in the presence of Coulomb friction, and the related velocity exponents deviate strongly from a Gaussian with increasing friction.

  20. Individuality and universality in the growth-division laws of single E. coli cells

    NASA Astrophysics Data System (ADS)

    Kennard, Andrew S.; Osella, Matteo; Javer, Avelino; Grilli, Jacopo; Nghe, Philippe; Tans, Sander J.; Cicuta, Pietro; Cosentino Lagomarsino, Marco

    2016-01-01

    The mean size of exponentially dividing Escherichia coli cells in different nutrient conditions is known to depend on the mean growth rate only. However, the joint fluctuations relating cell size, doubling time, and individual growth rate are only starting to be characterized. Recent studies in bacteria reported a universal trend where the spread in both size and doubling times is a linear function of the population means of these variables. Here we combine experiments and theory and use scaling concepts to elucidate the constraints posed by the second observation on the division control mechanism and on the joint fluctuations of sizes and doubling times. We found that scaling relations based on the means collapse both size and doubling-time distributions across different conditions and explain how the shape of their joint fluctuations deviates from the means. Our data on these joint fluctuations highlight the importance of cell individuality: Single cells do not follow the dependence observed for the means between size and either growth rate or inverse doubling time. Our calculations show that these results emerge from a broad class of division control mechanisms requiring a certain scaling form of the "division hazard rate function," which defines the probability rate of dividing as a function of measurable parameters. This "model free" approach gives a rationale for the universal body-size distributions observed in microbial ecosystems across many microbial species, presumably dividing with multiple mechanisms. Additionally, our experiments show a crossover between fast and slow growth in the relation between individual-cell growth rate and division time, which can be understood in terms of different regimes of genome replication control.

  1. Competing risk models in reliability systems, an exponential distribution model with Bayesian analysis approach

    NASA Astrophysics Data System (ADS)

    Iskandar, I.

    2018-03-01

    The exponential distribution is the most widely used distribution in reliability analysis. It is very suitable for representing the lengths of life in many cases and is available in a simple statistical form. The characteristic of this distribution is a constant hazard rate. The exponential distribution is a special case of the Weibull distribution (with shape parameter one). In this paper our aim is to introduce the basic notions that constitute an exponential competing risks model in reliability analysis using a Bayesian approach, and to present the corresponding analytic methods. The cases are limited to models with independent causes of failure. A non-informative prior distribution is used in our analysis. The model describes the likelihood function, followed by the posterior distribution and the point, interval, hazard function, and reliability estimates. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
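
A hedged single-risk sketch of the Bayesian step (hypothetical data, not the paper's): with a non-informative Jeffreys-type prior p(λ) ∝ 1/λ, the posterior of an exponential failure rate λ given n failures in total test time T is Gamma(n, rate = T).

```python
from scipy import stats

# hypothetical reliability data for one failure cause
n_failures = 15
total_time = 1200.0   # accumulated test time, hypothetical units

# posterior of the failure rate under the Jeffreys-type prior: Gamma(n, rate = T)
posterior = stats.gamma(a=n_failures, scale=1.0 / total_time)
point = posterior.mean()                  # posterior mean of the rate
lo, hi = posterior.ppf([0.025, 0.975])    # 95% credible interval
print(f"rate: {point:.5f}  95% CI: ({lo:.5f}, {hi:.5f})")
```

In the competing-risks model, each independent cause gets its own such posterior, and the crude and net failure probabilities are built from the joint posterior of the rates.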

  2. Transformations of the distribution of nuclei formed in a nucleation pulse: Interface-limited growth.

    PubMed

    Shneidman, Vitaly A

    2009-10-28

    A typical nucleation-growth process is considered: a system is quenched into a supersaturated state with a small critical radius r*− and is allowed to nucleate during a finite time interval tn, after which the supersaturation is abruptly reduced to a fixed value with a larger critical radius r*+. The size distribution of nucleated particles f(r,t) further evolves due to their deterministic growth and decay for r larger or smaller than r*+, respectively. A general analytic expression for f(r,t) is obtained, and it is shown that after a large growth time t this distribution approaches an asymptotic shape determined by two dimensionless parameters, λ, related to tn, and Λ = r*+/r*−. This shape is strongly asymmetric, with exponential and double-exponential cutoffs at small and large sizes, respectively, and with a broad, near-flat top in the case of a long pulse. Conversely, for a short pulse the distribution acquires a distinct maximum at r = rmax(t) and approaches a universal shape exp[ζ − e^ζ], with ζ proportional to r − rmax, independent of the pulse duration. General asymptotic predictions are examined in terms of the Zeldovich-Frenkel nucleation model, where the entire transient behavior can be described in terms of the Lambert W function. Modifications for the Turnbull-Fisher model are also considered, and the analytics are compared with exact numerics. Results are expected to have direct implementations in the analysis of two-step annealing crystallization experiments, although other applications might be anticipated due to the universality of the nucleation pulse technique.
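
A quick numerical check of the universal short-pulse shape quoted above, φ(ζ) = exp[ζ − e^ζ]: it integrates to one (substitute u = e^ζ) and peaks at ζ = 0, where φ(0) = 1/e.

```python
import numpy as np

zeta = np.linspace(-20.0, 5.0, 200001)
phi = np.exp(zeta - np.exp(zeta))  # universal shape exp[zeta - e^zeta]

dz = zeta[1] - zeta[0]
area = float(np.sum((phi[:-1] + phi[1:]) / 2) * dz)  # trapezoidal rule
peak = float(zeta[np.argmax(phi)])                   # location of the maximum

print(round(area, 4), abs(peak) < 1e-3)  # area ~1, maximum at zeta ~ 0
```

The sharp double-exponential cutoff at large ζ and the soft exponential cutoff at small ζ are both visible in φ, matching the asymmetry described in the abstract.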

  3. Does the Australian desert ant Melophorus bagoti approximate a Lévy search by an intrinsic bi-modal walk?

    PubMed

    Reynolds, Andy M; Schultheiss, Patrick; Cheng, Ken

    2014-01-07

    We suggest that the Australian desert ant Melophorus bagoti approximates a Lévy search pattern by using an intrinsic bi-exponential walk and does so when a Lévy search pattern is advantageous. When attempting to locate its nest, M. bagoti adopt a stereotypical search pattern. These searches begin at the location where the ant expects to find the nest, and comprise loops that start and end at this location, and are directed in different azimuthal directions. Loop lengths are exponentially distributed when searches are in visually familiar surroundings and are well described by a mixture of two exponentials when searches are in unfamiliar landscapes. The latter approximates a power-law distribution, the hallmark of a Lévy search. With the aid of a simple analytically tractable theory, we show that an exponential loop-length distribution is advantageous when the distance to the nest can be estimated with some certainty and that a bi-exponential distribution is advantageous when there is considerable uncertainty regarding the nest location. The best bi-exponential search patterns are shown to be those that come closest to approximating advantageous Lévy looping searches. The bi-exponential search patterns of M. bagoti are found to approximate advantageous Lévy search patterns. Copyright © 2013. Published by Elsevier Ltd.

  4. Cell Size Regulation in Bacteria

    NASA Astrophysics Data System (ADS)

    Amir, Ariel

    2014-05-01

    Various bacteria such as the canonical gram negative Escherichia coli or the well-studied gram positive Bacillus subtilis divide symmetrically after they approximately double their volume. Their size at division is not constant, but is typically distributed over a narrow range. Here, we propose an analytically tractable model for cell size control, and calculate the cell size and interdivision time distributions, as well as the correlations between these variables. We suggest ways of extracting the model parameters from experimental data, and show that existing data for E. coli supports partial size control, and a particular explanation: a cell attempts to add a constant volume from the time of initiation of DNA replication to the next initiation event. This hypothesis accounts for the experimentally observed correlations between mother and daughter cells as well as the exponential dependence of size on growth rate.

  5. Size-dependent standard deviation for growth rates: Empirical results and theoretical modeling

    NASA Astrophysics Data System (ADS)

    Podobnik, Boris; Horvatic, Davor; Pammolli, Fabio; Wang, Fengzhong; Stanley, H. Eugene; Grosse, I.

    2008-05-01

    We study annual logarithmic growth rates R of various economic variables such as exports, imports, and foreign debt. For each of these variables we find that the distributions of R can be approximated by double exponential (Laplace) distributions in the central parts and power-law distributions in the tails. For each of these variables we further find a power-law dependence of the standard deviation σ(R) on the average size of the economic variable with a scaling exponent surprisingly close to that found for the gross domestic product (GDP) [Phys. Rev. Lett. 81, 3275 (1998)]. By analyzing annual logarithmic growth rates R of wages of 161 different occupations, we find a power-law dependence of the standard deviation σ(R) on the average value of the wages with a scaling exponent β≈0.14 close to those found for the growth of exports, imports, debt, and the growth of the GDP. In contrast to these findings, we observe for payroll data collected from 50 states of the USA that the standard deviation σ(R) of the annual logarithmic growth rate R increases monotonically with the average value of payroll. However, also in this case we observe a power-law dependence of σ(R) on the average payroll with a scaling exponent β≈-0.08 . Based on these observations we propose a stochastic process for multiple cross-correlated variables where for each variable (i) the distribution of logarithmic growth rates decays exponentially in the central part, (ii) the distribution of the logarithmic growth rate decays algebraically in the far tails, and (iii) the standard deviation of the logarithmic growth rate depends algebraically on the average size of the stochastic variable.

  7. The Extended Erlang-Truncated Exponential distribution: Properties and application to rainfall data.

    PubMed

    Okorie, I E; Akpanta, A C; Ohakwe, J; Chikezie, D C

    2017-06-01

    The Erlang-Truncated Exponential (ETE) distribution is modified and the new lifetime distribution is called the Extended Erlang-Truncated Exponential (EETE) distribution. Some statistical and reliability properties of the new distribution are given, and the method of maximum likelihood is proposed for estimating the model parameters. The usefulness and flexibility of the EETE distribution are illustrated with an uncensored data set, and its fit is compared with that of the ETE and three other three-parameter distributions. Results based on the minimized log-likelihood ([Formula: see text]), Akaike information criterion (AIC), Bayesian information criterion (BIC) and the generalized Cramér-von Mises [Formula: see text] statistics show that the EETE distribution provides a more reasonable fit than the other competing distributions.

  8. Essays on the statistical mechanics of the labor market and implications for the distribution of earned income

    NASA Astrophysics Data System (ADS)

    Schneider, Markus P. A.

    This dissertation contributes to two areas in economics: the understanding of the distribution of earned income and Bayesian analysis of distributional data. Recently, physicists claimed that the distribution of earned income is exponential (see Yakovenko, 2009). The first chapter explores the perspective that the economy is a statistical mechanical system, and the implication for labor market outcomes is considered critically. The robustness of the empirical results that led to the physicists' claims, the significance of the exponential distribution in statistical mechanics, and the case for a conservation law in economics are discussed. The conclusion reached is that the physicists' conception of the economy is too narrow even within their chosen framework, but that their overall approach is insightful. The dual labor market theory of segmented labor markets is invoked to understand why the observed distribution may be a mixture of distributional components, corresponding to the different generating mechanisms described in Reich et al. (1973). The application of informational entropy in chapter II connects this work to Bayesian analysis and maximum entropy econometrics. The analysis follows E. T. Jaynes's treatment of Wolf's dice data, but is applied to the distribution of earned income based on CPS data. The results are calibrated to account for rounded survey responses using a simple simulation, and address the graphical analyses by the physicists. The results indicate that neither the income distribution of all respondents nor that of the subpopulation used by the physicists appears to be exponential. The empirics do support the claim that a mixture with exponential and log-normal distributional components fits the data. In the final chapter, a log-linear model is used to fit the exponential to the earned income distribution. 
Separating the CPS data by gender and marital status reveals that the exponential is only an appropriate model for a limited number of subpopulations, namely the never married and women. The estimated parameter for never-married men's incomes is significantly different from the parameter estimated for never-married women, implying that either the combined distribution is not exponential or that the individual distributions are not exponential. However, it substantiates the existence of a persistent gender income gap among the never-married. References: Reich, M., D. M. Gordon, and R. C. Edwards (1973). A Theory of Labor Market Segmentation. Quarterly Journal of Economics 63, 359-365. Yakovenko, V. M. (2009). Econophysics, Statistical Mechanics Approach to. In R. A. Meyers (Ed.), Encyclopedia of Complexity and System Science. Springer.
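The exponential-versus-mixture comparison described above can be sketched numerically. The snippet below generates a synthetic sample from an exponential/log-normal mixture (all parameters are illustrative inventions, not CPS estimates) and compares the log-likelihood of a single-exponential fit against the mixture evaluated at its generating parameters; a full analysis would fit the mixture by EM.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic "earned income" sample: an exponential bulk plus a log-normal
# upper component, echoing the mixture discussed above (parameters are
# illustrative, not CPS estimates).
n = 5000
is_exp = rng.random(n) < 0.7
income = np.where(is_exp,
                  rng.exponential(30_000, n),
                  rng.lognormal(11.0, 0.5, n))

# Single-exponential fit: the MLE scale is just the sample mean.
ll_exp = stats.expon.logpdf(income, scale=income.mean()).sum()

# Mixture log-likelihood evaluated at the generating parameters
# (a full fit would use EM; this only illustrates the comparison).
mix_pdf = (0.7 * stats.expon.pdf(income, scale=30_000)
           + 0.3 * stats.lognorm.pdf(income, s=0.5, scale=np.exp(11.0)))
ll_mix = np.log(mix_pdf).sum()

print(ll_mix > ll_exp)  # the mixture dominates on mixed data
```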

  9. Analytical model of coincidence resolving time in TOF-PET

    NASA Astrophysics Data System (ADS)

    Wieczorek, H.; Thon, A.; Dey, T.; Khanin, V.; Rodnyi, P.

    2016-06-01

    The coincidence resolving time (CRT) of scintillation detectors is the parameter determining noise reduction in time-of-flight PET. We derive an analytical CRT model based on the statistical distribution of photons for two different prototype scintillators. For the first one, characterized by single exponential decay, CRT is proportional to the decay time and inversely proportional to the number of photons, with a square root dependence on the trigger level. For the second scintillator prototype, characterized by exponential rise and decay, CRT is proportional to the square root of the product of rise time and decay time divided by the doubled number of photons, and it is nearly independent of the trigger level. This theory is verified by measurements of scintillation time constants, light yield and CRT on scintillator sticks. Trapping effects are taken into account by defining an effective decay time. We show that in terms of signal-to-noise ratio, CRT is as important as patient dose, imaging time or PET system sensitivity. The noise reduction effect of better timing resolution is verified and visualized by Monte Carlo simulation of a NEMA image quality phantom.
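The two scaling relations stated in the abstract can be written down directly. The prefactors below are illustrative constants, not the paper's derived coefficients; only the functional dependence on decay time, rise time, photon number and trigger level follows the text.

```python
import math

def crt_single_decay(tau_d, n_photons, trigger_level, c=1.0):
    """CRT scaling for a single-exponential-decay scintillator:
    proportional to decay time over photon number, with a square-root
    dependence on the trigger level. c is an illustrative constant."""
    return c * tau_d / n_photons * math.sqrt(trigger_level)

def crt_rise_decay(tau_r, tau_d, n_photons, c=1.0):
    """CRT scaling with exponential rise and decay: square root of
    rise time times decay time over twice the photon number; nearly
    independent of the trigger level."""
    return c * math.sqrt(tau_r * tau_d / (2.0 * n_photons))

# Example with LSO-like time constants in ns (purely illustrative).
print(crt_rise_decay(0.09, 40.0, 20_000))
```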

  10. Finite-time containment control of perturbed multi-agent systems based on sliding-mode control

    NASA Astrophysics Data System (ADS)

    Yu, Di; Ji, Xiang Yang

    2018-01-01

Aiming at a faster convergence rate, this paper investigates the finite-time containment control problem for second-order multi-agent systems with norm-bounded non-linear perturbations. When the topology among the followers is strongly connected, a nonsingular fast terminal sliding-mode error is defined, a corresponding discontinuous control protocol is designed, and the appropriate range of the control parameter is obtained by finite-time stability analysis, so that the followers converge to, and move along, the desired trajectories within the convex hull formed by the leaders in finite time. Furthermore, on the basis of the sliding-mode error defined, corresponding distributed continuous control protocols are investigated with a fast exponential reaching law and a double exponential reaching law, so as to make the followers move to small neighbourhoods of their desired locations and stay within the dynamic convex hull formed by the leaders in finite time, achieving practical finite-time containment control. Meanwhile, we develop the faster control scheme by comparing the convergence rates of these two reaching laws. Simulation examples are given to verify the correctness of the theoretical results.
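The role of a reaching law can be illustrated on the scalar sliding variable alone. The sketch below Euler-integrates a classical exponential reaching law and a generic fast-reaching variant with an added fractional-power term; the exact "double exponential" form used in the paper is not reproduced here, and all gains are illustrative.

```python
import numpy as np

def simulate(law, s0=2.0, dt=1e-4, steps=20_000):
    """Euler-integrate the reaching dynamics ds/dt = law(s)."""
    s, traj = s0, [s0]
    for _ in range(steps):
        s = s + dt * law(s)
        traj.append(s)
    return np.array(traj)

# Classical exponential reaching law: ds/dt = -eps*sign(s) - k*s.
exp_law = lambda s: -0.5 * np.sign(s) - 4.0 * s

# Generic fast-reaching variant: an extra fractional-power term speeds
# the final approach (not necessarily the paper's exact law).
fast_law = lambda s: (-0.5 * np.sign(s) - 4.0 * s
                      - 2.0 * np.sqrt(np.abs(s)) * np.sign(s))

t_exp = simulate(exp_law)
t_fast = simulate(fast_law)
hit = lambda traj: int(np.argmax(np.abs(traj) < 1e-2))  # first step below 0.01
print(hit(t_fast) <= hit(t_exp))
```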

  11. Characteristics of Double Exponentially Tapered Slot Antenna (DETSA) Conformed in the Longitudinal Direction Around a Cylindrical Structure

    NASA Technical Reports Server (NTRS)

    Ponchak, George E.; Jordan, Jennifer L.; Chevalier, Christine T.

    2006-01-01

The characteristics of a double exponentially tapered slot antenna (DETSA) as a function of the radius to which the DETSA is conformed in the longitudinal direction are presented. It is shown through measurements and simulations that the radiation pattern of the conformed antenna rotates in the direction through which the antenna is curved, and that diffraction affects the radiation pattern if the radius of curvature is too small or the frequency too high. The gain of the antenna degrades by only 1 dB if the radius of curvature is large, and by more than 2 dB for smaller radii. The main effect of curving the antenna is an increased cross-polarization in the E-plane.

  12. Multiserver Queueing Model subject to Single Exponential Vacation

    NASA Astrophysics Data System (ADS)

    Vijayashree, K. V.; Janani, B.

    2018-04-01

A multi-server queueing model subject to single exponential vacation is considered. Arrivals join the queue according to a Poisson process, and service takes place according to an exponential distribution. Whenever the system becomes empty, all the servers go on vacation and return after a fixed interval of time. The servers then start providing service if there are waiting customers; otherwise they remain idle until customers arrive. The vacation times are also assumed to be exponentially distributed. In this paper, the stationary and transient probabilities for the number of customers during the idle and functional states of the server are obtained explicitly. Numerical illustrations are also added to visualize the effect of various parameters.
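As a baseline for the model described above, the stationary distribution of a plain M/M/c queue (vacations omitted) follows from the birth-death balance equations. This is a simplified sketch, not the paper's vacation model; the parameter values are arbitrary.

```python
def mmc_stationary(lam, mu, c, n_max=200):
    """Stationary probabilities p_n of an M/M/c queue (no vacations --
    a simplified baseline for the model in the text), from the
    birth-death balance equations, truncated at n_max."""
    assert lam < c * mu, "stability requires lambda < c*mu"
    p = [1.0]
    for n in range(1, n_max + 1):
        rate = mu * min(n, c)          # service rate with n customers in system
        p.append(p[-1] * lam / rate)   # p_n = p_{n-1} * lambda / rate
    total = sum(p)
    return [x / total for x in p]

p = mmc_stationary(lam=3.0, mu=1.0, c=4)
print(sum(p))   # normalized to 1
```

For these parameters the empty-system probability matches the closed-form Erlang-C value p0 = 1/26.5.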

  13. Geometry of the q-exponential distribution with dependent competing risks and accelerated life testing

    NASA Astrophysics Data System (ADS)

    Zhang, Fode; Shi, Yimin; Wang, Ruibing

    2017-02-01

In the information geometry suggested by Amari (1985) and Amari et al. (1987), a parametric statistical model can be regarded as a differentiable manifold with the parameter space as a coordinate system. Noting that the q-exponential distribution plays an important role in Tsallis statistics (see Tsallis, 2009), this paper investigates the geometry of the q-exponential distribution with dependent competing risks and accelerated life testing (ALT). A copula function based on the q-exponential function, which can be considered a generalized Gumbel copula, is discussed to illustrate the structure of the dependent random variables. Employing two iterative algorithms, simulation results are given to compare the performance of the estimators and the levels of association under different hybrid progressive censoring schemes (HPCSs).

  14. Poisson-process generalization for the trading waiting-time distribution in a double-auction mechanism

    NASA Astrophysics Data System (ADS)

    Cincotti, Silvano; Ponta, Linda; Raberto, Marco; Scalas, Enrico

    2005-05-01

In this paper, empirical analyses and computational experiments are presented on high-frequency data for a double-auction (book) market. The main objective of the paper is to generalize the order waiting-time process in order to properly model the empirical evidence. The empirical study is performed on the best-bid and best-ask data of 7 U.S. financial markets, for 30-stock time series. In particular, the statistical properties of trading waiting times have been analyzed, and the quality of fits is evaluated by suitable statistical tests, i.e., by comparing empirical distributions with theoretical models. Starting from the statistical studies on real data, attention has been focused on the reproducibility of these results in an artificial market. The computational experiments have been performed within the Genoa Artificial Stock Market. In the market model, heterogeneous agents trade one risky asset in exchange for cash. Agents have zero intelligence and issue random limit or market orders depending on their budget constraints. The price is cleared by means of a limit order book. The order generation is modelled with a renewal process. Based on empirical trading estimation, the distribution of waiting times between two consecutive orders is modelled by a mixture of exponential processes. Results show that the empirical waiting-time distribution can be considered a generalization of a Poisson process. Moreover, the renewal process can approximate real data, and its implementation in the artificial stock market can reproduce the trading activity in a realistic way.
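A signature of the exponential-mixture waiting times described above is overdispersion: a plain Poisson process has exponentially distributed waiting times with coefficient of variation (CV) exactly 1, while a mixture of exponentials has CV greater than 1. A minimal sketch, with weights and means chosen for illustration rather than estimated from market data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Waiting times from a two-component exponential mixture, the form used
# to generalize the plain Poisson (single-exponential) case.
# Weights and component means are illustrative, not fitted values.
w = np.array([0.6, 0.4])
means = np.array([0.5, 5.0])
comp = rng.choice(2, size=100_000, p=w)
tau = rng.exponential(means[comp])

# A Poisson process would give CV = 1; the mixture is overdispersed.
cv = tau.std() / tau.mean()
print(cv > 1.0)
```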

  15. A note on large gauge transformations in double field theory

    DOE PAGES

    Naseer, Usman

    2015-06-03

    Here, we give a detailed proof of the conjecture by Hohm and Zwiebach in double field theory. Our result implies that their proposal for large gauge transformations in terms of the Jacobian matrix for coordinate transformations is, as required, equivalent to the standard exponential map associated with the generalized Lie derivative along a suitable parameter.

  16. Regularization with numerical extrapolation for finite and UV-divergent multi-loop integrals

    NASA Astrophysics Data System (ADS)

    de Doncker, E.; Yuasa, F.; Kato, K.; Ishikawa, T.; Kapenga, J.; Olagbemi, O.

    2018-03-01

    We give numerical integration results for Feynman loop diagrams such as those covered by Laporta (2000) and by Baikov and Chetyrkin (2010), and which may give rise to loop integrals with UV singularities. We explore automatic adaptive integration using multivariate techniques from the PARINT package for multivariate integration, as well as iterated integration with programs from the QUADPACK package, and a trapezoidal method based on a double exponential transformation. PARINT is layered over MPI (Message Passing Interface), and incorporates advanced parallel/distributed techniques including load balancing among processes that may be distributed over a cluster or a network/grid of nodes. Results are included for 2-loop vertex and box diagrams and for sets of 2-, 3- and 4-loop self-energy diagrams with or without UV terms. Numerical regularization of integrals with singular terms is achieved by linear and non-linear extrapolation methods.
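The trapezoidal method over a double exponential transformation mentioned above is the tanh-sinh rule: substituting x = tanh((π/2)·sinh t) makes the trapezoidal rule converge rapidly even with endpoint singularities. The sketch below is a minimal illustration of the technique, not the PARINT/QUADPACK machinery used in the paper.

```python
import math

def tanh_sinh(f, n=40, h=0.1):
    """Trapezoidal rule on (-1, 1) after the double exponential
    (tanh-sinh) substitution x = tanh((pi/2)*sinh(t))."""
    total = 0.0
    for k in range(-n, n + 1):
        t = k * h
        u = 0.5 * math.pi * math.sinh(t)
        x = math.tanh(u)
        if 1.0 - x * x <= 0.0:   # weight is negligible once x rounds to +-1
            continue
        w = 0.5 * math.pi * math.cosh(t) / math.cosh(u) ** 2
        total += w * f(x)
    return h * total

# Endpoint-singular test integral: integral of 1/sqrt(1 - x^2) over (-1, 1) = pi.
approx = tanh_sinh(lambda x: 1.0 / math.sqrt(1.0 - x * x))
print(approx)
```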

  17. A Nonequilibrium Rate Formula for Collective Motions of Complex Molecular Systems

    NASA Astrophysics Data System (ADS)

    Yanao, Tomohiro; Koon, Wang Sang; Marsden, Jerrold E.

    2010-09-01

    We propose a compact reaction rate formula that accounts for a non-equilibrium distribution of residence times of complex molecules, based on a detailed study of the coarse-grained phase space of a reaction coordinate. We take the structural transition dynamics of a six-atom Morse cluster between two isomers as a prototype of multi-dimensional molecular reactions. Residence time distribution of one of the isomers shows an exponential decay, while that of the other isomer deviates largely from the exponential form and has multiple peaks. Our rate formula explains such equilibrium and non-equilibrium distributions of residence times in terms of the rates of diffusions of energy and the phase of the oscillations of the reaction coordinate. Rapid diffusions of energy and the phase generally give rise to the exponential decay of residence time distribution, while slow diffusions give rise to a non-exponential decay with multiple peaks. We finally make a conjecture about a general relationship between the rates of the diffusions and the symmetry of molecular mass distributions.

  18. Small-Scale, Local Area, and Transitional Millimeter Wave Propagation for 5G Communications

    NASA Astrophysics Data System (ADS)

    Rappaport, Theodore S.; MacCartney, George R.; Sun, Shu; Yan, Hangsong; Deng, Sijia

    2017-12-01

    This paper studies radio propagation mechanisms that impact handoffs, air interface design, beam steering, and MIMO for 5G mobile communication systems. Knife edge diffraction (KED) and a creeping wave linear model are shown to predict diffraction loss around typical building objects from 10 to 26 GHz, and human blockage measurements at 73 GHz are shown to fit a double knife-edge diffraction (DKED) model which incorporates antenna gains. Small-scale spatial fading of millimeter wave received signal voltage amplitude is generally Ricean-distributed for both omnidirectional and directional receive antenna patterns under both line-of-sight (LOS) and non-line-of-sight (NLOS) conditions in most cases, although the log-normal distribution fits measured data better for the omnidirectional receive antenna pattern in the NLOS environment. Small-scale spatial autocorrelations of received voltage amplitudes are shown to fit sinusoidal exponential and exponential functions for LOS and NLOS environments, respectively, with small decorrelation distances of 0.27 cm to 13.6 cm (smaller than the size of a handset) that are favorable for spatial multiplexing. Local area measurements using cluster and route scenarios show how the received signal changes as the mobile moves and transitions from LOS to NLOS locations, with reasonably stationary signal levels within clusters. Wideband mmWave power levels are shown to fade from 0.4 dB/ms to 40 dB/s, depending on travel speed and surroundings.
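The Ricean model for small-scale amplitudes mentioned above is simply the magnitude of a fixed LOS phasor plus complex Gaussian scatter. The sketch below samples such amplitudes for an assumed K-factor of 10 dB (an illustrative value, not one of the paper's measured K-factors) and checks the mean-power identity E[r²] = K + 1 for unit total scatter power.

```python
import numpy as np

rng = np.random.default_rng(2)

# Ricean small-scale fading: amplitude = |LOS phasor + diffuse Gaussian
# scatter|. K is the LOS-to-scatter power ratio (10 dB is illustrative).
K_dB = 10.0
K = 10.0 ** (K_dB / 10.0)
sigma = np.sqrt(0.5)   # unit diffuse power split over two quadratures
nu = np.sqrt(K)        # LOS amplitude giving that K
n = 200_000
r = np.abs(nu + sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n)))

# Sanity check: mean received power is nu^2 + 2*sigma^2 = K + 1.
print((r ** 2).mean())
```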

  19. Quasiclassical treatment of the Auger effect in slow ion-atom collisions

    NASA Astrophysics Data System (ADS)

    Frémont, F.

    2017-09-01

A quasiclassical model based on the resolution of Hamilton's equations of motion is used to obtain evidence for Auger electron emission following double-electron capture in 150-keV Ne10+ + He collisions. Electron-electron interaction is taken into account during the collision by using a pure Coulombic potential. To make sure that the helium target is stable before the collision, phenomenological potentials for the electron-nucleus interactions that simulate the Heisenberg principle are included in addition to the Coulombic potential. First, single- and double-electron captures are determined and compared with previous experiments and theories. Then, the integration-time evolution is calculated for autoionizing and nonautoionizing double capture. In contrast with single capture, the number of electrons originating from autoionization slowly increases with integration time. A fit of the calculated cross sections by means of an exponential function indicates that the average lifetime is 4.4 × 10^-3 a.u., in very good agreement with the average lifetime deduced from experiments and a classical model introduced to calculate individual angular momentum distributions. The present calculation demonstrates the ability of classical models to treat the Auger effect, which is a pure quantum effect.

  20. Ammonium Removal from Aqueous Solutions by Clinoptilolite: Determination of Isotherm and Thermodynamic Parameters and Comparison of Kinetics by the Double Exponential Model and Conventional Kinetic Models

    PubMed Central

    Tosun, İsmail

    2012-01-01

The adsorption isotherm, the adsorption kinetics, and the thermodynamic parameters of ammonium removal from aqueous solution by clinoptilolite were investigated in this study. Experimental data obtained from batch equilibrium tests have been analyzed by four two-parameter (Freundlich, Langmuir, Tempkin and Dubinin-Radushkevich (D-R)) and four three-parameter (Redlich-Peterson (R-P), Sips, Toth and Khan) isotherm models. The D-R and R-P isotherms were the models that best fitted the experimental data among the two- and three-parameter models applied. The adsorption energy (E) from the D-R isotherm was found to be approximately 7 kJ/mol for the ammonium-clinoptilolite system, indicating that ammonium is adsorbed on clinoptilolite by physisorption. Kinetic parameters were determined by analyzing the nth-order kinetic model, the modified second-order model and the double exponential model; each model resulted in a coefficient of determination (R2) above 0.989 with an average relative error lower than 5%. The Double Exponential Model (DEM) showed that the adsorption process develops in two stages, a rapid and a slow phase. Changes in the standard free energy (∆G°), enthalpy (∆H°) and entropy (∆S°) of the ammonium-clinoptilolite system were estimated by using the thermodynamic equilibrium coefficients. PMID:22690177
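The two-stage (rapid plus slow) kinetics captured by the DEM can be sketched as a curve fit. One common DEM parameterization is q(t) = q_e − A₁·exp(−k₁t) − A₂·exp(−k₂t); the form and all parameter values below are illustrative assumptions, not the paper's fitted ammonium-clinoptilolite constants.

```python
import numpy as np
from scipy.optimize import curve_fit

# Double Exponential Model for adsorbed amount q(t): rapid + slow phase.
# Form and parameters are illustrative, not the paper's fits.
def dem(t, qe, a1, k1, a2, k2):
    return qe - a1 * np.exp(-k1 * t) - a2 * np.exp(-k2 * t)

t = np.linspace(0.0, 120.0, 60)                    # minutes
true = (9.0, 5.0, 0.30, 4.0, 0.02)                 # qe, A1, k1, A2, k2
rng = np.random.default_rng(3)
q = dem(t, *true) + rng.normal(0.0, 0.05, t.size)  # noisy synthetic data

popt, _ = curve_fit(dem, t, q, p0=(10, 4, 0.5, 4, 0.05), maxfev=20_000)
print(popt[0])   # recovered equilibrium uptake q_e
```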

  1. Double closed-loop control of integrated optical resonance gyroscope with mean-square exponential stability.

    PubMed

    Li, Hui; Liu, Liying; Lin, Zhili; Wang, Qiwei; Wang, Xiao; Feng, Lishuang

    2018-01-22

A new double closed-loop control system with mean-square exponential stability is proposed for the first time to optimize the detection accuracy and dynamic response characteristics of the integrated optical resonance gyroscope (IORG). The influence mechanism of optical nonlinear effects on system detection sensitivity is investigated to optimize the demodulation gain, the maximum sensitivity and the linear working region of the gyro system. In particular, we analyze the effect of optical parameter fluctuation on the parameter uncertainty of the system, and investigate how laser frequency-locking noise influences the closed-loop detection accuracy of angular velocity. A stochastic disturbance model of the double closed-loop IORG is established that takes unfavorable factors such as optical nonlinearity, disturbances, optical parameter fluctuation and unavoidable system noise into consideration. A robust control algorithm is also designed to guarantee the mean-square exponential stability of the system with a prescribed H∞ performance, in order to improve the detection accuracy and dynamic performance of the IORG. The experimental results demonstrate that the IORG has a dynamic response time of less than 76 µs, a long-term bias stability of 7.04°/h with an integration time of 10 s over a one-hour test, and a corresponding bias stability of 1.841°/h based on Allan deviation, which validate the effectiveness and usefulness of the proposed detection scheme.

  2. Ammonium removal from aqueous solutions by clinoptilolite: determination of isotherm and thermodynamic parameters and comparison of kinetics by the double exponential model and conventional kinetic models.

    PubMed

    Tosun, Ismail

    2012-03-01

The adsorption isotherm, the adsorption kinetics, and the thermodynamic parameters of ammonium removal from aqueous solution by clinoptilolite were investigated in this study. Experimental data obtained from batch equilibrium tests have been analyzed by four two-parameter (Freundlich, Langmuir, Tempkin and Dubinin-Radushkevich (D-R)) and four three-parameter (Redlich-Peterson (R-P), Sips, Toth and Khan) isotherm models. The D-R and R-P isotherms were the models that best fitted the experimental data among the two- and three-parameter models applied. The adsorption energy (E) from the D-R isotherm was found to be approximately 7 kJ/mol for the ammonium-clinoptilolite system, indicating that ammonium is adsorbed on clinoptilolite by physisorption. Kinetic parameters were determined by analyzing the nth-order kinetic model, the modified second-order model and the double exponential model; each model resulted in a coefficient of determination (R(2)) above 0.989 with an average relative error lower than 5%. The Double Exponential Model (DEM) showed that the adsorption process develops in two stages, a rapid and a slow phase. Changes in the standard free energy (∆G°), enthalpy (∆H°) and entropy (∆S°) of the ammonium-clinoptilolite system were estimated by using the thermodynamic equilibrium coefficients.

  3. Generalization of the event-based Carnevale-Hines integration scheme for integrate-and-fire models.

    PubMed

    van Elburg, Ronald A J; van Ooyen, Arjen

    2009-07-01

    An event-based integration scheme for an integrate-and-fire neuron model with exponentially decaying excitatory synaptic currents and double exponential inhibitory synaptic currents has been introduced by Carnevale and Hines. However, the integration scheme imposes nonphysiological constraints on the time constants of the synaptic currents, which hamper its general applicability. This letter addresses this problem in two ways. First, we provide physical arguments demonstrating why these constraints on the time constants can be relaxed. Second, we give a formal proof showing which constraints can be abolished. As part of our formal proof, we introduce the generalized Carnevale-Hines lemma, a new tool for comparing double exponentials as they naturally occur in many cascaded decay systems, including receptor-neurotransmitter dissociation followed by channel closing. Through repeated application of the generalized lemma, we lift most of the original constraints on the time constants. Thus, we show that the Carnevale-Hines integration scheme for the integrate-and-fire model can be employed for simulating a much wider range of neuron and synapse types than was previously thought.

  4. The impacts of precipitation amount simulation on hydrological modeling in Nordic watersheds

    NASA Astrophysics Data System (ADS)

    Li, Zhi; Brissette, Fancois; Chen, Jie

    2013-04-01

Stochastic modeling of daily precipitation is very important for hydrological modeling, especially when no observed data are available. Precipitation is usually modeled by a two-component model: occurrence generation and amount simulation. For occurrence simulation, the most common method is the first-order two-state Markov chain, owing to its simplicity and good performance. However, various probability distributions have been reported for simulating precipitation amounts, and spatiotemporal differences exist in the applicability of different distribution models. Therefore, assessing the applicability of different distribution models is necessary in order to provide more accurate precipitation information. Six precipitation probability distributions (exponential, Gamma, Weibull, skewed normal, mixed exponential, and hybrid exponential/Pareto distributions) are directly and indirectly evaluated on their ability to reproduce the observed time series of precipitation amounts. Data from 24 weather stations and two watersheds (Chute-du-Diable and Yamaska) in the province of Quebec (Canada) are used for this assessment. Various indices and statistics, such as the mean, variance, frequency distribution and extreme values, are used to quantify the performance in simulating precipitation and discharge. Performance in reproducing key statistics of the precipitation time series is well correlated with the number of parameters of the distribution function, and the three-parameter precipitation models outperform the other models, with the mixed exponential distribution being the best at simulating daily precipitation. The advantage of using more complex precipitation distributions is not as clear-cut when the simulated time series are used to drive a hydrological model. While the advantage of using functions with more parameters is not nearly as obvious there, the mixed exponential distribution nonetheless appears as the best candidate for hydrological modeling. 
The implications of choosing a distribution function with respect to hydrological modeling and climate change impact studies are also discussed.
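The mixed exponential distribution favored above is a weighted sum of two exponentials, one for ordinary wet days and one for heavy events. A minimal sampling sketch, with weight and component means chosen for illustration rather than taken from the Quebec calibrations:

```python
import numpy as np

rng = np.random.default_rng(4)

# Mixed (two-component) exponential for wet-day precipitation depth:
# a light "ordinary" component plus a heavy component for large events.
# Parameters are illustrative, not fitted station values.
w, m1, m2 = 0.8, 3.0, 15.0     # weight, small mean (mm), large mean (mm)
n = 50_000
small = rng.random(n) < w
depth = np.where(small, rng.exponential(m1, n), rng.exponential(m2, n))

mean = w * m1 + (1.0 - w) * m2   # analytic mixture mean: 5.4 mm
print(abs(depth.mean() - mean) < 0.3)
```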

  5. Persistence of exponential bed thickness distributions in the stratigraphic record: Experiments and theory

    NASA Astrophysics Data System (ADS)

    Straub, K. M.; Ganti, V. K.; Paola, C.; Foufoula-Georgiou, E.

    2010-12-01

    Stratigraphy preserved in alluvial basins houses the most complete record of information necessary to reconstruct past environmental conditions. Indeed, the character of the sedimentary record is inextricably related to the surface processes that formed it. In this presentation we explore how the signals of surface processes are recorded in stratigraphy through the use of physical and numerical experiments. We focus on linking surface processes to stratigraphy in 1D by quantifying the probability distributions of processes that govern the evolution of depositional systems to the probability distribution of preserved bed thicknesses. In this study we define a bed as a package of sediment bounded above and below by erosional surfaces. In a companion presentation we document heavy-tailed statistics of erosion and deposition from high-resolution temporal elevation data recorded during a controlled physical experiment. However, the heavy tails in the magnitudes of erosional and depositional events are not preserved in the experimental stratigraphy. Similar to many bed thickness distributions reported in field studies we find that an exponential distribution adequately describes the thicknesses of beds preserved in our experiment. We explore the generation of exponential bed thickness distributions from heavy-tailed surface statistics using 1D numerical models. These models indicate that when the full distribution of elevation fluctuations (both erosional and depositional events) is symmetrical, the resulting distribution of bed thicknesses is exponential in form. Finally, we illustrate that a predictable relationship exists between the coefficient of variation of surface elevation fluctuations and the scale-parameter of the resulting exponential distribution of bed thicknesses.
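The "stratigraphic filter" idea above can be sketched in 1D: generate symmetric heavy-tailed elevation fluctuations with net aggradation, keep only the deposits that are never eroded below afterwards, and look at the resulting bed thicknesses. This is a toy construction under stated assumptions, not the experiment's model; the drift, increment distribution and preservation rule are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Elevation history: symmetric heavy-tailed (Student-t) fluctuations
# plus a small aggradation drift (both illustrative choices).
steps = 20_000
z = np.cumsum(rng.standard_t(df=3, size=steps)) + 0.05 * np.arange(steps)

# Preservation: a surface survives only if elevation never drops below
# it afterwards, i.e., the running minimum over all later times.
preserved = np.minimum.accumulate(z[::-1])[::-1]
beds = np.diff(preserved)
beds = beds[beds > 0]          # preserved bed thicknesses

# For an exponential distribution the coefficient of variation is 1.
cv = beds.std() / beds.mean()
print(cv)
```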

  6. Exponential model normalization for electrical capacitance tomography with external electrodes under gap permittivity conditions

    NASA Astrophysics Data System (ADS)

    Baidillah, Marlin R.; Takei, Masahiro

    2017-06-01

A nonlinear normalization model, called the exponential model, has been developed for electrical capacitance tomography (ECT) with external electrodes under gap permittivity conditions. The exponential model normalization is proposed based on the inherently nonlinear relationship between the mixture permittivity and the measured capacitance due to the gap permittivity of the inner wall. The parameters of the exponential equation are derived from an exponential fitting curve based on simulation, and a scaling function is added to adjust for the experimental system conditions. The exponential model normalization was applied to two-dimensional low- and high-contrast dielectric distribution phantoms in simulation and experimental studies. The proposed normalization model has been compared with other normalization models, i.e., the Parallel, Series, Maxwell and Böttcher models. Based on a comparison of image reconstruction results, the exponential model reliably predicts the nonlinear normalization of the measured capacitance for both low- and high-contrast dielectric distributions.
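Normalization in ECT maps a measured capacitance between its empty-pipe and full-pipe calibration values into [0, 1]. The sketch below contrasts the common linear (parallel) model with a generic logarithmic/exponential normalization; the specific form is an assumption for illustration, not necessarily the fitted model of the paper.

```python
import numpy as np

def parallel_norm(C, C_low, C_high):
    """Linear (parallel-capacitance) normalization."""
    return (C - C_low) / (C_high - C_low)

def exponential_norm(C, C_low, C_high):
    """Generic log-linear normalization: an assumed nonlinear mapping,
    illustrating the exponential-model idea rather than the paper's fit."""
    return np.log(C / C_low) / np.log(C_high / C_low)

C_low, C_high = 1.0, 4.0               # calibration capacitances (arbitrary units)
C = np.linspace(C_low, C_high, 7)
print(parallel_norm(C, C_low, C_high))
print(exponential_norm(C, C_low, C_high))  # both run 0 -> 1, differing in between
```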

  7. Stretched exponential distributions in nature and economy: ``fat tails'' with characteristic scales

    NASA Astrophysics Data System (ADS)

    Laherrère, J.; Sornette, D.

    1998-04-01

    To account quantitatively for many reported "natural" fat tail distributions in Nature and Economy, we propose the stretched exponential family as a complement to the often used power law distributions. It has many advantages, among which to be economical with only two adjustable parameters with clear physical interpretation. Furthermore, it derives from a simple and generic mechanism in terms of multiplicative processes. We show that stretched exponentials describe very well the distributions of radio and light emissions from galaxies, of US GOM OCS oilfield reserve sizes, of World, US and French agglomeration sizes, of country population sizes, of daily Forex US-Mark and Franc-Mark price variations, of Vostok (near the south pole) temperature variations over the last 400 000 years, of the Raup-Sepkoski's kill curve and of citations of the most cited physicists in the world. We also discuss its potential for the distribution of earthquake sizes and fault displacements. We suggest physical interpretations of the parameters and provide a short toolkit of the statistical properties of the stretched exponentials. We also provide a comparison with other distributions, such as the shifted linear fractal, the log-normal and the recently introduced parabolic fractal distributions.
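The two-parameter family above has survival function P(X > x) = exp(−(x/x₀)^c), so plotting log(−log P(X > x)) against log x gives a straight line of slope c. A minimal sketch on synthetic data (c and x₀ chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)

# Stretched exponential (Weibull-type) survival: P(X > x) = exp(-(x/x0)^c),
# with c < 1 giving a fat tail that still has a characteristic scale x0.
c, x0 = 0.7, 2.0
x = x0 * rng.weibull(c, 100_000)

# Recover c by regressing log(-log S) on log x over the empirical tail.
xs = np.sort(x)
S = 1.0 - np.arange(1, xs.size + 1) / (xs.size + 1.0)   # empirical survival
sel = (xs > np.quantile(x, 0.1)) & (xs < np.quantile(x, 0.99))
slope, _ = np.polyfit(np.log(xs[sel]), np.log(-np.log(S[sel])), 1)
print(slope)   # close to the generating exponent c = 0.7
```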

  8. Numerical Calculation of the Spectrum of the Severe (1%) Lightning Current and Its First Derivative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, C G; Ong, M M; Perkins, M P

    2010-02-12

    Recently, the direct-strike lightning environment for the stockpile-to-target sequence was updated [1]. In [1], the severe (1%) lightning current waveforms for first and subsequent return strokes are defined based on Heidler's waveform. This report presents numerical calculations of the spectra of those 1% lightning current waveforms and their first derivatives. First, the 1% lightning current models are repeated here for convenience. Then, the numerical method for calculating the spectra is presented and tested. The test uses a double-exponential waveform and its first derivative, which we fit to the previous 1% direct-strike lightning environment from [2]. Finally, the resulting spectra are given and are compared with those of the double-exponential waveform and its first derivative.
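The kind of calculation described above can be sketched with the standard Heidler return-stroke function and an FFT. The waveform parameters below are generic first-stroke-like values, not the report's 1% model parameters, and the spectrum is the plain discrete approximation to the continuous-time spectrum.

```python
import numpy as np

def heidler(t, i0, tau1, tau2, n=10):
    """Heidler return-stroke current waveform, with the usual
    peak-correction factor eta (parameters here are illustrative)."""
    eta = np.exp(-(tau1 / tau2) * (n * tau2 / tau1) ** (1.0 / n))
    x = (t / tau1) ** n
    return (i0 / eta) * x / (1.0 + x) * np.exp(-t / tau2)

# Sample a first-stroke-like pulse and take its spectrum with the FFT.
dt = 1e-8                                   # 10 ns sampling
t = np.arange(0.0, 2e-3, dt)
i = heidler(t, i0=200e3, tau1=19e-6, tau2=485e-6)
spectrum = np.abs(np.fft.rfft(i)) * dt      # approx. continuous-time spectrum
freqs = np.fft.rfftfreq(t.size, dt)
print(freqs[np.argmax(spectrum)])           # unipolar pulse: peak at DC
```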

  9. Trap density of states in n-channel organic transistors: variable temperature characteristics and band transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, Joung-min, E-mail: cho.j.ad@m.titech.ac.jp; Akiyama, Yuto; Kakinuma, Tomoyuki

    2013-10-15

    We have investigated the trap density of states (trap DOS) in n-channel organic field-effect transistors based on N,N′-bis(cyclohexyl)naphthalene diimide (Cy-NDI) and dimethyldicyanoquinonediimine (DMDCNQI). A new method is proposed to extract the trap DOS from the Arrhenius plot of the temperature-dependent transconductance. Double exponential trap DOS are observed: Cy-NDI has considerable deep states, whereas DMDCNQI has substantial tail states. In addition, numerical simulation of the transistor characteristics has been conducted by assuming an exponential trap distribution and the interface approximation. The temperature dependence of the transfer characteristics is well reproduced using only a few parameters, and the trap DOS obtained from the simulated characteristics are in good agreement with the assumed trap DOS, indicating that our analysis is self-consistent. Although the experimentally obtained Meyer-Neldel temperature is related to the trap distribution width, the simulation satisfies the Meyer-Neldel rule only phenomenologically. The simulation also reveals that the subthreshold swing is not always a good indicator of the total trap amount, because it also depends largely on the trap distribution width. Finally, band transport is explored from simulations with a small number of traps. A crossing point of the transfer curves and negative activation energy above a certain gate voltage are observed in the simulated characteristics, where the critical V_G above which band transport is realized is determined by the sum of the trapped and free charge states below the conduction band edge.

  10. Wildfires in Siberian Mountain Forest

    NASA Astrophysics Data System (ADS)

    Kharuk, V.; Ponomarev, E. I.; Antamoshkina, O.

    2017-12-01

    The annual burned area in Russia has been estimated at 0.55 to 20 Mha, with >70% occurring in Siberia. We analyzed the distribution of Siberian wildfires with respect to elevation, slope steepness and aspect. In addition, the temporal dynamics and latitudinal range of wildfires were analyzed. We used daily thermal anomalies derived from the NOAA/AVHRR and Terra/MODIS satellites (1990-2016). Fire return intervals (FRI) were calculated based on dendrochronological analysis of samples taken from trees with burn marks. The spatial distribution of wildfires depends on topographic features: relative burned area increases with elevation up to ca. 1100 m and decreases above it. Wildfire frequency decreases exponentially across the lowland-highland transition. Burned area increases with slope steepness up to 5-10°. Fire return intervals on south-facing slopes are about 30% longer than on north-facing slopes. Wildfire re-occurrence decreases exponentially: 90% of burns were caused by single fires, 8.5% by double fires, 1% of areas burned three times, and about 0.05% of the territory burned four times (observation period: 75 yr). Wildfire area and number, as well as FRI, also depend on latitude: relative burned area increases exponentially in the northward direction, whereas relative fire number decreases exponentially. FRI increases northward: from 80 years at 62°N to 200 years at the Arctic Circle, and to 300 years at the northern limit of closed forests (71+°N). Fire frequency, fire danger period and FRI are strongly correlated with incoming solar radiation (r = 0.81-0.95). In the 21st century, a positive trend in wildfire number and area has been observed in mountain areas throughout Siberia. Burned area and number of fires in Siberia have increased significantly since the 1990s (R2 = 0.47 and R2 = 0.69, respectively), and this increase is correlated with increases in air temperature and climate aridity.
However, wildfires are essential for supporting the reforestation of fire-resistant species (e.g., Larix sibirica, L. dahurica and Pinus silvestris) and their competition with non-fire-resistant species. This work was supported by the Russian Foundation for Basic Research, the Government of the Krasnoyarsk krai, and the Krasnoyarsk Fund for Support of Scientific and Technological Activities (N 17-41-240475)

  11. Doubling Time for Nonexponential Families of Functions

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    2010-01-01

    One special characteristic of any exponential growth or decay function f(t) = Ab[superscript t] is its unique doubling time or half-life, each of which depends only on the base "b". The half-life is used to characterize the rate of decay of any radioactive substance or the rate at which the level of a medication in the bloodstream decays as it is…
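The doubling-time property described above can be made concrete with a short sketch (illustrative only, not from the article): since Ab^(t+T) = 2Ab^t requires b^T = 2, the doubling time T = ln 2 / ln b depends only on the base b, and the half-life of a decay function (b < 1) is ln 2 / |ln b|.

```python
import math

def doubling_time(b: float) -> float:
    """Doubling time T of f(t) = A * b**t: solve b**T = 2 => T = ln2 / ln b.
    Depends only on the base b, never on the coefficient A."""
    return math.log(2) / math.log(b)

def half_life(b: float) -> float:
    """Half-life of a decay function f(t) = A * b**t with 0 < b < 1."""
    return math.log(2) / abs(math.log(b))

# 5% growth per time unit (b = 1.05) doubles in ~14.2 time units,
# regardless of the starting value A.
T = doubling_time(1.05)
```

The same calculation run with any coefficient A gives the same T, which is exactly the "depends only on the base b" point of the article.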

  12. A demographic study of the exponential distribution applied to uneven-aged forests

    Treesearch

    Jeffrey H. Gove

    2016-01-01

    A demographic approach based on a size-structured version of the McKendrick-Von Foerster equation is used to demonstrate a theoretical link between the population size distribution and the underlying vital rates (recruitment, mortality and diameter growth) for the population of individuals whose diameter distribution is negative exponential. This model supports the...
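The link between a negative exponential diameter distribution and the vital rates can be illustrated with a toy steady-state calculation (an assumption-laden sketch, not the model of the paper): with constant diameter growth g and mortality m, the steady-state McKendrick-Von Foerster equation g·dn/dD = -m·n gives n(D) ∝ exp(-(m/g)·D), so the exponential decay rate is m/g.

```python
import math

# Hypothetical constant vital rates (not values from the study):
g = 0.3   # diameter growth, cm per year
m = 0.03  # mortality rate, 1/year

# Steady-state McKendrick-Von Foerster with constant g and m:
#   g * dn/dD = -m * n   =>   n(D) = n0 * exp(-(m/g) * D),
# the negative exponential ("reverse-J") diameter distribution.
a = m / g              # exponential decay rate per cm of diameter
q = math.exp(a * 5.0)  # ratio of counts in adjacent 5 cm diameter classes
```

The ratio q between adjacent diameter classes is the familiar "q-factor" of uneven-aged stand management; here it follows directly from the two vital rates.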

  13. Exponentiated power Lindley distribution.

    PubMed

    Ashour, Samir K; Eltehiwy, Mahmoud A

    2015-11-01

A new generalization of the Lindley distribution was recently proposed by Ghitany et al. [1], called the power Lindley distribution. Another generalization was introduced by Nadarajah et al. [2], named the generalized Lindley distribution. This paper proposes a further generalization of the Lindley distribution that subsumes both. We refer to this new generalization as the exponentiated power Lindley distribution. The new distribution is important because, in addition to the above two models, it contains several widely known distributions as special sub-models, such as the Lindley distribution, and it provides more flexibility for analyzing complex real data sets. We study some statistical properties of the new distribution and discuss maximum likelihood estimation of its parameters; least squares estimation is also used to evaluate the parameters. Three algorithms are proposed for generating random data from the proposed distribution. An application to a real data set shows that the exponentiated power Lindley distribution can be used quite effectively in analyzing real lifetime data.
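One generic route to random generation for such a distribution, numerical inversion of the CDF, can be sketched as follows. The CDF below uses a common parameterization of the exponentiated power Lindley distribution, F(x) = [1 - (1 + βx^p/(β+1)) e^(-βx^p)]^α; treat both the exact form and the routine as illustrative assumptions, not the authors' three algorithms.

```python
import math
import random

def epl_cdf(x: float, alpha: float, beta: float, p: float) -> float:
    """CDF of the exponentiated power Lindley distribution in one common
    parameterization (an assumption, not taken from the paper)."""
    if x <= 0.0:
        return 0.0
    t = beta * x ** p
    return (1.0 - (1.0 + t / (beta + 1.0)) * math.exp(-t)) ** alpha

def epl_sample(alpha: float, beta: float, p: float, rng=random.random) -> float:
    """Draw one variate by numerically inverting the CDF with bisection."""
    u = rng()
    lo, hi = 0.0, 1.0
    while epl_cdf(hi, alpha, beta, p) < u:  # bracket the root
        hi *= 2.0
    for _ in range(80):                     # refine by bisection
        mid = 0.5 * (lo + hi)
        if epl_cdf(mid, alpha, beta, p) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With α = 1 and p = 1 the CDF reduces to the ordinary Lindley case, which is a quick sanity check on the parameterization.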

  14. Mean Excess Function as a method of identifying sub-exponential tails: Application to extreme daily rainfall

    NASA Astrophysics Data System (ADS)

    Nerantzaki, Sofia; Papalexiou, Simon Michael

    2017-04-01

Precisely identifying the distribution tail of a geophysical variable is difficult, or even impossible. First, the tail is the part of the distribution for which the least empirical information is available; second, a universally accepted definition of the tail does not and cannot exist; and third, a tail may change over time due to long-term changes. Unfortunately, the tail is the most important part of the distribution, as it dictates estimates of exceedance probabilities and return periods. Fortunately, based on their tail behavior, probability distributions can be broadly categorized into two major families: sub-exponential (heavy-tailed) and hyper-exponential (light-tailed). This study aims to update the Mean Excess Function (MEF), providing a useful tool for assessing which type of tail better describes empirical data. The MEF is based on the mean value of a variable over a threshold and yields a zero-slope regression line when applied to the Exponential distribution. Here, we construct slope confidence intervals for the Exponential distribution as functions of sample size. Validation of the method using Monte Carlo techniques on four theoretical distributions covering the major tail cases (Pareto type II, Log-normal, Weibull, and Gamma) revealed that it performs well, especially for large samples. Finally, the method is used to investigate the behavior of daily rainfall extremes; thousands of rainfall records from all over the world, with sample sizes over 100 years, were examined, revealing that heavy-tailed distributions describe rainfall extremes more accurately.
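The core of the MEF diagnostic is simple enough to sketch (a minimal illustration, not the authors' implementation with its sample-size-dependent confidence intervals): compute the mean excess e(u) = E[X - u | X > u] over a grid of thresholds u and regress e(u) on u. A near-zero slope is consistent with an exponential tail, while a clearly positive slope points to a sub-exponential (heavy) tail.

```python
import random

def mean_excess(data, thresholds):
    """Mean excess e(u): average of (x - u) over the x > u, per threshold."""
    out = []
    for u in thresholds:
        exc = [x - u for x in data if x > u]
        out.append(sum(exc) / len(exc) if exc else float("nan"))
    return out

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)

random.seed(0)
# Exponential sample: mean excess is flat, slope ~ 0 (light tail).
expo = [random.expovariate(1.0) for _ in range(100_000)]
us = [0.5 * k for k in range(1, 8)]
slope_expo = ols_slope(us, mean_excess(expo, us))

# Pareto sample (tail index 3): mean excess grows linearly, slope > 0.
pareto = [random.paretovariate(3.0) for _ in range(100_000)]
us_p = [1.0 + 0.5 * k for k in range(1, 8)]
slope_pareto = ols_slope(us_p, mean_excess(pareto, us_p))
```

For the Pareto case the theoretical mean-excess slope is 1/(α-1) = 0.5, so the contrast with the flat exponential case is easy to see even on modest samples.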

  15. High-Performance Clock Synchronization Algorithms for Distributed Wireless Airborne Computer Networks with Applications to Localization and Tracking of Targets

    DTIC Science & Technology

    2010-06-01

GMKPF represents a better and more flexible alternative to the Gaussian Maximum Likelihood (GML) and Exponential Maximum Likelihood (EML) estimators for clock offset estimation in non-Gaussian or non-exponential settings, yielding more accurate results relative to GML and EML when the network delays are modeled in terms of a single non-Gaussian/non-exponential distribution or as a...

  16. New results on global exponential dissipativity analysis of memristive inertial neural networks with distributed time-varying delays.

    PubMed

    Zhang, Guodong; Zeng, Zhigang; Hu, Junhao

    2018-01-01

    This paper is concerned with the global exponential dissipativity of memristive inertial neural networks with discrete and distributed time-varying delays. By constructing appropriate Lyapunov-Krasovskii functionals, some new sufficient conditions ensuring global exponential dissipativity of memristive inertial neural networks are derived. Moreover, the globally exponential attractive sets and positive invariant sets are also presented here. In addition, the new proposed results here complement and extend the earlier publications on conventional or memristive neural network dynamical systems. Finally, numerical simulations are given to illustrate the effectiveness of obtained results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Dynamic heterogeneity and conditional statistics of non-Gaussian temperature fluctuations in turbulent thermal convection

    NASA Astrophysics Data System (ADS)

    He, Xiaozhou; Wang, Yin; Tong, Penger

    2018-05-01

Non-Gaussian fluctuations with an exponential tail in their probability density function (PDF) are often observed in nonequilibrium steady states (NESSs), and it is not understood why they appear so often. Turbulent Rayleigh-Bénard convection (RBC) is an example of such a NESS, in which the measured PDF P(δT) of temperature fluctuations δT in the central region of the flow has a long exponential tail. Here we show that because of the dynamic heterogeneity in RBC, the exponential PDF is generated by a convolution of a set of dynamic modes conditioned on a constant local thermal dissipation rate ɛ. The conditional PDF G(δT|ɛ) of δT under a constant ɛ is found to be of Gaussian form, and its variance σT² for different values of ɛ follows an exponential distribution. The convolution of the two distribution functions gives rise to the exponential PDF P(δT). This work thus provides a physical mechanism for the observed exponential distribution of δT in RBC and also sheds light on the origin of non-Gaussian fluctuations in other NESSs.
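The mechanism described, Gaussian conditional statistics whose variance is exponentially distributed, is easy to reproduce numerically (a toy sketch, not the authors' data analysis): mixing N(0, σ²) with σ² drawn from an exponential distribution yields a double-exponential (Laplace) marginal, whose excess kurtosis is 3 rather than the Gaussian value of 0.

```python
import math
import random

random.seed(1)
# Variance mixture: sigma^2 ~ Exponential(mean 1), x | sigma^2 ~ N(0, sigma^2).
# The marginal of x is then Laplace (double-exponential): Gaussian "modes"
# with exponentially distributed variances convolve into exponential tails.
n = 200_000
samples = []
for _ in range(n):
    var = random.expovariate(1.0)
    samples.append(random.gauss(0.0, math.sqrt(var)))

m2 = sum(x * x for x in samples) / n
m4 = sum(x ** 4 for x in samples) / n
excess_kurtosis = m4 / m2 ** 2 - 3.0  # Laplace: exactly 3 in theory
```

The clearly positive excess kurtosis is the signature of the exponential tails that the conditional-Gaussian mixture produces.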

  18. A model of canopy photosynthesis incorporating protein distribution through the canopy and its acclimation to light, temperature and CO2

    PubMed Central

    Johnson, Ian R.; Thornley, John H. M.; Frantz, Jonathan M.; Bugbee, Bruce

    2010-01-01

Background and Aims: The distribution of photosynthetic enzymes, or nitrogen, through the canopy affects canopy photosynthesis, as well as plant quality and nitrogen demand. Most canopy photosynthesis models assume an exponential distribution of nitrogen, or protein, through the canopy, although this is rarely consistent with experimental observation. Previous optimization schemes to derive the nitrogen distribution through the canopy generally focus on the distribution of a fixed amount of total nitrogen, which fails to account for the variation in both the actual quantity of nitrogen in response to environmental conditions and the interaction of photosynthesis and respiration at similar levels of complexity. Model: A model of canopy photosynthesis is presented for C3 and C4 canopies that considers a balanced approach between photosynthesis and respiration as well as plant carbon partitioning. Protein distribution is related to irradiance in the canopy by a flexible equation for which the exponential distribution is a special case. The model is designed to be simple to parameterize for crop, pasture and ecosystem studies. The amount and distribution of protein that maximizes canopy net photosynthesis is calculated. Key Results: The optimum protein distribution is not exponential, but is quite linear near the top of the canopy, which is consistent with experimental observations. The overall concentration within the canopy is dependent on environmental conditions, including the distribution of direct and diffuse components of irradiance. Conclusions: The widely used exponential distribution of nitrogen or protein through the canopy is generally inappropriate. The model derives the optimum distribution with characteristics that are consistent with observation, thus overcoming limitations of using the exponential distribution. Although canopies may not always operate at an optimum, optimization analysis provides valuable insight into plant acclimation to environmental conditions. Protein distribution has implications for the prediction of carbon assimilation, plant quality and nitrogen demand. PMID:20861273

  19. Observational constraints on varying neutrino-mass cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geng, Chao-Qiang; Lee, Chung-Chi; Myrzakulov, R.

We consider generic models of quintessence and investigate the influence of massive neutrino matter with field-dependent masses on the matter power spectrum. In the case of minimally coupled neutrino matter, we examine the effect in tracker models with inverse power-law and double exponential potentials. We present detailed investigations of the scaling field with a steep exponential potential, non-minimally coupled to massive neutrino matter, and derive constraints on field-dependent neutrino masses from the observational data.

  20. Relationship between Item Responses of Negative Affect Items and the Distribution of the Sum of the Item Scores in the General Population

    PubMed Central

    Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A.; Ono, Yutaka

    2016-01-01

Background: Several studies have shown that total depressive symptom scores in the general population approximate an exponential pattern, except for the lower end of the distribution. The Center for Epidemiologic Studies Depression Scale (CES-D) consists of 20 items, each of which may take on four scores: “rarely,” “some,” “occasionally,” and “most of the time.” Recently, we reported that the item responses for 16 negative affect items commonly exhibit exponential patterns, except for the level of “rarely,” leading us to hypothesize that the item responses at the level of “rarely” may be related to the non-exponential pattern typical of the lower end of the distribution. To verify this hypothesis, we investigated how the item responses contribute to the distribution of the sum of the item scores. Methods: Data collected from 21,040 subjects who had completed the CES-D questionnaire as part of a Japanese national survey were analyzed. To assess the item responses of negative affect items, we used a parameter r, which denotes the ratio of “rarely” to “some” in each item response. The distributions of the sum of negative affect items in various combinations were analyzed using log-normal scales and curve fitting. Results: The sum of the item scores approximated an exponential pattern regardless of the combination of items, whereas, at the lower end of the distributions, there was a clear divergence between the actual data and the predicted exponential pattern. At the lower end of the distributions, the sum of the item scores with high values of r exhibited higher scores compared to those predicted from the exponential pattern, whereas the sum of the item scores with low values of r exhibited lower scores compared to those predicted. Conclusions: The distributional pattern of the sum of the item scores could be predicted from the item responses of such items. PMID:27806132
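The exponential-pattern analysis used for such score distributions can be sketched with a log-linear fit of the score histogram (an illustrative toy on synthetic scores, not the CES-D data): if P(score = k) ∝ exp(-λk), the log counts fall on a straight line of slope -λ.

```python
import math
import random
from collections import Counter

def exponential_rate(scores, min_count=10):
    """Fit P(score = k) ~ exp(-lam * k) by least squares on log counts,
    ignoring sparsely populated score values (a simple noise guard)."""
    counts = Counter(scores)
    ks = sorted(k for k, c in counts.items() if c >= min_count)
    ys = [math.log(counts[k]) for k in ks]
    n = len(ks)
    mk, my = sum(ks) / n, sum(ys) / n
    num = sum((k - mk) * (y - my) for k, y in zip(ks, ys))
    return -num / sum((k - mk) ** 2 for k in ks)

# Synthetic scores whose distribution is exponential with rate 0.3: the
# integer part of an exponential variate is geometric, hence exactly
# log-linear in k.
random.seed(42)
scores = [int(random.expovariate(0.3)) for _ in range(50_000)]
lam = exponential_rate(scores)
```

On such data the fitted rate recovers the generating rate, which is the same graphical log-linear diagnosis the abstract describes for the empirical score distributions.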

  1. Numerical computation of gravitational field for general axisymmetric objects

    NASA Astrophysics Data System (ADS)

    Fukushima, Toshio

    2016-10-01

    We developed a numerical method to compute the gravitational field of a general axisymmetric object. The method (I) numerically evaluates a double integral of the ring potential by the split quadrature method using the double exponential rules, and (II) derives the acceleration vector by numerically differentiating the numerically integrated potential by Ridder's algorithm. Numerical comparison with the analytical solutions for a finite uniform spheroid and an infinitely extended object of the Miyamoto-Nagai density distribution confirmed the 13- and 11-digit accuracy of the potential and the acceleration vector computed by the method, respectively. By using the method, we present the gravitational potential contour map and/or the rotation curve of various axisymmetric objects: (I) finite uniform objects covering rhombic spindles and circular toroids, (II) infinitely extended spheroids including Sérsic and Navarro-Frenk-White spheroids, and (III) other axisymmetric objects such as an X/peanut-shaped object like NGC 128, a power-law disc with a central hole like the protoplanetary disc of TW Hya, and a tear-drop-shaped toroid like an axisymmetric equilibrium solution of plasma charge distribution in an International Thermonuclear Experimental Reactor-like tokamak. The method is directly applicable to the electrostatic field and will be easily extended for the magnetostatic field. The FORTRAN 90 programs of the new method and some test results are electronically available.
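The "double exponential rules" referred to are the tanh-sinh quadrature family, which can be sketched as follows (a minimal illustration, not Fukushima's split quadrature code): the substitution x = tanh((π/2) sinh t) pushes the transformed integrand's decay to double-exponential speed, so a plain trapezoidal sum in t converges extremely fast, even for endpoint singularities.

```python
import math

def tanh_sinh(f, a, b, n=30, h=0.1):
    """Tanh-sinh ("double exponential") quadrature of f over (a, b):
    trapezoidal rule in t after the substitution x = tanh((pi/2) sinh t)."""
    c, d = 0.5 * (b - a), 0.5 * (b + a)
    total = 0.0
    for k in range(-n, n + 1):
        t = k * h
        u = 0.5 * math.pi * math.sinh(t)
        x = math.tanh(u)                                       # node in (-1, 1)
        w = 0.5 * math.pi * math.cosh(t) / math.cosh(u) ** 2   # dx/dt
        total += w * f(c * x + d)
    return c * h * total

# Smooth test integrand: the integral of 4/(1+x^2) over (0, 1) equals pi.
approx = tanh_sinh(lambda x: 4.0 / (1.0 + x * x), 0.0, 1.0)
```

With only 61 nodes this reproduces π to high accuracy; the nodes cluster double-exponentially toward the endpoints, which is why the rule also tolerates integrable endpoint singularities.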

  2. Analytical model for release calculations in solid thin-foils ISOL targets

    NASA Astrophysics Data System (ADS)

    Egoriti, L.; Boeckx, S.; Ghys, L.; Houngbo, D.; Popescu, L.

    2016-10-01

    A detailed analytical model has been developed to simulate isotope-release curves from thin-foils ISOL targets. It involves the separate modeling of diffusion and effusion inside the target. The former has been modeled using both first and second Fick's law. The latter, effusion from the surface of the target material to the end of the ionizer, was simulated with the Monte Carlo code MolFlow+. The calculated delay-time distribution for this process was then fitted using a double-exponential function. The release curve obtained from the convolution of diffusion and effusion shows good agreement with experimental data from two different target geometries used at ISOLDE. Moreover, the experimental yields are well reproduced when combining the release fraction with calculated in-target production.
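The convolution step of such a release model can be sketched numerically (hypothetical time constants, not the ISOLDE fit values): the diffusion delay is taken as a single exponential, the effusion delay as a double exponential, and the release curve is their discrete convolution.

```python
import math

def double_exp(t, a, tau1, tau2):
    """Double-exponential delay-time density: a weighted sum of two
    exponential decays (hypothetical illustrative parameters)."""
    return a / tau1 * math.exp(-t / tau1) + (1.0 - a) / tau2 * math.exp(-t / tau2)

def convolve(p, q, dt):
    """Discrete convolution of two delay-time densities on a uniform grid."""
    n = len(p)
    return [dt * sum(p[j] * q[i - j] for j in range(i + 1)) for i in range(n)]

dt, n = 0.01, 2000
grid = [i * dt for i in range(n)]
diffusion = [math.exp(-t / 1.0) / 1.0 for t in grid]          # single-exp diffusion
effusion = [double_exp(t, 0.7, 0.2, 2.0) for t in grid]       # double-exp effusion
release = convolve(diffusion, effusion, dt)                   # release curve
total = dt * sum(release)                                     # ~1 (probability mass)
```

The resulting curve rises from zero to an interior maximum and then decays, the qualitative shape of an isotope-release curve; the integral stays close to one because both delay densities are normalized.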

  3. CleAir Monitoring System for Particulate Matter: A Case in the Napoleonic Museum in Rome

    PubMed Central

    Bonacquisti, Valerio; Di Michele, Marta; Frasca, Francesca; Chianese, Angelo; Siani, Anna Maria

    2017-01-01

Monitoring air particulate concentrations both outdoors and indoors has become an increasingly relevant issue over the past few decades. An innovative, fully automatic monitoring system called CleAir is presented. The system goes beyond the traditional technique (gravimetric analysis) by allowing a double monitoring approach: traditional gravimetric analysis as well as optical spectroscopic analysis of scattering on the same filters under steady-state conditions. The experimental data are interpreted in terms of light percolation through highly scattering matter by means of a stretched exponential evolution. As a test case, CleAir has been applied to investigate the daily distribution of particulate matter within the Napoleonic Museum in Rome. PMID:28892016

  4. Early stages of Ostwald ripening

    NASA Astrophysics Data System (ADS)

    Shneidman, Vitaly A.

    2013-07-01

    The Becker-Döring (BD) nucleation equation is known to predict a narrow double-exponential front (DEF) in the distribution of growing particles over sizes, which is due to early transient effects. When mass conservation is included, nucleation is eventually exhausted while independent growth is replaced by ripening. Despite the enormous difference in the associated time scales, and the resulting demand on numerics, within the generalized BD model the early DEF is shown to be crucial for the selection of the unique self-similar Lifshitz-Slyozov-Wagner asymptotic regime. Being preserved till the latest stages of growth, the DEF provides a universal part of the initial conditions for the ripening problem, regardless of the mass exchange mechanism between the nucleus and the matrix.

  5. Virtual Observatory and Distributed Data Mining

    NASA Astrophysics Data System (ADS)

    Borne, Kirk D.

    2012-03-01

    New modes of discovery are enabled by the growth of data and computational resources (i.e., cyberinfrastructure) in the sciences. This cyberinfrastructure includes structured databases, virtual observatories (distributed data, as described in Section 20.2.1 of this chapter), high-performance computing (petascale machines), distributed computing (e.g., the Grid, the Cloud, and peer-to-peer networks), intelligent search and discovery tools, and innovative visualization environments. Data streams from experiments, sensors, and simulations are increasingly complex and growing in volume. This is true in most sciences, including astronomy, climate simulations, Earth observing systems, remote sensing data collections, and sensor networks. At the same time, we see an emerging confluence of new technologies and approaches to science, most clearly visible in the growing synergism of the four modes of scientific discovery: sensors-modeling-computing-data (Eastman et al. 2005). This has been driven by numerous developments, including the information explosion, development of large-array sensors, acceleration in high-performance computing (HPC) power, advances in algorithms, and efficient modeling techniques. Among these, the most extreme is the growth in new data. Specifically, the acquisition of data in all scientific disciplines is rapidly accelerating and causing a data glut (Bell et al. 2007). It has been estimated that data volumes double every year—for example, the NCSA (National Center for Supercomputing Applications) reported that their users cumulatively generated one petabyte of data over the first 19 years of NCSA operation, but they then generated their next one petabyte in the next year alone, and the data production has been growing by almost 100% each year after that (Butler 2008). The NCSA example is just one of many demonstrations of the exponential (annual data-doubling) growth in scientific data collections. 
In general, this putative data-doubling is an inevitable result of several compounding factors: the proliferation of data-generating devices, sensors, projects, and enterprises; the 18-month doubling of the digital capacity of these microprocessor-based sensors and devices (commonly referred to as "Moore’s law"); the move to digital for nearly all forms of information; the increase in human-generated data (both unstructured information on the web and structured data from experiments, models, and simulation); and the ever-expanding capability of higher density media to hold greater volumes of data (i.e., data production expands to fill the available storage space). These factors are consequently producing an exponential data growth rate, which will soon (if not already) become an insurmountable technical challenge even with the great advances in computation and algorithms. This technical challenge is compounded by the ever-increasing geographic dispersion of important data sources—the data collections are not stored uniformly at a single location, or with a single data model, or in uniform formats and modalities (e.g., images, databases, structured and unstructured files, and XML data sets)—the data are in fact large, distributed, heterogeneous, and complex. The greatest scientific research challenge with these massive distributed data collections is consequently extracting all of the rich information and knowledge content contained therein, thus requiring new approaches to scientific research. This emerging data-intensive and data-oriented approach to scientific research is sometimes called discovery informatics or X-informatics (where X can be any science, such as bio, geo, astro, chem, eco, or anything; Agresti 2003; Gray 2003; Borne 2010). This data-oriented approach to science is now recognized by some (e.g., Mahootian and Eastman 2009; Hey et al. 
2009) as the fourth paradigm of research, following (historically) experiment/observation, modeling/analysis, and computational science.

  6. Anomalous yet Brownian.

    PubMed

    Wang, Bo; Anthony, Stephen M; Bae, Sung Chul; Granick, Steve

    2009-09-08

    We describe experiments using single-particle tracking in which mean-square displacement is simply proportional to time (Fickian), yet the distribution of displacement probability is not Gaussian as should be expected of a classical random walk but, instead, is decidedly exponential for large displacements, the decay length of the exponential being proportional to the square root of time. The first example is when colloidal beads diffuse along linear phospholipid bilayer tubes whose radius is the same as that of the beads. The second is when beads diffuse through entangled F-actin networks, bead radius being less than one-fifth of the actin network mesh size. We explore the relevance to dynamic heterogeneity in trajectory space, which has been extensively discussed regarding glassy systems. Data for the second system might suggest activated diffusion between pores in the entangled F-actin networks, in the same spirit as activated diffusion and exponential tails observed in glassy systems. But the first system shows exceptionally rapid diffusion, nearly as rapid as for identical colloids in free suspension, yet still displaying an exponential probability distribution as in the second system. Thus, although the exponential tail is reminiscent of glassy systems, in fact, these dynamics are exceptionally rapid. We also compare with particle trajectories that are at first subdiffusive but Fickian at the longest measurement times, finding that displacement probability distributions fall onto the same master curve in both regimes. The need is emphasized for experiments, theory, and computer simulation to allow definitive interpretation of this simple and clean exponential probability distribution.

  7. Exponential propagators for the Schrödinger equation with a time-dependent potential.

    PubMed

    Bader, Philipp; Blanes, Sergio; Kopylov, Nikita

    2018-06-28

We consider the numerical integration of the Schrödinger equation with a time-dependent Hamiltonian given as the sum of the kinetic energy and a time-dependent potential. Commutator-free (CF) propagators are exponential propagators that have been shown to be highly efficient for general time-dependent Hamiltonians. We propose new CF propagators that are tailored for Hamiltonians of the said structure, showing considerably improved performance. We obtain new fourth- and sixth-order CF propagators as well as a novel sixth-order propagator that incorporates a double commutator depending only on coordinates, so this term can be considered cost-free. The algorithms require the computation of the action of exponentials on a vector, similar to the well-known exponential midpoint propagator, and this is carried out using the Lanczos method. We illustrate the performance of the new methods on several numerical examples.

  8. In vivo growth of 60 non-screening detected lung cancers: a computed tomography study.

    PubMed

    Mets, Onno M; Chung, Kaman; Zanen, Pieter; Scholten, Ernst T; Veldhuis, Wouter B; van Ginneken, Bram; Prokop, Mathias; Schaefer-Prokop, Cornelia M; de Jong, Pim A

    2018-04-01

Current pulmonary nodule management guidelines are based on nodule volume doubling time, which assumes exponential growth behaviour. However, this is a theory that has never been validated in vivo in the routine-care target population. This study evaluates growth patterns of untreated solid and subsolid lung cancers of various histologies in a non-screening setting. Growth behaviour of pathology-proven lung cancers from two academic centres that were imaged at least three times before diagnosis (n=60) was analysed using dedicated software. Random-intercept random-slope mixed-models analysis was applied to test which growth pattern most accurately described lung cancer growth. Individual growth curves were plotted per pathology subgroup and nodule type. We confirmed that growth in both subsolid and solid lung cancers is best explained by an exponential model. However, subsolid lesions generally progress slower than solid ones. Baseline lesion volume was not related to growth, indicating that smaller lesions do not grow slower compared to larger ones. By showing that lung cancer conforms to exponential growth, we provide the first experimental basis in the routine-care setting for the assumption made in volume doubling time analysis. Copyright ©ERS 2018.
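Volume doubling time itself is a one-line consequence of the exponential-growth assumption the study validates (a sketch with hypothetical numbers, not patient data): writing V(t) = V0 · 2^(t/VDT) and solving for VDT from two measurements gives VDT = Δt · ln 2 / ln(V2/V1).

```python
import math

def volume_doubling_time(v1: float, v2: float, dt_days: float) -> float:
    """Volume doubling time under the exponential-growth assumption:
    V(t) = V0 * 2**(t / VDT)  =>  VDT = dt * ln2 / ln(V2 / V1)."""
    return dt_days * math.log(2) / math.log(v2 / v1)

# Hypothetical nodule: 100 mm^3 growing to 160 mm^3 over 90 days.
vdt = volume_doubling_time(100.0, 160.0, 90.0)  # ~133 days
```

A nodule that exactly doubles over the measurement interval returns a VDT equal to that interval, which is a convenient check on the formula.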

  9. High-Resolution Free-Energy Landscape Analysis of α-Helical Protein Folding: HP35 and Its Double Mutant

    PubMed Central

    2013-01-01

The free-energy landscape can provide a quantitative description of folding dynamics, if determined as a function of an optimally chosen reaction coordinate. Here, we construct the optimal coordinate and the associated free-energy profile for all-helical proteins HP35 and its norleucine (Nle/Nle) double mutant, based on realistic equilibrium folding simulations [Piana et al. Proc. Natl. Acad. Sci. U.S.A. 2012, 109, 17845]. From the obtained profiles, we directly determine such basic properties of folding dynamics as the configurations of the minima and transition states (TS), the formation of secondary structure and hydrophobic core during the folding process, the value of the pre-exponential factor and its relation to the transition path times, the relation between the autocorrelation times in TS and minima. We also present an investigation of the accuracy of the pre-exponential factor estimation based on the transition-path times. Four different estimations of the pre-exponential factor for both proteins give k0^-1 values of approximately a few tens of nanoseconds. Our analysis gives detailed information about folding of the proteins and can serve as a rigorous common language for extensive comparison between experiment and simulation. PMID:24348206

  10. High-Resolution Free-Energy Landscape Analysis of α-Helical Protein Folding: HP35 and Its Double Mutant.

    PubMed

    Banushkina, Polina V; Krivov, Sergei V

    2013-12-10

The free-energy landscape can provide a quantitative description of folding dynamics, if determined as a function of an optimally chosen reaction coordinate. Here, we construct the optimal coordinate and the associated free-energy profile for all-helical proteins HP35 and its norleucine (Nle/Nle) double mutant, based on realistic equilibrium folding simulations [Piana et al. Proc. Natl. Acad. Sci. U.S.A. 2012, 109, 17845]. From the obtained profiles, we directly determine such basic properties of folding dynamics as the configurations of the minima and transition states (TS), the formation of secondary structure and hydrophobic core during the folding process, the value of the pre-exponential factor and its relation to the transition path times, the relation between the autocorrelation times in TS and minima. We also present an investigation of the accuracy of the pre-exponential factor estimation based on the transition-path times. Four different estimations of the pre-exponential factor for both proteins give k0^-1 values of approximately a few tens of nanoseconds. Our analysis gives detailed information about folding of the proteins and can serve as a rigorous common language for extensive comparison between experiment and simulation.

  11. Pattern analysis of total item score and item response of the Kessler Screening Scale for Psychological Distress (K6) in a nationally representative sample of US adults

    PubMed Central

    Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Yutaka, Ono; Furukawa, Toshiaki A.

    2017-01-01

Background: Several recent studies have shown that total scores on depressive symptom measures in a general population approximate an exponential pattern except for the lower end of the distribution. Furthermore, we confirmed that the exponential pattern is present for the individual item responses on the Center for Epidemiologic Studies Depression Scale (CES-D). To confirm the reproducibility of such findings, we investigated the total score distribution and item responses of the Kessler Screening Scale for Psychological Distress (K6) in a nationally representative study. Methods: Data were drawn from the National Survey of Midlife Development in the United States (MIDUS), which comprises four subsamples: (1) a national random digit dialing (RDD) sample, (2) oversamples from five metropolitan areas, (3) siblings of individuals from the RDD sample, and (4) a national RDD sample of twin pairs. K6 items are scored using a 5-point scale: “none of the time,” “a little of the time,” “some of the time,” “most of the time,” and “all of the time.” The pattern of total score distribution and item responses were analyzed using graphical analysis and an exponential regression model. Results: The total score distributions of the four subsamples exhibited an exponential pattern with similar rate parameters. The item responses of the K6 approximated a linear pattern from “a little of the time” to “all of the time” on log-normal scales, while the “none of the time” response was not related to this exponential pattern. Discussion: The total score distribution and item responses of the K6 showed exponential patterns, consistent with other depressive symptom scales. PMID:28289560

  12. Investigation of non-Gaussian effects in the Brazilian option market

    NASA Astrophysics Data System (ADS)

    Sosa-Correa, William O.; Ramos, Antônio M. T.; Vasconcelos, Giovani L.

    2018-04-01

An empirical study of the Brazilian option market is presented in light of three option pricing models: the Black-Scholes model, the exponential model, and a model based on a power-law distribution, the so-called q-Gaussian or Tsallis distribution. It is found that the q-Gaussian model performs better than the Black-Scholes model in about one third of the option chains analyzed. Among these cases, however, the exponential model in turn performs better than the q-Gaussian model 75% of the time. The superiority of the exponential model over the q-Gaussian model is particularly impressive for options close to the expiration date, where its success rate rises above ninety percent.

  13. Calculating Formulae of Proportion Factor and Mean Neutron Exposure in the Exponential Expression of Neutron Exposure Distribution

    NASA Astrophysics Data System (ADS)

    Feng-Hua, Zhang; Gui-De, Zhou; Kun, Ma; Wen-Juan, Ma; Wen-Yuan, Cui; Bo, Zhang

    2016-07-01

    Previous studies have shown that, for the three main stages of the development and evolution of asymptotic giant branch (AGB) star s-process models, the neutron exposure distribution (DNE) in the nucleosynthesis region can always be considered an exponential function, i.e., ρAGB(τ) = (C/τ0) exp(−τ/τ0), over an effective range of neutron exposure values. However, the specific expressions for the proportion factor C and the mean neutron exposure τ0 in the exponential distribution function for different models have not been completely determined in the related literature. By dissecting the basic method used to obtain the exponential DNE, and by systematically analyzing the solution procedures for the neutron exposure distribution functions in different stellar models, the general formulae for calculating C and τ0, together with their auxiliary equations, are derived. Given the discrete neutron exposure distribution Pk, the relationships of C and τ0 with the model parameters can be determined. This result effectively solves the problem of analytically calculating the DNE in the current low-mass AGB star s-process nucleosynthesis model with 13C-pocket radiative burning.

  14. Scaling in the distribution of intertrade durations of Chinese stocks

    NASA Astrophysics Data System (ADS)

    Jiang, Zhi-Qiang; Chen, Wei; Zhou, Wei-Xing

    2008-10-01

    The distribution of intertrade durations, defined as the waiting times between two consecutive transactions, is investigated based upon the limit order book data of 23 liquid Chinese stocks listed on the Shenzhen Stock Exchange in the whole year 2003. A scaling pattern is observed in the distributions of intertrade durations, where the empirical density functions of the normalized intertrade durations of all 23 stocks collapse onto a single curve. The scaling pattern is also observed in the intertrade duration distributions for filled and partially filled trades and in the conditional distributions. The ensemble distributions for all stocks are modeled by the Weibull and the Tsallis q-exponential distributions. Maximum likelihood estimation shows that the Weibull distribution outperforms the q-exponential for not-too-large intertrade durations which account for more than 98.5% of the data. Alternatively, nonlinear least-squares estimation selects the q-exponential as a better model, in which the optimization is conducted on the distance between empirical and theoretical values of the logarithmic probability densities. The distribution of intertrade durations is Weibull followed by a power-law tail with an asymptotic tail exponent close to 3.
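    The maximum-likelihood Weibull fit used above reduces to a one-dimensional root-find for the shape parameter (the scale then has a closed form). A self-contained sketch on synthetic durations; the sample size and the shape value 0.7 are illustrative choices, not numbers from the paper:

```python
import math, random

def weibull_mle_shape(data, lo=0.05, hi=20.0, iters=60):
    # solve the Weibull profile-likelihood equation for the shape k by bisection;
    # the profile function g(k) is monotonically increasing in k
    logs = [math.log(x) for x in data]
    mean_log = sum(logs) / len(logs)
    def g(k):
        xk = [x ** k for x in data]
        s = sum(xk)
        return sum(v * l for v, l in zip(xk, logs)) / s - 1.0 / k - mean_log
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

random.seed(1)
true_shape = 0.7   # sub-exponential shape, typical of duration data
sample = [random.weibullvariate(1.0, true_shape) for _ in range(20000)]
k_hat = weibull_mle_shape(sample)
# closed-form scale MLE given the fitted shape
scale_hat = (sum(x ** k_hat for x in sample) / len(sample)) ** (1.0 / k_hat)
```

A shape below one gives the stretched-exponential (sub-exponential) body reported for the intertrade durations; the q-exponential comparison would add a second fit and a likelihood or least-squares contrast.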

  15. Echo Statistics of Aggregations of Scatterers in a Random Waveguide: Application to Biologic Sonar Clutter

    DTIC Science & Technology

    2012-09-01

    used in this paper to compare probability density functions, the Lilliefors test and the Kullback-Leibler distance. The Lilliefors test is a goodness ... of interest in this study are the Rayleigh distribution and the exponential distribution. The Lilliefors test is used to test goodness-of-fit for ... Lilliefors test for goodness of fit with an exponential distribution. These results suggest that ...

  16. The size distribution of Pacific Seamounts

    NASA Astrophysics Data System (ADS)

    Smith, Deborah K.; Jordan, Thomas H.

    1987-11-01

    An analysis of wide-beam, Sea Beam and map-count data in the eastern and southern Pacific confirms the hypothesis that the average number of "ordinary" seamounts with summit heights h ≥ H can be approximated by the exponential frequency-size distribution: ν(H) = ν0 e^(−βH). The exponential model, characterized by the single scale parameter β^−1, is found to be superior to a power-law (self-similar) model. The exponential model provides a good first-order description of the summit-height distribution over a very broad spectrum of seamount sizes, from small cones (h < 300 m) to tall composite volcanoes (h > 3500 m). The distribution parameters obtained from 157,000 km of wide-beam profiles in the eastern and southern Pacific Ocean are ν0 = (5.4 ± 0.65) × 10^−9 m^−2 and β = (3.5 ± 0.21) × 10^−3 m^−1, yielding an average of 5400 ± 650 seamounts per million square kilometers, of which 170 ± 17 are greater than one kilometer in height. The exponential distribution provides a reference for investigating the populations of not-so-ordinary seamounts, such as those on hotspot swells and near fracture zones, and seamounts in other ocean basins. If we assume that volcano height is determined by a hydraulic head proportional to the source depth of the magma column, then our observations imply an approximately exponential distribution of source depths. For reasonable values of magma and crustal densities, a volcano with the characteristic height β^−1 = 285 m has an apparent source depth on the order of the crustal thickness.
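    The quoted abundances follow directly from the exponential model, since the number per unit area with summit height h ≥ H is ν0 e^(−βH). A quick check with the fitted parameters:

```python
import math

nu0 = 5.4e-9    # m^-2, areal density of seamounts of any height
beta = 3.5e-3   # m^-1, inverse of the characteristic height
area = 1.0e12   # one million square kilometres, in m^2

total = nu0 * area                        # all "ordinary" seamounts: ~5400
tall = total * math.exp(-beta * 1000.0)   # summit height >= 1 km: ~163
```

The count of roughly 160 seamounts taller than one kilometre is consistent with the quoted 170 ± 17 per million square kilometres.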

  17. Effect of ethanol variation on the internal environment of sol-gel bulk and thin films with aging.

    PubMed

    Gupta, R; Mozumdar, S; Chaudhury, N K

    2005-10-15

    Sol-gel derived bulk and thin films were prepared from different compositions at low pH (approximately 2.0) containing varying concentrations of ethanol, from 15 to 60%, at a constant water (H2O)/tetraethyl-orthosilicate (TEOS) ratio (R = 4). Fluorescence microscopic and spectroscopic measurements on the fluorescent probe Hoechst 33258 (H258), entrapped in these compositions, were carried out at different days of storage to monitor the effects of ethanol concentration on the internal environment of the sol-gel materials. Fluorescence microscopic observations on sol-gel thin films prepared by the dip-coating technique showed a uniform surface at a withdrawal speed of 1 cm/min (high speed) and a cracked surface at 0.1 cm/min (low speed), which did not change during aging. Fluorescence spectral measurements showed an emission maximum of H258 at approximately 535 nm in fresh sols at all ethanol concentrations, which exhibited a slight blue shift to 512 nm during aging in bulk. No such spectral shift was observed in sol-gel thin films coated at high speed, whereas thin films coated at low speed clearly showed an additional band at approximately 404 nm at 45 and 60% ethanol after about one month of storage. Analysis of the fluorescence lifetime data indicated a single-exponential decay (1.6-1.8 ns) in fresh sol; from the third day onwards, a double-exponential decay with a short (τ1) and a long (τ2) component was invariably observed in sol-gel bulk, with a dominant τ1 at approximately 1.2 ns at all ethanol concentrations. A double-exponential decay consisting of a short component (τ1) at approximately 0.2 ns and a long component (τ2) at approximately 3.5 ns was observed at all ethanol concentrations in both fresh and aged sol-gel thin films. Further, distribution analysis of the H258 lifetimes showed two mean lifetimes with increased width in aged bulk and thin films. These results are likely to have strong implications for designing the internal environment for applications in biosensors.
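    Resolving a decay curve into two lifetime components is a small nonlinear fit. A toy sketch on noiseless synthetic data resembling the reported 0.2 ns and 3.5 ns components (the amplitudes, time grid, and the coarse grid search are invented for illustration; a real analysis would use proper nonlinear least squares and deconvolve the instrument response):

```python
import math

def double_exp(t, a1, tau1, a2, tau2):
    return a1 * math.exp(-t / tau1) + a2 * math.exp(-t / tau2)

# synthetic decay with a short (0.2 ns) and a long (3.5 ns) lifetime
ts = [0.05 * i for i in range(200)]   # 0 to 10 ns
data = [double_exp(t, 0.6, 0.2, 0.4, 3.5) for t in ts]

# brute-force grid search over the two lifetimes and the amplitude split
best = None
for tau1 in (0.1, 0.2, 0.3, 0.5):
    for tau2 in (2.5, 3.0, 3.5, 4.0):
        for a1 in (0.4, 0.5, 0.6, 0.7):
            a2 = 1.0 - a1
            sse = sum((double_exp(t, a1, tau1, a2, tau2) - d) ** 2
                      for t, d in zip(ts, data))
            if best is None or sse < best[0]:
                best = (sse, tau1, tau2, a1)
```

The minimum of the sum of squared errors picks out the generating pair of lifetimes.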

  18. A mechanism producing power law etc. distributions

    NASA Astrophysics Data System (ADS)

    Li, Heling; Shen, Hongjun; Yang, Bin

    2017-07-01

    The power-law distribution plays an increasingly important role in the study of complex systems. Motivated by the intractability of complex systems, the idea of incomplete statistics is adopted and extended: three different exponential factors are introduced into the equations for the normalization condition, the statistical average, and the Shannon entropy, and probability distribution functions of exponential form, power-law form, and the product form of a power function and an exponential function are derived from the Shannon entropy and the maximum entropy principle. It is thus shown that the maximum entropy principle can fully replace the equal-probability hypothesis. Because power-law distributions and distributions of the product form, which cannot be derived via the equal-probability hypothesis, can be derived with the aid of the maximum entropy principle, it can also be concluded that the maximum entropy principle is a basic principle that embodies concepts more extensively and reveals the laws of motion of objects more fundamentally. At the same time, this principle reveals an intrinsic link between Nature and human society and the principles common to both.

  19. Extended q-Gaussian and q-exponential distributions from gamma random variables

    NASA Astrophysics Data System (ADS)

    Budini, Adrián A.

    2015-05-01

    The family of q-Gaussian and q-exponential probability densities fits the statistical behavior of diverse complex self-similar nonequilibrium systems. These distributions, independently of the underlying dynamics, can rigorously be obtained by maximizing Tsallis "nonextensive" entropy under appropriate constraints, as well as from superstatistical models. In this paper we provide an alternative and complementary scheme for deriving these objects. We show that q-Gaussian and q-exponential random variables can always be expressed as a function of two statistically independent gamma random variables with the same scale parameter. Their shape index determines the complexity q parameter. This result also allows us to define an extended family of asymmetric q-Gaussian and modified q-exponential densities, which reduce to the standard ones when the shape parameters are the same. Furthermore, we demonstrate that a simple change of variables always allows relating any of these distributions with a beta stochastic variable. The extended distributions are applied in the statistical description of different complex dynamics such as log-return signals in financial markets and motion of point defects in a fluid flow.
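    One well-known special case of such a gamma construction can be checked numerically: the ratio of two independent gamma variables with the same scale is beta-prime distributed, and with unit shape in the numerator this is exactly a Lomax law, i.e. a q-exponential with q > 1, whose survival function is (1 + x)^(−α). A sketch with arbitrary shape values (not the paper's notation):

```python
import random

random.seed(7)
alpha = 3.0    # shape of the denominator gamma variable
n = 200000

# Gamma(1)/Gamma(alpha), same scale -> beta-prime(1, alpha) = Lomax(alpha),
# a q-exponential with a power-law tail of exponent alpha + 1
xs = [random.gammavariate(1.0, 1.0) / random.gammavariate(alpha, 1.0)
      for _ in range(n)]

# analytic Lomax survival: P(X > x) = (1 + x)**(-alpha)
emp = sum(x > 1.0 for x in xs) / n
ana = (1.0 + 1.0) ** (-alpha)    # 0.125 at x = 1
```

The empirical tail fraction agrees with the analytic survival function, illustrating how heavy-tailed q-statistics emerge from ordinary gamma variables.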

  20. Not all nonnormal distributions are created equal: Improved theoretical and measurement precision.

    PubMed

    Joo, Harry; Aguinis, Herman; Bradley, Kyle J

    2017-07-01

    We offer a four-category taxonomy of individual output distributions (i.e., distributions of cumulative results): (1) pure power law; (2) lognormal; (3) exponential tail (including exponential and power law with an exponential cutoff); and (4) symmetric or potentially symmetric (including normal, Poisson, and Weibull). The four categories are uniquely associated with mutually exclusive generative mechanisms: self-organized criticality, proportionate differentiation, incremental differentiation, and homogenization. We then introduce distribution pitting, a falsification-based method for comparing distributions to assess how well each one fits a given data set. In doing so, we also introduce decision rules to determine the likely dominant shape and generative mechanism among many that may operate concurrently. Next, we implement distribution pitting using 229 samples of individual output for several occupations (e.g., movie directors, writers, musicians, athletes, bank tellers, call center employees, grocery checkers, electrical fixture assemblers, and wirers). Results suggest that for 75% of our samples, exponential tail distributions and their generative mechanism (i.e., incremental differentiation) likely constitute the dominant distribution shape and explanation of nonnormally distributed individual output. This finding challenges past conclusions indicating the pervasiveness of other types of distributions and their generative mechanisms. Our results further contribute to theory by offering premises about the link between past and future individual output. For future research, our taxonomy and methodology can be used to pit distributions of other variables (e.g., organizational citizenship behaviors). Finally, we offer practical insights on how to increase overall individual output and produce more top performers. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  1. NMR investigation of the short-chain ionic surfactant-water systems.

    PubMed

    Popova, M V; Tchernyshev, Y S; Michel, D

    2004-02-03

    The structure and dynamics of surfactant molecules [CH3(CH2)7COOK] in heavy water solutions were investigated by 1H and 2H NMR. A double-exponential attenuation of the spin-echo amplitude in a Carr-Purcell-Meiboom-Gill experiment was found; we attribute the two components to the bound and monomeric states. At high concentrations, a double-exponential decay of the spin-echo signal versus the square of the dc magnetic field gradient was also observed in the NMR self-diffusion measurements. The slow component of the diffusion process is caused by micellar aggregates, while the fast component results from the self-diffusion of the monomers through the micelles. The self-diffusion studies indicate that the form of the micelles changes with increasing total surfactant concentration. The critical temperature range for self-association is reflected in the 1H transverse relaxation.

  2. A U-shaped linear ultrasonic motor using longitudinal vibration transducers with double feet.

    PubMed

    Liu, Yingxiang; Liu, Junkao; Chen, Weishan; Shi, Shengjun

    2012-05-01

    A U-shaped linear ultrasonic motor using longitudinal vibration transducers with double feet was proposed in this paper. The proposed motor contains a horizontal transducer and two vertical transducers. The horizontal transducer includes two exponential horns located at the leading ends, and each vertical transducer contains one exponential horn. The horns of the horizontal transducer and the vertical transducers intersect at the tip ends, where the driving feet are located. Longitudinal vibrations are superimposed in the motor and generate elliptical motions at the driving feet. The two vibration modes of the motor are discussed, and the motion trajectories of the driving feet are deduced. By adjusting the structural parameters, the resonance frequencies of the two vibration modes were degenerated. A prototype motor was fabricated and measured. Typical output of the prototype is a no-load speed of 854 mm/s and a maximum thrust force of 40 N at a voltage of 200 Vrms.

  3. Ill-posed problem and regularization in reconstruction of radiobiological parameters from serial tumor imaging data

    NASA Astrophysics Data System (ADS)

    Chvetsov, Alevei V.; Sandison, George A.; Schwartz, Jeffrey L.; Rengan, Ramesh

    2015-11-01

    The main objective of this article is to improve the stability of reconstruction algorithms for the estimation of radiobiological parameters using serial tumor imaging data acquired during radiation therapy. Serial images of tumor response to radiation therapy represent a complex summation of several exponential processes, such as treatment-induced cell inactivation, tumor growth, and cell loss. Accurate assessment of treatment response requires separation of these processes because they define the radiobiological determinants of treatment response and, correspondingly, tumor control probability. However, the estimation of radiobiological parameters from imaging data is an inverse ill-posed problem: a sum of several exponentials produces a Fredholm integral equation of the first kind, which is ill posed. Therefore, the stability of the reconstruction of radiobiological parameters is a problem even for the simplest models of tumor response. To study the stability of the parameter reconstruction problem, we used a set of serial CT imaging data for head and neck cancer and the simplest case of a two-level cell population model of tumor response. Inverse reconstruction was performed using a simulated annealing algorithm to minimize a least-squares objective function. The results show that the reconstructed values of cell surviving fractions and cell doubling time exhibit significant nonphysical fluctuations if no stabilization algorithms are applied. However, after applying a stabilization algorithm based on variational regularization, the reconstruction produces statistical distributions for surviving fractions and doubling time that are comparable to published in vitro data. This algorithm is an advance over our previous work, where only cell surviving fractions were reconstructed. We conclude that variational regularization allows for an increase in the number of free parameters in our model, which enables the development of more advanced parameter reconstruction algorithms.

  4. Square Root Graphical Models: Multivariate Generalizations of Univariate Exponential Families that Permit Positive Dependencies

    PubMed Central

    Inouye, David I.; Ravikumar, Pradeep; Dhillon, Inderjit S.

    2016-01-01

    We develop Square Root Graphical Models (SQR), a novel class of parametric graphical models that provides multivariate generalizations of univariate exponential family distributions. Previous multivariate graphical models (Yang et al., 2015) did not allow positive dependencies for the exponential and Poisson generalizations. However, in many real-world datasets, variables clearly have positive dependencies. For example, the airport delay time in New York—modeled as an exponential distribution—is positively related to the delay time in Boston. With this motivation, we give an example of our model class derived from the univariate exponential distribution that allows for almost arbitrary positive and negative dependencies with only a mild condition on the parameter matrix—a condition akin to the positive definiteness of the Gaussian covariance matrix. Our Poisson generalization allows for both positive and negative dependencies without any constraints on the parameter values. We also develop parameter estimation methods using node-wise regressions with ℓ1 regularization and likelihood approximation methods using sampling. Finally, we demonstrate our exponential generalization on a synthetic dataset and a real-world dataset of airport delay times. PMID:27563373

  5. Universality in stochastic exponential growth.

    PubMed

    Iyer-Biswas, Srividya; Crooks, Gavin E; Scherer, Norbert F; Dinner, Aaron R

    2014-07-11

    Recent imaging data for single bacterial cells reveal that their mean sizes grow exponentially in time and that their size distributions collapse to a single curve when rescaled by their means. An analogous result holds for the division-time distributions. A model is needed to delineate the minimal requirements for these scaling behaviors. We formulate a microscopic theory of stochastic exponential growth as a Master Equation that accounts for these observations, in contrast to existing quantitative models of stochastic exponential growth (e.g., the Black-Scholes equation or geometric Brownian motion). Our model, the stochastic Hinshelwood cycle (SHC), is an autocatalytic reaction cycle in which each molecular species catalyzes the production of the next. By finding exact analytical solutions to the SHC and the corresponding first passage time problem, we uncover universal signatures of fluctuations in exponential growth and division. The model makes minimal assumptions, and we describe how more complex reaction networks can reduce to such a cycle. We thus expect similar scalings to be discovered in stochastic processes resulting in exponential growth that appear in diverse contexts such as cosmology, finance, technology, and population growth.
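    A minimal Gillespie simulation of a two-species Hinshelwood cycle illustrates the exponential mean growth, whose rate is the geometric mean √(k1 k2) of the two catalytic rates. The rate constants, initial sizes, and trajectory count below are arbitrary illustrative choices, not values from the paper:

```python
import math, random

def shc_trajectory(k1, k2, x1, x2, t_end):
    # two-species Hinshelwood cycle: X1 catalyzes X2 production and vice versa
    #   X1 -> X1 + X2 at rate k1*x1,   X2 -> X2 + X1 at rate k2*x2
    t = 0.0
    while True:
        r1, r2 = k1 * x1, k2 * x2
        t += random.expovariate(r1 + r2)
        if t > t_end:
            return x1 + x2
        if random.random() * (r1 + r2) < r1:
            x2 += 1
        else:
            x1 += 1

random.seed(3)
k, t_end = 1.0, 3.0
sizes = [shc_trajectory(k, k, 50, 50, t_end) for _ in range(200)]
mean_size = sum(sizes) / len(sizes)

# the mean obeys the deterministic rate equations, so it grows as
# exp(sqrt(k1*k2) * t) from the initial total size of 100
growth_rate = math.log(mean_size / 100.0) / t_end
```

The fitted growth rate recovers √(k1 k2) = 1; studying the rescaled size distributions across trajectories would exhibit the scaling collapse the paper describes.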

  6. Universality in Stochastic Exponential Growth

    NASA Astrophysics Data System (ADS)

    Iyer-Biswas, Srividya; Crooks, Gavin E.; Scherer, Norbert F.; Dinner, Aaron R.

    2014-07-01

    Recent imaging data for single bacterial cells reveal that their mean sizes grow exponentially in time and that their size distributions collapse to a single curve when rescaled by their means. An analogous result holds for the division-time distributions. A model is needed to delineate the minimal requirements for these scaling behaviors. We formulate a microscopic theory of stochastic exponential growth as a Master Equation that accounts for these observations, in contrast to existing quantitative models of stochastic exponential growth (e.g., the Black-Scholes equation or geometric Brownian motion). Our model, the stochastic Hinshelwood cycle (SHC), is an autocatalytic reaction cycle in which each molecular species catalyzes the production of the next. By finding exact analytical solutions to the SHC and the corresponding first passage time problem, we uncover universal signatures of fluctuations in exponential growth and division. The model makes minimal assumptions, and we describe how more complex reaction networks can reduce to such a cycle. We thus expect similar scalings to be discovered in stochastic processes resulting in exponential growth that appear in diverse contexts such as cosmology, finance, technology, and population growth.

  7. Conditional optimal spacing in exponential distribution.

    PubMed

    Park, Sangun

    2006-12-01

    In this paper, we propose the conditional optimal spacing defined as the optimal spacing after specifying a predetermined order statistic. If we specify a censoring time, then the optimal inspection times for grouped inspection can be determined from this conditional optimal spacing. We take an example of exponential distribution, and provide a simple method of finding the conditional optimal spacing.

  8. Reliability and sensitivity analysis of a system with multiple unreliable service stations and standby switching failures

    NASA Astrophysics Data System (ADS)

    Ke, Jyh-Bin; Lee, Wen-Chiung; Wang, Kuo-Hsiung

    2007-07-01

    This paper presents the reliability and sensitivity analysis of a system with M primary units, W warm standby units, and R unreliable service stations, in which warm standby units switching to the primary state might fail. Failure times of primary and warm standby units are assumed to have exponential distributions, and service times of the failed units are exponentially distributed. In addition, breakdown times and repair times of the service stations also follow exponential distributions. Expressions for the system reliability, R_Y(t), and the mean time to system failure, MTTF, are derived. Sensitivity and relative sensitivity analyses of the system reliability and the mean time to failure with respect to the system parameters are also investigated.
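    For a small instance of such a Markov model, the MTTF follows from solving a linear system on the transient states, and a Monte Carlo run of the same chain checks the algebra. Everything below (M = 2, W = 1, a single repairman, and the rate values) is an illustrative reduction invented here, not the paper's full M/W/R system:

```python
import random

# state = number of failed units; the system fails on reaching state 2
lam, alpha, mu = 0.1, 0.05, 1.0  # primary-failure, standby-failure, repair rates

a = 2 * lam + alpha   # leave rate of state 0 (a primary or the standby fails)
b = 2 * lam           # failure rate out of state 1 (standby promoted to primary)

# MTTF m_i from state i solves Q_T m = -1 on the transient states {0, 1}:
#   -a*m0 + a*m1 = -1,    mu*m0 - (b + mu)*m1 = -1
m1 = (1.0 + mu / a) / b
m0 = m1 + 1.0 / a     # mean time to system failure from the all-up state

# Monte Carlo check of the same chain
random.seed(5)
def one_run():
    t, state = 0.0, 0
    while True:
        if state == 0:
            t += random.expovariate(a)
            state = 1
        else:
            t += random.expovariate(b + mu)
            if random.random() * (b + mu) < mu:
                state = 0       # repaired before the next failure
            else:
                return t        # second failure: system down

mc = sum(one_run() for _ in range(20000)) / 20000
```

With these rates the analytic MTTF is 29 time units, and the simulation agrees to within sampling error.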

  9. Smooth centile curves for skew and kurtotic data modelled using the Box-Cox power exponential distribution.

    PubMed

    Rigby, Robert A; Stasinopoulos, D Mikis

    2004-10-15

    The Box-Cox power exponential (BCPE) distribution, developed in this paper, provides a model for a dependent variable Y exhibiting both skewness and kurtosis (leptokurtosis or platykurtosis). The distribution is defined by a power transformation Y^ν having a shifted and scaled (truncated) standard power exponential distribution with parameter τ. The distribution has four parameters and is denoted BCPE(μ, σ, ν, τ). The parameters μ, σ, ν and τ may be interpreted as relating to location (median), scale (approximate coefficient of variation), skewness (transformation to symmetry) and kurtosis (power exponential parameter), respectively. Smooth centile curves are obtained by modelling each of the four parameters of the distribution as a smooth non-parametric function of an explanatory variable. A Fisher scoring algorithm is used to fit the non-parametric model by maximizing a penalized likelihood. The first and expected second and cross derivatives of the likelihood with respect to μ, σ, ν and τ, required for the algorithm, are provided. The centiles of the BCPE distribution are easy to calculate, so it is highly suited to centile estimation. This application of the BCPE distribution to smooth centile estimation generalizes the LMS method of centile estimation to data exhibiting kurtosis (as well as skewness) different from that of a normal distribution, and is named here the LMSP method of centile estimation. The LMSP method is applied to modelling the body mass index of Dutch males against age.
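    In the τ = 2 special case the power exponential becomes the normal, and the BCPE reduces to the Box-Cox normal of the LMS method, where centiles have the closed form C_α(x) = μ(1 + νσ z_α)^(1/ν). A sketch with invented parameter values at a single age (not the Dutch BMI fit):

```python
def lms_centile(mu, sigma, nu, z):
    # Box-Cox normal (LMS) centile; nu = 0 would need the log-limit form
    return mu * (1.0 + nu * sigma * z) ** (1.0 / nu)

# hypothetical parameters: median 22, coefficient of variation 0.12,
# skewness parameter -1.5 (right-skewed, typical for BMI)
mu, sigma, nu = 22.0, 0.12, -1.5
c50 = lms_centile(mu, sigma, nu, 0.0)     # z = 0 returns the median mu
c95 = lms_centile(mu, sigma, nu, 1.645)   # 95th centile
c05 = lms_centile(mu, sigma, nu, -1.645)  # 5th centile
```

With ν < 0 the upper centile sits further from the median than the lower one, reflecting the right skew; the full LMSP method additionally lets τ stretch or shrink the tails.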

  10. Analysis of crackling noise using the maximum-likelihood method: Power-law mixing and exponential damping.

    PubMed

    Salje, Ekhard K H; Planes, Antoni; Vives, Eduard

    2017-10-01

    Crackling noise can be initiated by competing or coexisting mechanisms. These mechanisms can combine to generate an approximate scale invariant distribution that contains two or more contributions. The overall distribution function can be analyzed, to a good approximation, using maximum-likelihood methods and assuming that it follows a power law although with nonuniversal exponents depending on a varying lower cutoff. We propose that such distributions are rather common and originate from a simple superposition of crackling noise distributions or exponential damping.
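    The maximum-likelihood step with a varying lower cutoff can be sketched as follows: for a continuous power law above x_min, the MLE of the exponent is the Hill-type estimator α̂ = 1 + n / Σ ln(x_i / x_min). The synthetic data and exponent below are arbitrary:

```python
import math, random

def powerlaw_mle(data, xmin):
    # continuous power-law MLE: alpha = 1 + n / sum(ln(x / xmin)) over x >= xmin
    tail = [x for x in data if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

random.seed(11)
alpha_true, xmin0 = 2.5, 1.0
# inverse-CDF sampling of a pure power law above xmin0
data = [xmin0 * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
        for _ in range(50000)]

# for a pure power law the estimate is stable as the lower cutoff varies;
# for mixed or exponentially damped signals it would drift with the cutoff
est = [powerlaw_mle(data, xm) for xm in (1.0, 2.0, 4.0)]
```

The drift (or stability) of α̂ as the cutoff is raised is exactly the diagnostic the paper uses to separate superposed crackling-noise contributions.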

  11. The shock waves in decaying supersonic turbulence

    NASA Astrophysics Data System (ADS)

    Smith, M. D.; Mac Low, M.-M.; Zuev, J. M.

    2000-04-01

    We here analyse numerical simulations of supersonic, hypersonic and magnetohydrodynamic turbulence that is free to decay. Our goals are to understand the dynamics of the decay and the characteristic properties of the shock waves produced. This will be useful for the interpretation of observations of both motions in molecular clouds and sources of non-thermal radiation. We find that decaying hypersonic turbulence possesses an exponential tail of fast shocks and an exponential decay in time, i.e., the number of shocks is proportional to t exp(−ktv) for shock velocity jump v and mean initial wavenumber k. In contrast to the velocity gradients, the velocity probability distribution function remains Gaussian with a more complex decay law. The energy is dissipated not by fast shocks but by a large number of low Mach number shocks. The power loss peaks near a low-speed turn-over in an exponential distribution. An analytical extension of the mapping closure technique is able to predict the basic decay features. Our analytic description of the distribution of shock strengths should prove useful for direct modelling of observable emission. We note that an exponential distribution of shocks such as we find will, in general, generate very low excitation shock signatures.

  12. Eruption probabilities for the Lassen Volcanic Center and regional volcanism, northern California, and probabilities for large explosive eruptions in the Cascade Range

    USGS Publications Warehouse

    Nathenson, Manuel; Clynne, Michael A.; Muffler, L.J. Patrick

    2012-01-01

    Chronologies for eruptive activity of the Lassen Volcanic Center and for eruptions from the regional mafic vents in the surrounding area of the Lassen segment of the Cascade Range are here used to estimate probabilities of future eruptions. For the regional mafic volcanism, the ages of many vents are known only within broad ranges, and two models are developed that should bracket the actual eruptive ages. These chronologies are used with exponential, Weibull, and mixed-exponential probability distributions to match the data for time intervals between eruptions. For the Lassen Volcanic Center, the probability of an eruption in the next year is 1.4 × 10^−4 for the exponential distribution and 2.3 × 10^−4 for the mixed-exponential distribution. For the regional mafic vents, the exponential distribution gives a probability of an eruption in the next year of 6.5 × 10^−4, but the mixed-exponential distribution indicates that the current probability, 12,000 years after the last event, could be significantly lower. For the exponential distribution, the highest probability is for an eruption from a regional mafic vent. Data on areas and volumes of lava flows and domes of the Lassen Volcanic Center and of eruptions from the regional mafic vents provide constraints on the probable sizes of future eruptions. Probabilities of lava-flow coverage are similar for the Lassen Volcanic Center and for regional mafic vents, whereas the probable eruptive volumes for the mafic vents are generally smaller. Data have been compiled for large explosive eruptions (>≈5 km^3 in deposit volume) in the Cascade Range during the past 1.2 m.y. in order to estimate probabilities of eruption. For erupted volumes >≈5 km^3, the rate of occurrence since 13.6 ka is much higher than for the entire period, and we use these data to calculate an annual probability of a large eruption of 4.6 × 10^−4. For erupted volumes ≥10 km^3, the rate of occurrence has been reasonably constant from 630 ka to the present, giving more confidence in the estimate, and we use those data to calculate an annual probability of a large eruption of 1.4 × 10^−5.
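    Under the exponential (Poisson-process) model used for such estimates, the probability of an eruption within the next Δt is 1 − exp(−Δt/τ̄), where τ̄ is the mean repose interval. A sketch with invented interval data chosen to land near the quoted Lassen value; the real estimate comes from the dated chronology, not these numbers:

```python
import math

# hypothetical repose intervals between dated eruptions, in years
intervals = [9000, 5000, 8000, 6000, 7500, 7000]

mean_repose = sum(intervals) / len(intervals)   # MLE of the exponential mean
rate = 1.0 / mean_repose                        # events per year
p_next_year = 1.0 - math.exp(-rate * 1.0)       # ~ rate, since rate is small
```

For rates this small the annual probability is numerically indistinguishable from the rate itself, which is why such results are usually quoted directly as annual rates.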

  13. Universal patterns of inequality

    NASA Astrophysics Data System (ADS)

    Banerjee, Anand; Yakovenko, Victor M.

    2010-07-01

    Probability distributions of money, income and energy consumption per capita are studied for ensembles of economic agents. The principle of entropy maximization for partitioning of a limited resource gives exponential distributions for the investigated variables. A non-equilibrium difference of money temperatures between different systems generates net fluxes of money and population. To describe income distribution, a stochastic process with additive and multiplicative components is introduced. The resultant distribution interpolates between exponential at the low end and power law at the high end, in agreement with the empirical data for the USA. We show that the increase in income inequality in the USA originates primarily from the increase in the income fraction going to the upper tail, which now exceeds 20% of the total income. Analyzing the data from the World Resources Institute, we find that the distribution of energy consumption per capita around the world can be approximately described by the exponential function. Comparing the data for 1990, 2000 and 2005, we discuss the effect of globalization on the inequality of energy consumption.
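    The entropy-maximization result for money can be reproduced with a standard random-exchange simulation in the spirit of this line of work (the agent count, exchange rule, and run length below are illustrative choices): conserved pairwise exchanges drive the money distribution toward the exponential (Boltzmann-Gibbs) form, for which the standard deviation equals the mean.

```python
import random

random.seed(2)
n_agents, n_exchanges, m0 = 2000, 400000, 100.0
money = [m0] * n_agents

for _ in range(n_exchanges):
    i = random.randrange(n_agents)
    j = random.randrange(n_agents)
    if i == j:
        continue
    # pool the pair's money and split it uniformly at random (total conserved)
    total = money[i] + money[j]
    u = random.random()
    money[i], money[j] = u * total, (1.0 - u) * total

mean = sum(money) / n_agents   # the "money temperature", fixed at m0
std = (sum((m - mean) ** 2 for m in money) / n_agents) ** 0.5
cv = std / mean                # -> 1 for an exponential distribution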

  14. Study on probability distributions for evolution in modified extremal optimization

    NASA Astrophysics Data System (ADS)

    Zeng, Guo-Qiang; Lu, Yong-Zai; Mao, Wei-Jie; Chu, Jian

    2010-05-01

    It is widely believed that the power law is a proper probability distribution for evolution in τ-EO (extremal optimization), a general-purpose stochastic local-search approach inspired by self-organized criticality, and in its applications to NP-hard problems, e.g., graph partitioning, graph coloring, and spin glasses. In this study, we find that the exponential distributions or hybrid ones (e.g., power laws with an exponential cutoff) popularly used in network science may replace the original power laws in a modified τ-EO method called the self-organized algorithm (SOA), and provide better performance than other statistical-physics-oriented methods such as simulated annealing and τ-EO, according to experimental results on random Euclidean traveling salesman problems (TSP) and non-uniform instances. From the perspective of optimization, our results demonstrate that the power law is not the only proper probability distribution for evolution in EO-like methods, at least for the TSP; the exponential and hybrid distributions may be other choices.

  15. A fractal process of hydrogen diffusion in a-Si:H with exponential energy distribution

    NASA Astrophysics Data System (ADS)

    Hikita, Harumi; Ishikawa, Hirohisa; Morigaki, Kazuo

    2017-04-01

    Hydrogen diffusion in a-Si:H with an exponential distribution of the states in energy exhibits a fractal structure. It is shown that the probability P(t) of the pausing time t has the form t^α (α: fractal dimension). It is shown that the fractal dimension α = Tr/T0 (Tr: hydrogen temperature; T0: a temperature corresponding to the width of the exponential distribution of the states in energy) is in agreement with the Hausdorff dimension. The fractal graph for the case α ≤ 1 is like the Cantor set; the fractal graph for the case α > 1 is like the Koch curve. At α = ∞, hydrogen migration exhibits Brownian motion. Hydrogen diffusion in a-Si:H should therefore be a fractal process.

  16. Photocounting distributions for exponentially decaying sources.

    PubMed

    Teich, M C; Card, H C

    1979-05-01

    Exact photocounting distributions are obtained for a pulse of light whose intensity is exponentially decaying in time, when the underlying photon statistics are Poisson. It is assumed that the starting time for the sampling interval (which is of arbitrary duration) is uniformly distributed. The probability of registering n counts in the fixed time T is given in terms of the incomplete gamma function for n ≥ 1 and in terms of the exponential integral for n = 0. Simple closed-form expressions are obtained for the count mean and variance. The results are expected to be of interest in certain studies involving spontaneous emission, radiation damage in solids, and nuclear counting. They will also be useful in neurobiology and psychophysics, since habituation and sensitization processes may sometimes be characterized by the same stochastic model.
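    The counting law described in this record can be approximated numerically by averaging the Poisson distribution over the uniformly distributed start time. A minimal sketch; the decay constant, peak intensity, window length, and start-time range below are illustrative assumptions, not values from the paper:

```python
import math

def photocount_pmf(n_max, I0=10.0, tau=1.0, T=0.5, t0_max=3.0, steps=2000):
    """Counting distribution for a pulse I(t) = I0 * exp(-t / tau),
    with the start t0 of the counting window [t0, t0 + T] uniformly
    distributed on [0, t0_max]. Averages the Poisson pmf over t0."""
    pmf = [0.0] * (n_max + 1)
    dt = t0_max / steps
    for i in range(steps):
        t0 = (i + 0.5) * dt  # midpoint rule over the start time
        # integrated intensity over the counting window [t0, t0 + T]
        mu = I0 * tau * (math.exp(-t0 / tau) - math.exp(-(t0 + T) / tau))
        for n in range(n_max + 1):
            pmf[n] += math.exp(-mu) * mu ** n / math.factorial(n)
    return [p / steps for p in pmf]

pmf = photocount_pmf(60)
```

    The paper's closed-form expressions (incomplete gamma function for n ≥ 1, exponential integral for n = 0) arise from carrying out this average analytically.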

  17. Weblog patterns and human dynamics with decreasing interest

    NASA Astrophysics Data System (ADS)

    Guo, J.-L.; Fan, C.; Guo, Z.-H.

    2011-06-01

    To describe the common pattern in which people's interest in an activity starts high and gradually decays to a stable level, we propose a model of attenuating interest that reflects the fact that interest becomes more stable after a long time. We give a rigorous analysis of this model using non-homogeneous Poisson processes. Our analysis indicates that the inter-arrival-time distribution mixes exponential and power-law features: it is a power law with an exponential cutoff. We then collect blogs from ScienceNet.cn and carry out an empirical study of the interarrival-time distribution. The empirical results agree well with the theoretical analysis, obeying a power law with an exponential cutoff, i.e., a particular kind of Gamma distribution. These results support the model and provide evidence for a new class of phenomena in human dynamics: besides power-law distributions, other distributions also arise, demonstrating the variety of human behavioral dynamics.
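    A power law with an exponential cutoff, f(t) ∝ t^(-gamma) * exp(-t/tau), can be sampled by rejection from a shifted exponential proposal; because the proposal shares the rate 1/tau, the acceptance ratio reduces to (t/t_min)^(-gamma) ≤ 1. A hedged sketch with illustrative parameters (gamma, tau, and t_min are not the values fitted to the blog data):

```python
import random

def sample_power_law_cutoff(gamma=1.5, tau=10.0, t_min=1.0, n=5000, seed=1):
    """Rejection sampling for f(t) ~ t**(-gamma) * exp(-t / tau), t >= t_min.
    Proposal: exponential with the same rate 1/tau, shifted to t_min, so
    the acceptance probability simplifies to (t / t_min)**(-gamma)."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        t = t_min + rng.expovariate(1.0 / tau)  # draw from the proposal
        if rng.random() < (t / t_min) ** (-gamma):
            out.append(t)
    return out

samples = sample_power_law_cutoff()
```

    The same sampler covers the pure power law (tau → ∞) and the pure exponential (gamma = 0) as limiting cases.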

  18. Three-Dimensional Flow of Nanofluid Induced by an Exponentially Stretching Sheet: An Application to Solar Energy

    PubMed Central

    Khan, Junaid Ahmad; Mustafa, M.; Hayat, T.; Sheikholeslami, M.; Alsaedi, A.

    2015-01-01

    This work deals with the three-dimensional flow of nanofluid over a bi-directional exponentially stretching sheet. The effects of Brownian motion and thermophoretic diffusion of nanoparticles are considered in the mathematical model. The temperature and nanoparticle volume fraction at the sheet are also distributed exponentially. Local similarity solutions are obtained by an implicit finite difference scheme known as Keller-box method. The results are compared with the existing studies in some limiting cases and found in good agreement. The results reveal the existence of interesting Sparrow-Gregg-type hills for temperature distribution corresponding to some range of parametric values. PMID:25785857

  19. Velocity distributions of granular gases with drag and with long-range interactions.

    PubMed

    Kohlstedt, K; Snezhko, A; Sapozhnikov, M V; Aranson, I S; Olafsen, J S; Ben-Naim, E

    2005-08-05

    We study velocity statistics of electrostatically driven granular gases. For two different experiments, (i) nonmagnetic particles in a viscous fluid and (ii) magnetic particles in air, the velocity distribution is non-Maxwellian, and its high-energy tail is exponential, P(v) ~ exp(−|v|). This behavior is consistent with the kinetic theory of driven dissipative particles. For particles immersed in a fluid, viscous damping is responsible for the exponential tail, while for magnetic particles, long-range interactions cause the exponential tail. We conclude that velocity statistics of dissipative gases are sensitive to the fluid environment and to the form of the particle interaction.

  20. Exponential order statistic models of software reliability growth

    NASA Technical Reports Server (NTRS)

    Miller, D. R.

    1985-01-01

    Failure times of a software reliability growth process are modeled as order statistics of independent, non-identically distributed exponential random variables. The Jelinski-Moranda, Goel-Okumoto, Littlewood, Musa-Okumoto logarithmic, and power-law models are all special cases of exponential order statistic models, and many additional examples exist. Various characterizations, properties, and examples of this class of models are developed and presented.
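    The Jelinski-Moranda special case named above can be sketched directly: with a finite pool of faults, each failing after an i.i.d. exponential lifetime, the observed failure times are the order statistics of the exponential draws. The fault count and rate below are illustrative:

```python
import random

def jelinski_moranda_failures(n_faults=20, rate=0.5, seed=42):
    """Simulate one realization of the Jelinski-Moranda model:
    failure times are the order statistics of n_faults i.i.d.
    exponential lifetimes (a special case of the exponential
    order statistic class described in this record)."""
    rng = random.Random(seed)
    lifetimes = [rng.expovariate(rate) for _ in range(n_faults)]
    return sorted(lifetimes)  # k-th entry = time of the k-th failure

times = jelinski_moranda_failures()
```

    Under this model, successive inter-failure gaps are exponential with rates proportional to the number of remaining faults, which is exactly what debugging a finite fault pool predicts.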

  1. A spatial scan statistic for survival data based on Weibull distribution.

    PubMed

    Bhatt, Vijaya; Tiwari, Neeraj

    2014-05-20

    The spatial scan statistic has been developed as a geographical cluster detection analysis tool for different types of data sets such as Bernoulli, Poisson, ordinal, normal and exponential. We propose a scan statistic for survival data based on Weibull distribution. It may also be used for other survival distributions, such as exponential, gamma, and log normal. The proposed method is applied on the survival data of tuberculosis patients for the years 2004-2005 in Nainital district of Uttarakhand, India. Simulation studies reveal that the proposed method performs well for different survival distribution functions. Copyright © 2013 John Wiley & Sons, Ltd.

  2. Effects of resonant magnetic perturbation on the triggering and the evolution of double-tearing mode

    NASA Astrophysics Data System (ADS)

    Wang, L.; Lin, W. B.; Wang, X. Q.

    2018-02-01

    The effects of resonant magnetic perturbation on the triggering and the evolution of the double-tearing mode are investigated by using nonlinear magnetohydrodynamics simulations in a slab geometry. It is found that the double-tearing mode can be destabilized by boundary magnetic perturbation. Moreover, the mode has three typical development stages before it reaches saturation: the linear stable stage, the linear-growth stage, and the exponential-growth stage. The onset and growth of the double-tearing mode significantly depend on the boundary magnetic perturbations, particularly in the early development stage of the mode. The influences of the magnetic perturbation amplitude on the mode for different separations of the two rational surfaces are also discussed.

  3. Discrete Time Rescaling Theorem: Determining Goodness of Fit for Discrete Time Statistical Models of Neural Spiking

    PubMed Central

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-01-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model’s spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies upon assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time rescaling theorem which analytically corrects for the effects of finite resolution. This allows us to define a rescaled time which is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting Generalized Linear Models (GLMs) to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false positive rate of the KS test and greatly increasing the reliability of model evaluation based upon the time rescaling theorem. PMID:20608868

  4. Discrete time rescaling theorem: determining goodness of fit for discrete time statistical models of neural spiking.

    PubMed

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-10-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time-rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model's spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies on assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time-rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time-rescaling theorem that analytically corrects for the effects of finite resolution. This allows us to define a rescaled time that is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting generalized linear models to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false-positive rate of the KS test and greatly increasing the reliability of model evaluation based on the time-rescaling theorem.
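    The discrete-time issue described here is easy to exhibit. A common rescaling for Bernoulli bins accumulates -log(1 - p_i) per bin between spikes; with a constant, illustrative spike probability the rescaled ISI mean sits slightly above the unit-mean exponential target, which is precisely the granularity bias the paper's two corrections address. A minimal sketch:

```python
import math, random

def rescale_isis(spikes, p):
    """Rescale the ISIs of a Bernoulli spike train with constant
    spike probability p. Each bin contributes -log(1 - p); for an
    accurate model the rescaled ISIs should be ~ Exponential(1),
    up to the residual discreteness of the bins."""
    rescaled, acc = [], 0.0
    for s in spikes:
        acc += -math.log(1.0 - p)
        if s:
            rescaled.append(acc)
            acc = 0.0
    return rescaled

rng = random.Random(0)
p = 0.05
train = [1 if rng.random() < p else 0 for _ in range(200000)]
taus = rescale_isis(train, p)
mean_tau = sum(taus) / len(taus)  # close to, but slightly above, 1
```

    Plugging the rescaled values into a KS test against Exponential(1), as the record describes, then quantifies the goodness of fit.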

  5. Discrete Deterministic and Stochastic Petri Nets

    NASA Technical Reports Server (NTRS)

    Zijal, Robert; Ciardo, Gianfranco

    1996-01-01

    Petri nets augmented with timing specifications have gained wide acceptance in the area of performance and reliability evaluation of complex systems exhibiting concurrency, synchronization, and conflicts. The state space of time-extended Petri nets is mapped onto its underlying stochastic process, which can be shown to be Markovian under the assumption of exponentially distributed firing times. The integration of exponentially and non-exponentially distributed timing is still one of the major problems for the analysis; it was first attacked for continuous-time Petri nets at the cost of structural or analytical restrictions. We propose a discrete deterministic and stochastic Petri net (DDSPN) formalism with no imposed structural or analytical restrictions, in which transitions can fire either in zero time or according to arbitrary firing times representable as the time to absorption in a finite absorbing discrete-time Markov chain (DTMC). Exponentially distributed firing times are then approximated arbitrarily well by geometric distributions, of which deterministic firing times are a special case. The underlying stochastic process of a DDSPN is then also a DTMC, from which the transient and stationary solutions can be obtained by standard techniques. A comprehensive algorithm and some state-space reduction techniques for the analysis of DDSPNs are presented, comprising the automatic detection of conflicts and confusions, which removes a major obstacle to the analysis of discrete-time models.
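    The geometric approximation mentioned above can be checked in a few lines: firing within a step of length h with probability p = 1 - exp(-lam * h) reproduces the exponential survival function exactly at the sampling instants, and the mean firing time converges to 1/lam as h shrinks. The rate and step sizes below are illustrative:

```python
import math

def geometric_mean_firing_time(lam, h):
    """Match a geometric firing distribution on time step h to an
    exponential with rate lam: fire within a step with probability
    p = 1 - exp(-lam * h), so survival after k steps is exactly
    exp(-lam * k * h). Returns the mean firing time in real time."""
    p = 1.0 - math.exp(-lam * h)
    return h / p  # h times the geometric mean number of steps, 1/p

lam = 2.0
means = [geometric_mean_firing_time(lam, h) for h in (0.5, 0.1, 0.01)]
# means decrease toward the exponential mean 1/lam = 0.5 as h shrinks
```

    The discretization bias is roughly h/2, so the approximation can be made arbitrarily good by refining the time step, as the record states.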

  6. How bootstrap can help in forecasting time series with more than one seasonal pattern

    NASA Astrophysics Data System (ADS)

    Cordeiro, Clara; Neves, M. Manuela

    2012-09-01

    Forecasting the future is an appealing challenge in time series analysis, and the diversity of forecasting methodologies continues to expand. Exponential smoothing methods are the launch platform for modelling and forecasting in time series analysis. Recently this methodology has been combined with bootstrapping, with good performance. The Boot.EXPOS algorithm, which combines exponential smoothing and bootstrap methodologies, has shown promising results for forecasting time series with one seasonal pattern. For series with more than one seasonal pattern, the double seasonal Holt-Winters methods and corresponding exponential smoothing methods were developed. The new challenge was to combine these seasonal methods with the bootstrap, carrying over a resampling scheme similar to that of the Boot.EXPOS procedure. The performance of this partnership is illustrated on some well-known data sets available in standard software.

  7. Exponential blocking-temperature distribution in ferritin extracted from magnetization measurements

    NASA Astrophysics Data System (ADS)

    Lee, T. H.; Choi, K.-Y.; Kim, G.-H.; Suh, B. J.; Jang, Z. H.

    2014-11-01

    We developed a direct method to extract the zero-field zero-temperature anisotropy energy barrier distribution of magnetic particles in the form of a blocking-temperature distribution. The key idea is to modify measurement procedures slightly to make nonequilibrium magnetization calculations (including the time evolution of magnetization) easier. We applied this method to the biomagnetic molecule ferritin and successfully reproduced field-cool magnetization by using the extracted distribution. We find that the resulting distribution is more like an exponential type and that the distribution cannot be correlated simply to the widely known log-normal particle-size distribution. The method also allows us to determine the values of the zero-temperature coercivity and Bloch coefficient, which are in good agreement with those determined from other techniques.

  8. AN EMPIRICAL FORMULA FOR THE DISTRIBUTION FUNCTION OF A THIN EXPONENTIAL DISC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Sanjib; Bland-Hawthorn, Joss

    2013-08-20

    An empirical formula for a Shu distribution function that reproduces a thin disc with exponential surface density to good accuracy is presented. The formula has two free parameters that specify the functional form of the velocity dispersion. Conventionally, this requires the use of an iterative algorithm to produce the correct solution, which is computationally taxing for applications like Markov Chain Monte Carlo model fitting. The formula has been shown to work for flat, rising, and falling rotation curves. Application of this methodology to one of the Dehnen distribution functions is also shown. Finally, an extension of this formula to reproduce velocity dispersion profiles that are an exponential function of radius is also presented. Our empirical formula should greatly aid the efficient comparison of disc models with large stellar surveys or N-body simulations.

  9. Exponential Boundary Observers for Pressurized Water Pipe

    NASA Astrophysics Data System (ADS)

    Hermine Som, Idellette Judith; Cocquempot, Vincent; Aitouche, Abdel

    2015-11-01

    This paper deals with state estimation for a pressurized water pipe modeled by nonlinear coupled distributed hyperbolic equations for non-conservative laws, with three known boundary measurements. Our objective is to estimate the fourth boundary variable, which will be useful for leakage detection. Two approaches are studied. First, the distributed hyperbolic equations are discretized through a finite-difference scheme; using the Lipschitz property of the nonlinear term and a Lyapunov function, the exponential stability of the estimation error is proven by solving Linear Matrix Inequalities (LMIs). Second, the distributed hyperbolic system is preserved for state estimation; after state transformations, a Luenberger-like PDE boundary observer based on backstepping mathematical tools is proposed, and an exponential Lyapunov function is used to prove the stability of the resulting estimation error. The performance of the two observers is shown on a simulated water-pipe prototype example.

  10. Investigation of the double exponential in the current-voltage characteristics of silicon solar cells. [proton irradiation effects on ATS 1 cells

    NASA Technical Reports Server (NTRS)

    Wolf, M.; Noel, G. T.; Stirn, R. J.

    1977-01-01

    Difficulties in relating observed current-voltage characteristics of individual silicon solar cells to their physical and material parameters were underscored by the unexpectedly large changes in the current-voltage characteristics telemetered back from solar cells on the ATS-1 spacecraft during their first year in synchronous orbit. Depletion region recombination was studied in cells exhibiting a clear double-exponential dark characteristic by subjecting the cells to proton irradiation. A significant change in the saturation current, an effect included in the Sah, Noyce, Shockley formulation of diode current resulting from recombination in the depletion region, was caused by the introduction of shallow levels in the depletion region by the proton irradiation. This saturation current is not attributable only to diffusion current from outside the depletion region, and only its temperature dependence can clarify its origin. The current associated with the introduction of deep-lying levels did not change significantly in these experiments.

  11. 15-digit accuracy calculations of Chandrasekhar's H-function for isotropic scattering by means of the double exponential formula

    NASA Astrophysics Data System (ADS)

    Kawabata, Kiyoshi

    2016-12-01

    This work shows that it is possible to calculate numerical values of the Chandrasekhar H-function for isotropic scattering with at least 15-digit accuracy by making use of the double exponential formula (DE-formula) of Takahashi and Mori (Publ. RIMS, Kyoto Univ. 9:721, 1974) instead of the Gauss-Legendre quadrature employed in the numerical scheme of Kawabata and Limaye (Astrophys. Space Sci. 332:365, 2011), while simultaneously taking precautionary measures to minimize the effects of loss of significant digits, particularly in the cases of near-conservative scattering, and of errors in values returned by compiler-supplied library functions. The results of our calculations are presented for 18 selected values of the single-scattering albedo π0 and 22 values of an angular variable μ, the cosine of the zenith angle θ specifying the direction of radiation incident on or emergent from semi-infinite media.
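    The double exponential (tanh-sinh) formula maps the integration interval so that the transformed integrand decays doubly exponentially, making the plain trapezoidal rule extremely accurate. A minimal sketch for an integral over (-1, 1) with a smooth test integrand; the step size and truncation are illustrative, and none of the paper's H-function-specific safeguards are reproduced:

```python
import math

def de_quad(f, h=0.05, n=80):
    """Tanh-sinh (double exponential) quadrature of f over (-1, 1):
    substitute x = tanh((pi/2) sinh(t)) and apply the trapezoidal
    rule with step h on t in [-n*h, n*h]."""
    total = 0.0
    for k in range(-n, n + 1):
        t = k * h
        u = 0.5 * math.pi * math.sinh(t)
        x = math.tanh(u)
        w = 0.5 * math.pi * math.cosh(t) / math.cosh(u) ** 2  # dx/dt
        total += w * f(x)
    return h * total

# smooth test integrand; the exact value of the integral is e - 1/e
approx = de_quad(math.exp)
```

    Because the substitution pushes the endpoints to infinity, the same rule also handles endpoint singularities gracefully, which is one reason it suits the near-conservative H-function regime discussed in the record.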

  12. Universality in the distance between two teams in a football tournament

    NASA Astrophysics Data System (ADS)

    da Silva, Roberto; Dahmen, Silvio R.

    2014-03-01

    Is football (soccer) a universal sport? Beyond the question of geographical distribution, where the answer is most certainly yes, when looked at from a mathematical viewpoint the scoring process during a match can be thought of, to a first approximation, as being modeled by a Poisson distribution. Recently, it was shown that the scoring of real tournaments can be reproduced by means of an agent-based model (da Silva et al. (2013) [24]) based on two simple hypotheses: (i) the ability of a team to win a match is given by the rate of a Poisson distribution that governs its scoring during a match; and (ii) this ability evolves over time according to the results of previous matches. In this article we ask whether the time series represented by the scores of teams have universal properties. For this purpose we define the distance between two teams as the square root of the sum of squares of the score differences between them over all rounds of a double round-robin system, and study how this distance evolves over time. Our results suggest a universal distance distribution across tournaments of different major leagues, which is better characterized by an exponentially modified Gaussian (EMG). This result is corroborated by our agent-based model.

  13. Parameter estimation for the exponential-normal convolution model for background correction of Affymetrix GeneChip data.

    PubMed

    McGee, Monnie; Chen, Zhongxue

    2006-01-01

    There are many methods of correcting microarray data for non-biological sources of error. Authors routinely supply software or code so that interested analysts can implement their methods. Even with a thorough reading of associated references, it is not always clear how requisite parts of the method are calculated in the software packages. However, it is important to have an understanding of such details, as this understanding is necessary for proper use of the output, or for implementing extensions to the model. In this paper, the calculation of parameter estimates used in Robust Multichip Average (RMA), a popular preprocessing algorithm for Affymetrix GeneChip brand microarrays, is elucidated. The background correction method for RMA assumes that the perfect match (PM) intensities observed result from a convolution of the true signal, assumed to be exponentially distributed, and a background noise component, assumed to have a normal distribution. A conditional expectation is calculated to estimate signal. Estimates of the mean and variance of the normal distribution and the rate parameter of the exponential distribution are needed to calculate this expectation. Simulation studies show that the current estimates are flawed; therefore, new ones are suggested. We examine the performance of preprocessing under the exponential-normal convolution model using several different methods to estimate the parameters.

  14. Phenomenology of stochastic exponential growth

    NASA Astrophysics Data System (ADS)

    Pirjol, Dan; Jafarpour, Farshid; Iyer-Biswas, Srividya

    2017-06-01

    Stochastic exponential growth is observed in a variety of contexts, including molecular autocatalysis, nuclear fission, population growth, inflation of the universe, viral social media posts, and financial markets. Yet literature on modeling the phenomenology of these stochastic dynamics has predominantly focused on one model, geometric Brownian motion (GBM), which can be described as the solution of a Langevin equation with linear drift and linear multiplicative noise. Using recent experimental results on stochastic exponential growth of individual bacterial cell sizes, we motivate the need for a more general class of phenomenological models of stochastic exponential growth, which are consistent with the observation that the mean-rescaled distributions are approximately stationary at long times. We show that this behavior is not consistent with GBM; instead, it is consistent with power-law multiplicative noise with positive fractional powers. Therefore, we consider this general class of phenomenological models for stochastic exponential growth, provide analytical solutions, and identify the important dimensionless combination of model parameters, which determines the shape of the mean-rescaled distribution. We also provide a prescription for robustly inferring model parameters from experimentally observed stochastic growth trajectories.
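    The GBM baseline named above is easy to simulate exactly, since the Langevin equation with linear drift and linear multiplicative noise has the closed-form solution x(t) = x(0) exp((mu - sigma^2/2) t + sigma W_t). A sketch with illustrative parameters, showing how a mean-rescaled distribution at a fixed time is obtained:

```python
import math, random

def gbm_samples(mu=1.0, sigma=0.3, t=2.0, n_paths=20000, seed=3):
    """Exact GBM endpoint samples with x(0) = 1:
    x(t) = exp((mu - sigma^2/2) * t + sigma * W_t), W_t ~ N(0, t)."""
    rng = random.Random(seed)
    return [math.exp((mu - 0.5 * sigma ** 2) * t
                     + sigma * rng.gauss(0.0, math.sqrt(t)))
            for _ in range(n_paths)]

x = gbm_samples()
m = sum(x) / len(x)
rescaled = [v / m for v in x]  # mean-rescaled distribution at time t
```

    For GBM the variance of the mean-rescaled sample, exp(sigma^2 t) - 1, grows without bound in t, illustrating why an approximately stationary mean-rescaled distribution rules out GBM, as the record argues.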

  15. 1/f oscillations in a model of moth populations oriented by diffusive pheromones

    NASA Astrophysics Data System (ADS)

    Barbosa, L. A.; Martins, M. L.; Lima, E. R.

    2005-01-01

    An individual-based model for the population dynamics of Spodoptera frugiperda in a homogeneous environment is proposed. The model involves moths feeding on plants, mating through anemotaxis search (i.e., oriented by odor dispersed in a current of air), and dying due to resource competition or upon reaching a maximum age. As observed in the laboratory, the females release pheromones at exponentially distributed time intervals, and it is assumed that the ranges of the male flights follow a power-law distribution. Computer simulations of the model reveal the central role of the anemotaxis search in the persistence of the moth population. Such stationary populations are exponentially distributed in age, exhibit random temporal fluctuations with a 1/f spectrum, and self-organize into disordered spatial patterns with long-range correlations. In addition, the model results demonstrate that pest control through pheromone mass trapping is effective only if the amounts of pheromone released by the traps decay much more slowly than the exponential distribution for calling females.

  16. A study on some urban bus transport networks

    NASA Astrophysics Data System (ADS)

    Chen, Yong-Zhou; Li, Nan; He, Da-Ren

    2007-03-01

    In this paper, we present empirical results on the urban bus transport networks (BTNs) of four major cities in China. In a BTN, nodes are bus stops, and two nodes are connected by an edge when the corresponding stops are serviced by a common bus route. The empirical results show that the degree distributions of BTNs take exponential forms. Two other statistical properties of BTNs are also considered, namely the distributions of “the number of stops in a bus route” (denoted S) and “the number of bus routes a stop joins” (denoted R). The distributions of R also show exponential forms, while the distributions of S follow asymmetric, unimodal functions. To explain these empirical results and simulate a possible evolution process of BTNs, we introduce a model whose analytic and numerical results agree well with the empirical facts. Finally, we also discuss other possible evolution cases, in which the degree distribution shows a power law or an interpolation between power-law and exponential decay.

  17. Exponential quantum spreading in a class of kicked rotor systems near high-order resonances

    NASA Astrophysics Data System (ADS)

    Wang, Hailong; Wang, Jiao; Guarneri, Italo; Casati, Giulio; Gong, Jiangbin

    2013-11-01

    Long-lasting exponential quantum spreading was recently found in a simple but very rich dynamical model, namely, an on-resonance double-kicked rotor model [J. Wang, I. Guarneri, G. Casati, and J. B. Gong, Phys. Rev. Lett. 107, 234104 (2011)]. The underlying mechanism, unrelated to the chaotic motion in the classical limit but resting on quasi-integrable motion in a pseudoclassical limit, is identified for one special case. By presenting a detailed study of the same model, this work offers a framework to explain long-lasting exponential quantum spreading under much more general conditions. In particular, we adopt the so-called “spinor” representation to treat the kicked-rotor dynamics under high-order resonance conditions and then exploit the Born-Oppenheimer approximation to understand the dynamical evolution. It is found that the existence of a flat band (or an effectively flat band) is one important feature behind why and how the exponential dynamics emerges. It is also found that a quantitative prediction of the exponential spreading rate based on an interesting and simple pseudoclassical map may be inaccurate. In addition to general interests regarding the question of how exponential behavior in quantum systems may persist for a long time scale, our results should motivate further studies toward a better understanding of high-order resonance behavior in δ-kicked quantum systems.

  18. Parameter estimation and order selection for an empirical model of VO2 on-kinetics.

    PubMed

    Alata, O; Bernard, O

    2007-04-27

    In humans, VO2 on-kinetics are noisy numerical signals that reflect the pulmonary oxygen-exchange kinetics at the onset of exercise. They are empirically modelled as a sum of an offset and delayed exponentials. The number of delayed exponentials, i.e. the order of the model, is commonly supposed to be 1 for low-intensity exercises and 2 for high-intensity exercises. As no ground truth has ever been provided to validate these postulates, physiologists still need statistical methods to verify their hypotheses about the number of exponentials of the VO2 on-kinetics, especially in the case of high-intensity exercises. Our objectives are first to develop accurate methods for estimating the parameters of the model at a fixed order, and then to propose statistical tests for selecting the appropriate order. In this paper we evaluate, on simulated data, the performance of simulated annealing for estimating the model parameters and of information criteria for selecting the order. The simulated data are generated with both single-exponential and double-exponential models and corrupted by white Gaussian noise, and performance is reported at various signal-to-noise ratios (SNRs). For parameter estimation, the results show that the confidence in the estimated parameters improves as the SNR of the fitted response increases. For model selection, the results show that information criteria are well-suited statistical criteria for selecting the number of exponentials.

  19. An understanding of human dynamics in urban subway traffic from the Maximum Entropy Principle

    NASA Astrophysics Data System (ADS)

    Yong, Nuo; Ni, Shunjiang; Shen, Shifei; Ji, Xuewei

    2016-08-01

    We studied the distribution of entry time intervals in Beijing subway traffic by analyzing smart-card transaction data, and then deduced the probability distribution function of the entry time interval from the Maximum Entropy Principle. Both the theoretical derivation and the data statistics indicate that the entry time interval obeys a power-law distribution with an exponential cutoff. In addition, we point out the constraint conditions that determine the distribution form and discuss how the constraints affect the distribution function. We conjecture that for bursts and heavy tails in human dynamics, when the fitted power exponent is less than 1.0, the distribution cannot be a pure power law but must carry an exponential cutoff, a feature that may have been overlooked in previous studies.

  20. A Decreasing Failure Rate, Mixed Exponential Model Applied to Reliability.

    DTIC Science & Technology

    1981-06-01

    Trident missile systems have been observed. The mixed exponential distribution has been shown to fit the life data for the electronic equipment on...these systems. This paper discusses some of the estimation problems which occur with the decreasing failure rate mixed exponential distribution when...assumption of constant or increasing failure rate seemed to be incorrect. 2. However, the design of this electronic equipment indicated that

  1. Analysis of the Chinese air route network as a complex network

    NASA Astrophysics Data System (ADS)

    Cai, Kai-Quan; Zhang, Jun; Du, Wen-Bo; Cao, Xian-Bin

    2012-02-01

    The air route network, which supports all flight activities of civil aviation, is the most fundamental infrastructure of the air traffic management system. In this paper, we study the Chinese air route network (CARN) within the framework of complex networks. We find that CARN is a geographical network possessing an exponential degree distribution, low clustering coefficient, large shortest path length, and exponential spatial distance distribution, which is markedly different from the Chinese airport network (CAN). Moreover, by investigating flight data from 2002 to 2010, we demonstrate that the topological structure of CARN is homogeneous, whereas the distribution of flight flow on CARN is rather heterogeneous. In addition, the traffic on CARN keeps growing exponentially, and the growth rate in west China is remarkably larger than that in east China. Our work will be helpful for better understanding Chinese air traffic systems.

  2. Colloquium: Statistical mechanics of money, wealth, and income

    NASA Astrophysics Data System (ADS)

    Yakovenko, Victor M.; Rosser, J. Barkley, Jr.

    2009-10-01

    This Colloquium reviews statistical models for money, wealth, and income distributions developed in the econophysics literature since the late 1990s. By analogy with the Boltzmann-Gibbs distribution of energy in physics, it is shown that the probability distribution of money is exponential for certain classes of models with interacting economic agents. Alternative scenarios are also reviewed. Data analysis of the empirical distributions of wealth and income reveals a two-class distribution. The majority of the population belongs to the lower class, characterized by the exponential (“thermal”) distribution, whereas a small fraction of the population in the upper class is characterized by the power-law (“superthermal”) distribution. The lower part is very stable, stationary in time, whereas the upper part is highly dynamical and out of equilibrium.

  3. Evidence for a scale-limited low-frequency earthquake source process

    NASA Astrophysics Data System (ADS)

    Chestler, S. R.; Creager, K. C.

    2017-04-01

    We calculate the seismic moments for 34,264 low-frequency earthquakes (LFEs) beneath the Olympic Peninsula, Washington. LFE moments range from 1.4 × 1010 to 1.9 × 1012 N m (Mw = 0.7-2.1). While regular earthquakes follow a power law moment-frequency distribution with a b value near 1 (the number of events increases by a factor of 10 for each unit increase in Mw), we find that while for large LFEs the b value is 6, for small LFEs it is <1. The magnitude-frequency distribution for all LFEs is best fit by an exponential distribution with a mean seismic moment (characteristic moment) of 2.0 × 1011 N m. The moment-frequency distributions for each of the 43 LFE families, or spots on the plate interface where LFEs repeat, can also be fit by exponential distributions. An exponential moment-frequency distribution implies a scale-limited source process. We consider two end-member models where LFE moment is limited by (1) the amount of slip or (2) slip area. We favor the area-limited model. Based on the observed exponential distribution of LFE moment and geodetically observed total slip, we estimate that the total area that slips within an LFE family has a diameter of 300 m. Assuming an area-limited model, we estimate the slips, subpatch diameters, stress drops, and slip rates for LFEs during episodic tremor and slip events. We allow for LFEs to rupture smaller subpatches within the LFE family patch. Models with 1-10 subpatches produce slips of 0.1-1 mm, subpatch diameters of 80-275 m, and stress drops of 30-1000 kPa. While one subpatch is often assumed, we believe 3-10 subpatches are more likely.

  4. Investigation of the double exponential in the current-voltage characteristics of silicon solar cells

    NASA Technical Reports Server (NTRS)

    Wolf, M.; Noel, G. T.; Stirn, R. J.

    1976-01-01

    A theoretical analysis is presented of certain peculiarities of the current-voltage characteristics of silicon solar cells, involving high values of the empirical constant A in the diode equation for a p-n junction. An attempt was made in a laboratory experiment to demonstrate that the saturation current associated with the exponential term exp(qV/A2kT) of the I-V characteristic, with A2 roughly equal to 2, originates in the space charge region and that it can be increased, as observed on ATS-1 cells, by the introduction of additional defects through low-energy proton irradiation. It was shown that the proton irradiation introduces defects into the space charge region which give rise to a recombination current from this region, although the I-V characteristic in this case is dominated by an exponential term with A = 1.
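
    The double-exponential (two-diode) dark I-V model discussed here can be sketched numerically. The saturation currents below are hypothetical illustration values, not the parameters of the cells measured in the paper:

```python
import numpy as np

V_T = 0.02585  # thermal voltage kT/q at ~300 K, in volts

def dark_current(V, I01=1e-12, I02=1e-8, A1=1.0, A2=2.0):
    """Double-exponential dark I-V: a diffusion term (A1 ~ 1) plus a
    space-charge-region recombination term (A2 ~ 2).
    Saturation currents I01, I02 are hypothetical."""
    diffusion = I01 * (np.exp(V / (A1 * V_T)) - 1.0)
    recombination = I02 * (np.exp(V / (A2 * V_T)) - 1.0)
    return diffusion, recombination

# at low bias the A2 ~ 2 recombination term dominates the current;
# at high bias the steeper A1 = 1 diffusion term takes over
d_lo, r_lo = dark_current(0.2)
d_hi, r_hi = dark_current(0.6)
```

The crossover between the two terms is what makes a single-exponential fit with an anomalously high A appear when the two contributions are not separated.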

  5. Characterization of continuously distributed cortical water diffusion rates with a stretched-exponential model.

    PubMed

    Bennett, Kevin M; Schmainda, Kathleen M; Bennett, Raoqiong Tong; Rowe, Daniel B; Lu, Hanbing; Hyde, James S

    2003-10-01

    Experience with diffusion-weighted imaging (DWI) shows that signal attenuation is consistent with a multicompartmental theory of water diffusion in the brain. The source of this so-called nonexponential behavior is a topic of debate, because the cerebral cortex contains considerable microscopic heterogeneity and is therefore difficult to model. To account for this heterogeneity and understand its implications for current models of diffusion, a stretched-exponential function was developed to describe diffusion-related signal decay as a continuous distribution of sources decaying at different rates, with no assumptions made about the number of participating sources. DWI experiments were performed using a spin-echo diffusion-weighted pulse sequence with b-values of 500-6500 s/mm^2 in six rats. Signal attenuation curves were fit to a stretched-exponential function, and 20% of the voxels were better fit by the stretched-exponential model than by a biexponential model, even though the latter has one more adjustable parameter. Based on the calculated intravoxel heterogeneity measure, the cerebral cortex contains considerable heterogeneity in diffusion. The use of a distributed diffusion coefficient (DDC) is suggested to measure mean intravoxel diffusion rates in the presence of such heterogeneity. Copyright 2003 Wiley-Liss, Inc.
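
    The stretched-exponential signal model S/S0 = exp(-(b*DDC)^alpha) can be sketched as follows; the heterogeneity index alpha and DDC value below are hypothetical, chosen only to show how both parameters are recovered by linearizing ln(-ln(S/S0)):

```python
import numpy as np

# b-values span the range used in the study (s/mm^2)
b = np.linspace(500.0, 6500.0, 13)
alpha_true, ddc_true = 0.7, 1.0e-3     # hypothetical values (DDC in mm^2/s)

# stretched-exponential decay: S/S0 = exp(-(b*DDC)**alpha)
S = np.exp(-(b * ddc_true) ** alpha_true)

# linearization: ln(-ln(S/S0)) = alpha*ln(b) + alpha*ln(DDC),
# a straight line in ln(b), so a 1st-order polyfit recovers both
y = np.log(-np.log(S))
slope, intercept = np.polyfit(np.log(b), y, 1)
alpha_hat = slope
ddc_hat = np.exp(intercept / slope)
```

On noiseless synthetic data the fit is exact; with real DWI data a nonlinear least-squares fit over the raw signal is the more robust choice.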

  6. Recognizing Physisorption and Chemisorption in Carbon Nanotubes Gas Sensors by Double Exponential Fitting of the Response.

    PubMed

    Calvi, Andrea; Ferrari, Alberto; Sbuelz, Luca; Goldoni, Andrea; Modesti, Silvio

    2016-05-19

    Multi-walled carbon nanotubes (CNTs) have been grown in situ on a SiO2 substrate and used as gas sensors. For this purpose, the voltage response of the CNTs as a function of time has been used to detect H2 and CO2 at various concentrations by supplying a constant current to the system. The analysis of both the adsorption and desorption curves has revealed two different exponential behaviours for each curve. The study of the characteristic times, obtained from fitting the data, has allowed us to identify chemisorption and physisorption processes on the CNTs separately.
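
    A minimal sketch of the double-exponential fitting idea: a response built from a fast and a slow saturating exponential, then decomposed by a four-parameter fit. The amplitudes and time constants are hypothetical, and scipy's curve_fit stands in for whatever fitting routine the authors used:

```python
import numpy as np
from scipy.optimize import curve_fit

def double_exp(t, A1, tau1, A2, tau2):
    """Sum of two saturating exponentials, e.g. a fast (physisorption-like)
    and a slow (chemisorption-like) channel; parameters are illustrative."""
    return A1 * (1 - np.exp(-t / tau1)) + A2 * (1 - np.exp(-t / tau2))

t = np.linspace(0.0, 600.0, 601)                 # seconds
response = double_exp(t, 1.0, 5.0, 0.5, 120.0)   # fast + slow process

popt, _ = curve_fit(double_exp, t, response, p0=[0.8, 3.0, 0.8, 80.0])
tau_fast, tau_slow = sorted([popt[1], popt[3]])
```

The two recovered characteristic times are what allow the fast and slow adsorption processes to be attributed separately.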

  7. The dynamics of photoinduced defect creation in amorphous chalcogenides: The origin of the stretched exponential function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freitas, R. J.; Shimakawa, K.; Department of Electrical and Electronic Engineering, Gifu University, Gifu 501-1193

    The article discusses the dynamics of photoinduced defect creation (PDC) in amorphous chalcogenides, which is described by the stretched exponential function (SEF), while the well-known photodarkening (PD) and photoinduced volume expansion (PVE) are governed only by the exponential function. It is shown that the exponential distribution of the thermal activation barrier produces the SEF in PDC, suggesting that thermal energy, as well as photon energy, is incorporated in PDC mechanisms. The differences in dynamics among the three major photoinduced effects (PD, PVE, and PDC) in amorphous chalcogenides are now well understood.

  8. Interaction quantum quenches in the one-dimensional Fermi-Hubbard model

    NASA Astrophysics Data System (ADS)

    Heidrich-Meisner, Fabian; Bauer, Andreas; Dorfner, Florian; Riegger, Luis; Orso, Giuliano

    2016-05-01

    We discuss the nonequilibrium dynamics in two interaction quantum quenches in the one-dimensional Fermi-Hubbard model. First, we study the decay of the Néel state as a function of interaction strength. We observe a fast charge dynamics over which double occupancies are built up, while the long-time decay of the staggered moment is controlled by spin excitations, corroborated by the analysis of the entanglement dynamics. Second, we investigate the formation of Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) correlations in a spin-imbalanced system in quenches from the noninteracting case to attractive interactions. Even though the quench puts the system at a finite energy density, peaks at the characteristic FFLO quasimomenta are visible in the quasi-momentum distribution function, albeit with an exponential decay of s-wave pairing correlations. We also discuss the imprinting of FFLO correlations onto repulsively bound pairs and their rapid decay in ramps. Supported by the DFG (Deutsche Forschungsgemeinschaft) via FOR 1807.

  9. Global exponential stability of bidirectional associative memory neural networks with distributed delays

    NASA Astrophysics Data System (ADS)

    Song, Qiankun; Cao, Jinde

    2007-05-01

    A bidirectional associative memory neural network model with distributed delays is considered. By constructing a new Lyapunov functional and employing homeomorphism theory, M-matrix theory and an elementary inequality (for a ≥ 0, b_k ≥ 0, q_k > 0 and r > 1), a sufficient condition is obtained to ensure the existence, uniqueness and global exponential stability of the equilibrium point for the model. Moreover, the exponential convergence rate is estimated, which depends on the delay kernel functions and the system parameters. The results generalize and improve earlier publications, and remove the usual assumption that the activation functions are bounded. Two numerical examples are given to show the effectiveness of the obtained results.

  10. Controllability of a multichannel system

    NASA Astrophysics Data System (ADS)

    Ivanov, Sergei A.; Wang, Jun Min

    2018-02-01

    We consider a system of K coupled acoustic channels with different sound velocities cj. The channels interact at every point via the pressure and its time derivatives. Using the moment approach and the theory of exponential families with vector coefficients, we establish two controllability results: the system is exactly controllable if (i) the control uj in the jth channel acts longer than the double travel time of a wave from the start to the end of the jth channel; or (ii) all controls uj act for at least the maximal double travel time.

  11. Fundamentals of Tribology; Proceedings of the International Conference on the Fundamentals of Tribology held at The Massachusetts Institute of Technology, Cambridge, MA

    DTIC Science & Technology

    1978-06-01

    HDL). The locus of electrical centers of hydrated ions in contact with the electrode surface is known as the outer Helmholtz plane (OHP) while the...and then a more gradual exponential decay in the diffuse double layer. The difference in potential between the OHP and the bulk electrolyte, i.e., the...contribution of the diffuse double layer, is called the electrokinetic or... [figure: schematic of the IHP, OHP, diffuse double layer and bulk electrolyte at an electrode surface]

  12. Income inequality in Romania: The exponential-Pareto distribution

    NASA Astrophysics Data System (ADS)

    Oancea, Bogdan; Andrei, Tudorel; Pirjol, Dan

    2017-03-01

    We present a study of the distribution of the gross personal income and income inequality in Romania, using individual tax income data, and both non-parametric and parametric methods. Comparing with official results based on household budget surveys (the Family Budgets Survey and the EU-SILC data), we find that the latter underestimate the income share of the high income region, and the overall income inequality. A parametric study shows that the income distribution is well described by an exponential distribution in the low and middle incomes region, and by a Pareto distribution in the high income region with Pareto coefficient α = 2.53. We note an anomaly in the distribution in the low incomes region (∼9,250 RON), and present a model which explains it in terms of partial income reporting.

  13. On the q-type distributions

    NASA Astrophysics Data System (ADS)

    Nadarajah, Saralees; Kotz, Samuel

    2007-04-01

    Various q-type distributions have appeared in the physics literature in recent years, see e.g. L.C. Malacarne, R.S. Mendes, E.K. Lenzi, q-exponential distribution in urban agglomeration, Phys. Rev. E 65 (2002) 017106; S.M.D. Queiros, On a possible dynamical scenario leading to a generalised Gamma distribution, xxx.lanl.gov-physics/0411111; U.M.S. Costa, V.N. Freire, L.C. Malacarne, R.S. Mendes, S. Picoli Jr., E.A. de Vasconcelos, E.F. da Silva Jr., An improved description of the dielectric breakdown in oxides based on a generalized Weibull distribution, Physica A 361 (2006) 215; S. Picoli Jr., R.S. Mendes, L.C. Malacarne, q-exponential, Weibull, and q-Weibull distributions: an empirical analysis, Physica A 324 (2003) 678-688; A.M.C. de Souza, C. Tsallis, Student's t- and r-distributions: unified derivation from an entropic variational principle, Physica A 236 (1997) 52-57. It is pointed out in the paper that many of these are the same as, or particular cases of, distributions long known in the statistics literature. Several of these statistical distributions are discussed and references provided. We feel that this paper could be of assistance for modeling problems of the type considered in the works cited above and others.

  14. Instabilities and spin-up behaviour of a rotating magnetic field driven flow in a rectangular cavity

    NASA Astrophysics Data System (ADS)

    Galindo, V.; Nauber, R.; Räbiger, D.; Franke, S.; Beyer, H.; Büttner, L.; Czarske, J.; Eckert, S.

    2017-11-01

    This study presents numerical simulations and experiments considering the flow of an electrically conducting fluid inside a cube driven by a rotating magnetic field (RMF). The investigations focus on the spin-up, where a liquid metal (GaInSn) is suddenly exposed to an azimuthal body force generated by the RMF, and on the subsequent flow development. The numerical simulations rely on a semi-analytical expression for the induced electromagnetic force density in an electrically conducting medium inside a cuboid container with insulating walls. Velocity distributions in two perpendicular planes are measured using a novel dual-plane, two-component ultrasound array Doppler velocimeter with continuous data streaming, enabling long-term measurements for investigating transient flows. This approach allows identifying the main emerging flow modes during the transition from stable to unstable flow regimes with exponentially growing velocity oscillations using the Proper Orthogonal Decomposition method. Characteristic frequencies in the oscillating flow regimes are determined in the supercritical range above the critical magnetic Taylor number Ta,c ≈ 1.26 × 10^5, where the transition from the steady double-vortex structure of the secondary flow to an unstable regime with exponentially growing oscillations is detected. The mean flow structures and the temporal evolution of the flow predicted by the numerical simulations and observed in experiments are in very good agreement.

  15. Role of the locus coeruleus in the emergence of power law wake bouts in a model of the brainstem sleep-wake system through early infancy.

    PubMed

    Patel, Mainak; Rangan, Aaditya

    2017-08-07

    Infant rats randomly cycle between the sleeping and waking states, which are tightly correlated with the activity of mutually inhibitory brainstem sleep and wake populations. Bouts of sleep and wakefulness are random; from P2-P10, sleep and wake bout lengths are exponentially distributed with increasing means, while during P10-P21, the sleep bout distribution remains exponential while the distribution of wake bouts gradually transforms to power law. The locus coeruleus (LC), via an undeciphered interaction with sleep and wake populations, has been shown experimentally to be responsible for the exponential to power law transition. Concurrently during P10-P21, the LC undergoes striking physiological changes: the LC exhibits strong global 0.3 Hz oscillations up to P10, but the oscillation frequency gradually rises and synchrony diminishes from P10-P21, with oscillations and synchrony vanishing at P21 and beyond. In this work, we construct a biologically plausible Wilson-Cowan-style model consisting of the LC along with sleep and wake populations. We show that external noise and strong reciprocal inhibition can lead to switching between sleep and wake populations and exponentially distributed sleep and wake bout durations as during P2-P10, with the parameters of inhibition between the sleep and wake populations controlling mean bout lengths. Furthermore, we show that the changing physiology of the LC from P10-P21, coupled with reciprocal excitation between the LC and wake population, can explain the shift from exponential to power law of the wake bout distribution. To our knowledge, this is the first study that proposes a plausible biological mechanism, incorporating the known changing physiology of the LC, for tying the developing sleep-wake circuit and its interaction with the LC to the transformation of sleep and wake bout dynamics from P2-P21. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Time-dependent breakdown of fiber networks: Uncertainty of lifetime

    NASA Astrophysics Data System (ADS)

    Mattsson, Amanda; Uesaka, Tetsu

    2017-05-01

    Materials often fail when subjected to stresses over a prolonged period. The time to failure, also called the lifetime, is known to exhibit large variability in many materials, particularly brittle and quasibrittle ones; the coefficient of variation can reach 100% or even more. Its distribution shape is highly skewed toward zero lifetime, implying a large number of premature failures. This behavior contrasts with that of normal strength, which shows a variation of only 4%-10% and a nearly bell-shaped distribution. The fundamental cause of this large and unique variability of lifetime is not well understood because of the complex interplay between stochastic processes taking place on the molecular level and the hierarchical and disordered structure of the material. We have constructed fiber network models, both regular and random, as a paradigm for general material structures. With such networks, we have performed Monte Carlo simulations of creep failure to establish explicit relationships among fiber characteristics, network structures, system size, and lifetime distribution. We found that fiber characteristics have large, sometimes dominating, influences on the lifetime variability of a network. Among the factors investigated, geometrical disorders of the network were found to be essential to explain the large variability and highly skewed shape of the lifetime distribution. With increasing network size, the distribution asymptotically approaches a double-exponential form. The implication of this result is that so-called "infant mortality," which is often predicted by the Weibull approximation of the lifetime distribution, may not exist for a large system.
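
    The double-exponential form mentioned here is the Gumbel law familiar from extreme-value statistics. A minimal, generic illustration (not the paper's network model): extremes of many independent light-tailed variables approach a distribution with CDF of double-exponential form, recognizable by its skewness of about 1.14:

```python
import numpy as np

rng = np.random.default_rng(5)

# maxima of n iid exponential variables converge quickly to a
# double-exponential (Gumbel) law with CDF exp(-exp(-(x - ln n)))
n, trials = 500, 10_000
maxima = rng.exponential(1.0, size=(trials, n)).max(axis=1)

z = maxima - np.log(n)          # center at ln n
skew = np.mean((z - z.mean()) ** 3) / z.std() ** 3
# Gumbel skewness is 12*sqrt(6)*zeta(3)/pi**3 ~= 1.14
```

For network lifetimes the relevant extreme is over many subvolumes rather than iid draws, but the limiting double-exponential shape is the same phenomenon.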

  17. Statistical mechanics of money and income

    NASA Astrophysics Data System (ADS)

    Dragulescu, Adrian; Yakovenko, Victor

    2001-03-01

    Money: In a closed economic system, money is conserved. Thus, by analogy with energy, the equilibrium probability distribution of money will assume the exponential Boltzmann-Gibbs form characterized by an effective temperature. We demonstrate how the Boltzmann-Gibbs distribution emerges in computer simulations of economic models. We discuss thermal machines, the role of debt, and models with broken time-reversal symmetry for which the Boltzmann-Gibbs law does not hold. Reference: A. Dragulescu and V. M. Yakovenko, "Statistical mechanics of money", Eur. Phys. J. B 17, 723-729 (2000), [cond-mat/0001432]. Income: Using tax and census data, we demonstrate that the distribution of individual income in the United States is exponential. Our calculated Lorenz curve without fitting parameters and Gini coefficient 1/2 agree well with the data. We derive the distribution function of income for families with two earners and show that it also agrees well with the data. The family data for the period 1947-1994 fit the Lorenz curve and Gini coefficient 3/8=0.375 calculated for two-earner families. Reference: A. Dragulescu and V. M. Yakovenko, "Evidence for the exponential distribution of income in the USA", cond-mat/0008305.
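
    The emergence of the Boltzmann-Gibbs exponential in conservative exchange models can be sketched with a toy simulation (a random-split exchange rule with hypothetical parameters, in the spirit of, but not identical to, the cited models). For an exponential distribution the standard deviation equals the mean, so the coefficient of variation of the stationary money distribution should approach 1:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy conservative exchange model: repeatedly pick two agents and
# randomly re-split their combined money; total money is conserved
N, steps, m0 = 2_000, 100_000, 100.0
money = np.full(N, m0)
for _ in range(steps):
    i, j = rng.integers(0, N, size=2)
    if i == j:
        continue
    pot = money[i] + money[j]
    share = rng.random() * pot
    money[i], money[j] = share, pot - share

cv = money.std() / money.mean()   # ~1 for an exponential distribution
```

After roughly a hundred exchanges per agent the distribution has relaxed to the exponential "thermal" form, with the effective temperature set by the (conserved) average money per agent.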

  18. Heavy tailed bacterial motor switching statistics define macroscopic transport properties during upstream contamination by E. coli

    NASA Astrophysics Data System (ADS)

    Figueroa-Morales, N.; Rivera, A.; Altshuler, E.; Darnige, T.; Douarche, C.; Soto, R.; Lindner, A.; Clément, E.

    The motility of E. coli bacteria is described as a run-and-tumble process. Changes of direction correspond to a switch in the flagellar motor rotation. The run time distribution is usually described as an exponential decay with a characteristic time close to 1 s. Remarkably, it has been demonstrated that the generic form of the run time distribution is not exponential but a heavy-tailed power-law decay, which is at odds with the classical motility picture. We investigate the consequences of the motor statistics for macroscopic bacterial transport. During upstream contamination processes in very confined channels, we have identified very long contamination tongues. Using a stochastic model that relates bacterial dwelling times on the surfaces to the run times, we are able to reproduce qualitatively and quantitatively the evolution of the contamination profiles when the power-law run time distribution is considered. However, the model fails to reproduce the qualitative dynamics when the classical exponential run-and-tumble distribution is considered. Moreover, we have corroborated the existence of a power-law run time distribution by means of 3D Lagrangian tracking. We then argue that the macroscopic transport of bacteria is essentially determined by the motor rotation statistics.

  19. Redshift data and statistical inference

    NASA Technical Reports Server (NTRS)

    Newman, William I.; Haynes, Martha P.; Terzian, Yervant

    1994-01-01

    Frequency histograms and the 'power spectrum analysis' (PSA) method, the latter developed by Yu & Peebles (1969), have been widely employed as techniques for establishing the existence of periodicities. We provide a formal analysis of these two classes of methods, including controlled numerical experiments, to better understand their proper use and application. In particular, we note that typical published applications of frequency histograms commonly employ far greater numbers of class intervals or bins than is advisable by statistical theory, sometimes giving rise to the appearance of spurious patterns. The PSA method generates a sequence of random numbers from observational data which, it is claimed, is exponentially distributed with unit mean and variance, essentially independent of the distribution of the original data. We show that the derived random process is nonstationary and produces a small but systematic bias in the usual estimate of the mean and variance. Although the derived variable may be reasonably described by an exponential distribution, the tail of the distribution is far removed from that of an exponential, thereby rendering statistical inference and confidence testing based on the tail of the distribution completely unreliable. Finally, we examine a number of astronomical examples wherein these methods have been used, giving rise to widespread acceptance of statistically unconfirmed conclusions.

  20. Human mobility in space from three modes of public transportation

    NASA Astrophysics Data System (ADS)

    Jiang, Shixiong; Guan, Wei; Zhang, Wenyi; Chen, Xu; Yang, Liu

    2017-10-01

    Human mobility patterns have drawn much attention from researchers for decades, given their importance for urban planning and traffic management. In this study, taxi GPS trajectories and smart card transaction data from the subway and bus systems of Beijing are utilized to model human mobility in space. The original datasets are cleaned and processed to obtain the displacement of each trip from its origin and destination locations. Then, the Akaike information criterion is adopted to screen out the best-fitting distribution for each mode from the candidates. The results indicate that displacements of taxi trips follow the exponential distribution. The exponential distribution also fits displacements of bus trips well, although the two exponents are significantly different. Displacements of subway trips behave differently and are well fitted by the gamma distribution. It is obvious that human mobility differs across modes. To explore overall human mobility, the three datasets are mixed to form a fusion dataset according to the annual ridership proportions. Finally, the fusion displacements follow a power-law distribution with an exponential cutoff. It is innovative to combine different transportation modes to model human mobility in the city.
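
    The AIC screening step can be sketched as follows. The data are synthetic exponential "displacements" (the scale is arbitrary), scored under exponential and lognormal maximum-likelihood fits; AIC = 2k - 2 ln L, and the lower value wins. The candidate set and parameters are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=5.0, size=5_000)   # synthetic trip displacements

def aic_exponential(x):
    lam = 1.0 / x.mean()                     # MLE of the rate parameter
    loglik = x.size * np.log(lam) - lam * x.sum()
    return 2 * 1 - 2 * loglik                # k = 1 parameter

def aic_lognormal(x):
    y = np.log(x)
    mu, sigma = y.mean(), y.std()            # MLE on the log scale
    loglik = (-y.sum() - x.size * np.log(sigma)
              - 0.5 * x.size * np.log(2 * np.pi) - 0.5 * x.size)
    return 2 * 2 - 2 * loglik                # k = 2 parameters

candidates = {"exponential": aic_exponential, "lognormal": aic_lognormal}
best = min(candidates, key=lambda name: candidates[name](x))
```

Because AIC penalizes each extra parameter by 2, the one-parameter exponential wins on exponential data even though the lognormal has more flexibility.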

  1. Voter model with non-Poissonian interevent intervals

    NASA Astrophysics Data System (ADS)

    Takaguchi, Taro; Masuda, Naoki

    2011-09-01

    Recent analysis of social communications among humans has revealed that the interval between interactions for a pair of individuals and for an individual often follows a long-tail distribution. We investigate the effect of such a non-Poissonian nature of human behavior on dynamics of opinion formation. We use a variant of the voter model and numerically compare the time to consensus of all the voters with different distributions of interevent intervals and different networks. Compared with the exponential distribution of interevent intervals (i.e., the standard voter model), the power-law distribution of interevent intervals slows down consensus on the ring. This is because of the memory effect; in the power-law case, the expected time until the next update event on a link is large if the link has not had an update event for a long time. On the complete graph, the consensus time in the power-law case is close to that in the exponential case. Regular graphs bridge these two results such that the slowing down of the consensus in the power-law case as compared to the exponential case is less pronounced as the degree increases.

  2. Non-Poissonian Distribution of Tsunami Waiting Times

    NASA Astrophysics Data System (ADS)

    Geist, E. L.; Parsons, T.

    2007-12-01

    Analysis of the global tsunami catalog indicates that tsunami waiting times deviate from an exponential distribution one would expect from a Poisson process. Empirical density distributions of tsunami waiting times were determined using both global tsunami origin times and tsunami arrival times at a particular site with a sufficient catalog: Hilo, Hawai'i. Most sources for the tsunamis in the catalog are earthquakes; other sources include landslides and volcanogenic processes. Both datasets indicate an over-abundance of short waiting times in comparison to an exponential distribution. Two types of probability models are investigated to explain this observation. Model (1) is a universal scaling law that describes long-term clustering of sources with a gamma distribution. The shape parameter (γ) for the global tsunami distribution is similar to that of the global earthquake catalog γ=0.63-0.67 [Corral, 2004]. For the Hilo catalog, γ is slightly greater (0.75-0.82) and closer to an exponential distribution. This is explained by the fact that tsunamis from smaller triggered earthquakes or landslides are less likely to be recorded at a far-field station such as Hilo in comparison to the global catalog, which includes a greater proportion of local tsunamis. Model (2) is based on two distributions derived from Omori's law for the temporal decay of triggered sources (aftershocks). The first is the ETAS distribution derived by Saichev and Sornette [2007], which is shown to fit the distribution of observed tsunami waiting times. The second is a simpler two-parameter distribution that is the exponential distribution augmented by a linear decay in aftershocks multiplied by a time constant Ta. Examination of the sources associated with short tsunami waiting times indicate that triggered events include both earthquake and landslide tsunamis that begin in the vicinity of the primary source. 
Triggered seismogenic tsunamis do not necessarily originate from the same fault zone, however. For example, subduction-thrust and outer-rise earthquake pairs are evident, such as the November 2006 and January 2007 Kuril Islands tsunamigenic pair. Because of variations in tsunami source parameters, such as water depth above the source, triggered tsunami events with short waiting times are not systematically smaller than the primary tsunami.
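
    The clustering signature described here can be sketched with synthetic waiting times. Using a gamma shape parameter of 0.65, within the range quoted for the global catalog, the over-abundance of short intervals relative to a Poisson process (exponential waiting times) with the same mean is easy to see; the threshold of 0.1 mean intervals is an arbitrary illustration choice:

```python
import numpy as np

rng = np.random.default_rng(4)

# gamma waiting times with shape < 1 versus exponential (Poisson)
# waiting times, both scaled to unit mean
n = 100_000
gamma_w = rng.gamma(shape=0.65, scale=1.0 / 0.65, size=n)
expo_w = rng.exponential(scale=1.0, size=n)

# fraction of "short" intervals (< 0.1 mean waiting times)
short_gamma = (gamma_w < 0.1).mean()
short_expo = (expo_w < 0.1).mean()
```

A shape parameter approaching 1 recovers the exponential case, which is why the Hilo catalog, with fewer recorded triggered local events, sits closer to Poissonian behavior.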

  3. Periodic bidirectional associative memory neural networks with distributed delays

    NASA Astrophysics Data System (ADS)

    Chen, Anping; Huang, Lihong; Liu, Zhigang; Cao, Jinde

    2006-05-01

    Some sufficient conditions are obtained for the existence and global exponential stability of a periodic solution to the general bidirectional associative memory (BAM) neural networks with distributed delays by using the continuation theorem of Mawhin's coincidence degree theory, the Lyapunov functional method and Young's inequality. These results are helpful for designing a globally exponentially stable and periodically oscillatory BAM neural network, and the conditions can be easily verified and applied in practice. An example is also given to illustrate our results.

  4. Global exponential stability of positive periodic solution of the n-species impulsive Gilpin-Ayala competition model with discrete and distributed time delays.

    PubMed

    Zhao, Kaihong

    2018-12-01

    In this paper, we study the n-species impulsive Gilpin-Ayala competition model with discrete and distributed time delays. The existence of positive periodic solution is proved by employing the fixed point theorem on cones. By constructing appropriate Lyapunov functional, we also obtain the global exponential stability of the positive periodic solution of this system. As an application, an interesting example is provided to illustrate the validity of our main results.

  5. Probability distribution functions for intermittent scrape-off layer plasma fluctuations

    NASA Astrophysics Data System (ADS)

    Theodorsen, A.; Garcia, O. E.

    2018-03-01

    A stochastic model for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas has been constructed based on a superposition of uncorrelated pulses arriving according to a Poisson process. In the most common applications of the model, the pulse amplitudes are assumed exponentially distributed, supported by conditional averaging of large-amplitude fluctuations in experimental measurement data. This basic assumption has two potential limitations. First, statistical analysis of measurement data using conditional averaging only reveals the tail of the amplitude distribution to be exponentially distributed. Second, exponentially distributed amplitudes lead to a positive definite signal which cannot capture fluctuations in, for example, electric potential and radial velocity. Assuming pulse amplitudes which are not positive definite often makes finding a closed form for the probability density function (PDF) difficult, even if the characteristic function remains relatively simple. Thus estimating model parameters requires an approach based on the characteristic function, not the PDF. In this contribution, the effect of changing the amplitude distribution on the moments, PDF and characteristic function of the process is investigated, and a parameter estimation method using the empirical characteristic function is presented and tested on synthetically generated data. This proves valuable for describing intermittent fluctuations of all plasma parameters in the boundary region of magnetized plasmas.
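
    The base model, a superposition of uncorrelated pulses arriving as a Poisson process, can be sketched directly; pulse shape, duration and waiting-time parameters below are illustrative choices, not values from the contribution:

```python
import numpy as np

rng = np.random.default_rng(2)

# one-sided exponential pulses with exponentially distributed amplitudes,
# arriving uniformly in time at Poisson rate 1/tau_w (shot-noise process)
T, tau_d, tau_w, A_mean = 1_000.0, 1.0, 1.0, 1.0
K = rng.poisson(T / tau_w)                 # number of pulses in [0, T]
arrivals = rng.uniform(0.0, T, K)
amplitudes = rng.exponential(A_mean, K)

t = np.linspace(0.0, T, 20_000)
signal = np.zeros_like(t)
for tk, ak in zip(arrivals, amplitudes):
    after = t >= tk
    signal[after] += ak * np.exp(-(t[after] - tk) / tau_d)

# stationary mean of the process is A_mean * tau_d / tau_w (here 1.0)
mean_level = signal[t > 10 * tau_d].mean()
```

Replacing the exponential amplitude draw with a two-sided (e.g. Laplace) one gives a signal that is no longer positive definite, which is exactly the regime where the characteristic-function approach becomes necessary.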

  6. A mathematical model for evolution and SETI.

    PubMed

    Maccone, Claudio

    2011-12-01

    Darwinian evolution theory may be regarded as a part of SETI theory in that the factor f(l) in the Drake equation represents the fraction of planets suitable for life on which life actually arose. In this paper we first provide a statistical generalization of the Drake equation where the factor f(l) is shown to follow the lognormal probability distribution. This lognormal distribution is a consequence of the Central Limit Theorem (CLT) of Statistics, which states that the product of a number of independent random variables whose probability densities are unknown and independent of each other approaches a lognormal distribution as the number of factors increases to infinity. In addition we show that the exponential growth of the number of species typical of Darwinian Evolution may be regarded as the geometric locus of the peaks of a one-parameter family of lognormal distributions (b-lognormals) constrained between the time axis and the exponential growth curve. Finally, since each b-lognormal distribution in the family may in turn be regarded as the product of a large number (actually "an infinity") of independent lognormal probability distributions, the mathematical way is paved to further cast Darwinian Evolution into a mathematical theory in agreement with both its typical exponential growth in the number of living species and the Statistical Drake Equation.

  7. Modelling Evolution and SETI Mathematically

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2012-05-01

    Darwinian evolution theory may be regarded as a part of SETI theory in that the factor fl in the Drake equation represents the fraction of planets suitable for life on which life actually arose. In this paper we first provide a statistical generalization of the Drake equation where the factor fl is shown to follow the lognormal probability distribution. This lognormal distribution is a consequence of the Central Limit Theorem (CLT) of Statistics, which states that the product of a number of independent random variables whose probability densities are unknown and independent of each other approaches a lognormal distribution as the number of factors increases to infinity. In addition we show that the exponential growth of the number of species typical of Darwinian Evolution may be regarded as the geometric locus of the peaks of a one-parameter family of lognormal distributions constrained between the time axis and the exponential growth curve. Finally, since each lognormal distribution in the family may in turn be regarded as the product of a large number (actually "an infinity") of independent lognormal probability distributions, the mathematical way is paved to further cast Darwinian Evolution into a mathematical theory in agreement with both its typical exponential growth in the number of living species and the Statistical Drake Equation.

  8. A Mathematical Model for Evolution and SETI

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2011-12-01

    Darwinian evolution theory may be regarded as a part of SETI theory in that the factor fl in the Drake equation represents the fraction of planets suitable for life on which life actually arose. In this paper we first provide a statistical generalization of the Drake equation where the factor fl is shown to follow the lognormal probability distribution. This lognormal distribution is a consequence of the Central Limit Theorem (CLT) of Statistics, which states that the product of a number of independent random variables whose probability densities are unknown and independent of each other approaches a lognormal distribution as the number of factors increases to infinity. In addition we show that the exponential growth of the number of species typical of Darwinian Evolution may be regarded as the geometric locus of the peaks of a one-parameter family of lognormal distributions (b-lognormals) constrained between the time axis and the exponential growth curve. Finally, since each b-lognormal distribution in the family may in turn be regarded as the product of a large number (actually "an infinity") of independent lognormal probability distributions, the mathematical way is paved to further cast Darwinian Evolution into a mathematical theory in agreement with both its typical exponential growth in the number of living species and the Statistical Drake Equation.

  9. Accounting for inherent variability of growth in microbial risk assessment.

    PubMed

    Marks, H M; Coleman, M E

    2005-04-15

    Risk assessments of pathogens need to account for the growth of small numbers of cells under varying conditions. In order to determine the possible risks that occur when there are small numbers of cells, stochastic models of growth are needed that capture the distribution of the number of cells over replicate trials of the same scenario or environmental conditions. This paper provides a simple stochastic growth model, accounting only for inherent cell-growth variability and assuming constant growth kinetic parameters, for an initial small number of cells assumed to be transforming from a stationary to an exponential phase. Two basic sets of microbial assumptions are considered: serial, where it is assumed that cells transform through a lag phase before entering the exponential phase of growth; and parallel, where it is assumed that lag and exponential phases develop in parallel. The model is based on first determining the distribution of the time when growth commences, and then modelling the conditional distribution of the number of cells. For the latter distribution, it is found that a Weibull distribution provides a simple approximation to the conditional distribution of the relative growth, so that the model developed in this paper can be easily implemented in risk assessments using commercial software packages.
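
    A toy Monte Carlo version of the serial assumption (all parameter values below are invented for illustration) shows how lag-time randomness alone spreads the cell count over replicate trials:

```python
import numpy as np

rng = np.random.default_rng(1)

n0 = 5           # initial number of cells (all values here are illustrative)
mean_lag = 1.0   # mean of the exponentially distributed lag times (h)
mu = 1.5         # specific growth rate in the exponential phase (1/h)
t_obs = 4.0      # observation time (h)
trials = 10_000

# Serial assumption: each founder cell sits through its own random lag and
# then grows deterministically; the trial total sums over the n0 cells.
lags = rng.exponential(mean_lag, size=(trials, n0))
growth = np.where(t_obs > lags, np.exp(mu * (t_obs - lags)), 1.0)
totals = growth.sum(axis=1)

print(f"mean = {totals.mean():.0f}, cv = {totals.std() / totals.mean():.2f}")
```

    The spread of `totals` across trials is the replicate-to-replicate variability that the conditional Weibull approximation is meant to summarize.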

  10. Computerized Method for the Generation of Molecular Transmittance Functions in the Infrared Region.

    DTIC Science & Technology

    1979-12-31

    exponent of the double exponential function were ’bumpy’ for some cases. Since the nature of the transmittance does not predict this behavior, we...T ,IS RECOMPUTED FOR THE ORIGIONAL DATA *USING THE PIECEWISE- ANALITICAL TRANSMISSION FUNCTION.’//20X, *’STANDARD DEVIATIONS BETWEEN THE ACTUAL TAU

  11. Skills for the Future.

    ERIC Educational Resources Information Center

    Smith, Gary R.

    This publication contains two miniunits to help students in grades 7-12 build skills for the future. The exercises can also be adapted for use in grades 4-6. Each of the miniunits contains several exercises to build specific skills. Miniunit One, "The Arithmetic of Growth," deals with two concepts--exponential growth and doubling time. These two…

  12. Graphical analysis for gel morphology II. New mathematical approach for stretched exponential function with β>1

    NASA Astrophysics Data System (ADS)

    Hashimoto, Chihiro; Panizza, Pascal; Rouch, Jacques; Ushiki, Hideharu

    2005-10-01

    A new analytical concept is applied to the kinetics of the shrinking process of poly(N-isopropylacrylamide) (PNIPA) gels. When PNIPA gels are put into hot water above the critical temperature, two-step shrinking is observed, and the secondary shrinking of the gels is fitted well by a stretched exponential function. The exponent β characterizing the stretched exponential is always higher than one, although there are few analytical concepts for the stretched exponential function with β>1. As a new interpretation of this function, we propose a superposition of step (Heaviside) functions, and a new distribution function of characteristic times is deduced.

  13. Ray-theory approach to electrical-double-layer interactions.

    PubMed

    Schnitzer, Ory

    2015-02-01

    A novel approach is presented for analyzing the double-layer interaction force between charged particles in electrolyte solution, in the limit where the Debye length is small compared with both interparticle separation and particle size. The method, developed here for two planar convex particles of otherwise arbitrary geometry, yields a simple asymptotic approximation limited to neither small zeta potentials nor the "close-proximity" assumption underlying Derjaguin's approximation. Starting from the nonlinear Poisson-Boltzmann formulation, boundary-layer solutions describing the thin diffuse-charge layers are asymptotically matched to a WKBJ expansion valid in the bulk, where the potential is exponentially small. The latter expansion describes the bulk potential as superposed contributions conveyed by "rays" emanating normally from the boundary layers. On a special curve generated by the centers of all circles maximally inscribed between the two particles, the bulk stress, associated with the nonlinearly interacting ray contributions, decays exponentially with distance from the center of the smallest of these circles. The force is then obtained by integrating the traction along this curve using Laplace's method. We illustrate the usefulness of our theory by comparing it, alongside Derjaguin's approximation, with numerical simulations in the case of two parallel cylinders at low potentials. By combining our result and Derjaguin's approximation, the interaction force is provided at arbitrary interparticle separations. Our theory can be generalized to arbitrary three-dimensional geometries, nonideal electrolyte models, and other physical scenarios where exponentially decaying fields give rise to forces.

  14. Chronology of Postglacial Eruptive Activity and Calculation of Eruption Probabilities for Medicine Lake Volcano, Northern California

    USGS Publications Warehouse

    Nathenson, Manuel; Donnelly-Nolan, Julie M.; Champion, Duane E.; Lowenstern, Jacob B.

    2007-01-01

    Medicine Lake volcano has had 4 eruptive episodes in its postglacial history (since 13,000 years ago) comprising 16 eruptions. Time intervals between events within the episodes are relatively short, whereas time intervals between the episodes are much longer. An updated radiocarbon chronology for these eruptions is presented that uses paleomagnetic data to constrain the choice of calibrated ages. This chronology is used with exponential, Weibull, and mixed-exponential probability distributions to model the data for time intervals between eruptions. The mixed exponential distribution is the best match to the data and provides estimates for the conditional probability of a future eruption given the time since the last eruption. The probability of an eruption at Medicine Lake volcano in the next year from today is 0.00028.
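
    The conditional-probability calculation described here is straightforward once a mixture survival function is in hand. The sketch below uses invented parameters (not the paper's fitted values) to show the characteristic feature of a mixed-exponential model: the hazard decreases as quiescence lengthens.

```python
import math

# Hypothetical two-component (mixed exponential) interval model; the weight
# and time scales below are illustrative, not the paper's fitted values.
p, tau_short, tau_long = 0.7, 200.0, 4000.0   # years

def survival(t):
    """P(repose interval > t) for the exponential mixture."""
    return p * math.exp(-t / tau_short) + (1 - p) * math.exp(-t / tau_long)

def cond_prob(elapsed, window):
    """P(eruption within `window` yr | no eruption for `elapsed` yr)."""
    return 1.0 - survival(elapsed + window) / survival(elapsed)

# A long quiescence shifts weight to the long-time-scale component,
# so the conditional probability of an imminent eruption drops.
print(round(cond_prob(0.0, 1.0), 5), round(cond_prob(1000.0, 1.0), 5))
```
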

  15. Min and Max Exponential Extreme Interval Values and Statistics

    ERIC Educational Resources Information Center

    Jance, Marsha; Thomopoulos, Nick

    2009-01-01

    The extreme interval values and statistics (expected value, median, mode, standard deviation, and coefficient of variation) for the smallest (min) and largest (max) values of exponentially distributed variables with parameter λ = 1 are examined for different observation (sample) sizes. An extreme interval value g[subscript a] is defined as a…

  16. A review of the matrix-exponential formalism in radiative transfer

    NASA Astrophysics Data System (ADS)

    Efremenko, Dmitry S.; Molina García, Víctor; Gimeno García, Sebastián; Doicu, Adrian

    2017-07-01

    This paper outlines the matrix exponential description of radiative transfer. The eigendecomposition method which serves as a basis for computing the matrix exponential and for representing the solution in a discrete ordinate setting is considered. The mathematical equivalence of the discrete ordinate method, the matrix operator method, and the matrix Riccati equations method is proved rigorously by means of the matrix exponential formalism. For optically thin layers, approximate solution methods relying on the Padé and Taylor series approximations to the matrix exponential, as well as on the matrix Riccati equations, are presented. For optically thick layers, the asymptotic theory with higher-order corrections is derived, and parameterizations of the asymptotic functions and constants for a water-cloud model with a Gamma size distribution are obtained.
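
    For optically thin layers, the Taylor-series route mentioned above can be sketched directly; the toy matrix and the term counts below are illustrative, not a radiative-transfer discretization. Scaling and squaring extends the same series to larger norms.

```python
import numpy as np

def expm_taylor(A, terms=20):
    """Truncated Taylor series for exp(A); accurate when ||A|| is small,
    i.e. for optically thin layers."""
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        E = E + term
    return E

def expm_squaring(A, s=10, terms=10):
    """Scaling and squaring: exp(A) = (exp(A / 2**s))**(2**s)."""
    E = expm_taylor(A / 2**s, terms)
    for _ in range(s):
        E = E @ E
    return E

# Toy symmetric 2x2 "layer" matrix standing in for a discretized transfer
# operator; both routes agree to near machine precision.
A = np.array([[-0.3, 0.1], [0.1, -0.3]])
E1, E2 = expm_taylor(A), expm_squaring(A)
print(np.max(np.abs(E1 - E2)))
```
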

  17. Heterogeneous characters modeling of instant message services users’ online behavior

    PubMed Central

    Fang, Yajun; Horn, Berthold

    2018-01-01

    Research on the temporal characteristics of human dynamics has attracted much attention for its contribution to various areas such as communication, medical treatment, finance, etc. Existing studies show that the time intervals between two consecutive events present different non-Poisson characteristics, such as power-law, Pareto, bimodal power-law, exponential, and piecewise power-law distributions. With the occurrence of new services, new types of distributions may arise. In this paper, we study the distributions of the time intervals between two consecutive visits to QQ and WeChat, the top two most popular instant messaging services in China, and present a new finding: when the value of the statistical unit T is set to 0.001 s, the inter-event time distribution follows a piecewise distribution of exponential and power-law form, indicating the heterogeneous character of IM services users’ online behavior on different time scales. We infer that the heterogeneous character is related to the communication mechanism of IM and the habits of users. We then develop a combination of an exponential model and an interest model to characterize the heterogeneity. Furthermore, we find that the exponent of the inter-event time distribution of the same service differs between two cities, which is correlated with the popularity of the services. Our research is useful for applications in information diffusion, prediction of the economic development of cities, and so on. PMID:29734327

  18. Heterogeneous characters modeling of instant message services users' online behavior.

    PubMed

    Cui, Hongyan; Li, Ruibing; Fang, Yajun; Horn, Berthold; Welsch, Roy E

    2018-01-01

    Research on the temporal characteristics of human dynamics has attracted much attention for its contribution to various areas such as communication, medical treatment, finance, etc. Existing studies show that the time intervals between two consecutive events present different non-Poisson characteristics, such as power-law, Pareto, bimodal power-law, exponential, and piecewise power-law distributions. With the occurrence of new services, new types of distributions may arise. In this paper, we study the distributions of the time intervals between two consecutive visits to QQ and WeChat, the top two most popular instant messaging services in China, and present a new finding: when the value of the statistical unit T is set to 0.001 s, the inter-event time distribution follows a piecewise distribution of exponential and power-law form, indicating the heterogeneous character of IM services users' online behavior on different time scales. We infer that the heterogeneous character is related to the communication mechanism of IM and the habits of users. We then develop a combination of an exponential model and an interest model to characterize the heterogeneity. Furthermore, we find that the exponent of the inter-event time distribution of the same service differs between two cities, which is correlated with the popularity of the services. Our research is useful for applications in information diffusion, prediction of the economic development of cities, and so on.

  19. Impact of oxide thickness on the density distribution of near-interface traps in 4H-SiC MOS capacitors

    NASA Astrophysics Data System (ADS)

    Zhang, Xufang; Okamoto, Dai; Hatakeyama, Tetsuo; Sometani, Mitsuru; Harada, Shinsuke; Iwamuro, Noriyuki; Yano, Hiroshi

    2018-06-01

    The impact of oxide thickness on the density distribution of near-interface traps (NITs) in the SiO2/4H-SiC structure was investigated. We used the distributed circuit model that had successfully explained the frequency-dependent characteristics of both capacitance and conductance under strong accumulation conditions for SiO2/4H-SiC MOS capacitors with thick oxides by assuming an exponentially decaying distribution of NITs. In this work, it was found that the exponentially decaying distribution is the most plausible approximation of the true NIT distribution because it successfully explained the frequency dependences of capacitance and conductance under strong accumulation conditions for various oxide thicknesses. The thickness dependence of the NIT density distribution was also characterized. It was found that the NIT density increases with increasing oxide thickness, and a possible physical reason was discussed.

  20. Force Measurements of Single and Double Barrier DBD Plasma Actuators in Quiescent Air

    NASA Technical Reports Server (NTRS)

    Hoskinson, Alan R.; Hershkowitz, Noah; Ashpis, David E.

    2008-01-01

    We have performed measurements of the force induced by both single (one electrode insulated) and double (both electrodes insulated) dielectric barrier discharge plasma actuators in quiescent air. We have shown that, for single barrier actuators, as the electrode diameter decreases below the values previously studied, the induced force increases exponentially rather than linearly. This behavior has been experimentally verified using two different measurement techniques: stagnation probe measurements of the induced flow velocity and direct measurement of the force using an electronic balance. In addition, we have shown that the induced force is independent of the material used for the exposed electrode. The same techniques have shown that the induced force of a double barrier actuator increases with decreasing narrow electrode diameter.

  1. U-shaped, double-tapered, fiber-optic sensor for effective biofilm growth monitoring.

    PubMed

    Zhong, Nianbing; Zhao, Mingfu; Li, Yishan

    2016-02-01

    To monitor biofilm growth on polydimethylsiloxane in a photobioreactor effectively, the biofilm cells and liquids were separated and measured using a sensor with two U-shaped, double-tapered, fiber-optic probes (Sen. and Ref. probes). The probes' Au-coated hemispherical tips enabled double-pass evanescent field absorption. The Sen. probe sensed the cells and liquids inside the biofilm. The polyimide-silica hybrid-film-coated Ref. probe separated the liquids from the biofilm cells and analyzed the liquid concentration. The biofilm structure and active biomass were also examined to confirm the effectiveness of the measurement using a simulation model. The sensor was found to effectively respond to the biofilm growth in the adsorption through exponential phases at thicknesses of 0-536 μm.

  2. First off-time treatment prostate-specific antigen kinetics predicts survival in intermittent androgen deprivation for prostate cancer.

    PubMed

    Sanchez-Salas, Rafael; Olivier, Fabien; Prapotnich, Dominique; Dancausa, José; Fhima, Mehdi; David, Stéphane; Secin, Fernando P; Ingels, Alexandre; Barret, Eric; Galiano, Marc; Rozet, François; Cathelineau, Xavier

    2016-01-01

    Prostate-specific antigen (PSA) doubling time relies on an exponential kinetic pattern. This pattern has never been validated in the setting of intermittent androgen deprivation (IAD). The objective is to analyze the prognostic significance for PCa of recurrent patterns in PSA kinetics in patients undergoing IAD. A retrospective study was conducted on 377 patients treated with IAD. The on-treatment period (ONTP) consisted of gonadotropin-releasing hormone agonist injections combined with an oral androgen receptor antagonist. The off-treatment period (OFTP) began when PSA was lower than 4 ng/ml. The ONTP resumed when PSA was higher than 20 ng/ml. PSA values of each OFTP were fitted with three basic patterns: exponential (PSA(t) = λ·e^(αt)), linear (PSA(t) = a·t), and power law (PSA(t) = a·t^c). Univariate and multivariate Cox regression models analyzed predictive factors for oncologic outcomes. Only 45% of the analyzed OFTPs were exponential. Linear and power-law PSA kinetics represented 7.5% and 7.7%, respectively. The remaining fraction of analyzed OFTPs (40%) exhibited complex kinetics. Exponential PSA kinetics during the first OFTP was significantly associated with worse oncologic outcome. The estimated 10-year cancer-specific survival (CSS) was 46% for exponential versus 80% for nonexponential PSA kinetics patterns. The corresponding 10-year probability of castration-resistant prostate cancer (CRPC) was 69% and 31% for the two patterns, respectively. Limitations include the retrospective design and mixed indications for IAD. PSA kinetics fitted with an exponential pattern in approximately half of the OFTPs. First-OFTP exponential PSA kinetics was associated with a shorter time to CRPC and worse CSS. © 2015 Wiley Periodicals, Inc.
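
    The three candidate fits can be compared on synthetic data; the off-treatment PSA series below is hypothetical, generated with assumed λ = 0.5 ng/ml and α = 0.25 /month purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical off-treatment PSA series (months vs ng/ml) generated from the
# exponential pattern PSA(t) = lam*exp(alpha*t) with multiplicative noise.
t = np.arange(1.0, 13.0)
psa = 0.5 * np.exp(0.25 * t) * rng.lognormal(0.0, 0.05, t.size)

log_psa, log_t = np.log(psa), np.log(t)
alpha = np.polyfit(t, log_psa, 1)[0]      # exponential: log PSA linear in t
c_pow = np.polyfit(log_t, log_psa, 1)[0]  # power law: log PSA linear in log t
a_lin = (t @ psa) / (t @ t)               # linear through the origin: PSA = a*t

def rss(pred):
    """Residual sum of squares against the observed series."""
    return float(((psa - pred) ** 2).sum())

fits = {
    "exponential": rss(np.exp(np.polyval(np.polyfit(t, log_psa, 1), t))),
    "linear": rss(a_lin * t),
    "power law": rss(np.exp(np.polyval(np.polyfit(log_t, log_psa, 1), log_t))),
}
best = min(fits, key=fits.get)
print(best, round(alpha, 3), round(c_pow, 3))
```
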

  3. Scaling behavior of sleep-wake transitions across species

    NASA Astrophysics Data System (ADS)

    Lo, Chung-Chuan; Chou, Thomas; Ivanov, Plamen Ch.; Penzel, Thomas; Mochizuki, Takatoshi; Scammell, Thomas; Saper, Clifford B.; Stanley, H. Eugene

    2003-03-01

    Uncovering the mechanisms controlling sleep is a fascinating scientific challenge. It can be viewed as transitions of states of a very complex system, the brain. We study the time dynamics of short awakenings during sleep for three species: humans, rats and mice. We find, for all three species, that wake durations follow a power-law distribution, and sleep durations follow exponential distributions. Surprisingly, all three species have the same power-law exponent for the distribution of wake durations, but the exponential time scale of the distributions of sleep durations varies across species. We suggest that the dynamics of short awakenings are related to species-independent fluctuations of the system, while the dynamics of sleep is related to system-dependent mechanisms which change with species.
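
    The two reported distribution families are easy to simulate and to recover by maximum likelihood; the parameter values below are illustrative stand-ins, not the paper's fitted exponents or time scales.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000

# Sleep durations: exponential with a species-dependent time scale (assumed).
tau = 10.0
sleep = rng.exponential(tau, size=n)

# Wake durations: power law p(x) ~ x**(-alpha) for x >= x_min, sampled by
# inverting the survival function (x/x_min)**-(alpha-1).
alpha, x_min = 2.2, 1.0
wake = x_min * (1.0 - rng.random(n)) ** (-1.0 / (alpha - 1.0))

# Maximum-likelihood estimators recover both parameters.
tau_hat = sleep.mean()
alpha_hat = 1.0 + n / np.log(wake / x_min).sum()
print(round(tau_hat, 2), round(alpha_hat, 3))
```
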

  4. Jet Formation and Penetration Study of Double-Layer Shaped Charge

    NASA Astrophysics Data System (ADS)

    Wang, Zhe; Jiang, Jian-Wei; Wang, Shu-You; Liu, Han

    2018-04-01

    A theoretical analysis on detonation wave propagation in a double-layer shaped charge (DLSC) is performed. Numerical simulations using the AUTODYN software are carried out to compare the distinctions between jet formations in DLSC and ordinary shaped charge (OSC), in particular, the OSC made using a higher detonation velocity explosive, which is treated as the outer layer charge in the DLSC. The results show that the improved detonation velocity ratio and radial charge percentage of outer-to-inner layer charge are conducive to the formation of a convergent detonation wave, which contributes to enhancement of jet tip velocity in DLSC. The thickness and mass percentages of liner flowing into jet in DLSC closely follow the exponential distribution along the radial direction, but the percentages in DLSC and the mass of effective jet, which have significant influence on the penetration depth, are lower than those in OSC with the outer layer charge. This implies that the total charge energy is the major factor controlling the effective jet formation, which is confirmed by the verification tests using flash X-ray system and following penetration tests. The numerical simulation and test results compare well, while penetration test results indicate that the performance of DLSC is not better than that of OSC with the outer layer charge, due to the differences in jet formation.

  5. Organic/inorganic hybrid synaptic transistors gated by proton conducting methylcellulose films

    NASA Astrophysics Data System (ADS)

    Wan, Chang Jin; Zhu, Li Qiang; Wan, Xiang; Shi, Yi; Wan, Qing

    2016-01-01

    The idea of building a brain-inspired cognitive system has been around for several decades. Recently, electric-double-layer transistors gated by ion conducting electrolytes were reported as the promising candidates for synaptic electronics and neuromorphic system. In this letter, indium-zinc-oxide transistors gated by proton conducting methylcellulose electrolyte films were experimentally demonstrated with synaptic plasticity including paired-pulse facilitation and spatiotemporal-correlated dynamic logic. More importantly, a model based on proton-related electric-double-layer modulation and stretched-exponential decay function was proposed, and the theoretical results are in good agreement with the experimentally measured synaptic behaviors.

  6. Organic/inorganic hybrid synaptic transistors gated by proton conducting methylcellulose films

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wan, Chang Jin; Wan, Qing, E-mail: wanqing@nju.edu.cn, E-mail: yshi@nju.edu.cn; Ningbo Institute of Material Technology and Engineering, Chinese Academy of Sciences, Ningbo 315201

    The idea of building a brain-inspired cognitive system has been around for several decades. Recently, electric-double-layer transistors gated by ion conducting electrolytes were reported as the promising candidates for synaptic electronics and neuromorphic system. In this letter, indium-zinc-oxide transistors gated by proton conducting methylcellulose electrolyte films were experimentally demonstrated with synaptic plasticity including paired-pulse facilitation and spatiotemporal-correlated dynamic logic. More importantly, a model based on proton-related electric-double-layer modulation and stretched-exponential decay function was proposed, and the theoretical results are in good agreement with the experimentally measured synaptic behaviors.

  7. Transition from the Unipolar Region to the Sector Zone: Voyager 2, 2013 and 2014

    NASA Astrophysics Data System (ADS)

    Burlaga, L. F.; Ness, N. F.; Richardson, J. D.

    2017-05-01

    We discuss magnetic field and plasma observations of the heliosheath made by Voyager 2 (V2) during 2013 and 2014 near solar maximum. A transition from a unipolar region to a sector zone was observed in the azimuthal angle λ between ˜2012.45 and 2013.82. The distribution of λ was strongly singly peaked at 270^\\circ in the unipolar region and double peaked in the sector zone. The δ-distribution was strongly peaked in the unipolar region and very broad in the sector zone. The distribution of daily averages of the magnetic field strength B was Gaussian in the unipolar region and lognormal in the sector zone. The correlation function of B was exponential with an e-folding time of ˜5 days in both regions. The distribution of hourly increments of B was a Tsallis distribution with nonextensivity parameter q = 1.7 ± 0.04 in the unipolar region and q = 1.44 ± 0.12 in the sector zone. The CR-B relationship qualitatively describes the 2013 observations, but not the 2014 observations. A 40 km s-1 increase in the bulk speed associated with an increase in B near 2013.5 might have been produced by the merging of streams. A “D sheet” (a broad depression in B containing a current sheet moved past V2 from days 320 to 345, 2013. The R- and N-components of the plasma velocity changed across the current sheet.

  8. Linear prediction and single-channel recording.

    PubMed

    Carter, A A; Oswald, R E

    1995-08-01

    The measurement of individual single-channel events arising from the gating of ion channels provides a detailed data set from which the kinetic mechanism of a channel can be deduced. In many cases, the pattern of dwells in the open and closed states is very complex, and the kinetic mechanism and parameters are not easily determined. Assuming a Markov model for channel kinetics, the probability density function for open and closed time dwells should consist of a sum of decaying exponentials. One method of approaching the kinetic analysis of such a system is to determine the number of exponentials and the corresponding parameters which comprise the open and closed dwell time distributions. These can then be compared to the relaxations predicted from the kinetic model to determine, where possible, the kinetic constants. We report here the use of a linear technique, linear prediction/singular value decomposition, to determine the number of exponentials and the exponential parameters. Using simulated distributions and comparing with standard maximum-likelihood analysis, the singular value decomposition techniques provide advantages in some situations and are a useful adjunct to other single-channel analysis techniques.
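
    The linear prediction/SVD idea can be sketched on a synthetic two-exponential decay (parameters invented for illustration): the numerical rank of a Hankel matrix of the signal counts the exponentials, and the roots of the linear-prediction polynomial give the time constants.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic dwell-time decay: sum of two exponentials (invented parameters)
# sampled uniformly in time, plus a little noise.
dt = 0.05
t = np.arange(200) * dt
y = 3.0 * np.exp(-t / 0.5) + 1.0 * np.exp(-t / 2.0)
y = y + rng.normal(0.0, 1e-5, size=t.size)

# Step 1: the numerical rank of a Hankel matrix of the signal counts the
# exponential components (singular values above the noise floor).
m = 100
H = np.array([y[i:i + m] for i in range(y.size - m)])
s = np.linalg.svd(H, compute_uv=False)
n_exp = int((s > 1e-2 * s[0]).sum())

# Step 2: linear prediction y[k] = a1*y[k-1] + a2*y[k-2]; the roots of the
# prediction polynomial are exp(-dt/tau_i), giving the time constants.
A = np.column_stack([y[1:-1], y[:-2]])
a = np.linalg.lstsq(A, y[2:], rcond=None)[0]
roots = np.roots([1.0, -a[0], -a[1]]).real
taus = -dt / np.log(roots)
print(n_exp, np.sort(taus))
```
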

  9. Historical Patterns of Change: The Lessons of the 1980s.

    ERIC Educational Resources Information Center

    Geiger, Roger L.

    This paper seeks to assess the current state of academic research in light of long-term trends in the development of science. It presents three perspectives on the growth of scientific research: (1) Derek de Solla Price's (1963) hypothesis that science has exhibited exponential growth, roughly doubling every 15 years since the 17th century; (2)…

  10. Global synchronization of memristive neural networks subject to random disturbances via distributed pinning control.

    PubMed

    Guo, Zhenyuan; Yang, Shaofu; Wang, Jun

    2016-12-01

    This paper presents theoretical results on global exponential synchronization of multiple memristive neural networks in the presence of external noise by means of two types of distributed pinning control. The multiple memristive neural networks are coupled in a general structure via a nonlinear function, which consists of a linear diffusive term and a discontinuous sign term. A pinning impulsive control law is introduced in the coupled system to synchronize all neural networks. Sufficient conditions are derived for ascertaining global exponential synchronization in mean square. In addition, a pinning adaptive control law is developed to achieve global exponential synchronization in mean square. Both pinning control laws utilize only partial state information received from the neighborhood of the controlled neural network. Simulation results are presented to substantiate the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Bonus-Malus System with the Claim Frequency Distribution is Geometric and the Severity Distribution is Truncated Weibull

    NASA Astrophysics Data System (ADS)

    Santi, D. N.; Purnaba, I. G. P.; Mangku, I. W.

    2016-01-01

    A Bonus-Malus system is said to be optimal if it is financially balanced for insurance companies and fair for policyholders. Previous research about Bonus-Malus systems concerned the determination of the risk premium applied to all of the severity guaranteed by the insurance company. In fact, not all of the severity proposed by the policyholder may be covered by the insurance company. When the insurance company sets a maximum bound on the severity incurred, it is necessary to modify the model of the severity distribution into a bounded severity distribution. In this paper, an optimal Bonus-Malus system whose claim frequency component has a geometric distribution and whose severity component has a truncated Weibull distribution is discussed. The number of claims is considered to follow a Poisson distribution, and the expected number λ is exponentially distributed, so the number of claims has a geometric distribution. The severity with a given parameter θ is considered to have a truncated exponential distribution, and θ is modelled using the Levy distribution, so the severity has a truncated Weibull distribution.
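
    The Poisson-exponential mixing argument for the claim frequency can be verified numerically in a few lines (the mean of 2 expected claims is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(5)
trials = 200_000

# Mix: lambda ~ Exponential(mean 2), then N | lambda ~ Poisson(lambda).
mean_lam = 2.0
lam = rng.exponential(mean_lam, size=trials)
claims = rng.poisson(lam)

# The marginal of N is geometric: P(N = n) = p * (1 - p)**n, p = 1/(1 + mean).
p = 1.0 / (1.0 + mean_lam)
emp = np.bincount(claims, minlength=5)[:5] / trials
theo = p * (1 - p) ** np.arange(5)
print(np.round(emp, 3), np.round(theo, 3))
```
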

  12. Exponential model for option prices: Application to the Brazilian market

    NASA Astrophysics Data System (ADS)

    Ramos, Antônio M. T.; Carvalho, J. A.; Vasconcelos, G. L.

    2016-03-01

    In this paper we report an empirical analysis of the Ibovespa index of the São Paulo Stock Exchange and its respective option contracts. We compare the empirical data on the Ibovespa options with two option pricing models, namely the standard Black-Scholes model and an empirical model that assumes that the returns are exponentially distributed. It is found that at times near the option expiration date the exponential model performs better than the Black-Scholes model, in the sense that it fits the empirical data better than does the latter model.
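
    A rough numerical illustration of why the tail assumption matters: the sketch below compares raw expected call payoffs under two return densities with equal variance. It ignores discounting and the risk-neutral drift correction, and all parameters are invented, so it is a qualitative comparison only, not a pricing model.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500_000

S0, K, sigma = 100.0, 140.0, 0.2   # deep out-of-the-money call, T = 1 (assumed)

# Terminal log-returns with the same variance: Gaussian vs Laplace
# (two-sided exponential tails, as in an exponential return model).
gauss = rng.normal(0.0, sigma, size=n)
laplace = rng.laplace(0.0, sigma / np.sqrt(2.0), size=n)

def payoff(returns):
    """Monte Carlo E[(S0*e^x - K)+]; discounting and the risk-neutral
    drift correction are deliberately omitted in this sketch."""
    return float(np.maximum(S0 * np.exp(returns) - K, 0.0).mean())

# The exponential tails put more weight far from the money.
print(round(payoff(gauss), 3), round(payoff(laplace), 3))
```
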

  13. Generalization of the normal-exponential model: exploration of a more accurate parametrisation for the signal distribution on Illumina BeadArrays.

    PubMed

    Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv

    2012-12-11

Illumina BeadArray technology includes non-specific negative control features that allow a precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio, which leads to an important loss of information by generating negative values, a background correction method has been developed that models the observed intensities as the sum of an exponentially distributed signal and normally distributed noise. Nevertheless, Wang and Ye (2012) present a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggest that a gamma distribution would better model the signal density. Hence, the normal-exponential model may not be appropriate for Illumina data, and background corrections derived from it may lead to erroneous estimates. We propose a more flexible model based on a gamma-distributed signal and normally distributed background noise and develop the associated background correction, implemented in the R package NormalGamma. Our model proves markedly more accurate for Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a better fit to the observed intensities. On the other hand, a comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarities, reinforcing the validity of the normal-gamma model. The performance of the background corrections based on the normal-gamma and normal-exponential models is compared on two dilution data sets, through testing procedures that represent various experimental designs. Surprisingly, we observe that implementing a more accurate parametrisation in the model-based background correction does not increase the sensitivity. These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement in terms of bias, but at the cost of a loss in precision. This paper addresses the lack of fit of the usual normal-exponential model by proposing a more flexible parametrisation of the signal distribution, together with the associated background correction. The new model proves considerably more accurate for Illumina microarrays, but the improvement in modelling does not lead to higher sensitivity in differential analysis. Nevertheless, this more realistic model opens the way for future investigations, in particular into the characteristics of pre-processing strategies.

  14. U-shaped, double-tapered, fiber-optic sensor for effective biofilm growth monitoring

    PubMed Central

    Zhong, Nianbing; Zhao, Mingfu; Li, Yishan

    2016-01-01

    To monitor biofilm growth on polydimethylsiloxane in a photobioreactor effectively, the biofilm cells and liquids were separated and measured using a sensor with two U-shaped, double-tapered, fiber-optic probes (Sen. and Ref. probes). The probes’ Au-coated hemispherical tips enabled double-pass evanescent field absorption. The Sen. probe sensed the cells and liquids inside the biofilm. The polyimide–silica hybrid-film-coated Ref. probe separated the liquids from the biofilm cells and analyzed the liquid concentration. The biofilm structure and active biomass were also examined to confirm the effectiveness of the measurement using a simulation model. The sensor was found to effectively respond to the biofilm growth in the adsorption through exponential phases at thicknesses of 0–536 μm. PMID:26977344

  15. A hybrid MD-kMC algorithm for folding proteins in explicit solvent.

    PubMed

    Peter, Emanuel Karl; Shea, Joan-Emma

    2014-04-14

We present a novel hybrid MD-kMC algorithm that is capable of efficiently folding proteins in explicit solvent. We apply this algorithm to the folding of a small protein, Trp-Cage. Different kMC move sets that capture different possible rate-limiting steps are implemented. The first uses secondary structure formation as the relevant rate event (a combination of dihedral rotations and hydrogen-bond formation and breakage). The second uses tertiary structure formation through the formation of contacts via translational moves. Both methods fold the protein, but via different mechanisms and with different folding kinetics. The first method leads to folding via a structured helical state, with kinetics well fit by a single exponential. The second leads to folding via a collapsed loop, with kinetics poorly fit by single or double exponentials. In both cases, folding times are faster than experimentally reported values. The secondary and tertiary move sets are integrated in a third MD-kMC implementation, which leads to folding of the protein via both pathways, with single- and double-exponential fits to the rates, and to folding rates in good agreement with experimental values. The competition between secondary and tertiary structure leads to a longer search for the helix-rich intermediate in the case of the first pathway, and to the emergence of a kinetically trapped, long-lived molten-globule collapsed state in the case of the second pathway. The algorithm presented not only captures experimentally observed folding intermediates and kinetics, but also yields insights into the relative roles of local and global interactions in determining folding mechanisms and rates.

  16. Characteristics of a Linearly Tapered Slot Antenna (LTSA) Conformed Longitudinally Around a Cylinder

    NASA Technical Reports Server (NTRS)

    Jordan, Jennifer L.; Ponchak, George E.; Tavassolian, Negar; Tentzeris, Manos M.

    2007-01-01

The family of tapered slot antennas (TSAs) is suitable for numerous applications. Their ease of fabrication, wide bandwidth, and high gain make them desirable for military and commercial systems. Fabrication on thin, flexible substrates allows a TSA to be conformed over a given body, such as an aircraft wing or a piece of clothing for wearable networks. Previously, a Double Exponentially Tapered Slot Antenna (DETSA) was conformed around an exponential curvature, which showed that the main beam skews towards the direction of curvature. This paper presents a Linearly Tapered Slot Antenna (LTSA) conformed longitudinally around a cylinder. Measured and simulated radiation patterns and the direction of maximum H co-polarization (Hco) as a function of the cylinder radius are presented.

  17. Analysis of two production inventory systems with buffer, retrials and different production rates

    NASA Astrophysics Data System (ADS)

    Jose, K. P.; Nair, Salini S.

    2017-09-01

This paper considers the comparison of two (s, S) production inventory systems with retrials of unsatisfied customers. The time for producing and adding each item to the inventory is exponentially distributed with rate β; however, a higher production rate αβ (with α > 1) is used at the beginning of production. The higher production rate reduces the loss of customers when the inventory level approaches zero. Demand from customers follows a Poisson process, and service times are exponentially distributed. Upon arrival, customers enter a buffer of finite capacity. An arriving customer who finds the buffer full moves to an orbit, from which retrials are made with exponentially distributed inter-retrial times. The two models differ in the capacity of the buffer. The aim is to find the minimum total cost by varying different parameters and to compare the efficiency of the two models; the optimum value of α corresponding to the minimum total cost is a key quantity. The matrix-analytic method is used to obtain an algorithmic solution to the problem, and several numerical and graphical illustrations are provided.

  18. A non-Boltzmannian behavior of the energy distribution for quasi-stationary regimes of the Fermi–Pasta–Ulam β system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leo, Mario, E-mail: mario.leo@le.infn.it; Leo, Rosario Antonio, E-mail: leora@le.infn.it; Tempesta, Piergiulio, E-mail: p.tempesta@fis.ucm.es

    2013-06-15

In a recent paper [M. Leo, R.A. Leo, P. Tempesta, C. Tsallis, Phys. Rev. E 85 (2012) 031149], the existence of quasi-stationary states for the Fermi–Pasta–Ulam β system was shown numerically by analyzing the stability properties of the N/4-mode exact nonlinear solution. Here we study the energy distribution of the modes N/4, N/3 and N/2, when they are unstable, as a function of N and of the initial excitation energy. We observe that the classical Boltzmann weight is replaced by a different weight, expressed by a q-exponential function. Highlights: ► New statistical properties of the Fermi–Pasta–Ulam β system are found. ► The energy distributions of specific observables are studied: a deviation from the standard Boltzmann behavior is found. ► A q-exponential weight should be used instead. ► The classical exponential weight is restored in the large-particle limit (mesoscopic nature of the phenomenon).

  19. Statistical analyses support power law distributions found in neuronal avalanches.

    PubMed

    Klaus, Andreas; Yu, Shan; Plenz, Dietmar

    2011-01-01

    The size distribution of neuronal avalanches in cortical networks has been reported to follow a power law distribution with exponent close to -1.5, which is a reflection of long-range spatial correlations in spontaneous neuronal activity. However, identifying power law scaling in empirical data can be difficult and sometimes controversial. In the present study, we tested the power law hypothesis for neuronal avalanches by using more stringent statistical analyses. In particular, we performed the following steps: (i) analysis of finite-size scaling to identify scale-free dynamics in neuronal avalanches, (ii) model parameter estimation to determine the specific exponent of the power law, and (iii) comparison of the power law to alternative model distributions. Consistent with critical state dynamics, avalanche size distributions exhibited robust scaling behavior in which the maximum avalanche size was limited only by the spatial extent of sampling ("finite size" effect). This scale-free dynamics suggests the power law as a model for the distribution of avalanche sizes. Using both the Kolmogorov-Smirnov statistic and a maximum likelihood approach, we found the slope to be close to -1.5, which is in line with previous reports. Finally, the power law model for neuronal avalanches was compared to the exponential and to various heavy-tail distributions based on the Kolmogorov-Smirnov distance and by using a log-likelihood ratio test. Both the power law distribution without and with exponential cut-off provided significantly better fits to the cluster size distributions in neuronal avalanches than the exponential, the lognormal and the gamma distribution. In summary, our findings strongly support the power law scaling in neuronal avalanches, providing further evidence for critical state dynamics in superficial layers of cortex.
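The exponent-estimation step described here can be illustrated with the standard continuous maximum-likelihood estimator for a power-law tail. This is a generic sketch on synthetic data under that estimator, not the authors' avalanche pipeline; the sample size and seed are arbitrary.

```python
import math
import random

def sample_power_law(n, alpha, xmin, rng):
    """Draw n samples from a continuous power law p(x) ~ x^(-alpha), x >= xmin,
    via inverse-transform sampling."""
    return [xmin * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0)) for _ in range(n)]

def mle_exponent(xs, xmin):
    """Continuous MLE for the power-law exponent:
    alpha_hat = 1 + n / sum(ln(x / xmin))."""
    n = len(xs)
    return 1.0 + n / sum(math.log(x / xmin) for x in xs)

rng = random.Random(42)
# alpha = 1.5 mirrors the avalanche size exponent reported in the abstract
xs = sample_power_law(100_000, alpha=1.5, xmin=1.0, rng=rng)
print(mle_exponent(xs, 1.0))
```

On synthetic data the estimator recovers the generating exponent to within its standard error of roughly (alpha - 1)/sqrt(n).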

  20. Spatial analysis of soil organic carbon in Zhifanggou catchment of the Loess Plateau.

    PubMed

    Li, Mingming; Zhang, Xingchang; Zhen, Qing; Han, Fengpeng

    2013-01-01

Soil organic carbon (SOC) reflects soil quality and plays a critical role in soil protection, food safety, and global climate change. This study involved grid sampling at different depths (6 layers) between 0 and 100 cm in a catchment. A total of 1282 soil samples were collected from 215 plots over 8.27 km². A combination of conventional analytical methods and geostatistical methods was used to analyze the data for spatial variability and soil carbon content patterns. The mean SOC content of the 1282 samples from the study field was 3.08 g·kg⁻¹. The SOC content of each layer decreased with increasing soil depth following a power-function relationship. The SOC content of each layer was moderately variable and followed a lognormal distribution. The semi-variograms of the SOC contents of the six layers were fit with exponential, spherical, exponential, Gaussian, exponential, and exponential models, respectively. A moderate spatial dependence, resulting from both stochastic and structural factors, was observed in the 0-10 and 10-20 cm layers, whereas the spatial distribution of SOC content in the four layers between 20 and 100 cm was mainly controlled by structural factors. Spatial correlations within each layer were observed between 234 and 562 m. Ordinary Kriging interpolation was used to visualize the spatial distribution of SOC in the catchment directly. The variability in spatial distribution was related to topography, land use type, and human activity, and the vertical SOC content decreased with depth. Our results suggest that ordinary Kriging interpolation can directly reveal the spatial distribution of SOC and that the sampling distance in this study is sufficient for interpolation and plotting. More research is needed, however, to clarify the spatial variability at larger scales and to better understand the factors controlling the spatial variability of soil carbon in the Loess Plateau region.
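As a rough illustration of the exponential semivariogram model fitted to several of the layers, the sketch below evaluates the standard exponential form. The nugget, sill, and range parameters are purely illustrative, not the study's fitted values.

```python
import math

def exponential_semivariogram(h, nugget, sill, a):
    """Exponential semivariogram model:
    gamma(h) = nugget + (sill - nugget) * (1 - exp(-h / a)),
    where h is the lag distance and a controls the effective range
    (about 3a for the exponential model)."""
    return nugget + (sill - nugget) * (1.0 - math.exp(-h / a))

# Illustrative parameters only (not the study's fitted values)
nugget, sill, a = 0.1, 1.0, 150.0
for h in (0.0, 100.0, 450.0):
    print(h, exponential_semivariogram(h, nugget, sill, a))
```

The model rises from the nugget at zero lag and approaches the sill asymptotically, which is what makes its "effective range" a convention (the lag at which ~95% of the sill is reached).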

  1. Non-extensive quantum statistics with particle-hole symmetry

    NASA Astrophysics Data System (ADS)

    Biró, T. S.; Shen, K. M.; Zhang, B. W.

    2015-06-01

Based on the Tsallis entropy (1988) and the corresponding deformed exponential function, generalized distribution functions for bosons and fermions have been in use for some time (Teweldeberhan et al., 2003; Silva et al., 2010). However, aiming at a non-extensive quantum statistics, further requirements arise from the symmetric handling of particles and holes (excitations above and below the Fermi level). Naive replacements of the exponential function or "cut and paste" solutions fail to satisfy this symmetry and to be smooth at the Fermi level at the same time. We solve this problem by a general ansatz dividing the deformed exponential into odd and even parts, and demonstrate how earlier suggestions, like the κ- and q-exponential, behave in this respect.

  2. Wealth distribution, Pareto law, and stretched exponential decay of money: Computer simulations analysis of agent-based models

    NASA Astrophysics Data System (ADS)

    Aydiner, Ekrem; Cherstvy, Andrey G.; Metzler, Ralf

    2018-01-01

We study by Monte Carlo simulations a kinetic exchange trading model for both fixed and distributed saving propensities of the agents and rationalize the person and wealth distributions. We show that the newly introduced wealth distribution - which may be more amenable in certain situations - features a different power-law exponent, particularly for distributed saving propensities of the agents. For open agent-based systems, we analyze the person and wealth distributions and find that the presence of trap agents alters their amplitude, leaving the scaling exponents nearly unaffected. For an open system, we show that the total wealth - for different trap agent densities and saving propensities of the agents - decreases in time according to the classical Kohlrausch-Williams-Watts stretched exponential law. Interestingly, this decay does not depend on the trap agent density, but rather on the saving propensities. The relaxation of the system is found to differ between the fixed and distributed saving schemes.
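A minimal sketch of one trade step in a kinetic exchange model with a fixed saving propensity λ, in the generic form such models take; the paper's exact rules, trap agents, and parameter values are not reproduced here.

```python
import random

def trade_step(wealth, lam, rng):
    """One kinetic-exchange trade: two randomly chosen agents keep a
    fraction lam of their own wealth, pool the rest, and redistribute
    the pool with a random split eps. The pair's total wealth (and hence
    the system total) is conserved."""
    i, j = rng.sample(range(len(wealth)), 2)
    eps = rng.random()
    pool = (1.0 - lam) * (wealth[i] + wealth[j])
    wealth[i] = lam * wealth[i] + eps * pool
    wealth[j] = lam * wealth[j] + (1.0 - eps) * pool

rng = random.Random(0)
n = 1000
wealth = [1.0] * n              # everyone starts with unit wealth
for _ in range(100_000):
    trade_step(wealth, lam=0.5, rng=rng)
print(sum(wealth))               # total wealth is conserved by construction
```

Conservation of total wealth in the closed version is the baseline against which the paper's open systems (with trap agents draining wealth) show stretched-exponential decay.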

  3. Crack problem in superconducting cylinder with exponential distribution of critical-current density

    NASA Astrophysics Data System (ADS)

    Zhao, Yufeng; Xu, Chi; Shi, Liang

    2018-04-01

The general problem of a center crack in a long cylindrical superconductor with an inhomogeneous critical-current distribution is studied based on the extended Bean model for the zero-field-cooling (ZFC) and field-cooling (FC) magnetization processes, in which an inhomogeneity parameter η is introduced to characterize the critical-current density distribution in the inhomogeneous superconductor. The effect of the parameter η on both the magnetic field distribution and the variation of the normalized stress intensity factors is also obtained based on the plane-strain approach and J-integral theory. The numerical results indicate that an exponential distribution of the critical-current density leads to a larger trapped field inside the inhomogeneous superconductor and causes the center of the cylinder to fracture more easily. In addition, a comparison of the shapes of the magnetization loops for homogeneous and inhomogeneous critical-current distributions shows that the nonlinear field distribution is unique to the Bean model.

  4. How extreme are extremes?

    NASA Astrophysics Data System (ADS)

    Cucchi, Marco; Petitta, Marcello; Calmanti, Sandro

    2016-04-01

High temperatures affect the energy balance of any living organism and the operational capabilities of critical infrastructures. Heat-wave indicators have mainly been developed with the aim of capturing the potential impacts on specific sectors (agriculture, health, wildfires, transport, power generation and distribution). However, the ability to capture the occurrence of extreme temperature events is an essential property of a multi-hazard extreme climate indicator. The aim of this study is to develop a standardized heat-wave indicator that can be combined with other indices to describe multiple hazards in a single indicator. The proposed approach yields a quantified indicator of the strength of a given extreme. Extremes are usually distributed according to exponential or double-exponential functions, and it is difficult to assess quickly how strong an extreme event was from its magnitude alone. The proposed approach simplifies the quantitative and qualitative communication of extreme magnitudes.

  5. Survival Bayesian Estimation of Exponential-Gamma Under Linex Loss Function

    NASA Astrophysics Data System (ADS)

    Rizki, S. W.; Mara, M. N.; Sulistianingsih, E.

    2017-06-01

This paper presents a study of censored survival data for cancer patients after treatment, using Bayesian estimation under the Linex loss function for a survival model whose lifetimes are assumed to be exponentially distributed. Combining a gamma prior with the likelihood function yields a gamma posterior distribution. The posterior distribution is used to find the estimator λ̂_BL via the Linex approximation. From λ̂_BL, the estimators of the hazard function ĥ_BL and the survival function Ŝ_BL are obtained. Finally, we compare maximum likelihood estimation (MLE) with the Bayesian Linex approach, selecting the better method for this data set as the one with the smaller MSE. The MSEs of the hazard and survival estimates under MLE are 2.91728E-07 and 0.000309004, while under Bayesian Linex they are 2.8727E-07 and 0.000304131, respectively. We conclude that the Bayesian Linex estimator is better than the MLE.
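The exponential-gamma conjugacy underlying the gamma posterior can be written in a few lines. The prior hyperparameters and data below are hypothetical, chosen only to show the update, not values from the paper.

```python
def exponential_gamma_posterior(a, b, data):
    """Conjugate update: with prior lambda ~ Gamma(shape a, rate b) and
    i.i.d. exponential(lambda) observations, the posterior is
    Gamma(a + n, b + sum(data))."""
    n = len(data)
    return a + n, b + sum(data)

# Illustrative prior and survival times (not from the paper)
a_post, b_post = exponential_gamma_posterior(2.0, 1.0, [0.5, 1.2, 0.8])
posterior_mean = a_post / b_post   # mean of Gamma(shape, rate)
print(a_post, b_post, posterior_mean)
```

This closed-form posterior is what makes the subsequent Linex-loss estimator tractable: it is a function of the gamma posterior's parameters alone.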

  6. Effect of the state of internal boundaries on granite fracture nature under quasi-static compression

    NASA Astrophysics Data System (ADS)

    Damaskinskaya, E. E.; Panteleev, I. A.; Kadomtsev, A. G.; Naimark, O. B.

    2017-05-01

Based on an analysis of the spatial distribution of the hypocenters of acoustic emission signal sources and of the energy distributions of acoustic emission signals, the effect of the liquid phase and of a weak electric field on the spatiotemporal nature of granite sample fracture is studied. Experiments on uniaxial compression of granite samples of natural moisture showed that the damage accumulation process is two-stage: disperse accumulation of damage is followed by localized accumulation in the region of the forming macrofracture nucleus. In the energy distributions of acoustic emission signals, this transition is accompanied by a change in the distribution shape from exponential to power-law. Water saturation of the granite qualitatively changes the nature of damage accumulation: the process remains delocalized until macrofracture, with an exponential energy distribution of acoustic emission signals. Exposure to a weak electric field results in a selective change in the nature of damage accumulation in the sample volume.

  7. Turbulence hierarchy in a random fibre laser

    PubMed Central

    González, Iván R. Roa; Lima, Bismarck C.; Pincheira, Pablo I. R.; Brum, Arthur A.; Macêdo, Antônio M. S.; Vasconcelos, Giovani L.; de S. Menezes, Leonardo; Raposo, Ernesto P.; Gomes, Anderson S. L.; Kashyap, Raman

    2017-01-01

    Turbulence is a challenging feature common to a wide range of complex phenomena. Random fibre lasers are a special class of lasers in which the feedback arises from multiple scattering in a one-dimensional disordered cavity-less medium. Here we report on statistical signatures of turbulence in the distribution of intensity fluctuations in a continuous-wave-pumped erbium-based random fibre laser, with random Bragg grating scatterers. The distribution of intensity fluctuations in an extensive data set exhibits three qualitatively distinct behaviours: a Gaussian regime below threshold, a mixture of two distributions with exponentially decaying tails near the threshold and a mixture of distributions with stretched-exponential tails above threshold. All distributions are well described by a hierarchical stochastic model that incorporates Kolmogorov’s theory of turbulence, which includes energy cascade and the intermittence phenomenon. Our findings have implications for explaining the remarkably challenging turbulent behaviour in photonics, using a random fibre laser as the experimental platform. PMID:28561064

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eliazar, Iddo, E-mail: eliazar@post.tau.ac.il

Rank distributions are collections of positive sizes ordered either increasingly or decreasingly. Many decreasing rank distributions, formed by the collective collaboration of human actions, follow an inverse power-law relation between ranks and sizes. This remarkable empirical fact is termed Zipf’s law, and one of its quintessential manifestations is the demography of human settlements — which exhibits a harmonic relation between ranks and sizes. In this paper we present a comprehensive statistical-physics analysis of rank distributions, establish that power-law and exponential rank distributions stand out as optimal in various entropy-based senses, and unveil the special role of the harmonic relation between ranks and sizes. Our results extend the contemporary entropy-maximization view of Zipf’s law to a broader, panoramic, Gibbsian perspective of increasing and decreasing power-law and exponential rank distributions — of which Zipf’s law is one out of four pillars.

  9. A coupled cluster theory with iterative inclusion of triple excitations and associated equation of motion formulation for excitation energy and ionization potential

    NASA Astrophysics Data System (ADS)

    Maitra, Rahul; Akinaga, Yoshinobu; Nakajima, Takahito

    2017-08-01

A single-reference coupled cluster theory that is capable of including the effect of connected triple excitations has been developed and implemented. This is achieved by regrouping the terms appearing in perturbation theory and parametrizing them through two different sets of exponential operators: one of the exponentials, involving general substitution operators, annihilates the ground state but has a non-vanishing effect when it acts on an excited determinant, while the other is the regular single and double excitation operator of conventional coupled cluster theory, which acts on the Hartree-Fock ground state. The two sets of operators are solved as coupled non-linear equations in an iterative manner without a significant increase in computational cost over conventional coupled cluster theory with singles and doubles excitations. A number of physically motivated and computationally advantageous sufficiency conditions are invoked to arrive at the working equations, which have been applied to determine the ground state energies of a number of small prototypical systems having weak multi-reference character. With the knowledge of the correlated ground state, we have reconstructed the triple excitation operator and have performed equation-of-motion coupled cluster with singles, doubles, and triples to obtain the ionization potentials and excitation energies of these molecules as well. Our results suggest that this is quite a reasonable scheme to capture the effect of connected triple excitations as long as the ground state remains weakly multi-reference.

  10. Bridging the Gap between Curriculum Planning Policies and Pre-Service Teachers' Needs

    ERIC Educational Resources Information Center

    Castro-Garces, Angela Yicely; Arboleda, Argemiro Arboleda

    2017-01-01

    The challenge and satisfaction of being a teacher is doubled when one has the precious task of being a teacher trainer, as our practices replicate exponentially, touching the lives of people we do not even get to meet. Accordingly, this article presents the analysis of a process that brought tensions to a teacher training program because of the…

  11. The Exponential Growth of Mathematics and Technology at the University of Portsmouth

    ERIC Educational Resources Information Center

    McCabe, Michael

    2009-01-01

The number of students studying university mathematics in the UK has been increasing gradually and linearly since 2002. At the University of Portsmouth, the number of students studying mathematics doubled from 30 to 60 between 2002 and 2007, then increased by 240% in just one year to over 140 in 2008. This article explains how learning technology has…

  12. Inferring metabolic networks using the Bayesian adaptive graphical lasso with informative priors.

    PubMed

    Peterson, Christine; Vannucci, Marina; Karakas, Cemal; Choi, William; Ma, Lihua; Maletić-Savatić, Mirjana

    2013-10-01

    Metabolic processes are essential for cellular function and survival. We are interested in inferring a metabolic network in activated microglia, a major neuroimmune cell in the brain responsible for the neuroinflammation associated with neurological diseases, based on a set of quantified metabolites. To achieve this, we apply the Bayesian adaptive graphical lasso with informative priors that incorporate known relationships between covariates. To encourage sparsity, the Bayesian graphical lasso places double exponential priors on the off-diagonal entries of the precision matrix. The Bayesian adaptive graphical lasso allows each double exponential prior to have a unique shrinkage parameter. These shrinkage parameters share a common gamma hyperprior. We extend this model to create an informative prior structure by formulating tailored hyperpriors on the shrinkage parameters. By choosing parameter values for each hyperprior that shift probability mass toward zero for nodes that are close together in a reference network, we encourage edges between covariates with known relationships. This approach can improve the reliability of network inference when the sample size is small relative to the number of parameters to be estimated. When applied to the data on activated microglia, the inferred network includes both known relationships and associations of potential interest for further investigation.
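The double exponential (Laplace) prior placed on the off-diagonal precision entries has a simple closed-form density; a minimal sketch follows, with illustrative shrinkage parameters rather than the hyperprior values used in the paper.

```python
import math

def laplace_logpdf(x, shrink):
    """Log-density of the double exponential (Laplace) prior
    p(x) = (shrink / 2) * exp(-shrink * |x|), the prior the Bayesian
    graphical lasso places on off-diagonal precision-matrix entries.
    Larger 'shrink' concentrates more mass near zero."""
    return math.log(shrink / 2.0) - shrink * abs(x)

# A larger shrinkage parameter raises the density at zero,
# encouraging sparser (more nearly zero) off-diagonal entries
for lam in (0.5, 2.0):
    print(lam, math.exp(laplace_logpdf(0.0, lam)))
```

In the adaptive variant described here, each entry gets its own `shrink` drawn from a shared gamma hyperprior, so the degree of shrinkage can vary edge by edge.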

  13. Inferring metabolic networks using the Bayesian adaptive graphical lasso with informative priors

    PubMed Central

    PETERSON, CHRISTINE; VANNUCCI, MARINA; KARAKAS, CEMAL; CHOI, WILLIAM; MA, LIHUA; MALETIĆ-SAVATIĆ, MIRJANA

    2014-01-01

    Metabolic processes are essential for cellular function and survival. We are interested in inferring a metabolic network in activated microglia, a major neuroimmune cell in the brain responsible for the neuroinflammation associated with neurological diseases, based on a set of quantified metabolites. To achieve this, we apply the Bayesian adaptive graphical lasso with informative priors that incorporate known relationships between covariates. To encourage sparsity, the Bayesian graphical lasso places double exponential priors on the off-diagonal entries of the precision matrix. The Bayesian adaptive graphical lasso allows each double exponential prior to have a unique shrinkage parameter. These shrinkage parameters share a common gamma hyperprior. We extend this model to create an informative prior structure by formulating tailored hyperpriors on the shrinkage parameters. By choosing parameter values for each hyperprior that shift probability mass toward zero for nodes that are close together in a reference network, we encourage edges between covariates with known relationships. This approach can improve the reliability of network inference when the sample size is small relative to the number of parameters to be estimated. When applied to the data on activated microglia, the inferred network includes both known relationships and associations of potential interest for further investigation. PMID:24533172

  14. DOUBLE-EXPONENTIAL FITTING FUNCTION FOR EVALUATION OF COSMIC-RAY-INDUCED NEUTRON FLUENCE RATE IN ARBITRARY LOCATIONS.

    PubMed

    Li, Huailiang; Yang, Yigang; Wang, Qibiao; Tuo, Xianguo; Julian Henderson, Mark; Courtois, Jérémie

    2017-12-01

The fluence rate of cosmic-ray-induced neutrons (CRINs) varies with many environmental factors. While many current simulation and experimental studies have focused mainly on the altitude variation, the way the CRINs vary with geomagnetic cutoff rigidity (which is related to latitude and longitude) has not been well characterized. In this article, a double-exponential fitting function, F = (A1·e^(−A2·CR) + A3)·e^(B1·Al), is proposed to evaluate the CRINs' fluence rate as it varies with geomagnetic cutoff rigidity (CR) and altitude (Al). The fit achieves R² values up to 0.9954, and the CRINs' fluence rate at an arbitrary location (latitude, longitude and altitude) can easily be evaluated with the proposed function. Field measurements of the CRINs' fluence rate and H*(10) rate at Mt. Emei and Mt. Bowa were carried out using FHT-762 and LB 6411 neutron probes, respectively, and the evaluation shows that the fitting function agrees well with the measurements.
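The fitted form can be transcribed directly as code. The coefficients below are placeholders, since the abstract does not report the fitted values of A1, A2, A3 and B1.

```python
import math

def crins_fluence_rate(cr, al, a1, a2, a3, b1):
    """Double-exponential fitting form from the abstract:
    F = (A1 * exp(-A2 * CR) + A3) * exp(B1 * Al),
    where CR is the geomagnetic cutoff rigidity and Al the altitude.
    The coefficients must be fitted to measurements; the values used
    below are purely illustrative, not the paper's fitted values."""
    return (a1 * math.exp(-a2 * cr) + a3) * math.exp(b1 * al)

# Placeholder coefficients (NOT the paper's fitted values)
A1, A2, A3, B1 = 0.01, 0.2, 0.005, 0.0007
print(crins_fluence_rate(5.0, 1000.0, A1, A2, A3, B1))
```

With positive coefficients the form behaves as expected physically: the fluence rate decreases with increasing cutoff rigidity and increases exponentially with altitude.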

  15. An Optimization of Inventory Demand Forecasting in University Healthcare Centre

    NASA Astrophysics Data System (ADS)

    Bon, A. T.; Ng, T. K.

    2017-01-01

The healthcare industry has become an important field nowadays, as it concerns people's health. Forecasting demand for health services is thus an important step in managerial decision making for all healthcare organizations. Hence, a case study was conducted at a University Health Centre, collecting historical demand data for Panadol 650 mg over 68 months, from January 2009 until August 2014. The aim of the research is to optimize overall inventory demand through forecasting techniques. Quantitative (time series) forecasting models were used to forecast future data as a function of past data. The data pattern must be identified before applying the forecasting techniques; here the data exhibit a trend, and ten forecasting techniques were then applied using the Risk Simulator software. Finally, the best forecasting technique is identified as the one with the smallest forecasting error. The ten techniques comprise single moving average, single exponential smoothing, double moving average, double exponential smoothing, regression, Holt-Winters additive, seasonal additive, Holt-Winters multiplicative, seasonal multiplicative, and Autoregressive Integrated Moving Average (ARIMA). According to the forecasting accuracy measures, the best technique is regression analysis.
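Of the ten techniques listed, double exponential (Holt's) smoothing directly targets the trend pattern identified in the data. A minimal sketch with illustrative smoothing constants and a synthetic demand series, not the study's settings or data:

```python
def double_exponential_smoothing(series, alpha, beta):
    """Holt's double exponential smoothing for trended data.
    Returns one-step-ahead forecasts; alpha smooths the level
    and beta smooths the trend."""
    level, trend = series[0], series[1] - series[0]
    forecasts = []
    for y in series[1:]:
        forecasts.append(level + trend)          # forecast made before seeing y
        new_level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    forecasts.append(level + trend)              # forecast for the next unseen period
    return forecasts

# On a perfectly linear series the forecasts reproduce the trend exactly
demand = [100 + 5 * t for t in range(10)]
fc = double_exponential_smoothing(demand, alpha=0.4, beta=0.3)
print(fc)
```

In practice the forecasts are compared against actuals with an error measure such as MAPE or RMSE, which is how the study ranks its ten candidate techniques.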

  16. Global exponential stability analysis on impulsive BAM neural networks with distributed delays

    NASA Astrophysics Data System (ADS)

    Li, Yao-Tang; Yang, Chang-Bo

    2006-12-01

    Using M-matrix theory and the topological degree tool, sufficient conditions are obtained for the existence, uniqueness and global exponential stability of the equilibrium point of bidirectional associative memory (BAM) neural networks with distributed delays and subjected to impulsive state displacements at fixed instants of time, by constructing a suitable Lyapunov functional. The results remove the usual assumptions of boundedness, monotonicity, and differentiability of the activation functions. It is shown that in some cases the stability criteria can be easily checked. Finally, an illustrative example is given to show the effectiveness of the presented criteria.

  17. Existence and global exponential stability of periodic solution to BAM neural networks with periodic coefficients and continuously distributed delays

    NASA Astrophysics Data System (ADS)

    Zhou, T.; Chen, A.; Zhou, Y.

    2005-08-01

    By using the continuation theorem of coincidence degree theory and Liapunov function, we obtain some sufficient criteria to ensure the existence and global exponential stability of periodic solution to the bidirectional associative memory (BAM) neural networks with periodic coefficients and continuously distributed delays. These results improve and generalize the works of papers [J. Cao, L. Wang, Phys. Rev. E 61 (2000) 1825] and [Z. Liu, A. Chen, J. Cao, L. Huang, IEEE Trans. Circuits Systems I 50 (2003) 1162]. An example is given to illustrate that the criteria are feasible.

  18. On the minimum of independent geometrically distributed random variables

    NASA Technical Reports Server (NTRS)

    Ciardo, Gianfranco; Leemis, Lawrence M.; Nicol, David

    1994-01-01

    The expectations E(X_1), E(Z_1), and E(Y_1) of the minimum of n independent geometric, modified geometric, or exponential random variables with matching expectations differ. We show how this is accounted for by stochastic variability and how E(X_1)/E(Y_1) equals the expected number of ties at the minimum for the geometric random variables. We then introduce the 'shifted geometric distribution' and show that there is a unique value of the shift for which the individual shifted geometric and exponential random variables match expectations both individually and in the minimum.
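    The identity between E(X_1)/E(Y_1) and the expected number of ties at the minimum can be checked numerically; the simulation below uses an arbitrary choice of n = 2 and p = 0.5, not values from the paper.

```python
import numpy as np

def min_tie_stats(n, p, trials=200_000, seed=1):
    """Minimum of n iid geometric(p) variables (support 1, 2, ...):
    returns (mean of the minimum, mean number of ties at the minimum)."""
    rng = np.random.default_rng(seed)
    X = rng.geometric(p, size=(trials, n))
    m = X.min(axis=1)
    ties = (X == m[:, None]).sum(axis=1)   # how many coordinates hit the minimum
    return m.mean(), ties.mean()

# Exponentials matching the geometric mean 1/p give E(Y_1) = 1/(n*p)
# for the minimum of n of them.
mean_min, mean_ties = min_tie_stats(2, 0.5)
ratio = mean_min / (1.0 / (2 * 0.5))      # estimate of E(X_1)/E(Y_1)
```

For n = 2, p = 0.5 both the ratio and the expected tie count equal 4/3, which the simulation reproduces.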

  19. Estimation for coefficient of variation of an extension of the exponential distribution under type-II censoring scheme

    NASA Astrophysics Data System (ADS)

    Bakoban, Rana A.

    2017-08-01

    The coefficient of variation (CV) has several applications in applied statistics. In this paper, we adopt Bayesian and non-Bayesian approaches to estimate the CV under type-II censored data from the extension of the exponential distribution (EED). Point and interval estimates of the CV are obtained using both maximum likelihood and parametric bootstrap techniques. A Bayesian approach using the MCMC method is also presented. A real data set is presented and analyzed, and the results are used to assess the theoretical findings.
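    A bootstrap interval for the CV is simple to sketch. This is not the paper's parametric bootstrap for the EED (whose form the abstract does not give); it is a generic nonparametric percentile interval on an uncensored sample, shown here with exponential data for which the true CV is 1.

```python
import numpy as np

def bootstrap_cv_ci(data, n_boot=2000, alpha=0.05, seed=2):
    """Percentile-bootstrap confidence interval for the coefficient
    of variation CV = std/mean, by resampling the data with replacement."""
    rng = np.random.default_rng(seed)
    n = len(data)
    cvs = np.empty(n_boot)
    for b in range(n_boot):
        s = rng.choice(data, size=n, replace=True)
        cvs[b] = s.std(ddof=1) / s.mean()
    return np.quantile(cvs, [alpha / 2, 1 - alpha / 2])

data = np.random.default_rng(7).exponential(scale=1.0, size=500)
lo, hi = bootstrap_cv_ci(data)    # exponential data: true CV = 1
```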

  20. On new non-modal hydrodynamic stability modes and resulting non-exponential growth rates - a Lie symmetry approach

    NASA Astrophysics Data System (ADS)

    Oberlack, Martin; Nold, Andreas; Sanjon, Cedric Wilfried; Wang, Yongqi; Hau, Jan

    2016-11-01

    Classical hydrodynamic stability theory for laminar shear flows, whether considering long-term stability or transient growth, is based on the normal-mode ansatz, or, in other words, on an exponential function in space (stream-wise direction) and time. Recently, it became clear that the normal-mode ansatz and the resulting Orr-Sommerfeld equation are based on essentially three fundamental symmetries of the linearized Euler and Navier-Stokes equations: translation in space and time and scaling of the dependent variable. The Kelvin mode of linear shear flow seemed to be an exception in this context, as it admits a fourth symmetry resulting in the classical Kelvin mode, which is rather different from a normal mode. However, very recently it was discovered that most of the classical canonical shear flows, such as linear shear, Couette, plane and round Poiseuille, Taylor-Couette, the Lamb-Oseen vortex or the asymptotic suction boundary layer, admit more symmetries. This, in turn, led to new problem-specific non-modal ansatz functions. In contrast to the exponential growth rate in time of the modal ansatz, the new non-modal ansatz functions usually lead to an algebraic growth or decay rate, while for the asymptotic suction boundary layer a double-exponential growth or decay is observed.

  1. Numerically stable formulas for a particle-based explicit exponential integrator

    NASA Astrophysics Data System (ADS)

    Nadukandi, Prashanth

    2015-05-01

    Numerically stable formulas are presented for the closed-form analytical solution of the X-IVAS scheme in 3D. This scheme is a state-of-the-art particle-based explicit exponential integrator developed for the particle finite element method. Algebraically, this scheme involves two steps: (1) the solution of tangent curves for piecewise linear vector fields defined on simplicial meshes and (2) the solution of line integrals of piecewise linear vector-valued functions along these tangent curves. Hence, the stable formulas presented here have general applicability, e.g. exact integration of trajectories in particle-based (Lagrangian-type) methods, flow visualization and computer graphics. The Newton form of the polynomial interpolation definition is used to express exponential functions of matrices which appear in the analytical solution of the X-IVAS scheme. The divided difference coefficients in these expressions are defined in a piecewise manner, i.e. in a prescribed neighbourhood of removable singularities their series approximations are computed. An optimal series approximation of divided differences is presented which plays a critical role in this methodology. At least ten significant decimal digits in the formula computations are guaranteed to be exact using double-precision floating-point arithmetic. The worst case scenarios occur in the neighbourhood of removable singularities found in fourth-order divided differences of the exponential function.
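    The removable-singularity issue can be seen already in the first divided difference of exp, (e^b - e^a)/(b - a): the naive formula loses all accuracy as b approaches a. The sketch below is a generic illustration of the piecewise strategy the abstract describes (series near the singularity, direct formula elsewhere), not code from the X-IVAS implementation.

```python
import math

def phi(x):
    """(e^x - 1)/x computed stably: truncated Taylor series near the
    removable singularity at x = 0, expm1 elsewhere."""
    if abs(x) < 1e-8:
        return 1.0 + x / 2.0 + x * x / 6.0
    return math.expm1(x) / x

def dd_exp(a, b):
    """First divided difference of exp over [a, b], i.e.
    (e^b - e^a)/(b - a), stable even for b equal or close to a."""
    return math.exp(a) * phi(b - a)
```

For |b - a| around 1e-12 the naive difference quotient returns garbage because e^b and e^a round to the same float, while the piecewise formula stays accurate to near machine precision.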

  2. Deuteron spin-lattice relaxation in the presence of an activation energy distribution: application to methanols in zeolite NaX.

    PubMed

    Stoch, G; Ylinen, E E; Birczynski, A; Lalowicz, Z T; Góra-Marek, K; Punkkinen, M

    2013-02-01

    A new method is introduced for analyzing deuteron spin-lattice relaxation in molecular systems with a broad distribution of activation energies and correlation times. In such samples the magnetization recovery is strongly non-exponential but can be fitted quite accurately by three exponentials. The considered system may consist of molecular groups with different mobility. For each group a Gaussian distribution of the activation energy is introduced. By assuming three parameters for every subsystem (the mean activation energy E0, the distribution width σ, and the pre-exponential factor τ0 of the Arrhenius equation defining the correlation time), the relaxation rate is calculated for every part of the distribution. Experiment-based limiting values allow the grouping of the rates into three classes. For each class the relaxation rate and weight are calculated and compared with experiment. The parameters E0, σ and τ0 are determined iteratively by repeating the whole cycle many times. The temperature dependence of the deuteron relaxation was observed between 20 K and 170 K in three samples containing CD3OH (200% and 100% loading) and CD3OD (200%) in NaX zeolite, and was analyzed by the described method. The obtained parameters, equal for all three samples, characterize the methyl and hydroxyl mobilities of the methanol molecules at two different locations. Copyright © 2012 Elsevier Inc. All rights reserved.
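    The qualitative effect of an activation-energy distribution is easy to reproduce: averaging exponential recoveries over Arrhenius correlation times drawn from a Gaussian energy distribution gives a non-exponential curve. The sketch below uses an oversimplified rate R = 1/τ and invented parameter values; the paper's actual rates come from deuteron spectral densities.

```python
import numpy as np

def averaged_recovery(t, E0=0.10, sigma=0.02, tau0=1e-3, kT=0.01,
                      n=50_000, seed=3):
    """Average exp(-R*t) over rates R = 1/tau with Arrhenius
    tau = tau0*exp(E/kT) and Gaussian E ~ N(E0, sigma).
    Returns (averaged recovery at time t, mean rate)."""
    rng = np.random.default_rng(seed)
    E = rng.normal(E0, sigma, n)
    R = 1.0 / (tau0 * np.exp(E / kT))
    return np.exp(-R * t).mean(), R.mean()
```

By Jensen's inequality the averaged curve always lies above the single exponential with the mean rate, which is the signature of the non-exponential relaxation the paper fits with three exponentials.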

  3. Apparent power-law distributions in animal movements can arise from intraspecific interactions

    PubMed Central

    Breed, Greg A.; Severns, Paul M.; Edwards, Andrew M.

    2015-01-01

    Lévy flights have gained prominence for analysis of animal movement. In a Lévy flight, step-lengths are drawn from a heavy-tailed distribution such as a power law (PL), and a large number of empirical demonstrations have been published. Others, however, have suggested that animal movement is ill fit by PL distributions or contend a state-switching process better explains apparent Lévy flight movement patterns. We used a mix of direct behavioural observations and GPS tracking to understand step-length patterns in females of two related butterflies. We initially found movement in one species (Euphydryas editha taylori) was best fit by a bounded PL, evidence of a Lévy flight, while the other (Euphydryas phaeton) was best fit by an exponential distribution. Subsequent analyses introduced additional candidate models and used behavioural observations to sort steps based on intraspecific interactions (interactions were rare in E. phaeton but common in E. e. taylori). These analyses showed a mixed-exponential is favoured over the bounded PL for E. e. taylori and that when step-lengths were sorted into states based on the influence of harassing conspecific males, both states were best fit by simple exponential distributions. The direct behavioural observations allowed us to infer the underlying behavioural mechanism is a state-switching process driven by intraspecific interactions rather than a Lévy flight. PMID:25519992
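    The model comparison at the heart of such studies can be sketched with closed-form maximum-likelihood fits; the code below uses synthetic data, not the butterfly tracks, and fits an exponential and a pure power law, which would normally then be compared via AIC = 2k - 2·log-likelihood.

```python
import numpy as np

def fit_exponential(x):
    """MLE for exponential step lengths; returns (rate, log-likelihood)."""
    lam = 1.0 / x.mean()
    return lam, len(x) * np.log(lam) - lam * x.sum()

def fit_power_law(x, xmin):
    """MLE for p(x) = ((mu-1)/xmin)*(x/xmin)^(-mu) on [xmin, inf)
    (the Hill estimator); returns (mu, log-likelihood)."""
    s = np.log(x / xmin).sum()
    mu = 1.0 + len(x) / s
    return mu, len(x) * np.log((mu - 1) / xmin) - mu * s

rng = np.random.default_rng(8)
lam_hat, _ = fit_exponential(rng.exponential(scale=2.0, size=5000))
pareto = 1.0 * rng.uniform(size=5000) ** (-1.0 / 1.5)   # power law, mu = 2.5
mu_hat, _ = fit_power_law(pareto, 1.0)
```

Both estimators recover their generating parameters; on real step-length data the log-likelihoods (penalized for parameter count) decide between the exponential and power-law hypotheses, as in the study above.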

  4. Performance of mixed RF/FSO systems in exponentiated Weibull distributed channels

    NASA Astrophysics Data System (ADS)

    Zhao, Jing; Zhao, Shang-Hong; Zhao, Wei-Hu; Liu, Yun; Li, Xuan

    2017-12-01

    This paper presents the performance of an asymmetric mixed radio frequency (RF)/free-space optical (FSO) system with an amplify-and-forward relaying scheme. The RF link experiences Nakagami-m fading, and the exponentiated Weibull distribution is adopted for the FSO component. Mathematical formulas for the cumulative distribution function (CDF), probability density function (PDF) and moment generating function (MGF) of the equivalent signal-to-noise ratio (SNR) are derived. From the end-to-end statistical characteristics, new analytical expressions for the outage probability are obtained. Under various modulation techniques, we derive the average bit-error rate (BER) in terms of the Meijer G-function. Numerical evaluation and simulation of the system performance are provided, and the aperture-averaging effect is discussed as well.

  5. Race, gender and the econophysics of income distribution in the USA

    NASA Astrophysics Data System (ADS)

    Shaikh, Anwar; Papanikolaou, Nikolaos; Wiener, Noe

    2014-12-01

    The econophysics “two-class” theory of Yakovenko and his co-authors shows that the distribution of labor incomes is roughly exponential. This paper extends this result to US subgroups categorized by gender and race. It is well known that males have higher average incomes than females, and whites have higher average incomes than African-Americans. It is also evident that social policies can affect these income gaps. Our surprising finding is that, nonetheless, intra-group distributions of pre-tax labor incomes are remarkably similar and remain close to exponential. This suggests that income inequality can be usefully addressed by taxation policies, and overall income inequality can also be modified by shifting the balance between labor and property incomes.

  6. Diversity of individual mobility patterns and emergence of aggregated scaling laws

    PubMed Central

    Yan, Xiao-Yong; Han, Xiao-Pu; Wang, Bing-Hong; Zhou, Tao

    2013-01-01

    Uncovering human mobility patterns is of fundamental importance to the understanding of epidemic spreading, urban transportation and other socioeconomic dynamics embodying spatiality and human travel. Using the direct travel diaries of volunteers, we show the absence of scaling properties in the displacement distribution at the individual level, while the aggregated displacement distribution follows a power law with an exponential cutoff. Given the constraint on total travelling cost, this aggregated scaling law can be analytically predicted from the mixture nature of human travel under the principle of maximum entropy. A direct corollary of this theory is that the displacement distribution of a single mode of transportation should follow an exponential law, which is also supported by known data. We thus conclude that travelling cost shapes the displacement distribution at the aggregated level. PMID:24045416

  7. Milne, a routine for the numerical solution of Milne's problem

    NASA Astrophysics Data System (ADS)

    Rawat, Ajay; Mohankumar, N.

    2010-11-01

    The routine Milne provides accurate numerical values for the classical Milne's problem of neutron transport for the planar one-speed and isotropic scattering case. The solution is based on the Case eigenfunction formalism. The relevant X functions are evaluated accurately by Double Exponential quadrature. The calculated quantities are the extrapolation distance and the scalar and angular fluxes. The H function needed in astrophysical calculations is also evaluated as a byproduct.
    Program summary:
    Program title: Milne
    Catalogue identifier: AEGS_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGS_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 701
    No. of bytes in distributed program, including test data, etc.: 6845
    Distribution format: tar.gz
    Programming language: Fortran 77
    Computer: PC under Linux or Windows
    Operating system: Ubuntu 8.04 (Kernel version 2.6.24-16-generic), Windows-XP
    Classification: 4.11, 21.1, 21.2
    Nature of problem: The X functions are integral expressions. The convergence of these regular and Cauchy principal value integrals is impaired by the singularities of the integrand in the complex plane. The DE quadrature scheme tackles these singularities in a robust manner compared to the standard Gauss quadrature.
    Running time: The test included in the distribution takes a few seconds to run.
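    The Double Exponential (tanh-sinh) quadrature that Milne relies on is simple to sketch in outline; the Python toy below integrates over [-1, 1] with the standard substitution, though the real routine's Fortran implementation and node handling will differ.

```python
import math

def de_quad(f, h=0.1, N=60):
    """Double-exponential (tanh-sinh) quadrature for the integral of
    f over [-1, 1]. The substitution x = tanh((pi/2)*sinh(t)) makes
    the trapezoidal rule in t converge extremely fast, because the
    transformed integrand decays double-exponentially."""
    total = 0.0
    for k in range(-N, N + 1):
        t = k * h
        s = 0.5 * math.pi * math.sinh(t)
        x = math.tanh(s)
        w = 0.5 * math.pi * math.cosh(t) / math.cosh(s) ** 2  # dx/dt
        total += f(x) * w
    return h * total
```

Because the nodes cluster double-exponentially toward the endpoints, the scheme also handles endpoint singularities robustly, which is why it copes with the singular X-function integrands better than Gauss quadrature.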

  8. Incidence of the Bertillon and Gompertz effects on the outcome of clinical trials

    NASA Astrophysics Data System (ADS)

    Roehner, Bertrand M.

    2014-11-01

    The accounts of medical trials provide very detailed information about the patients’ health conditions. On the contrary, almost no vital data such as marital status or age distribution are usually given. Yet, some of these factors can have a notable impact on the overall death rate, thereby changing the outcome and conclusions of the trial. This paper focuses on two of these variables. The first is marital status; its effect on life expectancy (which will be referred to as the Bertillon effect) may double death rates in all age intervals. The second variable is the age distribution of the oldest patients. Because of the exponential nature of Gompertz’s law changes in the distribution of ages in the oldest age group can have dramatic consequences on the overall number of deaths. One should recall that the death rate at the age of 82 is 40 times higher than at the age of 37. It will be seen that randomization alone can hardly take care of these problems. Appropriate remedies are easy to formulate however. First, the marital status of patients as well as the age distribution of those over 65 should be documented for both study groups. Then, thanks to these data and based on the Bertillon and Gompertz laws, it will become possible to perform appropriate corrections. Such corrections will notably improve the reliability and accuracy of the conclusions, especially in trials which include a large proportion of elderly subjects.

  9. Autoregressive processes with exponentially decaying probability distribution functions: applications to daily variations of a stock market index.

    PubMed

    Porto, Markus; Roman, H Eduardo

    2002-04-01

    We consider autoregressive conditional heteroskedasticity (ARCH) processes in which the variance σ²(y) depends linearly on the absolute value of the random variable y, as σ²(y) = a + b|y|. While for the standard model, where σ²(y) = a + by², the corresponding probability distribution function (PDF) P(y) decays as a power law for |y| → ∞, in the linear case it decays exponentially as P(y) ~ exp(-α|y|), with α = 2/b. We extend these results to the more general case σ²(y) = a + b|y|^q, with 0 < q < 2. We find stretched exponential decay for 1 < q < 2 and stretched Gaussian behavior for 0 < q < 1. As an application, we consider the case q = 1 as our starting scheme for modeling the PDF of daily (logarithmic) variations in the Dow Jones stock market index. When the history of the ARCH process is taken into account, the resulting PDF becomes a stretched exponential even for q = 1, with a stretched exponent β = 2/3, in much better agreement with the empirical data.
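    A minimal simulation of the q = 1 case shows the heavier-than-Gaussian tails; the parameter values a and b below are arbitrary illustrations, not fitted to the Dow Jones data.

```python
import math
import numpy as np

def simulate_linear_arch(a, b, n=200_000, seed=4):
    """ARCH-type process with variance linear in |y|:
    sigma2_t = a + b*|y_{t-1}|,  y_t = sqrt(sigma2_t)*eps_t,
    whose stationary PDF decays exponentially, ~exp(-2|y|/b)."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n)
    y = np.empty(n)
    y[0] = 0.0
    for t in range(1, n):
        y[t] = math.sqrt(a + b * abs(y[t - 1])) * eps[t]
    return y

y = simulate_linear_arch(0.1, 1.0)
z = (y - y.mean()) / y.std()
kurtosis = np.mean(z ** 4)   # 3 for a Gaussian; larger for exponential tails
```

The sample kurtosis comes out well above the Gaussian value of 3, the numerical fingerprint of the exponential tails derived in the paper.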

  10. Statistical modeling of storm-level Kp occurrences

    USGS Publications Warehouse

    Remick, K.J.; Love, J.J.

    2006-01-01

    We consider the statistical modeling of the occurrence in time of large Kp magnetic storms as a Poisson process, testing whether or not relatively rare, large Kp events can be considered to arise from a stochastic, sequential, and memoryless process. For a Poisson process, the wait times between successive events occur statistically with an exponential density function. Fitting an exponential function to the durations between successive large Kp events forms the basis of our analysis. Defining these wait times by calculating the differences between times when Kp exceeds a certain value, such as Kp ≥ 5, we find the wait-time distribution is not exponential. Because large storms often have several periods with large Kp values, their occurrence in time is not memoryless; short duration wait times are not independent of each other and are often clumped together in time. If we remove same-storm large Kp occurrences, the resulting wait times are very nearly exponentially distributed and the storm arrival process can be characterized as Poisson. Fittings are performed on wait time data for Kp ≥ 5, 6, 7, and 8. The mean wait times between storms exceeding these Kp thresholds are 7.12, 16.55, 42.22, and 121.40 days, respectively.
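    The declustering diagnostic can be illustrated with the coefficient of variation of wait times, which equals 1 for an exponential distribution; the simulated "storms" below are invented, with only the 7.12-day mean wait taken from the abstract.

```python
import numpy as np

def wait_time_cv(event_times):
    """Coefficient of variation of wait times between successive events.
    For a Poisson process the waits are exponential, so CV is near 1;
    clustered (same-storm) occurrences push the CV above 1."""
    w = np.diff(np.sort(np.asarray(event_times, dtype=float)))
    return w.std(ddof=1) / w.mean()

rng = np.random.default_rng(5)
# Poisson storm arrivals with the abstract's 7.12-day mean wait (Kp >= 5).
poisson_times = np.cumsum(rng.exponential(7.12, 4000))
# Clustered arrivals: every third storm contributes a burst of three
# same-storm exceedances within a one-day window.
bursts = [t + rng.uniform(0, 1, 3) for t in poisson_times[::3]]
clustered_times = np.concatenate(bursts)

cv_poisson = wait_time_cv(poisson_times)
cv_clustered = wait_time_cv(clustered_times)
```

The clustered series has a clearly inflated CV, mirroring the paper's observation that raw exceedance times are non-exponential until same-storm occurrences are removed.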

  11. An efficient and accurate technique to compute the absorption, emission, and transmission of radiation by the Martian atmosphere

    NASA Technical Reports Server (NTRS)

    Lindner, Bernhard Lee; Ackerman, Thomas P.; Pollack, James B.

    1990-01-01

    CO2 comprises 95 pct. of the composition of the Martian atmosphere. However, the Martian atmosphere also has a high aerosol content. Dust particles vary from less than 0.2 to greater than 3.0. CO2 is an active absorber and emitter at near-IR and IR wavelengths; the near-IR absorption bands of CO2 provide significant heating of the atmosphere, and the 15 micron band provides rapid cooling. Including both CO2 and aerosol radiative transfer simultaneously in a model is difficult: aerosol radiative transfer requires a multiple scattering code, while CO2 radiative transfer must deal with complex wavelength structure. As an alternative to the pure-atmosphere treatment used in most models, which causes inaccuracies, a treatment called the exponential-sum or k-distribution approximation was developed. The chief advantage of the exponential-sum approach is that the integration of f(k) over k space can be computed more quickly than the integration of k_nu over frequency. The exponential-sum approach is superior to the photon path distribution and emissivity techniques for dusty conditions. This study was the first application of the exponential-sum approach to Martian conditions.
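    The essence of the exponential-sum approach is to replace a frequency integral by a short sum of exponentials in absorber amount u. Below is a generic sketch fitting nonnegative weights on a fixed grid of absorption coefficients with scipy's NNLS; the grid and the target transmission are invented for the example, not Martian band data.

```python
import numpy as np
from scipy.optimize import nnls

def exponential_sum_fit(u, T, k_grid):
    """Fit a transmission curve T(u) by a sum of exponentials,
    T(u) ~ sum_i w_i * exp(-k_i*u) with w_i >= 0,
    on a fixed grid of absorption coefficients k_i."""
    A = np.exp(-np.outer(u, k_grid))   # design matrix of exponentials
    w, _ = nnls(A, T)                  # nonnegative least squares
    return w, A

u = np.linspace(0.0, 5.0, 50)
T = 0.3 * np.exp(-0.5 * u) + 0.7 * np.exp(-2.0 * u)   # known two-term target
w, A = exponential_sum_fit(u, T, np.array([0.1, 0.5, 1.0, 2.0, 5.0]))
```

Once the weights are found, each exponential term can be fed through a multiple-scattering code as if it were a grey absorber, which is what makes the method compatible with aerosol radiative transfer.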

  12. Fission and quasifission of composite systems with Z =108 -120 : Transition from heavy-ion reactions involving S and Ca to Ti and Ni ions

    NASA Astrophysics Data System (ADS)

    Kozulin, E. M.; Knyazheva, G. N.; Novikov, K. V.; Itkis, I. M.; Itkis, M. G.; Dmitriev, S. N.; Oganessian, Yu. Ts.; Bogachev, A. A.; Kozulina, N. I.; Harca, I.; Trzaska, W. H.; Ghosh, T. K.

    2016-11-01

    Background: Compound-nucleus formation in reactions with heavy ions is suppressed by a quasifission process whose strength depends on the reaction entrance channel. Purpose: Investigation of fission and quasifission processes in the reactions 36S,48Ca,48Ti, and 64Ni+238U at energies around the Coulomb barrier. Methods: Mass-energy distributions of fissionlike fragments formed in the reaction 48Ti+238U at energies of 247, 258, and 271 MeV have been measured using the double-arm time-of-flight spectrometer CORSET at the U400 cyclotron of the Flerov Laboratory of Nuclear Reactions and compared with mass-energy distributions for the reactions 36S,48Ca,64Ni+238U. Results: The most probable fragment masses, as well as total kinetic energies and their dispersions, have been investigated as functions of the interaction energy for asymmetric and symmetric fragments in the studied reactions. The fusion probabilities have been deduced from the analysis of mass-energy distributions. Conclusion: The estimated fusion probability for the reactions of S, Ca, Ti, and Ni ions with actinide nuclei shows that it depends exponentially on the mean fissility parameter of the system. For the reactions with actinide nuclei leading to the formation of superheavy elements, the fusion probabilities are several orders of magnitude higher than in the case of cold fusion reactions.

  13. Wealth of the world's richest publicly traded companies per industry and per employee: Gamma, Log-normal and Pareto power-law as universal distributions?

    NASA Astrophysics Data System (ADS)

    Soriano-Hernández, P.; del Castillo-Mussot, M.; Campirán-Chávez, I.; Montemayor-Aldrete, J. A.

    2017-04-01

    Forbes Magazine published its list of the two thousand leading or strongest publicly traded companies in the world (G-2000), based on four independent metrics: sales or revenues, profits, assets and market value. Each of these wealth metrics yields particular information on the corporate size or wealth of each firm. The G-2000 cumulative probability wealth distribution per employee (per capita) for all four metrics exhibits a two-class structure: quasi-exponential in the lower part and a Pareto power law in the higher part. These two-class per capita distributions are qualitatively similar to income and wealth distributions in many countries of the world, but the fraction of firms per employee within the high-class Pareto zone is about 49% in sales per employee, and 33% after averaging over the four metrics, whereas in countries the fraction of rich agents in the Pareto zone is less than 10%. The quasi-exponential zone can be fitted by Gamma or log-normal distributions. On the other hand, Forbes classifies the G-2000 firms into 82 different industries or economic activities. Within each industry, the wealth distribution per employee also follows a two-class structure, but when the aggregate wealth of firms in each industry for the four metrics is divided by the total number of employees in that industry, the 82 points of the aggregate wealth distribution by industry per employee can be well fitted by quasi-exponential curves for the four metrics.

  14. Characterization of double diffusive convection step and heat budget in the deep Arctic Ocean

    NASA Astrophysics Data System (ADS)

    Zhou, S.; Lu, Y.

    2013-12-01

    In this paper, we explore the hydrographic structure and heat budget in the deep Canada Basin using data measured with McLane Moored Profilers (MMPs), bottom pressure recorders (BPRs), and conductivity-temperature-depth (CTD) profilers. From the bottom upward, a homogeneous bottom layer and its overlying double diffusive convection (DDC) steps are well identified at Mooring A (75°N, 150°W). We find that the deep water is in weak diapycnal mixing because the effective diffusivity of the bottom layer is ~1.8×10⁻⁵ m² s⁻¹ while that of the other steps is ~10⁻⁶ m² s⁻¹. The vertical heat flux through DDC steps is evaluated with different methods. We find that the heat flux (0.1-11 mW m⁻²) is much smaller than geothermal heating (~50 mW m⁻²), which suggests that the stack of DDC steps acts as a thermal barrier in the deep basin. Moreover, the temporal distributions of temperature and salinity differences across the interface are exponential, while those of heat flux and effective diffusivity are found to be approximately log-normal. Both are the result of strong intermittency. Between 2003 and 2011, temperature fluctuations close to the sea floor were distributed asymmetrically and skewed towards positive values, which provides a direct indication that geothermal heating is transferred into the ocean. Both BPR and CTD data suggest that geothermal heating, not the warming of the upper ocean, is the dominant mechanism responsible for the warming of deep water. As the DDC steps prevent vertical heat transfer, geothermal heating is unlikely to have a significant effect on the middle and upper oceans.

  15. Characterization of double diffusive convection steps and heat budget in the deep Arctic Ocean

    NASA Astrophysics Data System (ADS)

    Zhou, Sheng-Qi; Lu, Yuan-Zheng

    2013-12-01

    In this paper, we explore the hydrographic structure and heat budget in the deep Canada Basin by using data measured with McLane Moored Profilers (MMP), bottom pressure recorders (BPR), and conductivity-temperature-depth (CTD) profilers. Upward from the bottom, a homogeneous bottom layer and its overlying double diffusive convection (DDC) steps are well identified at Mooring A (75°N, 150°W). We find that the deep water is in weak diapycnal mixing because the effective diffusivity of the bottom layer is ~1.8 × 10⁻⁵ m² s⁻¹, while that of the other steps is ~10⁻⁶ m² s⁻¹. The vertical heat flux through the DDC steps is evaluated by using different methods. We find that the heat flux (0.1-11 mW m⁻²) is much smaller than geothermal heating (~50 mW m⁻²). This suggests that the stack of DDC steps acts as a thermal barrier in the deep basin. Moreover, the temporal distributions of temperature and salinity differences across the interface are exponential, whereas those of heat flux and effective diffusivity are found to be approximately lognormal. Both are the result of strong intermittency. Between 2003 and 2011, temperature fluctuations close to the sea floor were distributed asymmetrically and skewed toward positive values, which provides a direct observation that geothermal heating was transferred into the ocean. Both BPR and CTD data suggest that geothermal heating and not the warming of the upper ocean is the dominant mechanism responsible for the warming of deep water. As the DDC steps prevent vertical heat transfer, geothermal heating is unlikely to have a significant effect on the middle and upper Arctic Ocean.

  16. Multi-exponential analysis of magnitude MR images using a quantitative multispectral edge-preserving filter.

    PubMed

    Bonny, Jean Marie; Boespflug-Tanguly, Odile; Zanca, Michel; Renou, Jean Pierre

    2003-03-01

    A solution for discrete multi-exponential analysis of T2 relaxation decay curves obtained under current multi-echo imaging protocol conditions is described. We propose a preprocessing step to improve the signal-to-noise ratio and thus lower the signal-to-noise ratio threshold above which a high percentage of true multi-exponential decays is detected. It consists of a multispectral nonlinear edge-preserving filter that takes into account the signal-dependent Rician distribution of noise affecting magnitude MR images. Discrete multi-exponential decomposition, which requires no a priori knowledge, is performed by a nonlinear least-squares procedure initialized with estimates obtained from a total least-squares linear prediction algorithm. This approach was validated and optimized experimentally on simulated data sets of normal human brains.
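    A minimal version of the final fitting stage is a bi-exponential decomposition by nonlinear least squares on synthetic noisy decay data; the echo times, component fractions and T2 values below are invented, and the paper additionally seeds the fit with linear-prediction estimates rather than a hand-picked starting point.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(te, a1, t21, a2, t22):
    """Two-component T2 decay: S(TE) = a1*exp(-TE/T2_1) + a2*exp(-TE/T2_2)."""
    return a1 * np.exp(-te / t21) + a2 * np.exp(-te / t22)

te = np.arange(10, 330, 10, dtype=float)    # echo times in ms (assumed)
true = (0.7, 80.0, 0.3, 20.0)               # fractions and T2 values (invented)
rng = np.random.default_rng(6)
signal = biexp(te, *true) + 0.002 * rng.standard_normal(te.size)

popt, _ = curve_fit(biexp, te, signal, p0=(0.5, 100.0, 0.5, 10.0))
```

At this noise level the two T2 components are well separated; the point of the paper's edge-preserving prefilter is precisely to reach such noise levels in real magnitude images.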

  17. Choice of time-scale in Cox's model analysis of epidemiologic cohort data: a simulation study.

    PubMed

    Thiébaut, Anne C M; Bénichou, Jacques

    2004-12-30

    Cox's regression model is widely used for assessing associations between potential risk factors and disease occurrence in epidemiologic cohort studies. Although age is often a strong determinant of disease risk, authors have frequently used time-on-study instead of age as the time-scale, as in clinical trials. Unless the baseline hazard is an exponential function of age, this approach can yield different estimates of relative hazards than using age as the time-scale, even when age is adjusted for. We performed a simulation study to investigate the existence and magnitude of bias for different degrees of association between age and the covariate of interest. Age to disease onset was generated from exponential, Weibull or piecewise Weibull distributions, and both fixed and time-dependent dichotomous covariates were considered. We observed no bias upon using age as the time-scale. Upon using time-on-study, we verified the absence of bias for exponentially distributed age to disease onset. For non-exponential distributions, we found that bias could occur even when the covariate of interest was independent of age. It could be severe in cases of substantial association with age, especially with time-dependent covariates. These findings were illustrated on data from a cohort of 84,329 French women followed prospectively for breast cancer occurrence. In view of our results, we strongly recommend not using time-on-study as the time-scale for analysing epidemiologic cohort data. 2004 John Wiley & Sons, Ltd.

  18. Spectral Study of Measles Epidemics: The Dependence of Spectral Gradient on the Population Size of the Community

    NASA Astrophysics Data System (ADS)

    Sumi, Ayako; Olsen, Lars Folke; Ohtomo, Norio; Tanaka, Yukio; Sawamura, Sadashi

    2003-02-01

    We have carried out spectral analysis of measles notifications in several communities in Denmark, the UK and the USA. The results confirm that each power spectral density (PSD) shows exponential characteristics, which are universally observed in the PSD of time series generated from nonlinear dynamical systems. The exponential gradient increases with the population size. For almost all communities, many spectral lines observed in each PSD can be fully assigned to linear combinations of several fundamental periods, suggesting that the measles data are substantially noise-free. The optimum least-squares fitting curve calculated using these fundamental periods essentially reproduces an underlying variation of the measles data, and an extension of the curve can be used to predict measles epidemics. For the communities with large population sizes, some PSD patterns obtained from segment time series analysis show a close resemblance to the PSD patterns at the initial stages of a period-doubling bifurcation process for the so-called susceptible/exposed/infectious/recovered (SEIR) model with seasonal forcing. The meaning of the relationship between the exponential gradient and the population size is discussed.

  19. Evaluation of Mean and Variance Integrals without Integration

    ERIC Educational Resources Information Center

    Joarder, A. H.; Omar, M. H.

    2007-01-01

    The mean and variance of some continuous distributions, in particular the exponentially decreasing probability distribution and the normal distribution, are considered. Since deriving them involves integration by parts, many students do not feel comfortable with the derivations. In this note, a technique is demonstrated for deriving mean and variance through differential…

  20. Topological Defects in Double Exchange Materials and Anomalous Hall Resistance.

    NASA Astrophysics Data System (ADS)

    Calderón, M. J.; Brey, L.

    2000-03-01

    Recently it has been proposed that the anomalous Hall effect observed in Double Exchange materials is due to Berry phase effects caused by carrier hopping in a nontrivial spin background (J. Ye et al., Phys. Rev. Lett. 83, 3737 (1999)). In order to study this possibility we have performed Monte Carlo simulations of the Double Exchange model and have computed, as a function of temperature, the number of topological defects in the system and the internal gauge magnetic field associated with these defects. In the simplest Double Exchange model the gauge magnetic field is random, and its average value is zero. The inclusion of spin-orbit coupling breaks this symmetry, privileging the direction opposite to the magnetization, and an anomalous Hall resistance (AHR) arises. We have computed the AHR and obtained its temperature dependence. In agreement with previous experiments, we find that the AHR increases exponentially at low temperature and presents a maximum at a temperature slightly higher than the critical temperature.

  1. Intra-Individual Response Variability Assessed by Ex-Gaussian Analysis may be a New Endophenotype for Attention-Deficit/Hyperactivity Disorder.

    PubMed

    Henríquez-Henríquez, Marcela Patricia; Billeke, Pablo; Henríquez, Hugo; Zamorano, Francisco Javier; Rothhammer, Francisco; Aboitiz, Francisco

    2014-01-01

    Intra-individual variability of response times (RTisv) is considered a potential endophenotype for attention-deficit/hyperactivity disorder (ADHD). Traditional methods for estimating RTisv lose information regarding the response time (RT) distribution along the task, with eventual effects on statistical power. Ex-Gaussian analysis captures the dynamic nature of RTisv, estimating normal and exponential components of the RT distribution, with specific phenomenological correlates. Here, we applied ex-Gaussian analysis to explore whether intra-individual variability of RTs meets the criteria proposed by Gottesman and Gould for endophenotypes. Specifically, we evaluated whether the normal and/or exponential components of RTs may (a) present the stair-like distribution expected for endophenotypes (ADHD > siblings > typically developing children (TD) without familial history of ADHD) and (b) represent a phenotypic correlate for previously described genetic risk variants. This is a pilot study including 55 subjects (20 ADHD-discordant sibling pairs and 15 TD children), all aged between 8 and 13 years. Participants performed a visual Go/Nogo task with 10% Nogo probability. Ex-Gaussian distributions were fitted to individual RT data and compared among the three samples. To test whether intra-individual variability may represent a correlate for previously described genetic risk variants, VNTRs at DRD4 and SLC6A3 were identified in all sibling pairs following standard protocols. Groups were compared by adjusting independent general linear models for the exponential and normal components from the ex-Gaussian analysis. Identified trends were confirmed by the non-parametric Jonckheere-Terpstra test. Stair-like distributions were observed for μ (p = 0.036) and σ (p = 0.009). An additional "DRD4 genotype" × "clinical status" interaction was present for τ (p = 0.014), reflecting a possible severity factor. Thus, normal and exponential RTisv components are suitable as ADHD endophenotypes.
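The ex-Gaussian decomposition described above (a Gaussian component with parameters μ, σ plus an exponential component with parameter τ) can be sketched with SciPy's `exponnorm`, which parameterizes the same distribution via a shape K = τ/σ. The RT values below are synthetic and illustrative, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic response times (seconds): normal component (mu, sigma) plus
# an exponential tail (tau). Parameter values are illustrative only.
mu, sigma, tau = 0.45, 0.05, 0.15
rts = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

# scipy's exponnorm is the ex-Gaussian with shape K = tau / sigma.
K, loc, scale = stats.exponnorm.fit(rts)
mu_hat, sigma_hat, tau_hat = loc, scale, K * scale
print(mu_hat, sigma_hat, tau_hat)
```

In a real analysis the three parameters would be fitted per participant and then entered into the group-level models, as the abstract describes.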

  2. Weighted Scaling in Non-growth Random Networks

    NASA Astrophysics Data System (ADS)

    Chen, Guang; Yang, Xu-Hua; Xu, Xin-Li

    2012-09-01

    We propose a weighted model to explain the self-organizing formation of the scale-free phenomenon in non-growth random networks. In this model, we use multiple-edges to represent the connections between vertices and define the weight of a multiple-edge as the total weights of all single-edges within it and the strength of a vertex as the sum of weights for those multiple-edges attached to it. The network evolves according to a vertex strength preferential selection mechanism. During the evolution process, the network always holds its total number of vertices and its total number of single-edges constant. We show analytically and numerically that a network will form steady scale-free distributions with our model. The results show that a weighted non-growth random network can evolve into a scale-free state. It is interesting that the network also obtains the character of an exponential edge weight distribution. Namely, coexistence of a scale-free distribution and an exponential distribution emerges.

  3. Contact Time in Random Walk and Random Waypoint: Dichotomy in Tail Distribution

    NASA Astrophysics Data System (ADS)

    Zhao, Chen; Sichitiu, Mihail L.

    Contact time (or link duration) is a fundamental factor that affects performance in Mobile Ad Hoc Networks. Previous research on theoretical analysis of the contact time distribution for random walk (RW) models assumes that contact events can be modeled as either consecutive random walks or direct traversals, which are two extreme cases of random walk, thus reaching two different conclusions. In this paper we conduct comprehensive research on this topic in the hope of bridging the gap between the two extremes. The conclusions from the two extreme cases imply a power-law or exponential tail in the contact time distribution, respectively. However, we show that the actual distribution varies between the two extremes: a power-law-sub-exponential dichotomy, whose transition point depends on the average flight duration. Through simulation results we show that this conclusion also applies to random waypoint.

  4. Turbulent particle transport in streams: can exponential settling be reconciled with fluid mechanics?

    PubMed

    McNair, James N; Newbold, J Denis

    2012-05-07

    Most ecological studies of particle transport in streams that focus on fine particulate organic matter or benthic invertebrates use the Exponential Settling Model (ESM) to characterize the longitudinal pattern of particle settling on the bed. The ESM predicts that if particles are released into a stream, the proportion that have not yet settled will decline exponentially with transport time or distance and will be independent of the release elevation above the bed. To date, no credible basis in fluid mechanics has been established for this model, nor has it been rigorously tested against more-mechanistic alternative models. One alternative is the Local Exchange Model (LEM), which is a stochastic advection-diffusion model that includes both longitudinal and vertical spatial dimensions and is based on classical fluid mechanics. The LEM predicts that particle settling will be non-exponential in the near field but will become exponential in the far field, providing a new theoretical justification for far-field exponential settling that is based on plausible fluid mechanics. We review properties of the ESM and LEM and compare these with available empirical evidence. Most evidence supports the prediction of both models that settling will be exponential in the far field but contradicts the ESM's prediction that a single exponential distribution will hold for all transport times and distances. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. Probability distributions of bed load particle velocities, accelerations, hop distances, and travel times informed by Jaynes's principle of maximum entropy

    USGS Publications Warehouse

    Furbish, David; Schmeeckle, Mark; Schumer, Rina; Fathel, Siobhan

    2016-01-01

    We describe the most likely forms of the probability distributions of bed load particle velocities, accelerations, hop distances, and travel times, in a manner that formally appeals to inferential statistics while honoring mechanical and kinematic constraints imposed by equilibrium transport conditions. The analysis is based on E. Jaynes's elaboration of the implications of the similarity between the Gibbs entropy in statistical mechanics and the Shannon entropy in information theory. By maximizing the information entropy of a distribution subject to known constraints on its moments, our choice of the form of the distribution is unbiased. The analysis suggests that particle velocities and travel times are exponentially distributed and that particle accelerations follow a Laplace distribution with zero mean. Particle hop distances, viewed alone, ought to be distributed exponentially. However, the covariance between hop distances and travel times precludes this result. Instead, the covariance structure suggests that hop distances follow a Weibull distribution. These distributions are consistent with high-resolution measurements obtained from high-speed imaging of bed load particle motions. The analysis brings us closer to choosing distributions based on our mechanical insight.
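The maximum-entropy claim above (that with only a mean constraint on a non-negative quantity, the exponential is the least-biased choice) can be checked numerically: among candidate distributions on [0, ∞) sharing the same mean, the exponential has the largest differential entropy. The mean value and the comparison distributions below are arbitrary illustrative choices.

```python
import numpy as np
from scipy import stats

mean = 2.0  # an arbitrary mean constraint

# Differential entropies of three distributions on [0, inf) with the same
# mean; the exponential is the maximum-entropy distribution in this class.
h_exp = stats.expon(scale=mean).entropy()
h_gam = stats.gamma(a=2.0, scale=mean / 2.0).entropy()
# Weibull with shape c=2: mean = scale * Gamma(1 + 1/2), so rescale to match.
h_wbl = stats.weibull_min(c=2.0, scale=mean / 0.8862269254527580).entropy()
print(h_exp, h_gam, h_wbl)
```

The analytic value for the exponential, 1 + ln(mean), exceeds both alternatives, consistent with Jaynes's argument invoked in the abstract.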

  6. Time Correlations in Mode Hopping of Coupled Oscillators

    NASA Astrophysics Data System (ADS)

    Heltberg, Mathias L.; Krishna, Sandeep; Jensen, Mogens H.

    2017-05-01

    We study the dynamics in a system of coupled oscillators when Arnold tongues overlap. By varying the initial conditions, the deterministic system can be attracted to different limit cycles. Adding noise, mode hopping between different states becomes a dominant part of the dynamics. We simplify the system through a Poincaré section and derive a 1D model to describe the dynamics. We show that for some parameter values of the external oscillator, the time distribution of occupancy in a state is exponential and thus memoryless. In the general case, on the other hand, it is a sum of exponential distributions, characteristic of a system with time correlations.
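The memoryless property invoked above is specific to the exponential distribution: the probability of remaining in a state for a further time t does not depend on how long the state has already been occupied. A quick empirical check (with an illustrative unit mean occupancy time):

```python
import numpy as np

rng = np.random.default_rng(2)
t_occ = rng.exponential(1.0, 1_000_000)  # occupancy times, mean 1 (illustrative)

s, t = 0.7, 1.3
# Memorylessness: P(T > s + t | T > s) should equal P(T > t).
p_cond = (t_occ > s + t).mean() / (t_occ > s).mean()
p_marg = (t_occ > t).mean()
print(p_cond, p_marg)
```

A sum (mixture) of exponentials with different rates fails this check, which is what signals time correlations in the general case the abstract describes.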

  7. Exponential Stability of Almost Periodic Solutions for Memristor-Based Neural Networks with Distributed Leakage Delays.

    PubMed

    Xu, Changjin; Li, Peiluan; Pang, Yicheng

    2016-12-01

    In this letter, we deal with a class of memristor-based neural networks with distributed leakage delays. By applying a new Lyapunov function method, we obtain some sufficient conditions that ensure the existence, uniqueness, and global exponential stability of almost periodic solutions of neural networks. We apply these results to prove the existence and stability of periodic solutions for this delayed neural network with periodic coefficients. We then provide an example to illustrate the effectiveness of the theoretical results. Our results are completely new and complement the previous studies of Chen, Zeng, and Jiang (2014) and Jiang, Zeng, and Chen (2015).

  8. Characterization of x-ray framing cameras for the National Ignition Facility using single photon pulse height analysis.

    PubMed

    Holder, J P; Benedetti, L R; Bradley, D K

    2016-11-01

    Single hit pulse height analysis is applied to National Ignition Facility x-ray framing cameras to quantify gain and gain variation in a single micro-channel plate-based instrument. This method allows the separation of gain from detectability in these photon-detecting devices. While pulse heights measured by standard-DC calibration methods follow the expected exponential distribution in the limit of a compound-Poisson process, gain-gated pulse heights follow a more complex distribution that may be approximated as a weighted sum of a few exponentials. We can reproduce this behavior with a simple statistical-sampling model.

  9. The diffusion of a Ga atom on GaAs(001)β2(2 × 4): Local superbasin kinetic Monte Carlo

    NASA Astrophysics Data System (ADS)

    Lin, Yangzheng; Fichthorn, Kristen A.

    2017-10-01

    We use first-principles density-functional theory to characterize the binding sites and diffusion mechanisms for a Ga adatom on the GaAs(001)β2(2 × 4) surface. Diffusion in this system is a complex process involving eleven unique binding sites and sixteen different hops between neighboring binding sites. Among the binding sites, we can identify four different superbasins such that the motion between binding sites within a superbasin is much faster than hops exiting the superbasin. To describe diffusion, we use a recently developed local superbasin kinetic Monte Carlo (LSKMC) method, which accelerates a conventional kinetic Monte Carlo (KMC) simulation by describing the superbasins as absorbing Markov chains. We find that LSKMC is up to 4300 times faster than KMC for the conditions probed in this study. We characterize the distribution of exit times from the superbasins and find that these are sometimes, but not always, exponential and we characterize the conditions under which the superbasin exit-time distribution should be exponential. We demonstrate that LSKMC simulations assuming an exponential superbasin exit-time distribution yield the same diffusion coefficients as conventional KMC.
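The absorbing-Markov-chain treatment of a superbasin mentioned above has a standard closed form: with Q the substochastic matrix of intra-basin hop probabilities, the fundamental matrix N = (I - Q)^-1 gives expected visit counts, from which mean exit times follow. A minimal sketch with a toy 3-state basin (all rates and residence times are illustrative, not the first-principles values of the paper):

```python
import numpy as np

# Toy 3-state superbasin. Row i of Q holds the probabilities of hopping
# i -> j within the basin; row sums are < 1, the remainder being the
# probability of exiting the basin from state i.
Q = np.array([[0.0, 0.6, 0.3],
              [0.5, 0.0, 0.4],
              [0.2, 0.3, 0.0]])
dt = np.array([0.1, 0.2, 0.05])  # mean residence time per visit to each state

# Fundamental matrix N = (I - Q)^-1: N[i, j] = expected visits to j
# before absorption (exit), starting from i.
N = np.linalg.inv(np.eye(3) - Q)
mean_exit = N @ dt               # expected total time in the basin before exit
print(mean_exit)
```

The result satisfies the self-consistency relation mean_exit = dt + Q @ mean_exit, i.e., time spent now plus the expected remainder after the next hop.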

  10. Average BER of subcarrier intensity modulated free space optical systems over the exponentiated Weibull fading channels.

    PubMed

    Wang, Ping; Zhang, Lu; Guo, Lixin; Huang, Feng; Shang, Tao; Wang, Ranran; Yang, Yintang

    2014-08-25

    The average bit error rate (BER) for binary phase-shift keying (BPSK) modulation in free-space optical (FSO) links over a turbulent atmosphere modeled by the exponentiated Weibull (EW) distribution is investigated in detail. The effects of aperture averaging on the average BERs for BPSK modulation under weak-to-strong turbulence conditions are studied. The average BERs of the EW distribution are compared with Lognormal (LN) and Gamma-Gamma (GG) distributions in weak and strong turbulence, respectively. The outage probability is also obtained for different turbulence strengths and receiver aperture sizes. The analytical results deduced by the generalized Gauss-Laguerre quadrature rule are verified by Monte Carlo simulation. This work is helpful for the design of receivers for FSO communication systems.
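Monte Carlo verification of the kind mentioned above requires drawing samples from the exponentiated Weibull distribution, whose CDF F(x) = (1 - exp(-(x/η)^β))^α inverts in closed form. A sketch with illustrative shape/scale parameters (not values from the paper), cross-checked against SciPy's `exponweib`:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
alpha, beta, eta = 2.0, 1.5, 1.0   # illustrative EW parameters

# Inverse-transform sampling from F(x) = (1 - exp(-(x/eta)**beta))**alpha:
# x = eta * (-ln(1 - u**(1/alpha)))**(1/beta)
u = rng.uniform(size=200_000)
h = eta * (-np.log1p(-u ** (1.0 / alpha))) ** (1.0 / beta)

# Cross-check the empirical CDF against scipy's exponweib (a=alpha, c=beta).
x = 1.2
emp = (h <= x).mean()
ref = stats.exponweib(a=alpha, c=beta, scale=eta).cdf(x)
print(emp, ref)
```

Samples like `h` would then feed an irradiance-dependent BER expression to estimate the average BER numerically.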

  11. Univariate and Bivariate Loglinear Models for Discrete Test Score Distributions.

    ERIC Educational Resources Information Center

    Holland, Paul W.; Thayer, Dorothy T.

    2000-01-01

    Applied the theory of exponential families of distributions to the problem of fitting the univariate histograms and discrete bivariate frequency distributions that often arise in the analysis of test scores. Considers efficient computation of the maximum likelihood estimates of the parameters using Newton's Method and computationally efficient…

  12. Analysis of domestic refrigerator temperatures and home storage time distributions for shelf-life studies and food safety risk assessment.

    PubMed

    Roccato, Anna; Uyttendaele, Mieke; Membré, Jeanne-Marie

    2017-06-01

    In the framework of food safety, when mimicking the consumer phase, the storage time and temperature used are mainly considered as single point estimates instead of probability distributions. This single-point approach does not take into account the variability within a population and could lead to an overestimation of the parameters. Therefore, the aim of this study was to analyse data on domestic refrigerator temperatures and storage times of chilled food in European countries in order to draw general rules which could be used either in shelf-life testing or risk assessment. In relation to domestic refrigerator temperatures, 15 studies provided pertinent data. Twelve studies presented normal distributions, according to the authors or from the data fitted into distributions. Analysis of temperature distributions revealed that the countries were separated into two groups: northern European countries and southern European countries. The overall variability of European domestic refrigerators is described by a normal distribution: N(7.0, 2.7)°C for southern countries and N(6.1, 2.8)°C for the northern countries. Concerning storage times, seven papers were pertinent. Analysis indicated that the storage time was likely to end in the first days or weeks (depending on the product use-by date) after purchase. Data fitting showed the exponential distribution was the most appropriate distribution to describe the time that food spent at the consumer's home. The storage time was described by an exponential distribution whose mean corresponds to the use-by period divided by 4. In conclusion, knowing that collecting data is time and money consuming, in the absence of data, and at least for the European market and for refrigerated products, building a domestic refrigerator temperature distribution using a Normal law and a time-to-consumption distribution using an Exponential law would be appropriate. Copyright © 2017 Elsevier Ltd. All rights reserved.
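The general rule proposed above (Normal refrigerator temperature, Exponential storage time with mean = use-by period / 4) is straightforward to sample in a risk-assessment Monte Carlo. A minimal sketch; the use-by period and the 10 °C / half-use-by "abuse" thresholds are illustrative assumptions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Consumer-phase model from the paper: temperature ~ Normal (southern
# Europe: N(7.0, 2.7) degC), storage time ~ Exponential with
# mean = use-by period / 4. The use-by period itself is illustrative.
use_by_days = 10.0
temp_degc = rng.normal(7.0, 2.7, n)
time_days = rng.exponential(use_by_days / 4.0, n)

# Hypothetical "temperature abuse + long storage" scenario fraction.
frac_abuse = ((temp_degc > 10.0) & (time_days > use_by_days / 2.0)).mean()
print(frac_abuse)
```

In a full exposure assessment these joint samples would drive a microbial growth model rather than a simple threshold count.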

  13. Determination of the functioning parameters in asymmetrical flow field-flow fractionation with an exponential channel.

    PubMed

    Déjardin, P

    2013-08-30

    The flow conditions in normal mode asymmetric flow field-flow fractionation are determined to approach the high retention limit with the requirement d≪l≪w, where d is the particle diameter, l the characteristic length of the sample exponential distribution and w the channel height. The optimal entrance velocity is determined from the solute characteristics, the channel geometry (exponential to rectangular) and the membrane properties, according to a model providing the velocity fields all over the cell length. In addition, a method is proposed for in situ determination of the channel height. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Resource acquisition, distribution and end-use efficiencies and the growth of industrial society

    NASA Astrophysics Data System (ADS)

    Jarvis, A.; Jarvis, S.; Hewitt, N.

    2015-01-01

    A key feature of the growth of industrial society is the acquisition of increasing quantities of resources from the environment and their distribution for end use. With respect to energy, growth has been near exponential for the last 160 years. We attempt to show that the global distribution of resources that underpins this growth may be facilitated by the continual development and expansion of near optimal directed networks. If so, the distribution efficiencies of these networks must decline as they expand due to path lengths becoming longer and more tortuous. To maintain long-term exponential growth the physical limits placed on the distribution networks appear to be counteracted by innovations deployed elsewhere in the system: namely at the points of acquisition and end use. We postulate that the maintenance of growth at the specific rate of ~2.4% yr⁻¹ stems from an implicit desire to optimise patterns of energy use over human working lifetimes.

  15. Plume characteristics of MPD thrusters: A preliminary examination

    NASA Technical Reports Server (NTRS)

    Myers, Roger M.

    1989-01-01

    A diagnostics facility for MPD thruster plume measurements was built and is currently undergoing testing. The facility includes electrostatic probes for electron temperature and density measurements, Hall probes for magnetic field and current distribution mapping, and an imaging system to establish the global distribution of plasma species. Preliminary results for MPD thrusters operated at power levels between 30 and 60 kW with solenoidal applied magnetic fields show that the electron density decreases exponentially from 1×10(2) to 2×10^18 m^-3 over the first 30 cm of the expansion, while the electron temperature distribution is relatively uniform, decreasing from approximately 2.5 eV to 1.5 eV over the same distance. The radiant intensity of the Ar II 4879 Å line emission also decays exponentially. Current distribution measurements indicate that a significant fraction of the discharge current is blown into the plume region, and that its distribution depends on the magnitudes of both the discharge current and the applied magnetic field.

  16. Magnetic pattern at supergranulation scale: the void size distribution

    NASA Astrophysics Data System (ADS)

    Berrilli, F.; Scardigli, S.; Del Moro, D.

    2014-08-01

    The large-scale magnetic pattern observed in the photosphere of the quiet Sun is dominated by the magnetic network. This network, created by photospheric magnetic fields swept into convective downflows, delineates the boundaries of large-scale cells of overturning plasma and exhibits "voids" in magnetic organization. These voids include internetwork fields, which are mixed-polarity sparse magnetic fields that populate the inner part of network cells. To single out voids and to quantify their intrinsic pattern we applied a fast circle-packing-based algorithm to 511 SOHO/MDI high-resolution magnetograms acquired during the unusually long solar activity minimum between cycles 23 and 24. The computed void distribution function shows a quasi-exponential decay behavior in the range 10-60 Mm. The lack of distinct flow scales in this range corroborates the hypothesis of multi-scale motion flows at the solar surface. In addition to the quasi-exponential decay, we have found that the voids depart from a simple exponential decay at about 35 Mm.

  17. Optical absorption, TL and IRSL of basic plagioclase megacrysts from the pinacate (Sonora, Mexico) quaternary alkalic volcanics.

    PubMed

    Chernov, V; Paz-Moreno, F; Piters, T M; Barboza-Flores, M

    2006-01-01

    The paper presents the first results of an investigation on optical absorption (OA), thermally and infrared stimulated luminescence (TL and IRSL) of the Pinacate plagioclase (labradorite). The OA spectra reveal two bands with maxima at 1.0 and 3.2 eV connected with absorption by Fe3+ and Fe2+, and IR absorption at wavelengths longer than 2700 nm. The ultraviolet absorption varies exponentially with the photon energy following the 'vitreous' empirical Urbach rule, indicating an exponential distribution of localised states in the forbidden band. The natural TL is peaked at 700 K. Laboratory beta irradiation creates a very broad TL peak with maximum at 430 K. The change of the 430 K TL peak shape under the thermal cleaning procedure and dark storage after irradiation reveals a monotonic increase of the activation energy that can be explained by an exponential distribution of traps. The IRSL response is weak and exhibits a typical decay behaviour.

  18. A stochastic evolutionary model generating a mixture of exponential distributions

    NASA Astrophysics Data System (ADS)

    Fenner, Trevor; Levene, Mark; Loizou, George

    2016-02-01

    Recent interest in human dynamics has stimulated the investigation of the stochastic processes that explain human behaviour in various contexts, such as mobile phone networks and social media. In this paper, we extend the stochastic urn-based model proposed in [T. Fenner, M. Levene, G. Loizou, J. Stat. Mech. 2015, P08015 (2015)] so that it can generate mixture models, in particular, a mixture of exponential distributions. The model is designed to capture the dynamics of survival analysis, traditionally employed in clinical trials, reliability analysis in engineering, and more recently in the analysis of large data sets recording human dynamics. The mixture modelling approach, which is relatively simple and well understood, is very effective in capturing heterogeneity in data. We provide empirical evidence for the validity of the model, using a data set of popular search engine queries collected over a period of 114 months. We show that the survival function of these queries is closely matched by the exponential mixture solution for our model.

  19. Determination of bulk and interface density of states in metal oxide semiconductor thin-film transistors by using capacitance-voltage characteristics

    NASA Astrophysics Data System (ADS)

    Wei, Xixiong; Deng, Wanling; Fang, Jielin; Ma, Xiaoyu; Huang, Junkai

    2017-10-01

    A physics-based, straightforward extraction technique for the interface and bulk density of states in metal oxide semiconductor thin-film transistors (TFTs) is proposed using the capacitance-voltage (C-V) characteristics. The interface trap density distribution with energy is extracted from the analysis of the C-V characteristics, and the bulk trap density is then determined from the obtained interface state distribution. With this method, for the interface trap density, it is found that the deep-state density near the mid-gap is approximately constant and the tail-state density increases exponentially with energy; the bulk trap density is a superposition of exponential deep states and exponential tail states. The validity of the extraction is verified by comparisons with the measured current-voltage (I-V) characteristics and the simulation results by the technology computer-aided design (TCAD) model. This extraction method uses non-numerical iteration, which is simple, fast and accurate. Therefore, it is very useful for TFT device characterization.
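The superposition of exponential deep and tail states described above has the generic form g(E) = N_t exp(-E/E_t) + N_d exp(-E/E_d). A minimal numeric sketch; the prefactors and characteristic energies are illustrative assumptions, not values extracted in the paper:

```python
import numpy as np

# Bulk DOS as a superposition of exponential tail and deep states.
# E is depth below the conduction-band edge (eV); all parameters are
# illustrative, not the paper's extracted values.
N_t, E_t = 1e18, 0.05   # tail states: large prefactor, steep slope
N_d, E_d = 1e16, 0.60   # deep states: smaller prefactor, shallow slope

E = np.linspace(0.0, 0.5, 6)
g_tail = N_t * np.exp(-E / E_t)
g_deep = N_d * np.exp(-E / E_d)
g = g_tail + g_deep      # total bulk trap density (cm^-3 eV^-1)
print(g)
```

With parameters of this shape, tail states dominate near the band edge and deep states dominate toward mid-gap, matching the qualitative picture in the abstract.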

  20. Distinguishing response conflict and task conflict in the Stroop task: evidence from ex-Gaussian distribution analysis.

    PubMed

    Steinhauser, Marco; Hübner, Ronald

    2009-10-01

    It has been suggested that performance in the Stroop task is influenced by response conflict as well as task conflict. The present study investigated the idea that both conflict types can be isolated by applying ex-Gaussian distribution analysis which decomposes response time into a Gaussian and an exponential component. Two experiments were conducted in which manual versions of a standard Stroop task (Experiment 1) and a separated Stroop task (Experiment 2) were performed under task-switching conditions. Effects of response congruency and stimulus bivalency were used to measure response conflict and task conflict, respectively. Ex-Gaussian analysis revealed that response conflict was mainly observed in the Gaussian component, whereas task conflict was stronger in the exponential component. Moreover, task conflict in the exponential component was selectively enhanced under task-switching conditions. The results suggest that ex-Gaussian analysis can be used as a tool to isolate different conflict types in the Stroop task. PsycINFO Database Record (c) 2009 APA, all rights reserved.

  1. PSA doubling time of prostate carcinoma managed with watchful observation alone.

    PubMed

    Choo, R; DeBoer, G; Klotz, L; Danjoux, C; Morton, G C; Rakovitch, E; Fleshner, N; Bunting, P; Kapusta, L; Hruby, G

    2001-07-01

    To study prostate-specific antigen (PSA) doubling time of untreated, favorable grade, prostate carcinoma. A prospective single-arm cohort study has been in progress to assess the feasibility of a watchful observation protocol with selective delayed intervention using clinical, histologic, or PSA progression as treatment indication in untreated, localized, favorable grade prostate adenocarcinoma (T1b-T2bN0 M0, Gleason Score < or = 7, and PSA < or = 15 ng/mL). Patients are conservatively managed with watchful observation alone, as long as they do not meet the arbitrarily defined disease progression criteria. Patients are followed regularly and undergo blood tests including PSA at each visit. PSA doubling time (Td) is estimated from a linear regression of ln(PSA) on time, assuming a simple exponential growth model. As of March 2000, 134 patients have been on the study for a minimum of 12 months (median, 24; range, 12-52) and have a median frequency of PSA measurement of 7 times (range, 3-15). Median age is 70 years. Median PSA at enrollment is 6.3 (range, 0.5-14.6). The distribution of Td is as follows: <2 years, 19 patients; 2-5 years, 46; 5-10 years, 25; 10-20 years, 11; 20-50 years, 6; > 50 years, 27. The median Td is 5.1 years. In 44 patients (33%), Td is greater than 10 years. There was no correlation between Td and patient age, clinical T stage, Gleason score, or initial PSA level. Td of untreated prostate cancer varies widely. In our cohort, 33% have Td > 10 years. Td may be a useful tool to guide treatment intervention for patients managed conservatively with watchful observation alone.
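The doubling-time estimate described above (a linear regression of ln(PSA) on time under a simple exponential growth model, so Td = ln 2 / slope) can be sketched directly. The PSA series below is synthetic and noise-free, generated with a doubling time of 5.1 years for illustration:

```python
import numpy as np

# Td from a linear regression of ln(PSA) on time, as in the study.
# Synthetic, noise-free series (time in years) with true Td = 5.1 yr.
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
psa = 6.3 * 2.0 ** (t / 5.1)

slope, intercept = np.polyfit(t, np.log(psa), 1)
td = np.log(2.0) / slope
print(td)  # recovers 5.1
```

With real serial PSA measurements the regression additionally absorbs assay noise, which is why the study requires several measurements per patient.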

  2. Estimating Age Distributions of Base Flow in Watersheds Underlain by Single and Dual Porosity Formations Using Groundwater Transport Simulation and Weighted Weibull Functions

    NASA Astrophysics Data System (ADS)

    Sanford, W. E.

    2015-12-01

    Age distributions of base flow to streams are important to estimate for predicting the timing of water-quality responses to changes in distributed inputs of nutrients or pollutants at the land surface. Simple models of shallow aquifers will predict exponential age distributions, but more realistic 3-D stream-aquifer geometries will cause deviations from an exponential curve. In addition, in fractured rock terrains the dual nature of the effective and total porosity of the system complicates the age distribution further. In this study shallow groundwater flow and advective transport were simulated in two regions in the Eastern United States: the Delmarva Peninsula and the upper Potomac River basin. The former is underlain by layers of unconsolidated sediment, while the latter consists of folded and fractured sedimentary rocks. Transport of groundwater to streams was simulated using the USGS code MODPATH within 175 and 275 watersheds, respectively. For the fractured rock terrain, calculations were also performed along flow pathlines to account for exchange between mobile and immobile flow zones. Porosities at both sites were calibrated using environmental tracer data (3H, 3He, CFCs and SF6) in wells and springs, and with a 30-year tritium record from the Potomac River. Carbonate and siliciclastic rocks were calibrated to have mobile porosity values of one and six percent, and immobile porosity values of 18 and 12 percent, respectively. The age distributions were fitted to Weibull functions. Whereas an exponential function has one parameter that controls the median age of the distribution, a Weibull function has an extra parameter that controls the slope of the curve. A weighted Weibull function was also developed that potentially allows for four parameters, two that control the median age and two that control the slope, one of each weighted toward early or late arrival times.
For both systems the two-parameter Weibull function nearly always produced a substantially better fit to the data than the one-parameter exponential function. For the single porosity system it was found that the use of three parameters was often optimal for accurately describing the base-flow age distribution, whereas for the dual porosity system the fourth parameter was often required to fit the more complicated response curves.
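The exponential-versus-Weibull comparison above can be sketched as a likelihood comparison: because the exponential is the Weibull with shape c = 1, the two-parameter Weibull fit can only match or improve the log-likelihood. The synthetic "ages" below are illustrative (a Weibull with shape 1.6), not the simulated base-flow data of the study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Synthetic base-flow ages (years) from a Weibull with shape != 1,
# mimicking a non-exponential age distribution (illustrative parameters).
ages = rng.weibull(1.6, 5000) * 20.0

# Fit the one-parameter exponential (location fixed at 0) and the
# two-parameter Weibull, then compare log-likelihoods.
loc_e, scale_e = stats.expon.fit(ages, floc=0)
c_w, loc_w, scale_w = stats.weibull_min.fit(ages, floc=0)

ll_exp = stats.expon.logpdf(ages, loc_e, scale_e).sum()
ll_wbl = stats.weibull_min.logpdf(ages, c_w, loc_w, scale_w).sum()
print(c_w, ll_exp, ll_wbl)
```

The recovered shape parameter near 1.6 is the "extra" slope parameter the abstract describes; a shape of 1 would collapse the fit back to the exponential.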

  3. Exploiting the Adaptation Dynamics to Predict the Distribution of Beneficial Fitness Effects

    PubMed Central

    2016-01-01

    Adaptation of asexual populations is driven by beneficial mutations and therefore the dynamics of this process, besides other factors, depends on the distribution of beneficial fitness effects. It is known that on uncorrelated fitness landscapes, this distribution can only be of three types: truncated, exponential and power law. We performed extensive stochastic simulations to study the adaptation dynamics on rugged fitness landscapes, and identified two quantities that can be used to distinguish the underlying distribution of beneficial fitness effects. The first quantity studied here is the fitness difference between successive mutations that spread in the population, which is found to decrease in the case of truncated distributions, remains nearly constant for exponentially decaying distributions and increases when the fitness distribution decays as a power law. The second quantity of interest, namely, the rate of change of fitness with time also shows quantitatively different behaviour for different beneficial fitness distributions. The patterns displayed by the two aforementioned quantities are found to hold for both low and high mutation rates. We discuss how these patterns can be exploited to determine the distribution of beneficial fitness effects in microbial experiments. PMID:26990188

  4. Escape driven by alpha-stable white noises.

    PubMed

    Dybiec, B; Gudowska-Nowak, E; Hänggi, P

    2007-02-01

We explore the archetypal problem of escape dynamics in a symmetric double-well potential when the Brownian particle is driven by white Lévy noise, in a dynamical regime where inertial effects can safely be neglected. The behavior of trajectories escaping from one well to another is investigated by pointing to the special character of the noise-induced discontinuity, which is caused by generalized Brownian paths that jump beyond the barrier location without actually hitting it. This fact implies that the boundary conditions for the mean first passage time (MFPT) are no longer determined by the well-known local boundary conditions that characterize the case of normal diffusion. By properly implementing the appropriate boundary conditions numerically, we investigate the survival probability and the average escape time as functions of the corresponding Lévy white noise parameters. Depending on the value of the skewness beta of the Lévy noise, the escape can be either enhanced or suppressed: a negative asymmetry parameter beta typically yields a decrease of the escape rate, while the rate itself depicts non-monotonic behavior as a function of the stability index alpha that characterizes the jump length distribution of the Lévy noise, exhibiting a marked discontinuity at alpha=1. We find that the factor of 2 that, for normal diffusion, characterizes the ratio between the well-bottom-to-well-bottom and well-bottom-to-barrier-top MFPTs no longer holds true. For sufficiently high barriers the survival probabilities assume an exponential behavior versus time. Distinct non-exponential deviations occur, however, for low barrier heights.
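A minimal Monte-Carlo sketch of this setup: overdamped Euler-Maruyama dynamics in the double well V(x) = x^4/4 - x^2/2 driven by symmetric alpha-stable increments drawn with `scipy.stats.levy_stable`. All parameters (noise strength, time step, trajectory counts) are illustrative choices, not the paper's, and trajectories are clipped to a large box as a numerical safeguard against rare huge jumps.

```python
import numpy as np
from scipy.stats import levy_stable

# Overdamped Euler-Maruyama sketch of escape from the double well
# V(x) = x**4/4 - x**2/2 under symmetric alpha-stable white noise.
def escape_stats(alpha, sigma=0.5, dt=1e-2, n_steps=4000, n_traj=100, seed=1):
    rng = np.random.default_rng(seed)
    noise = levy_stable.rvs(alpha, 0.0, size=(n_steps, n_traj), random_state=rng)
    x = np.full(n_traj, -1.0)                  # start at the left-well bottom
    t_esc = np.full(n_traj, np.nan)            # NaN = not yet escaped
    for i in range(n_steps):
        step = (x - x**3) * dt + sigma * dt**(1.0 / alpha) * noise[i]
        x = np.clip(x + step, -10.0, 10.0)     # clip against rare huge jumps
        crossed = np.isnan(t_esc) & (x > 0.0)  # jumped past the barrier at x = 0
        t_esc[crossed] = (i + 1) * dt
    return np.mean(~np.isnan(t_esc)), np.nanmean(t_esc)

frac_levy, t_levy = escape_stats(alpha=1.5)
frac_gauss, t_gauss = escape_stats(alpha=2.0)  # alpha = 2 recovers Gaussian noise
```

Note how the crossing test `x > 0.0` registers paths that land beyond the barrier without hitting it, the discontinuity feature the abstract emphasizes.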

  5. Life prediction for high temperature low cycle fatigue of two kinds of titanium alloys based on exponential function

    NASA Astrophysics Data System (ADS)

    Mu, G. Y.; Mi, X. Z.; Wang, F.

    2018-01-01

High temperature low cycle fatigue tests of TC4 and TC11 titanium alloys were carried out under strain control. The relationships between cyclic stress and life and between strain and life are analyzed. A high temperature low cycle fatigue life prediction model for the two titanium alloys is established using the Manson-Coffin method. The relationship between the number of reversals to failure and the plastic strain range is nonlinear in double-logarithmic coordinates, whereas the Manson-Coffin method assumes a linear relation; a certain prediction error is therefore unavoidable with the Manson-Coffin method. To solve this problem, a new method based on an exponential function is proposed. The results show that the fatigue life of the two titanium alloys can be predicted accurately and effectively by both methods, with prediction accuracy within a ±1.83 times scatter band. The new exponential-function method proves more effective and accurate than the Manson-Coffin method for both alloys, giving better fatigue life predictions with a smaller standard deviation and scatter band. For both methods, the life predictions for the TC4 alloy are better than those for the TC11 alloy.
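To illustrate the model-comparison step, the sketch below fits both a Manson-Coffin power law and an exponential-type alternative to synthetic strain-life data. The functional form of the exponential model and all numbers are assumptions for illustration, not the authors' actual formulation.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic strain-life data generated from an assumed exponential-type law,
# then fitted with both a Manson-Coffin power law and the exponential form.
Nf = np.array([330.0, 560.0, 800.0, 1300.0, 2600.0, 9000.0])  # cycles to failure
dep = 3.0 * np.exp(-0.3 * Nf**0.25) + 0.1                     # plastic strain range, %

def manson_coffin(Nf, eps_f, c):      # power law: a straight line in log-log axes
    return eps_f * (2.0 * Nf) ** c

def exp_model(Nf, a, b, d):           # exponential-type alternative (assumed form)
    return a * np.exp(-b * Nf**0.25) + d

p_mc, _ = curve_fit(manson_coffin, Nf, dep, p0=[10.0, -0.5], maxfev=10000)
p_ex, _ = curve_fit(exp_model, Nf, dep, p0=[3.0, 0.3, 0.1], maxfev=10000)

rmse_mc = np.sqrt(np.mean((manson_coffin(Nf, *p_mc) - dep) ** 2))
rmse_ex = np.sqrt(np.mean((exp_model(Nf, *p_ex) - dep) ** 2))
```

Because the synthetic data curve in log-log coordinates, the power-law (straight-line) fit necessarily leaves a residual that the exponential form does not, mirroring the paper's argument against the Manson-Coffin linearity assumption.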

  6. A non-Gaussian option pricing model based on Kaniadakis exponential deformation

    NASA Astrophysics Data System (ADS)

    Moretto, Enrico; Pasquali, Sara; Trivellato, Barbara

    2017-09-01

A way to make financial models effective is by letting them represent the so-called "fat tails", i.e., extreme changes in stock prices that are regarded as almost impossible under the standard Gaussian distribution. In this article, the Kaniadakis deformation of the usual exponential function is used to define a random noise source in the dynamics of price processes capable of capturing such real market phenomena.

  7. Channel length dependence of negative-bias-illumination-stress in amorphous-indium-gallium-zinc-oxide thin-film transistors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Um, Jae Gwang; Mativenga, Mallory; Jang, Jin, E-mail: jjang@khu.ac.kr

    2015-06-21

We have investigated the dependence of negative-bias-illumination-stress (NBIS) upon channel length in amorphous-indium-gallium-zinc-oxide (a-IGZO) thin-film transistors (TFTs). The negative shift of the transfer characteristic associated with NBIS decreases with increasing channel length and is practically suppressed in devices with L = 100 μm. The effect is consistent with the creation of donor defects, mainly in the channel regions adjacent to the source and drain contacts. Excellent agreement with experiment has been obtained by an analytical treatment, approximating the distribution of donors in the active layer by a double exponential with characteristic length L{sub D} ∼ L{sub n} ∼ 10 μm, the latter being the electron diffusion length. The model also shows that a device with a non-uniform doping distribution along the active layer is, at low drain voltages, entirely equivalent to a device with the same doping averaged over the active layer length. These results highlight a new aspect of the NBIS mechanism, namely the dependence of the effect upon the relative magnitude of photogenerated holes and electrons, which is controlled by the device potential/band profile. They may also provide the basis for device design solutions to minimize NBIS.
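The averaging claim can be checked numerically with the assumed double-exponential donor profile N(x) = N0[exp(-x/L_D) + exp(-(L - x)/L_D)]. L_D = 10 μm follows the abstract's characteristic length, while N0 = 1 and the channel lengths are illustrative.

```python
import numpy as np

# Assumed double-exponential donor profile along the channel,
# N(x) = N0 * [exp(-x/L_D) + exp(-(L - x)/L_D)], peaked at the contacts.
def donor_profile(x, L, N0=1.0, LD=10.0):   # lengths in micrometers
    return N0 * (np.exp(-x / LD) + np.exp(-(L - x) / LD))

avg = {}
for L in (20.0, 50.0, 100.0):
    x = np.linspace(0.0, L, 4001)
    avg[L] = donor_profile(x, L).mean()     # approximates the channel average
```

The channel-average doping, and with it the associated threshold shift, falls roughly as 2·L_D/L once L is much larger than L_D, consistent with the suppression seen at L = 100 μm.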

  8. Human mammary epithelial cells exhibit a bimodal correlated random walk pattern.

    PubMed

    Potdar, Alka A; Jeon, Junhwan; Weaver, Alissa M; Quaranta, Vito; Cummings, Peter T

    2010-03-10

Organisms at scales ranging from unicellular to mammals have been known to exhibit foraging behavior described by random walks whose segments conform to Lévy or exponential distributions. For the first time, we present evidence that single cells (mammary epithelial cells) that exist in multi-cellular organisms (humans) follow a bimodal correlated random walk (BCRW). Cellular tracks of MCF-10A pBabe, neuN and neuT random migration on 2-D plastic substrates, analyzed using bimodal analysis, were found to reveal the BCRW pattern. We find two types of exponentially distributed correlated flights (corresponding to what we refer to as the directional and re-orientation phases), each having its own correlation between move step-lengths within flights. The exponential distribution of flight lengths was confirmed using different analysis methods (logarithmic binning with normalization, survival frequency plots and maximum likelihood estimation). Because of the presence of a non-uniform turn angle distribution of move step-lengths within a flight and two different types of flights, we propose that the epithelial random walk is a BCRW comprising two alternating modes with varying degrees of correlation, rather than a simple persistent random walk. A BCRW model rather than a simple persistent random walk correctly matches the super-diffusivity in the cell migration paths, as indicated by simulations based on the BCRW model.
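Two of the checks named above (maximum likelihood estimation and survival frequency plots) reduce to a few lines for an exponential flight-length law. The sample below is synthetic, and the mean step length is an arbitrary assumption.

```python
import numpy as np

# For an exponential flight-length law, the MLE of the rate is 1/mean, and
# the log-survival curve is a straight line with slope -rate.
rng = np.random.default_rng(0)
flights = rng.exponential(scale=12.0, size=4000)   # assumed mean step length 12

lam_hat = 1.0 / flights.mean()                     # exponential MLE of the rate

x = np.sort(flights)
surv = 1.0 - np.arange(1, x.size + 1) / (x.size + 1.0)   # survival frequencies
keep = surv > 1e-3                                 # drop the noisiest tail
slope, intercept = np.polyfit(x[keep], np.log(surv[keep]), 1)
```

Agreement between `-slope` and `lam_hat` across methods is the kind of cross-check the abstract describes; a Lévy (power-law) sample would instead show pronounced curvature in log-survival coordinates.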

  9. Fast radiative transfer models for retrieval of cloud properties in the back-scattering region: application to DSCOVR-EPIC sensor

    NASA Astrophysics Data System (ADS)

    Molina Garcia, Victor; Sasi, Sruthy; Efremenko, Dmitry; Doicu, Adrian; Loyola, Diego

    2017-04-01

    In this work, the requirements for the retrieval of cloud properties in the back-scattering region are described, and their application to the measurements taken by the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) is shown. Various radiative transfer models and their linearizations are implemented, and their advantages and issues are analyzed. As radiative transfer calculations in the back-scattering region are computationally time-consuming, several acceleration techniques are also studied. The radiative transfer models analyzed include the exact Discrete Ordinate method with Matrix Exponential (DOME), the Matrix Operator method with Matrix Exponential (MOME), and the approximate asymptotic and equivalent Lambertian cloud models. To reduce the computational cost of the line-by-line (LBL) calculations, the k-distribution method, the Principal Component Analysis (PCA) and a combination of the k-distribution method plus PCA are used. The linearized radiative transfer models for retrieval of cloud properties include the Linearized Discrete Ordinate method with Matrix Exponential (LDOME), the Linearized Matrix Operator method with Matrix Exponential (LMOME) and the Forward-Adjoint Discrete Ordinate method with Matrix Exponential (FADOME). These models were applied to the EPIC oxygen-A band absorption channel at 764 nm. It is shown that the approximate asymptotic and equivalent Lambertian cloud models give inaccurate results, so an offline processor for the retrieval of cloud properties in the back-scattering region requires the use of exact models such as DOME and MOME, which behave similarly. The combination of the k-distribution method plus PCA presents similar accuracy to the LBL calculations, but it is up to 360 times faster, and the relative errors for the computed radiances are less than 1.5% compared to the results when the exact phase function is used. 
Finally, the linearized models studied show similar behavior, with relative errors less than 1% for the radiance derivatives, but FADOME is 2 times faster than LDOME and 2.5 times faster than LMOME.

  10. Numerical analysis of the unintegrated double gluon distribution

    NASA Astrophysics Data System (ADS)

    Elias, Edgar; Golec-Biernat, Krzysztof; Staśto, Anna M.

    2018-01-01

We present a detailed numerical analysis of the unintegrated double gluon distribution which includes the dependence on the transverse momenta of partons. The unintegrated double gluon distribution was obtained following the Kimber-Martin-Ryskin method as a convolution of the perturbative gluon splitting function with the collinear integrated double gluon distribution and the Sudakov form factors. We analyze the dependence on the transverse momenta, longitudinal momentum fractions and hard scales. We find that the unintegrated double gluon distribution factorizes into a product of two single unintegrated gluon distributions in the region of small values of x, provided the splitting contribution is included and the momentum sum rule is satisfied.

  11. Atmospheric Transmittance/Radiance: Computer Code LOWTRAN 6

    DTIC Science & Technology

    1983-08-01

(1966) The refractive index of air, Metrologia 2:12. [...] For an optical path traversing N layers in an upward or downward direction this process gives [garbled OCR omitted] ... an expression for cirrus normal transmittance, τ, of the form τ = exp[-(0.14 LA)] (49). This expression closely duplicates the double exponential model of Davis for

  12. Kinetics of DNA Tile Dimerization

    PubMed Central

    2015-01-01

    Investigating how individual molecular components interact with one another within DNA nanoarchitectures, both in terms of their spatial and temporal interactions, is fundamentally important for a better understanding of their physical behaviors. This will provide researchers with valuable insight for designing more complex higher-order structures that can be assembled more efficiently. In this report, we examined several spatial factors that affect the kinetics of bivalent, double-helical (DH) tile dimerization, including the orientation and number of sticky ends (SEs), the flexibility of the double helical domains, and the size of the tiles. The rate constants we obtained confirm our hypothesis that increased nucleation opportunities and well-aligned SEs accelerate tile–tile dimerization. Increased flexibility in the tiles causes slower dimerization rates, an effect that can be reversed by introducing restrictions to the tile flexibility. The higher dimerization rates of more rigid tiles results from the opposing effects of higher activation energies and higher pre-exponential factors from the Arrhenius equation, where the pre-exponential factor dominates. We believe that the results presented here will assist in improved implementation of DNA tile based algorithmic self-assembly, DNA based molecular robotics, and other specific nucleic acid systems, and will provide guidance to design and assembly processes to improve overall yield and efficiency. PMID:24794259

  13. Kinetics of DNA tile dimerization.

    PubMed

    Jiang, Shuoxing; Yan, Hao; Liu, Yan

    2014-06-24

    Investigating how individual molecular components interact with one another within DNA nanoarchitectures, both in terms of their spatial and temporal interactions, is fundamentally important for a better understanding of their physical behaviors. This will provide researchers with valuable insight for designing more complex higher-order structures that can be assembled more efficiently. In this report, we examined several spatial factors that affect the kinetics of bivalent, double-helical (DH) tile dimerization, including the orientation and number of sticky ends (SEs), the flexibility of the double helical domains, and the size of the tiles. The rate constants we obtained confirm our hypothesis that increased nucleation opportunities and well-aligned SEs accelerate tile-tile dimerization. Increased flexibility in the tiles causes slower dimerization rates, an effect that can be reversed by introducing restrictions to the tile flexibility. The higher dimerization rates of more rigid tiles results from the opposing effects of higher activation energies and higher pre-exponential factors from the Arrhenius equation, where the pre-exponential factor dominates. We believe that the results presented here will assist in improved implementation of DNA tile based algorithmic self-assembly, DNA based molecular robotics, and other specific nucleic acid systems, and will provide guidance to design and assembly processes to improve overall yield and efficiency.

  14. Compensation of strong thermal lensing in high-optical-power cavities.

    PubMed

    Zhao, C; Degallaix, J; Ju, L; Fan, Y; Blair, D G; Slagmolen, B J J; Gray, M B; Lowry, C M Mow; McClelland, D E; Hosken, D J; Mudge, D; Brooks, A; Munch, J; Veitch, P J; Barton, M A; Billingsley, G

    2006-06-16

In an experiment to simulate the conditions in high-optical-power advanced gravitational wave detectors, we show for the first time that the time evolution of strong thermal lenses follows the predicted infinite sum of exponentials (approximated by a double exponential), and that such lenses can be compensated using an intracavity compensation plate heated on its cylindrical surface. We show that a high finesse of approximately 1400 can be achieved in cavities with internal compensation plates, and that mode matching can be maintained. The experiment achieves a wave front distortion similar to that expected for the input test mass substrate in the Advanced Laser Interferometer Gravitational Wave Observatory, and shows that thermal compensation schemes are viable. It is also shown that the measurements allow a direct determination of the substrate optical absorption in the test mass and the compensation plate.
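The time-evolution fit mentioned above can be sketched as a four-parameter double-exponential regression. The amplitudes, time constants and noise level below are illustrative, not the experiment's values.

```python
import numpy as np
from scipy.optimize import curve_fit

# Double-exponential approximation to thermal-lens turn-on: a fast and a
# slow saturating component. All parameter values are illustrative.
def double_exp(t, a1, tau1, a2, tau2):
    return a1 * (1.0 - np.exp(-t / tau1)) + a2 * (1.0 - np.exp(-t / tau2))

t = np.linspace(0.0, 600.0, 400)                 # seconds
true_params = (0.7, 30.0, 0.3, 200.0)            # fast and slow components
rng = np.random.default_rng(5)
signal = double_exp(t, *true_params) + rng.normal(0.0, 0.005, t.size)

popt, _ = curve_fit(double_exp, t, signal, p0=[0.5, 20.0, 0.5, 150.0],
                    bounds=([0.0, 1.0, 0.0, 1.0], [2.0, 1000.0, 2.0, 1000.0]))
```

The two recovered time constants separate the fast thermal response from the slow approach to steady state, which is the sense in which a double exponential approximates the infinite sum.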

  15. Computerized glow curve deconvolution of thermoluminescent emission from polyminerals of Jamaica Mexican flower

    NASA Astrophysics Data System (ADS)

    Favalli, A.; Furetta, C.; Zaragoza, E. Cruz; Reyes, A.

The aim of this work is to study the main thermoluminescence (TL) characteristics of the inorganic polyminerals extracted from dehydrated Jamaica flower or roselle (Hibiscus sabdariffa L.), belonging to the Malvaceae family, of Mexican origin. The TL emission properties of the polymineral fraction in powder form were studied using the initial rise (IR) method. The complex structure and kinetic parameters of the glow curves have been analysed accurately using computerized glow curve deconvolution (CGCD), assuming an exponential distribution of trapping levels. The extension of the IR method to the case of a continuous, exponential distribution of traps is reported, together with the derivation of the TL glow curve deconvolution functions for a continuous trap distribution. CGCD is performed both for a temperature-independent frequency factor, s, and for s as a function of temperature.
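A sketch of the underlying first-order (Randall-Wilkins) glow-curve model, numerically averaged over an exponential distribution of trap depths. The kinetic parameters (E0 = 1 eV, s = 1e12 1/s, width 0.03 eV, heating rate 1 K/s) are illustrative, not the fitted values for the polymineral samples.

```python
import numpy as np

# Randall-Wilkins first-order glow curve, numerically averaged over an
# exponential distribution of trap depths E (illustrative parameters).
kB = 8.617e-5                       # Boltzmann constant, eV/K
s, beta_r = 1e12, 1.0               # frequency factor (1/s), heating rate (K/s)
T = np.linspace(300.0, 600.0, 601)  # temperature ramp, K

def glow_first_order(E):
    boltz = np.exp(-E / (kB * T))
    # crude cumulative quadrature of the trap-exhaustion integral
    integral = np.cumsum(boltz) * (T[1] - T[0])
    return s * boltz * np.exp(-(s / beta_r) * integral)

E0, dE = 1.0, 0.03                                 # trap depth and width, eV
E_grid = np.linspace(E0, E0 + 5 * dE, 40)
weights = np.exp(-(E_grid - E0) / dE)              # exponential trap distribution
weights /= weights.sum()
glow = sum(w * glow_first_order(E) for w, E in zip(weights, E_grid))
T_peak = T[np.argmax(glow)]
```

Averaging over the exponential tail of deeper traps broadens the peak and shifts it to slightly higher temperature than the single-trap curve, which is what the continuous-distribution deconvolution functions must account for.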

  16. Empirical analysis of individual popularity and activity on an online music service system

    NASA Astrophysics Data System (ADS)

    Hu, Hai-Bo; Han, Ding-Yi

    2008-10-01

Quantitative understanding of human behaviors supplies a basic comprehension of the dynamics of many socio-economic systems. Based on the log data of an online music service system, we investigate the statistical characteristics of individual activity and popularity, and find that the distributions of both follow a stretched exponential form, which interpolates between exponential and power law distributions. We also study the human dynamics on the online system and find that the distribution of interevent times between two consecutive listenings shows a fat tail. Moreover, as user activity decreases, the fat tail becomes increasingly irregular, indicating different behavior patterns for users with diverse activity levels. These results may shed some light on the in-depth understanding of collective behaviors in socio-economic systems.
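A stretched-exponential law is easy to fit to a survival (complementary CDF) curve. The sketch below generates synthetic "activity" data by inverse-transform sampling and recovers the two parameters with `scipy.optimize.curve_fit`; all values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

# Stretched-exponential survival law S(x) = exp(-(x/x0)**beta): beta = 1 is
# pure exponential, beta < 1 gives the fatter tail seen for activity data.
rng = np.random.default_rng(7)
x0_true, beta_true = 50.0, 0.6
u = rng.uniform(size=20_000)
activity = x0_true * (-np.log(u)) ** (1.0 / beta_true)   # inverse transform

xs = np.sort(activity)
surv = 1.0 - np.arange(1, xs.size + 1) / (xs.size + 1.0)  # empirical survival

def stretched_surv(x, x0, beta):
    return np.exp(-(x / x0) ** beta)

keep = surv > 1e-3
(x0_hat, beta_hat), _ = curve_fit(stretched_surv, xs[keep], surv[keep],
                                  p0=[30.0, 1.0],
                                  bounds=([1.0, 0.1], [500.0, 2.0]))
```

A fitted exponent well below 1 is the signature of the interpolating, fat-tailed form the abstract reports for both activity and popularity.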

  17. Recursive least squares estimation and its application to shallow trench isolation

    NASA Astrophysics Data System (ADS)

    Wang, Jin; Qin, S. Joe; Bode, Christopher A.; Purdy, Matthew A.

    2003-06-01

In recent years, run-to-run (R2R) control technology has received tremendous interest in semiconductor manufacturing. One class of widely used run-to-run controllers is based on the exponentially weighted moving average (EWMA) statistic to estimate process deviations. Using an EWMA filter to smooth the control action on a linear process has been shown to provide good results in a number of applications. However, for a process with severe drift, the EWMA controller is insufficient even when large weights are used. This problem becomes more severe when there is measurement delay, which is almost inevitable in the semiconductor industry. In order to control drifting processes, a predictor-corrector controller (PCC) and a double-EWMA controller have been developed. Chen and Guo (2001) show that both the PCC and the double-EWMA controller are in effect integral-double-integral (I-II) controllers, which are able to control drifting processes. However, since the offset is often within the noise of the process, the second integrator can actually cause jittering. Moreover, tuning the second filter is not as intuitive as tuning a single EWMA filter. In this work, we examine an alternative approach, recursive least squares (RLS), to estimate and control the drifting process. EWMA and double-EWMA are shown to be the least squares estimates for the locally constant mean model and the locally constant linear trend model, respectively. Recursive least squares with an exponential forgetting factor is then applied to the shallow trench isolation etch process to predict the future etch rate. The etch process, which is a critical step in flash memory manufacturing, is known to suffer from significant etch rate drift due to chamber seasoning. In order to handle the metrology delay, we propose a new time update scheme. RLS with the new time update method gives very good results: the estimation error variance is smaller than that from EWMA, and the mean square error decreases by more than 10% compared to EWMA.
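The contrast between an EWMA filter (locally constant mean) and RLS with an exponential forgetting factor (locally linear trend) can be sketched on a synthetic drifting process. The drift, noise, and tuning constants are illustrative, not the etch-process values.

```python
import numpy as np

# Synthetic drifting process y_t = a + b*t + noise.
rng = np.random.default_rng(3)
a, b, n = 100.0, 0.5, 300                    # intercept, drift per run, runs
y = a + b * np.arange(n) + rng.normal(0.0, 1.0, n)

# EWMA: locally-constant-mean estimate, which lags a drifting process.
lam = 0.3
ewma = np.empty(n)
ewma[0] = y[0]
for t in range(1, n):
    ewma[t] = lam * y[t] + (1.0 - lam) * ewma[t - 1]

# RLS with forgetting factor rho on the locally linear model y = theta . [1, t].
rho = 0.98
theta = np.zeros(2)                          # [intercept, slope]
P = np.eye(2) * 1e4
rls_pred = np.empty(n)
for t in range(n):
    phi = np.array([1.0, float(t)])
    rls_pred[t] = phi @ theta                # one-step-ahead prediction
    k = P @ phi / (rho + phi @ P @ phi)      # gain
    theta = theta + k * (y[t] - phi @ theta)
    P = (P - np.outer(k, phi) @ P) / rho

err_ewma = np.mean((y[100:] - ewma[99:-1]) ** 2)  # EWMA as one-step predictor
err_rls = np.mean((y[100:] - rls_pred[100:]) ** 2)
```

Because the EWMA predictor carries a steady-state lag of about b(1 - lam)/lam on a drifting line, its prediction error stays above the RLS error once the trend estimate has converged.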

  18. A partial exponential lumped parameter model to evaluate groundwater age distributions and nitrate trends in long-screened wells

    USGS Publications Warehouse

    Jurgens, Bryant; Böhlke, John Karl; Kauffman, Leon J.; Belitz, Kenneth; Esser, Bradley K.

    2016-01-01

A partial exponential lumped parameter model (PEM) was derived to determine age distributions and nitrate trends in long-screened production wells. The PEM can simulate age distributions for wells screened over any finite interval of an aquifer that has an exponential distribution of age with depth. The PEM has three parameters (the ratios of the saturated thickness to the depths of the top and bottom of the screen, and the mean age), but these can be reduced to one parameter (mean age) by using well construction information and estimates of the saturated thickness. The PEM was tested with data from 30 production wells in a heterogeneous alluvial fan aquifer in California, USA. Well construction data were used to guide parameterization of a PEM for each well and mean age was calibrated to measured environmental tracer data (3H, 3He, CFC-113, and 14C). Results were compared to age distributions generated for individual wells using advective particle tracking models (PTMs). Age distributions from PTMs were more complex than PEM distributions, but PEMs provided better fits to tracer data, partly because the PTMs did not simulate 14C accurately in wells that captured varying amounts of old groundwater recharged at lower rates prior to groundwater development and irrigation. Nitrate trends were simulated independently of the calibration process and the PEM provided good fits for at least 11 of 24 wells. This work shows that the PEM, and lumped parameter models (LPMs) in general, can often identify critical features of the age distributions in wells that are needed to explain observed tracer data and nonpoint source contaminant trends, even in systems where aquifer heterogeneity and water use complicate distributions of age. While accurate PTMs are preferable for understanding and predicting aquifer-scale responses to water use and contaminant transport, LPMs can be sensitive to local conditions near individual wells that may be inaccurately represented or missing in an aquifer-scale flow model.
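The geometric idea of the PEM can be sketched with a Monte-Carlo toy: in an idealized homogeneous unconfined aquifer the age at fractional depth z below the water table is t(z) = tau * ln(1/(1 - z)), so a fully screened well recovers the exponential age distribution while a partial screen samples only a slice of it. The mean age tau and the screen intervals below are illustrative.

```python
import numpy as np

# Monte-Carlo toy of a partial exponential model (PEM) for a well screened
# between fractional depths z_top and z_bot of an idealized aquifer.
rng = np.random.default_rng(11)
tau = 25.0                                    # aquifer mean age, years (assumed)

def pem_ages(z_top, z_bot, n=200_000):
    z = rng.uniform(z_top, z_bot, n)          # uniform inflow over the screen
    return tau * np.log(1.0 / (1.0 - z))      # age increases with depth

full = pem_ages(0.0, 0.999)                   # nearly fully screened well
deep = pem_ages(0.5, 0.9)                     # screen across the deeper half
```

The nearly full screen reproduces the exponential model's mean age, while the deeper partial screen yields an older, narrower slice of the same distribution, which is the effect the PEM's screen-depth parameters capture.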

  19. Development of a golden beam data set for the commissioning of a proton double-scattering system in a pencil-beam dose calculation algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slopsema, R. L., E-mail: rslopsema@floridaproton.org; Flampouri, S.; Yeung, D.

    2014-09-15

Purpose: The purpose of this investigation is to determine if a single set of beam data, described by a minimal set of equations and fitting variables, can be used to commission different installations of a proton double-scattering system in a commercial pencil-beam dose calculation algorithm. Methods: The beam model parameters required to commission the pencil-beam dose calculation algorithm (virtual and effective SAD, effective source size, and pristine-peak energy spread) are determined for a commercial double-scattering system. These parameters are measured in a first room and parameterized as functions of proton energy and nozzle settings by fitting four analytical equations to the measured data. The combination of these equations and fitting values constitutes the golden beam data (GBD). To determine the variation in dose delivery between installations, the same dosimetric properties are measured in two additional rooms at the same facility, as well as in a single room at another facility. The difference between the room-specific measurements and the GBD is evaluated against tolerances that guarantee the 3D dose distribution in each of the rooms matches the GBD-based dose distribution within clinically reasonable limits. The pencil-beam treatment-planning algorithm is commissioned with the GBD. The three-dimensional dose distribution in water is evaluated in the four treatment rooms and compared to the treatment-planning calculated dose distribution. Results: The virtual and effective SAD measurements fall between 226 and 257 cm. The effective source size varies between 2.4 and 6.2 cm for the large-field options, and 1.0 and 2.0 cm for the small-field options. The pristine-peak energy spread decreases from 1.05% at the lowest range to 0.6% at the highest. The virtual SAD as well as the effective source size can be accurately described by a linear relationship as a function of the inverse of the residual energy.
An additional linear correction term as a function of RM-step thickness is required for accurate parameterization of the effective SAD. The GBD energy spread is given by a linear function of the exponential of the beam energy. Except for a few outliers, the measured parameters match the GBD within the specified tolerances in all four rooms investigated. For a SOBP field with a range of 15 g/cm{sup 2} and an air gap of 25 cm, the maximum difference in the 80%–20% lateral penumbra between the GBD-commissioned treatment-planning system and measurements in any of the four rooms is 0.5 mm. Conclusions: The beam model parameters of the double-scattering system can be parameterized with a limited set of equations and parameters. This GBD closely matches the measured dosimetric properties in four different rooms.

  20. Improving Bed Management at Wright-Patterson Medical Center

    DTIC Science & Technology

    1989-09-01

arrival distributions are Poisson, as in Sim2, then interarrival times are distributed exponentially (Budnick, McLeavey, and Mojena, 1988:770). While... McLeavey, D. and Mojena, R., Principles of Operations Research for Management (second edition). Homewood, IL: Irwin, 1988. Cannoodt, L. J. and

  1. Compact continuous-variable entanglement distillation.

    PubMed

    Datta, Animesh; Zhang, Lijian; Nunn, Joshua; Langford, Nathan K; Feito, Alvaro; Plenio, Martin B; Walmsley, Ian A

    2012-02-10

We introduce a new scheme for continuous-variable entanglement distillation that requires only linear temporal and constant physical or spatial resources. Distillation is the process by which high-quality entanglement may be distributed between distant nodes of a network in the unavoidable presence of decoherence. The known versions of this protocol scale exponentially in space and doubly exponentially in time. Our optimal scheme therefore provides exponential improvements over existing protocols. It uses a fixed-resource module, an entanglement distillery, comprising only four quantum memories of at most 50% storage efficiency, allowing a feasible experimental implementation. Tangible quantum advantages are obtainable by using existing off-resonant Raman quantum memories outside their conventional role of storage.

  2. Liver fibrosis: stretched exponential model outperforms mono-exponential and bi-exponential models of diffusion-weighted MRI.

    PubMed

    Seo, Nieun; Chung, Yong Eun; Park, Yung Nyun; Kim, Eunju; Hwang, Jinwoo; Kim, Myeong-Jin

    2018-07-01

To compare the ability of diffusion-weighted imaging (DWI) parameters acquired from three different models for the diagnosis of hepatic fibrosis (HF). Ninety-five patients underwent DWI using nine b values at 3 T. The hepatic apparent diffusion coefficient (ADC) from a mono-exponential model; the true diffusion coefficient (Dt), pseudo-diffusion coefficient (Dp) and perfusion fraction (f) from a bi-exponential model; and the distributed diffusion coefficient (DDC) and intravoxel heterogeneity index (α) from a stretched exponential model were compared with the pathological HF stage. For the stretched exponential model, parameters were also obtained using a dataset of six b values (DDC#, α#). The diagnostic performances of the parameters for HF staging were evaluated with Obuchowski measures and receiver operating characteristic (ROC) analysis. The measurement variability of the DWI parameters was evaluated using the coefficient of variation (CoV). Diagnostic accuracy for HF staging was highest for DDC# (Obuchowski measure, 0.770 ± 0.03), and it was significantly higher than that of ADC (0.597 ± 0.05, p < 0.001), Dt (0.575 ± 0.05, p < 0.001) and f (0.669 ± 0.04, p = 0.035). The parameters from stretched exponential DWI and Dp showed higher areas under the ROC curve (AUCs) for determining significant fibrosis (≥F2) and cirrhosis (F = 4) than the other parameters. However, Dp showed significantly higher measurement variability (CoV, 74.6%) than DDC# (16.1%, p < 0.001) and α# (15.1%, p < 0.001). Stretched exponential DWI is a promising method for HF staging with good diagnostic performance and fewer b-value acquisitions, allowing shorter acquisition time. • Stretched exponential DWI provides a precise and accurate model for HF staging. • Stretched exponential DWI parameters are more reliable than Dp from the bi-exponential DWI model. • Acquisition of six b values is sufficient to obtain accurate DDC and α.
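The stretched exponential signal model S(b) = S0 * exp(-(b * DDC)^α) can be fitted voxel-wise with a standard nonlinear least-squares call. The six b values and tissue parameters below are illustrative, not the study's acquisition.

```python
import numpy as np
from scipy.optimize import curve_fit

# Stretched exponential DWI model S(b) = S0 * exp(-(b*DDC)**alpha),
# fitted to six synthetic b values (illustrative parameters).
b = np.array([0.0, 50.0, 100.0, 200.0, 400.0, 800.0])   # s/mm^2
DDC_true, alpha_true, S0_true = 1.2e-3, 0.75, 1000.0     # DDC in mm^2/s

def stretched(b, S0, DDC, alpha):
    return S0 * np.exp(-(b * DDC) ** alpha)

rng = np.random.default_rng(2)
signal = stretched(b, S0_true, DDC_true, alpha_true) \
    * (1.0 + rng.normal(0.0, 0.002, b.size))             # mild multiplicative noise

(S0_hat, DDC_hat, alpha_hat), _ = curve_fit(
    stretched, b, signal, p0=[900.0, 1.0e-3, 0.9],
    bounds=([0.0, 1e-5, 0.1], [2000.0, 1e-2, 1.0]))
```

With only six well-spaced b values the two stretched-exponential parameters remain identifiable, which is consistent with the abstract's finding that the reduced acquisition suffices for accurate DDC and α.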

  3. The Comparison Study of Quadratic Infinite Beam Program on Optimization Instensity Modulated Radiation Therapy Treatment Planning (IMRTP) between Threshold and Exponential Scatter Method with CERR® In The Case of Lung Cancer

    NASA Astrophysics Data System (ADS)

    Hardiyanti, Y.; Haekal, M.; Waris, A.; Haryanto, F.

    2016-08-01

This research compares the quadratic optimization program for Intensity Modulated Radiation Therapy Treatment Planning (IMRTP) with the Computational Environment for Radiotherapy Research (CERR) software. The treatment plans used 9 and 13 beams, with an energy of 6 MV and a source-skin distance (SSD) of 100 cm to the target volume. Dose calculation used the Quadratic Infinite Beam (QIB) method from CERR, which was used to compare the Gauss Primary threshold method with the Gauss Primary exponential scatter method. In the case of lung cancer, threshold values of 0.01 and 0.004 were used. The dose distributions were analyzed in the form of dose-volume histograms (DVH) from CERR. With the exponential method and 9 beams, the maximum dose distributions were obtained on the Planning Target Volume (PTV), Clinical Target Volume (CTV), Gross Tumor Volume (GTV), liver, and skin; with the threshold method and 13 beams, the maximum dose distributions were obtained on the PTV, GTV, heart, and skin.

  4. A Hierarchical Bayesian Model for Calibrating Estimates of Species Divergence Times

    PubMed Central

    Heath, Tracy A.

    2012-01-01

    In Bayesian divergence time estimation methods, incorporating calibrating information from the fossil record is commonly done by assigning prior densities to ancestral nodes in the tree. Calibration prior densities are typically parametric distributions offset by minimum age estimates provided by the fossil record. Specification of the parameters of calibration densities requires the user to quantify his or her prior knowledge of the age of the ancestral node relative to the age of its calibrating fossil. The values of these parameters can, potentially, result in biased estimates of node ages if they lead to overly informative prior distributions. Accordingly, determining parameter values that lead to adequate prior densities is not straightforward. In this study, I present a hierarchical Bayesian model for calibrating divergence time analyses with multiple fossil age constraints. This approach applies a Dirichlet process prior as a hyperprior on the parameters of calibration prior densities. Specifically, this model assumes that the rate parameters of exponential prior distributions on calibrated nodes are distributed according to a Dirichlet process, whereby the rate parameters are clustered into distinct parameter categories. Both simulated and biological data are analyzed to evaluate the performance of the Dirichlet process hyperprior. Compared with fixed exponential prior densities, the hierarchical Bayesian approach results in more accurate and precise estimates of internal node ages. When this hyperprior is applied using Markov chain Monte Carlo methods, the ages of calibrated nodes are sampled from mixtures of exponential distributions and uncertainty in the values of calibration density parameters is taken into account. PMID:22334343

  5. A comparative study of mixed exponential and Weibull distributions in a stochastic model replicating a tropical rainfall process

    NASA Astrophysics Data System (ADS)

    Abas, Norzaida; Daud, Zalina M.; Yusof, Fadhilah

    2014-11-01

    A stochastic rainfall model is presented for the generation of hourly rainfall data in an urban area in Malaysia. In view of the high temporal and spatial variability of rainfall within the tropical rain belt, the Spatial-Temporal Neyman-Scott Rectangular Pulse model was used. The model, which is governed by the Neyman-Scott process, employs a reasonable number of parameters to represent the physical attributes of rainfall. A common approach is to attach each attribute to a mathematical distribution. With respect to rain cell intensity, this study proposes the use of a mixed exponential distribution. The performance of the proposed model was compared to a model that employs the Weibull distribution. Hourly and daily rainfall data from four stations in the Damansara River basin in Malaysia were used as input to the models, and simulations of hourly series were performed for an independent site within the basin. The performance of the models was assessed based on how closely the statistical characteristics of the simulated series resembled the statistics of the observed series. The findings obtained based on graphical representation revealed that the statistical characteristics of the simulated series for both models compared reasonably well with the observed series. However, a further assessment using the AIC, BIC and RMSE showed that the proposed model yields better results. The results of this study indicate that for tropical climates, the proposed model, using a mixed exponential distribution, is the best choice for generation of synthetic data for ungauged sites or for sites with insufficient data within the limit of the fitted region.
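
    A minimal sketch of the intensity comparison: fit a two-component mixed exponential (via EM) and a Weibull distribution to the same synthetic intensity sample, then compare AIC values. The mixture parameters and the sample are illustrative, not the Damansara basin data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic rain-cell intensities from a two-component mixed exponential;
# the mixing weight and component means are illustrative values only.
w_true, m1_true, m2_true = 0.7, 1.0, 8.0
n = 5000
from_first = rng.random(n) < w_true
x = np.where(from_first, rng.exponential(m1_true, n), rng.exponential(m2_true, n))

def fit_mixed_exponential(x, iters=200):
    """EM for a two-component exponential mixture; returns (w, mu1, mu2)."""
    w, mu1, mu2 = 0.5, np.quantile(x, 0.25), np.quantile(x, 0.75)
    for _ in range(iters):
        p1 = w * stats.expon.pdf(x, scale=mu1)
        p2 = (1.0 - w) * stats.expon.pdf(x, scale=mu2)
        r = p1 / (p1 + p2)                   # responsibility of component 1
        w = r.mean()
        mu1 = np.sum(r * x) / np.sum(r)
        mu2 = np.sum((1.0 - r) * x) / np.sum(1.0 - r)
    return w, mu1, mu2

w, mu1, mu2 = fit_mixed_exponential(x)
ll_mix = np.sum(np.log(w * stats.expon.pdf(x, scale=mu1)
                       + (1.0 - w) * stats.expon.pdf(x, scale=mu2)))
aic_mix = 2 * 3 - 2 * ll_mix                 # three free parameters

c, loc, scale = stats.weibull_min.fit(x, floc=0)
ll_wei = np.sum(stats.weibull_min.logpdf(x, c, loc, scale))
aic_wei = 2 * 2 - 2 * ll_wei                 # shape and scale

print(aic_mix, aic_wei)                      # lower AIC favors the mixture
```

    On data actually generated from a mixed exponential, the mixture's AIC should come out lower, mirroring the kind of information-criterion comparison reported in the study.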

  6. Very slow lava extrusion continued for more than five years after the 2011 Shinmoedake eruption observed from SAR interferometry

    NASA Astrophysics Data System (ADS)

    Ozawa, T.; Miyagi, Y.

    2017-12-01

    Shinmoe-dake, located in SW Japan, erupted in January 2011, and lava accumulated in the crater (e.g., Ozawa and Kozono, EPS, 2013). The last Vulcanian eruption occurred in September 2011, and no eruption has occurred since. Miyagi et al. (GRL, 2014) analyzed TerraSAR-X and Radarsat-2 SAR data acquired after the last eruption and found continuous inflation in the crater. The inflation decayed with time but had not terminated by May 2013. Since the time series of the inflation volume change rate fitted well to an exponential function with a constant term, we suggested that lava extrusion had continued over the long term due to deflation of a shallow magma source and to magma supply from a deeper source. To investigate the subsequent deformation, we applied InSAR to Sentinel-1 and ALOS-2 SAR data. The inflation decayed further and had almost terminated by the end of 2016, meaning that this deformation continued for more than five years after the last eruption. We have found that the time series of the inflation volume change rate fits a double-exponential function better than a single-exponential function with a constant term. The exponential component with the short time constant had almost settled within one year of the last eruption. Although the InSAR result from TerraSAR-X data of November 2011 and May 2013 indicated deflation of a shallow source under the crater, such deformation has not been observed in recent SAR data. This suggests that the short-time-constant component was due to deflation of a shallow magma source with excess pressure. In this study, we also found that the long-term component may decay exponentially; this component may reflect deflation of a deep source or delayed vesiculation.
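
    The model comparison described above can be sketched by fitting both functional forms to a synthetic volume-change-rate series with scipy.optimize.curve_fit; the amplitudes, time constants, and noise level are illustrative, not the values estimated from the InSAR data.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Synthetic inflation-volume-rate series with short and long decay time
# constants; all numerical values here are illustrative stand-ins.
t = np.linspace(0.0, 60.0, 120)                 # months after the last eruption
rate = (5.0 * np.exp(-t / 3.0) + 1.0 * np.exp(-t / 24.0)
        + rng.normal(0.0, 0.05, t.size))

def single_exp(t, a, tau, c):
    """Single exponential with a constant term."""
    return a * np.exp(-t / tau) + c

def double_exp(t, a1, tau1, a2, tau2):
    """Sum of two exponentials with short and long time constants."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

p_single, _ = curve_fit(single_exp, t, rate, p0=(5.0, 5.0, 0.5), maxfev=10000)
p_double, _ = curve_fit(double_exp, t, rate, p0=(5.0, 2.0, 1.0, 20.0), maxfev=10000)

rss_single = np.sum((rate - single_exp(t, *p_single)) ** 2)
rss_double = np.sum((rate - double_exp(t, *p_double)) ** 2)
print(rss_single, rss_double)   # the double-exponential fit is closer
```

    Comparing residual sums of squares (or an information criterion) between the two forms is the same kind of test that distinguishes the single-exponential-plus-constant model from the double-exponential one.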

  7. Unfolding of Ubiquitin Studied by Picosecond Time-Resolved Fluorescence of the Tyrosine Residue

    PubMed Central

    Noronha, Melinda; Lima, João C.; Bastos, Margarida; Santos, Helena; Maçanita, António L.

    2004-01-01

    The photophysics of the single tyrosine in bovine ubiquitin (UBQ) was studied by picosecond time-resolved fluorescence spectroscopy, as a function of pH and along thermal and chemical unfolding, with the following results: First, at room temperature (25°C) and below pH 1.5, native UBQ shows single-exponential decays. From pH 2 to 7, triple-exponential decays were observed and the three decay times were attributed to the presence of tyrosine, a tyrosine-carboxylate hydrogen-bonded complex, and excited-state tyrosinate. Second, at pH 1.5, the water-exposed tyrosine of either thermally or chemically unfolded UBQ decays as a sum of two exponentials. The double-exponential decays were interpreted and analyzed in terms of excited-state intramolecular electron transfer from the phenol to the amide moiety, occurring in one of the three rotamers of tyrosine in UBQ. The values of the rate constants indicate the presence of different unfolded states and an increase in the mobility of the tyrosine residue during unfolding. Finally, from the pre-exponential coefficients of the fluorescence decays, the unfolding equilibrium constants (KU) were calculated, as a function of temperature or denaturant concentration. Despite the presence of different unfolded states, both thermal and chemical unfolding data of UBQ could be fitted to a two-state model. The thermodynamic parameters Tm = 54.6°C, ΔHTm = 56.5 kcal/mol, and ΔCp = 890 cal/mol/K were determined from the unfolding equilibrium constants calculated accordingly, and compared to values obtained by differential scanning calorimetry also under the assumption of a two-state transition, Tm = 57.0°C, ΔHm = 51.4 kcal/mol, and ΔCp = 730 cal/mol/K. PMID:15454455

  8. The social architecture of capitalism

    NASA Astrophysics Data System (ADS)

    Wright, Ian

    2005-02-01

    A dynamic model of the social relations between workers and capitalists is introduced. The model self-organises into a dynamic equilibrium with statistical properties that are in close qualitative and in many cases quantitative agreement with a broad range of known empirical distributions of developed capitalism, including the power-law firm size distribution, the Laplace firm and GDP growth distribution, the lognormal firm demises distribution, the exponential recession duration distribution, the lognormal-Pareto income distribution, and the gamma-like firm rate-of-profit distribution. Normally these distributions are studied in isolation, but this model unifies and connects them within a single causal framework. The model also generates business cycle phenomena, including fluctuating wage and profit shares in national income about values consistent with empirical studies. The generation of an approximately lognormal-Pareto income distribution and an exponential-Pareto wealth distribution demonstrates that the power-law regime of the income distribution can be explained by an additive process on a power-law network that models the social relation between employers and employees organised in firms, rather than a multiplicative process that models returns to investment in financial markets. A testable consequence of the model is the conjecture that the rate-of-profit distribution is consistent with a parameter-mix of a ratio of normal variates with means and variances that depend on a firm size parameter that is distributed according to a power-law.

  9. Network structures sustained by internal links and distributed lifetime of old nodes in stationary state of number of nodes

    NASA Astrophysics Data System (ADS)

    Ikeda, Nobutoshi

    2017-12-01

    In network models that take into account growth properties, deletion of old nodes has a serious impact on degree distributions, because old nodes tend to become hub nodes. In this study, we aim to provide a simple explanation for why hubs can exist even in conditions where the number of nodes is stationary due to the deletion of old nodes. We show that an exponential increase in the degree of nodes is a natural consequence of the balance between the deletion and addition of nodes as long as a preferential attachment mechanism holds. As a result, the largest degree is determined by the magnitude relationship between the time scale of the exponential growth of degrees and the lifetime of old nodes. The degree distribution exhibits a power-law form ~k^(-γ) with exponent γ = 1 when the lifetime of nodes is constant. However, various values of γ can be realized by introducing a distributed lifetime of nodes.
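
    The γ = 1 case for constant lifetime can be checked numerically: if node ages are uniform over a fixed lifetime (stationary node number) and degree grows exponentially with age, the sampled degrees follow a k^(-1) density. All parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

# Node ages uniform on [0, L] (constant lifetime, stationary node number)
# with exponentially growing degree k(age) = k0 * exp(age / T).
# k0, T, and L are illustrative values.
k0, T, L = 2.0, 10.0, 60.0
ages = rng.uniform(0.0, L, 200_000)
k = k0 * np.exp(ages / T)

# histogram in logarithmic bins; a k^(-1) density falls one decade per decade
bins = np.logspace(np.log10(k0), np.log10(k0 * np.exp(L / T)), 20)
counts, edges = np.histogram(k, bins=bins)
density = counts / np.diff(edges)            # per-unit-k density
centers = np.sqrt(edges[:-1] * edges[1:])    # geometric bin centers

slope = np.polyfit(np.log(centers), np.log(density), 1)[0]
print(slope)                                 # close to -1, i.e. k^(-1)
```

    The log-log slope of the binned density comes out near -1, matching the claimed exponent for constant node lifetime.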

  10. The Modelled Raindrop Size Distribution of Skudai, Peninsular Malaysia, Using Exponential and Lognormal Distributions

    PubMed Central

    Yakubu, Mahadi Lawan; Yusop, Zulkifli; Yusof, Fadhilah

    2014-01-01

    This paper presents the modelled raindrop size parameters in the Skudai region of Johor Bahru, western Malaysia. Presently, there is no model to forecast the characteristics of DSD in Malaysia, and this has an underpinning implication on wet weather pollution predictions. The climate of Skudai exhibits local variability on a regional scale. This study established five different parametric expressions describing the rain rate of Skudai; these models are idiosyncratic to the climate of the region. Sophisticated equipment that converts sound to a relevant raindrop diameter is often too expensive, and its cost sometimes overrides its attractiveness. In this study, a physical low-cost method was used to record the DSD of the study area. The Kaplan-Meier method was used to test the aptness of the data to exponential and lognormal distributions, which were subsequently used to formulate the parameterisation of the distributions. This research abrogates the concept of the exclusive occurrence of convective storms in tropical regions and presents new insight into their concurrent appearance. PMID:25126597

  11. Level crossings and excess times due to a superposition of uncorrelated exponential pulses

    NASA Astrophysics Data System (ADS)

    Theodorsen, A.; Garcia, O. E.

    2018-01-01

    A well-known stochastic model for intermittent fluctuations in physical systems is investigated. The model is given by a superposition of uncorrelated exponential pulses, and the degree of pulse overlap is interpreted as an intermittency parameter. Expressions for excess time statistics, that is, the rate of level crossings above a given threshold and the average time spent above the threshold, are derived from the joint distribution of the process and its derivative. Limits of both high and low intermittency are investigated and compared to previously known results. In the case of a strongly intermittent process, the distribution of times spent above threshold is obtained analytically. This expression is verified numerically, and the distribution of times above threshold is explored for other intermittency regimes. The numerical simulations compare favorably to known results for the distribution of times above the mean threshold for an Ornstein-Uhlenbeck process. This contribution generalizes the excess time statistics for the stochastic model, which find applications in a wide diversity of natural and technological systems.
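
    A minimal simulation of the model, assuming Poisson pulse arrivals, exponentially distributed amplitudes, and one-sided exponential pulse shapes: the excess time statistics (level-crossing rate and average time above threshold) can be estimated directly from a synthetic realization. All numerical values are illustrative.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(2)

# Superposition of uncorrelated exponential pulses: Poisson arrivals,
# exponentially distributed amplitudes, pulse shape exp(-t/tau).
# gamma is the intermittency parameter (mean pulses per duration time).
tau, gamma = 1.0, 10.0
T, dt = 5000.0, 0.01
nsteps = int(T / dt)

n_pulses = rng.poisson(gamma * T / tau)
steps = rng.integers(0, nsteps, n_pulses)        # arrival time bins
amps = rng.exponential(1.0, n_pulses)

injected = np.zeros(nsteps)
np.add.at(injected, steps, amps)

# discretized pulse decay: s[n] = exp(-dt/tau) * s[n-1] + injected[n]
decay = np.exp(-dt / tau)
signal = lfilter([1.0], [1.0, -decay], injected)
signal = signal[int(20 * tau / dt):]             # drop the start-up transient

threshold = signal.mean()                        # count crossings of the mean
above = signal > threshold
upcrossings = np.count_nonzero(~above[:-1] & above[1:])
crossing_rate = upcrossings / (above.size * dt)
avg_time_above = above.mean() * above.size * dt / upcrossings
print(crossing_rate, avg_time_above)
```

    By construction, the crossing rate times the average time above threshold equals the fraction of time spent above the threshold, which is a useful consistency check on any empirical excess time estimate.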

  12. Improved Results for Route Planning in Stochastic Transportation Networks

    NASA Technical Reports Server (NTRS)

    Boyan, Justin; Mitzenmacher, Michael

    2000-01-01

    In the bus network problem, the goal is to generate a plan for getting from point X to point Y within a city using buses in the smallest expected time. Because bus arrival times are not determined by a fixed schedule but instead may be random, the problem requires more than standard shortest path techniques. In recent work, Datar and Ranade provide algorithms in the case where bus arrivals are assumed to be independent and exponentially distributed. We offer solutions to two important generalizations of the problem, answering open questions posed by Datar and Ranade. First, we provide a polynomial time algorithm for a much wider class of arrival distributions, namely those with increasing failure rate. This class includes not only exponential distributions but also uniform, normal, and gamma distributions. Second, in the case where bus arrival times are independent, geometrically distributed discrete random variables, we provide an algorithm for transportation networks of buses and trains, where trains run according to a fixed schedule.

  13. Numerical analysis of spectral properties of coupled oscillator Schroedinger operators. I - Single and double well anharmonic oscillators

    NASA Technical Reports Server (NTRS)

    Isaacson, D.; Isaacson, E. L.; Paes-Leme, P. J.; Marchesin, D.

    1981-01-01

    Several methods for computing many eigenvalues and eigenfunctions of a single anharmonic oscillator Schroedinger operator whose potential may have one or two minima are described. One of the methods requires the solution of an ill-conditioned generalized eigenvalue problem. This method has the virtue of using a bounded amount of work to achieve a given accuracy in both the single and double well regions. Rigorous bounds are given, and it is proved that the approximations converge faster than any inverse power of the size of the matrices needed to compute them. The results of computations for the g:phi(4):1 theory are presented. These results indicate that the methods actually converge exponentially fast.

  14. Exhaustive Versus Randomized Searches for Nonlinear Optimization in 21st Century Computing: Solar Application

    NASA Technical Reports Server (NTRS)

    Sen, Syamal K.; AliShaykhian, Gholam

    2010-01-01

    We present a simple multi-dimensional exhaustive search method to obtain, in a reasonable time, the optimal solution of a nonlinear programming problem. It is especially relevant in the present-day non-mainframe computing scenario, where an estimated 95% of computing resources remain unutilized and computing speed touches petaflops. Processor speed is doubling every 18 months, bandwidth every 12 months, and hard disk space every 9 months. A randomized search algorithm or, equivalently, an evolutionary search method is often used instead of an exhaustive search algorithm. The reason is that a randomized approach is usually polynomial-time, i.e., fast, while an exhaustive search method is exponential-time, i.e., slow. We discuss the increasing importance of exhaustive search in optimization, given the steady increase of computing power, for solving many real-world problems of reasonable size. We also discuss the computational error and complexity of the search algorithm, noting that no measuring device can usually measure a quantity with an accuracy greater than 0.005%. We stress that the quality of solution of exhaustive search, a deterministic method, is better than that of randomized search. In the 21st century computing environment, exhaustive search cannot be set aside as untouchable, and it is not always exponential. We also describe a possible application of these algorithms in improving the efficiency of solar cells, a topic of great current interest amid the energy crisis. These algorithms could be excellent tools in the hands of experimentalists and could not only save a large amount of time needed for experiments but also quickly validate theory against experimental results.
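
    The exhaustive search idea reduces to scanning a uniform grid over the feasible box, with cost growing exponentially in the number of dimensions. The objective and bounds below are illustrative stand-ins, not the solar-cell application.

```python
import math
from itertools import product

import numpy as np

# Exhaustive multi-dimensional grid search for a nonlinear objective.
# Cost is (range/step)^dims evaluations: the exponential-time behavior
# discussed above. The objective and bounds are illustrative.
def objective(x, y):
    return (x - 1.3) ** 2 + (y + 0.7) ** 2 + 0.1 * math.sin(5 * x) * math.sin(5 * y)

def exhaustive_search(f, bounds, step):
    """Scan a uniform grid over the box `bounds`; return best point and value."""
    axes = [np.arange(lo, hi + step / 2, step) for lo, hi in bounds]
    best_point, best_val = None, math.inf
    for point in product(*axes):
        val = f(*point)
        if val < best_val:
            best_point, best_val = point, val
    return best_point, best_val

(xs, ys), val = exhaustive_search(objective, [(-2.0, 2.0), (-2.0, 2.0)], step=0.01)
print(xs, ys, val)
```

    Unlike a randomized search, the grid scan is deterministic: with a fixed step it always returns the same point, and halving the step tightens the accuracy guarantee at a predictable (exponential) cost.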

  15. Research on the exponential growth effect on network topology: Theoretical and empirical analysis

    NASA Astrophysics Data System (ADS)

    Li, Shouwei; You, Zongjun

    An integrated circuit (IC) industry network has been built in the Yangtze River Delta with the constant expansion of the IC industry. The IC industry network grows exponentially with the establishment of new companies and of contacts with old firms. Based on preferential attachment and exponential growth, the paper presents analytical results in which the vertex degrees of the scale-free network follow a power-law distribution p(k) ~ k^(-γ) (γ = 2β + 1), where the parameter β satisfies 0.5 ≤ β ≤ 1. At the same time, we find that preferential attachment takes place in a dynamic local world whose size is in direct proportion to the size of the whole network. The paper also gives analytical results for no-preferential attachment and exponential growth on random networks. Computer simulations of the model illustrate these analytical results. Through investigations of the enterprises, this paper first presents the distribution of the IC industry and the composition of the industrial chain and service chain. Then, the corresponding networks of the industrial chain and service chain are presented and analyzed, together with a correlative analysis of the whole IC industry. Based on the theory of complex networks, an analysis and comparison of the industrial chain network and service chain network in the Yangtze River Delta are provided.

  16. Parameter estimation for the 4-parameter Asymmetric Exponential Power distribution by the method of L-moments using R

    USGS Publications Warehouse

    Asquith, William H.

    2014-01-01

    The implementation characteristics of two method of L-moments (MLM) algorithms for parameter estimation of the 4-parameter Asymmetric Exponential Power (AEP4) distribution are studied using the R environment for statistical computing. The objective is to validate the algorithms for general application of the AEP4 using R. An algorithm was introduced in the original study of the L-moments for the AEP4. A second or alternative algorithm is shown to have a larger L-moment-parameter domain than the original. The alternative algorithm is shown to provide reliable parameter production and recovery of L-moments from fitted parameters. A proposal is made for AEP4 implementation in conjunction with the 4-parameter Kappa distribution to create a mixed-distribution framework encompassing the joint L-skew and L-kurtosis domains. The example application provides a demonstration of pertinent algorithms with L-moment statistics and two 4-parameter distributions (AEP4 and the Generalized Lambda) for MLM fitting to a modestly asymmetric and heavy-tailed dataset using R.

  17. A mathematical model for generating bipartite graphs and its application to protein networks

    NASA Astrophysics Data System (ADS)

    Nacher, J. C.; Ochiai, T.; Hayashida, M.; Akutsu, T.

    2009-12-01

    Complex systems arise in many different contexts from large communication systems and transportation infrastructures to molecular biology. Most of these systems can be organized into networks composed of nodes and interacting edges. Here, we present a theoretical model that constructs bipartite networks with the particular feature that the degree distribution can be tuned depending on the probability rate of fundamental processes. We then use this model to investigate protein-domain networks. A protein can be composed of up to hundreds of domains. Each domain represents a conserved sequence segment with specific functional tasks. We analyze the distribution of domains in Homo sapiens and Arabidopsis thaliana organisms and the statistical analysis shows that while (a) the number of domain types shared by k proteins exhibits a power-law distribution, (b) the number of proteins composed of k types of domains decays as an exponential distribution. The proposed mathematical model generates bipartite graphs and predicts the emergence of this mixing of (a) power-law and (b) exponential distributions. Our theoretical and computational results show that this model requires (1) growth process and (2) copy mechanism.

  18. Analysis and modeling of optical crosstalk in InP-based Geiger-mode avalanche photodiode FPAs

    NASA Astrophysics Data System (ADS)

    Chau, Quan; Jiang, Xudong; Itzler, Mark A.; Entwistle, Mark; Piccione, Brian; Owens, Mark; Slomkowski, Krystyna

    2015-05-01

    Optical crosstalk is a major factor limiting the performance of Geiger-mode avalanche photodiode (GmAPD) focal plane arrays (FPAs). This is especially true for arrays with increased pixel density and broader spectral operation. We have performed extensive experimental and theoretical investigations on the crosstalk effects in InP-based GmAPD FPAs for both 1.06-μm and 1.55-μm applications. Mechanisms responsible for intrinsic dark counts are Poisson processes, and their inter-arrival time distribution is an exponential function. In FPAs, intrinsic dark counts and crosstalk events coexist, and the inter-arrival time distribution deviates from purely exponential behavior. From both experimental data and computer simulations, we show the dependence of this deviation on the crosstalk probability. The spatial characteristics of crosstalk are also demonstrated. From the temporal and spatial distribution of crosstalk, an efficient algorithm to identify and quantify crosstalk is introduced.
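
    The deviation from purely exponential inter-arrival times can be illustrated with a toy simulation: Poisson dark counts alone give exponential intervals, while adding crosstalk-triggered events produces a pronounced excess of short intervals. The rate, crosstalk probability, and delay below are hypothetical values, not measured FPA parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy model of a GmAPD pixel's count record. Intrinsic dark counts are a
# Poisson process (exponential inter-arrival times); crosstalk adds, with
# probability p_ct, a correlated count shortly after a trigger.
rate = 1000.0        # intrinsic dark count rate (counts/s), hypothetical
T = 50.0             # observation time (s)
p_ct = 0.2           # crosstalk probability per count, hypothetical
delay = 20e-9        # typical crosstalk delay (s), hypothetical

n_dark = rng.poisson(rate * T)
dark_times = np.sort(rng.uniform(0.0, T, n_dark))
dt_dark = np.diff(dark_times)                  # exponential inter-arrivals

triggers = dark_times[rng.random(n_dark) < p_ct]
ct_times = triggers + rng.exponential(delay, triggers.size)
all_times = np.sort(np.concatenate([dark_times, ct_times]))
dt_all = np.diff(all_times)                    # deviates at short times

cut = 1e-6                                     # 1 microsecond
frac_dark = np.mean(dt_dark < cut)
frac_all = np.mean(dt_all < cut)
pred = 1.0 - np.exp(-rate * cut)               # exponential prediction
print(frac_dark, frac_all, pred)
```

    The dark-count-only record matches the exponential prediction for short intervals, while the record with crosstalk shows a large excess, which is the temporal signature used to identify crosstalk events.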

  19. Preferential attachment and growth dynamics in complex systems

    NASA Astrophysics Data System (ADS)

    Yamasaki, Kazuko; Matia, Kaushik; Buldyrev, Sergey V.; Fu, Dongfeng; Pammolli, Fabio; Riccaboni, Massimo; Stanley, H. Eugene

    2006-09-01

    Complex systems can be characterized by classes of equivalency of their elements defined according to system specific rules. We propose a generalized preferential attachment model to describe the class size distribution. The model postulates preferential growth of the existing classes and the steady influx of new classes. According to the model, the distribution changes from a pure exponential form for zero influx of new classes to a power law with an exponential cut-off form when the influx of new classes is substantial. Predictions of the model are tested through the analysis of a unique industrial database, which covers both elementary units (products) and classes (markets, firms) in a given industry (pharmaceuticals), covering the entire size distribution. The model’s predictions are in good agreement with the data. The paper sheds light on the emergence of the exponent τ≈2 observed as a universal feature of many biological, social and economic problems.

  20. Science and Facebook: The same popularity law!

    PubMed

    Néda, Zoltán; Varga, Levente; Biró, Tamás S

    2017-01-01

    The distribution of scientific citations for publications selected with different rules (author, topic, institution, country, journal, etc…) collapses on a single curve if one plots the citations relative to their mean value. We find that the distribution of "shares" for Facebook posts rescales in the same manner onto the very same curve as the scientific citations. This finding suggests that citations are subject to the same growth mechanism as Facebook popularity measures, being influenced by a statistically similar social environment and selection mechanism. In a simple master-equation approach, the exponential growth of the number of publications and a preferential selection mechanism lead to a Tsallis-Pareto distribution, offering an excellent description of the observed statistics. Based on our model and on data derived from PubMed, we predict that, according to the present trend, the average number of citations per scientific publication relaxes exponentially to about 4.

  1. Distributed Consensus of Stochastic Delayed Multi-agent Systems Under Asynchronous Switching.

    PubMed

    Wu, Xiaotai; Tang, Yang; Cao, Jinde; Zhang, Wenbing

    2016-08-01

    In this paper, the distributed exponential consensus of stochastic delayed multi-agent systems with nonlinear dynamics is investigated under asynchronous switching. The asynchronous switching considered here accounts for the time needed to identify the active modes of the multi-agent systems. Only after the mode switching is confirmed can the matched controller be applied, which means that the switching time of the matched controller in each node usually lags behind that of the system switching. In order to handle the coexistence of switched signals and stochastic disturbances, a comparison principle for stochastic switched delayed systems is first proved. By means of this extended comparison principle, several easily verified conditions for the existence of an asynchronously switched distributed controller are derived such that stochastic delayed multi-agent systems with asynchronous switching and nonlinear dynamics can achieve global exponential consensus. Two examples are given to illustrate the effectiveness of the proposed method.

  2. Science and Facebook: The same popularity law!

    PubMed Central

    Varga, Levente; Biró, Tamás S.

    2017-01-01

    The distribution of scientific citations for publications selected with different rules (author, topic, institution, country, journal, etc…) collapses on a single curve if one plots the citations relative to their mean value. We find that the distribution of “shares” for Facebook posts rescales in the same manner onto the very same curve as the scientific citations. This finding suggests that citations are subject to the same growth mechanism as Facebook popularity measures, being influenced by a statistically similar social environment and selection mechanism. In a simple master-equation approach, the exponential growth of the number of publications and a preferential selection mechanism lead to a Tsallis-Pareto distribution, offering an excellent description of the observed statistics. Based on our model and on data derived from PubMed, we predict that, according to the present trend, the average number of citations per scientific publication relaxes exponentially to about 4. PMID:28678796

  3. Software reliability: Additional investigations into modeling with replicated experiments

    NASA Technical Reports Server (NTRS)

    Nagel, P. M.; Schotz, F. M.; Skirvan, J. A.

    1984-01-01

    The effects of programmer experience level, different program usage distributions, and programming languages are explored. All these factors affect performance, and some tentative relational hypotheses are presented. An analytic framework for replicated and non-replicated (traditional) software experiments is presented. A method of obtaining an upper bound on the error rate of the next error is proposed. The method was validated empirically by comparing forecasts with actual data. In all 14 cases the bound exceeded the observed parameter, albeit somewhat conservatively. Two other forecasting methods are proposed and compared to observed results. Although it is demonstrated within this framework that stages are neither independent nor exponentially distributed, empirical estimates show that the exponential assumption is nearly valid for all but the extreme tails of the distribution. Except for the dependence in the stage probabilities, Cox's model approximates to a degree what is being observed.

  4. Multi-step rhodopsin inactivation schemes can account for the size variability of single photon responses in Limulus ventral photoreceptors

    PubMed Central

    1994-01-01

    Limulus ventral photoreceptors generate highly variable responses to the absorption of single photons. We have obtained data on the size distribution of these responses, derived the distribution predicted from simple transduction cascade models and compared the theory and data. In the simplest of models, the active state of the visual pigment (defined by its ability to activate G protein) is turned off in a single reaction. The output of such a cascade is predicted to be highly variable, largely because of stochastic variation in the number of G proteins activated. The exact distribution predicted is exponential, but we find that an exponential does not adequately account for the data. The data agree much better with the predictions of a cascade model in which the active state of the visual pigment is turned off by a multi-step process. PMID:8057085

  5. Kinetics of rapid covalent bond formation of aniline with humic acid: ESR investigations with nitroxide spin labels

    NASA Astrophysics Data System (ADS)

    Glinka, Kevin; Matthies, Michael; Theiling, Marius; Hideg, Kalman; Steinhoff, Heinz-Jürgen

    2016-04-01

    Sulfonamide antibiotics used in livestock farming are distributed to farmland by application of slurry as fertilizer. Previous work suggests rapid covalent binding of the aniline moiety to humic acids found in soil. In the current work, the kinetics of this binding were measured by X-band EPR spectroscopy by incubating Leonardite humic acid (LHA) with a paramagnetic aniline spin label (anilino-NO (2,5,5-Trimethyl-2-(3-aminophenyl)pyrrolidin-1-oxyl)). Binding was detected by a pronounced broadening of the spectral lines after incubation of LHA with anilino-NO. The time evolution of the amplitude of this feature was used for determining the reaction kinetics. Single- and double-exponential models were fitted to the data obtained for modelling one or two first-order reactions. Reaction rates of 0.16 min^-1 and 0.012 min^-1 were found, respectively. Addition of laccase peroxidase did not change the kinetics but significantly enhanced the reacting fraction of anilino-NO. This EPR-based method provides a technically simple and effective method for following rapid binding processes of a xenobiotic substance to humic acids.

  6. Quantification of cellular autofluorescence of human skin using multiphoton tomography and fluorescence lifetime imaging in two spectral detection channels

    PubMed Central

    Patalay, Rakesh; Talbot, Clifford; Alexandrov, Yuriy; Munro, Ian; Neil, Mark A. A.; König, Karsten; French, Paul M. W.; Chu, Anthony; Stamp, Gordon W.; Dunsby, Chris

    2011-01-01

    We explore the diagnostic potential of imaging endogenous fluorophores using two photon microscopy and fluorescence lifetime imaging (FLIM) in human skin with two spectral detection channels. Freshly excised benign dysplastic nevi (DN) and malignant nodular Basal Cell Carcinomas (nBCCs) were excited at 760 nm. The resulting fluorescence signal was binned manually on a cell by cell basis. This improved the reliability of fitting using a double exponential decay model and allowed the fluorescence signatures from different cell populations within the tissue to be identified and studied. We also performed a direct comparison between different diagnostic groups. A statistically significant difference between the median mean fluorescence lifetime of 2.79 ns versus 2.52 ns (blue channel, 300-500 nm) and 2.08 ns versus 1.33 ns (green channel, 500-640 nm) was found between nBCCs and DN respectively, using the Mann-Whitney U test (p < 0.01). Further differences in the distribution of fluorescence lifetime parameters and inter-patient variability are also discussed. PMID:22162820
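
    The group comparison reported above can be sketched with scipy's Mann-Whitney U test; the lifetime samples below are synthetic values drawn around the reported green-channel medians (2.08 ns versus 1.33 ns), with a hypothetical spread and sample size, not the study's measurements.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(4)

# Synthetic per-cell mean fluorescence lifetimes (ns), centered on the
# reported green-channel medians; spread and counts are hypothetical.
lifetimes_nbcc = rng.normal(2.08, 0.3, 60)     # nBCC cells
lifetimes_dn = rng.normal(1.33, 0.3, 60)       # DN cells

stat, p_value = mannwhitneyu(lifetimes_nbcc, lifetimes_dn,
                             alternative="two-sided")
print(p_value)
```

    The Mann-Whitney U test is rank-based, so it needs no normality assumption about the per-cell lifetime distributions, which is why it suits this kind of comparison.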

  7. Exponential Family Functional data analysis via a low-rank model.

    PubMed

    Li, Gen; Huang, Jianhua Z; Shen, Haipeng

    2018-05-08

    In many applications, non-Gaussian data such as binary or count data are observed over a continuous domain, and there exists a smooth underlying structure for describing such data. We develop a new functional data method to deal with this kind of data when the data are regularly spaced on the continuous domain. Our method, referred to as Exponential Family Functional Principal Component Analysis (EFPCA), assumes the data are generated from an exponential family distribution, and the matrix of the canonical parameters has a low-rank structure. The proposed method flexibly accommodates not only the standard one-way functional data, but also two-way (or bivariate) functional data. In addition, we introduce a new cross validation method for estimating the latent rank of a generalized data matrix. We demonstrate the efficacy of the proposed methods using a comprehensive simulation study. The proposed method is also applied to a real application of the UK mortality study, where data are binomially distributed and two-way functional across age groups and calendar years. The results offer novel insights into the underlying mortality pattern. © 2018, The International Biometric Society.

  8. Combining Orthogonal Chain-End Deprotections and Thiol-Maleimide Michael Coupling: Engineering Discrete Oligomers by an Iterative Growth Strategy.

    PubMed

    Huang, Zhihao; Zhao, Junfei; Wang, Zimu; Meng, Fanying; Ding, Kunshan; Pan, Xiangqiang; Zhou, Nianchen; Li, Xiaopeng; Zhang, Zhengbiao; Zhu, Xiulin

    2017-10-23

    Orthogonal maleimide and thiol deprotections were combined with thiol-maleimide coupling to synthesize discrete oligomers/macromolecules on a gram scale with molecular weights up to 27.4 kDa (128mer, 7.9 g) using an iterative exponential growth strategy with a degree of polymerization (DP) of 2^n - 1. Using the same chemistry, a "readable" sequence-defined oligomer and a discrete cyclic topology were also created. Furthermore, uniform dendrons were fabricated using sequential growth (DP = 2^n - 1) or double exponential dendrimer growth approaches (DP = 2^(2^n) - 1) with significantly accelerated growth rates. A versatile, efficient, and metal-free method for construction of discrete oligomers with tailored structures and a high growth rate would greatly facilitate research into the structure-property relationships of sophisticated polymeric materials. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
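The two growth laws quoted in this abstract are easy to tabulate; a minimal sketch (function names are ours) shows how much faster the double exponential route reaches large structures:

```python
def ieg_dp(n: int) -> int:
    """Iterative exponential growth: DP = 2^n - 1 after n coupling rounds."""
    return 2 ** n - 1

def double_exp_dp(n: int) -> int:
    """Double exponential dendrimer growth: DP = 2^(2^n) - 1."""
    return 2 ** (2 ** n) - 1

# Compare the degree of polymerization after each round.
table = [(n, ieg_dp(n), double_exp_dp(n)) for n in range(1, 5)]
```

After seven rounds, iterative growth gives DP = 127 (the 128mer scale mentioned above), while the double exponential law reaches the same order of magnitude in only three rounds.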

  9. Ultrafast hole carrier relaxation dynamics in p-type CuO nanowires

    PubMed Central

    2011-01-01

    Ultrafast hole carrier relaxation dynamics in CuO nanowires have been investigated using transient absorption spectroscopy. Following femtosecond pulse excitation in a non-collinear pump-probe configuration, a combination of non-degenerate transmission and reflection measurements reveals initial ultrafast state filling dynamics independent of the probing photon energy. This behavior is attributed to the occupation of states by photo-generated carriers in the intrinsic hole region of the p-type CuO nanowires located near the top of the valence band. Intensity measurements indicate an upper fluence threshold of 40 μJ/cm² where carrier relaxation is mainly governed by the hole dynamics. The fast relaxation of the photo-generated carriers was determined to follow a double exponential decay with time constants of 0.4 ps and 2.1 ps. Furthermore, time-correlated single photon counting measurements provide evidence of three exponential relaxation channels on the nanosecond timescale. PMID:22151927

  10. Scalable synthesis of sequence-defined, unimolecular macromolecules by Flow-IEG

    PubMed Central

    Leibfarth, Frank A.; Johnson, Jeremiah A.; Jamison, Timothy F.

    2015-01-01

    We report a semiautomated synthesis of sequence and architecturally defined, unimolecular macromolecules through a marriage of multistep flow synthesis and iterative exponential growth (Flow-IEG). The Flow-IEG system performs three reactions and an in-line purification in a total residence time of under 10 min, effectively doubling the molecular weight of an oligomeric species in an uninterrupted reaction sequence. Further iterations using the Flow-IEG system enable an exponential increase in molecular weight. Incorporating a variety of monomer structures and branching units provides control over polymer sequence and architecture. The synthesis of a uniform macromolecule with a molecular weight of 4,023 g/mol is demonstrated. The user-friendly nature, scalability, and modularity of Flow-IEG provide a general strategy for the automated synthesis of sequence-defined, unimolecular macromolecules. Flow-IEG is thus an enabling tool for theory validation, structure–property studies, and advanced applications in biotechnology and materials science. PMID:26269573

  11. Exponential series approaches for nonparametric graphical models

    NASA Astrophysics Data System (ADS)

    Janofsky, Eric

    Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. 
We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.

  12. Pore‐Scale Hydrodynamics in a Progressively Bioclogged Three‐Dimensional Porous Medium: 3‐D Particle Tracking Experiments and Stochastic Transport Modeling

    PubMed Central

    Carrel, M.; Dentz, M.; Derlon, N.; Morgenroth, E.

    2018-01-01

    Biofilms are ubiquitous bacterial communities that grow in various porous media including soils, trickling filters, and sand filters. In these environments, they play a central role in services ranging from degradation of pollutants to water purification. Biofilms dynamically change the pore structure of the medium through selective clogging of pores, a process known as bioclogging. This affects how solutes are transported and spread through the porous matrix, but the temporal changes to transport behavior during bioclogging are not well understood. To address this uncertainty, we experimentally study the hydrodynamic changes of a transparent 3‐D porous medium as it experiences progressive bioclogging. Statistical analyses of the system's hydrodynamics at four time points of bioclogging (0, 24, 36, and 48 h in the exponential growth phase) reveal exponential increases in both average and variance of the flow velocity, as well as its correlation length. Measurements for spreading, as mean‐squared displacements, are found to be non‐Fickian and more intensely superdiffusive with progressive bioclogging, indicating the formation of preferential flow pathways and stagnation zones. A gamma distribution describes well the Lagrangian velocity distributions and provides parameters that quantify changes to the flow, which evolves from a parallel pore arrangement under unclogged conditions, toward a more serial arrangement with increasing clogging. Exponentially evolving hydrodynamic metrics agree with an exponential bacterial growth phase and are used to parameterize a correlated continuous time random walk model with a stochastic velocity relaxation. The model accurately reproduces transport observations and can be used to resolve transport behavior at intermediate time points within the exponential growth phase considered. PMID:29780184

  13. Pore-Scale Hydrodynamics in a Progressively Bioclogged Three-Dimensional Porous Medium: 3-D Particle Tracking Experiments and Stochastic Transport Modeling

    NASA Astrophysics Data System (ADS)

    Carrel, M.; Morales, V. L.; Dentz, M.; Derlon, N.; Morgenroth, E.; Holzner, M.

    2018-03-01

    Biofilms are ubiquitous bacterial communities that grow in various porous media including soils, trickling filters, and sand filters. In these environments, they play a central role in services ranging from degradation of pollutants to water purification. Biofilms dynamically change the pore structure of the medium through selective clogging of pores, a process known as bioclogging. This affects how solutes are transported and spread through the porous matrix, but the temporal changes to transport behavior during bioclogging are not well understood. To address this uncertainty, we experimentally study the hydrodynamic changes of a transparent 3-D porous medium as it experiences progressive bioclogging. Statistical analyses of the system's hydrodynamics at four time points of bioclogging (0, 24, 36, and 48 h in the exponential growth phase) reveal exponential increases in both average and variance of the flow velocity, as well as its correlation length. Measurements for spreading, as mean-squared displacements, are found to be non-Fickian and more intensely superdiffusive with progressive bioclogging, indicating the formation of preferential flow pathways and stagnation zones. A gamma distribution describes well the Lagrangian velocity distributions and provides parameters that quantify changes to the flow, which evolves from a parallel pore arrangement under unclogged conditions, toward a more serial arrangement with increasing clogging. Exponentially evolving hydrodynamic metrics agree with an exponential bacterial growth phase and are used to parameterize a correlated continuous time random walk model with a stochastic velocity relaxation. The model accurately reproduces transport observations and can be used to resolve transport behavior at intermediate time points within the exponential growth phase considered.
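The gamma fit to the Lagrangian velocity distributions mentioned above can be sketched with a simple method-of-moments estimate. The code below uses synthetic gamma-distributed velocities, not the paper's particle-tracking data, and the parameter values are arbitrary.

```python
import numpy as np

# Synthetic stand-in for Lagrangian velocity magnitudes (gamma distributed).
rng = np.random.default_rng(0)
true_shape, true_scale = 1.5, 2.0
v = rng.gamma(true_shape, true_scale, size=50_000)

# Method-of-moments gamma fit: mean = k*theta, var = k*theta^2,
# so k = mean^2/var and theta = var/mean.
mean, var = v.mean(), v.var()
shape_hat = mean ** 2 / var      # k
scale_hat = var / mean           # theta
```

Changes in the fitted shape and scale over the clogging time points are exactly the kind of flow-quantifying parameters the abstract refers to.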

  14. Cross diffusion and exponential space dependent heat source impacts in radiated three-dimensional (3D) flow of Casson fluid by heated surface

    NASA Astrophysics Data System (ADS)

    Zaigham Zia, Q. M.; Ullah, Ikram; Waqas, M.; Alsaedi, A.; Hayat, T.

    2018-03-01

    This research intends to elaborate Soret-Dufour characteristics in mixed convective radiated Casson liquid flow over an exponentially heated surface. Novel features of an exponential space-dependent heat source are introduced. Appropriate variables are implemented to convert the partial differential framework into sets of ordinary differential expressions. A homotopic scheme is employed to construct analytic solutions. The behavior of various embedded variables on the velocity, temperature, and concentration distributions is plotted graphically and analyzed in detail. Besides, skin friction coefficients and heat and mass transfer rates are also computed and interpreted. The results signify the pronounced characteristics of temperature corresponding to the convective and radiation variables. Concentration shows opposite responses to the Soret and Dufour variables.

  15. Global exponential stability and lag synchronization for delayed memristive fuzzy Cohen-Grossberg BAM neural networks with impulses.

    PubMed

    Yang, Wengui; Yu, Wenwu; Cao, Jinde; Alsaadi, Fuad E; Hayat, Tasawar

    2018-02-01

    This paper investigates the stability and lag synchronization for memristor-based fuzzy Cohen-Grossberg bidirectional associative memory (BAM) neural networks with mixed delays (asynchronous time delays and continuously distributed delays) and impulses. By applying the inequality analysis technique, homeomorphism theory and some suitable Lyapunov-Krasovskii functionals, some new sufficient conditions for the uniqueness and global exponential stability of equilibrium point are established. Furthermore, we obtain several sufficient criteria concerning globally exponential lag synchronization for the proposed system based on the framework of Filippov solution, differential inclusion theory and control theory. In addition, some examples with numerical simulations are given to illustrate the feasibility and validity of obtained results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Continuous-Time Finance and the Waiting Time Distribution: Multiple Characteristic Times

    NASA Astrophysics Data System (ADS)

    Fa, Kwok Sau

    2012-09-01

    In this paper, we model the tick-by-tick dynamics of markets by using the continuous-time random walk (CTRW) model. We employ a sum of products of power law and stretched exponential functions for the waiting time probability distribution function; this function can fit well the waiting time distribution for BUND futures traded at LIFFE in 1997.
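A waiting-time density of the stated product form can be written down and normalized numerically. The exponents below are illustrative, not the values fitted to the BUND futures data.

```python
import numpy as np

# psi(t) proportional to t**(-eta) * exp(-(t/tau)**beta): a power law
# tempered by a stretched exponential (illustrative parameters only).
eta, beta, tau = 0.5, 0.7, 1.0

t = np.linspace(1e-6, 60.0, 600_001)
unnorm = t ** (-eta) * np.exp(-(t / tau) ** beta)

# Trapezoidal normalization on the grid; eta < 1 keeps the t -> 0
# singularity integrable.
Z = np.sum(0.5 * (unnorm[1:] + unnorm[:-1]) * np.diff(t))
psi = unnorm / Z
```

A sum of several such products, each with its own characteristic time, gives the multi-timescale waiting-time distribution the title refers to.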

  17. Strain, curvature, and twist measurements in digital holographic interferometry using pseudo-Wigner-Ville distribution based method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajshekhar, G.; Gorthi, Sai Siva; Rastogi, Pramod

    2009-09-15

    Measurement of the strain, curvature, and twist of a deformed object plays an important role in deformation analysis. Strain depends on the first-order displacement derivative, whereas curvature and twist are determined by second-order displacement derivatives. This paper proposes a pseudo-Wigner-Ville distribution based method for measurement of strain, curvature, and twist in digital holographic interferometry, where the object deformation or displacement is encoded as an interference phase. In the proposed method, the phase derivative is estimated by peak detection of the pseudo-Wigner-Ville distribution evaluated along each row/column of the reconstructed interference field. A complex exponential signal with unit amplitude and the phase derivative estimate as its argument is then generated, and the pseudo-Wigner-Ville distribution along each row/column of this signal is evaluated. The curvature is estimated by using a peak tracking strategy for the new distribution. For estimation of twist, the pseudo-Wigner-Ville distribution is evaluated along each column/row (i.e., in the alternate direction with respect to the previous one) for the generated complex exponential signal, and the corresponding peak detection gives the twist estimate.
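The row-wise estimation step can be sketched for a single 1-D analytic signal: form a windowed Wigner-Ville kernel at each sample, FFT over the lag variable, and read the phase derivative off the spectral peak. This is a simplified sketch of the scheme described above, with window and bin sizes chosen arbitrarily; it is valid for phase derivatives below pi/2 because the Wigner-Ville lag step is two samples.

```python
import numpy as np

def pwvd_phase_derivative(x, half_win=16, nbins=256):
    """Estimate the instantaneous phase derivative of an analytic signal x
    by peak detection on a pseudo-Wigner-Ville distribution."""
    N = len(x)
    freqs = np.fft.fftfreq(nbins)              # cycles per lag sample
    est = np.full(N, np.nan)
    for n in range(half_win, N - half_win):
        kern = np.zeros(nbins, dtype=complex)
        for k in range(-half_win, half_win + 1):
            # Windowed WVD kernel: x[n+k] * conj(x[n-k])
            kern[k % nbins] = x[n + k] * np.conj(x[n - k])
        spec = np.abs(np.fft.fft(kern))
        # kern ~ exp(j*2*k*phi'(n)), so the peak sits at f = phi'(n)/pi
        est[n] = np.pi * freqs[int(np.argmax(spec))]
    return est

# Noise-free check: linear phase with constant derivative 0.3 rad/sample.
x = np.exp(1j * 0.3 * np.arange(200))
est = pwvd_phase_derivative(x)
```

In the paper's full scheme this peak-picking is repeated on a second, synthesized unit-amplitude signal to reach the second-order derivatives (curvature and twist); the sketch stops at the first derivative.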

  18. Two-state Markov-chain Poisson nature of individual cellphone call statistics

    NASA Astrophysics Data System (ADS)

    Jiang, Zhi-Qiang; Xie, Wen-Jie; Li, Ming-Xia; Zhou, Wei-Xing; Sornette, Didier

    2016-07-01

    Unfolding the burst patterns in human activities and social interactions is a very important issue, especially for understanding the spreading of disease and information and the formation of groups and organizations. Here, we conduct an in-depth study of the temporal patterns of the cellphone conversation activities of 73,339 anonymous cellphone users, whose inter-call durations are Weibull distributed. We find that individual call events exhibit a pattern of bursts, in which high-activity periods alternate with low-activity periods. In both periods, the number of calls is exponentially distributed for individuals, but power-law distributed for the population. Together with the exponential distributions of inter-call durations within bursts and of the intervals between consecutive bursts, we demonstrate that individual call activities are driven by two independent Poisson processes, which can be combined within a minimal model in terms of a two-state first-order Markov chain, giving significant fits for nearly half of the individuals. By measuring directly the distributions of call rates across the population, which exhibit power-law tails, we explain the population-level power-law distributions via the 'superposition of distributions' mechanism. Our findings shed light on the origins of bursty patterns in other human activities.
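A two-state Markov-modulated Poisson process of the kind described above can be simulated in a few lines: calls fire at a state-dependent rate, and the state itself flips at its own exponential rate. The rates below are illustrative, not fitted to the cellphone data; the point is that the resulting inter-call times are burstier (coefficient of variation above 1) than a single Poisson process would allow.

```python
import numpy as np

rng = np.random.default_rng(7)
call_rate  = (0.1, 10.0)    # calls per unit time in the low/high activity state
leave_rate = (0.02, 0.2)    # rate of leaving each state (illustrative values)

t, state, horizon = 0.0, 0, 20_000.0
calls = []
while True:
    lam, mu = call_rate[state], leave_rate[state]
    t += rng.exponential(1.0 / (lam + mu))   # time to next event of either kind
    if t >= horizon:
        break
    if rng.random() < lam / (lam + mu):
        calls.append(t)                      # the event was a call
    else:
        state = 1 - state                    # the activity state flipped

inter = np.diff(calls)
cv = inter.std() / inter.mean()   # > 1 signals burstiness beyond a single Poisson
```

For a single homogeneous Poisson process the inter-event coefficient of variation is exactly 1; the alternation of rates pushes it well above 1, reproducing the burst pattern qualitatively.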

  19. Rigorous Proof of the Boltzmann-Gibbs Distribution of Money on Connected Graphs

    NASA Astrophysics Data System (ADS)

    Lanchier, Nicolas

    2017-04-01

    Models in econophysics, i.e., the emerging field of statistical physics that applies the main concepts of traditional physics to economics, typically consist of large systems of economic agents who are characterized by the amount of money they have. In the simplest model, at each time step, one agent gives one dollar to another agent, with both agents being chosen independently and uniformly at random from the system. Numerical simulations of this model suggest that, at least when the number of agents and the average amount of money per agent are large, the distribution of money converges to an exponential distribution reminiscent of the Boltzmann-Gibbs distribution of energy in physics. The main objective of this paper is to give a rigorous proof of this result and show that the convergence to the exponential distribution holds more generally when the economic agents are located on the vertices of a connected graph and interact locally with their neighbors rather than globally with all the other agents. We also study a closely related model where, at each time step, agents buy with a probability proportional to the amount of money they have, and prove that in this case the limiting distribution of money is Poissonian.
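The simplest model described above is easy to simulate: at each step a uniformly chosen giver hands one dollar to a uniformly chosen receiver, provided the giver is not broke. The sketch below (agent count and step count chosen arbitrarily) checks that the empirical distribution approaches the exponential Boltzmann-Gibbs form, for which the coefficient of variation is close to 1.

```python
import numpy as np

rng = np.random.default_rng(1)
n_agents, avg_money, n_steps = 500, 5, 300_000
money = np.full(n_agents, avg_money)

givers = rng.integers(0, n_agents, n_steps)
receivers = rng.integers(0, n_agents, n_steps)
for g, r in zip(givers, receivers):
    if money[g] > 0 and g != r:   # a broke agent cannot give
        money[g] -= 1
        money[r] += 1

# Exponential (Boltzmann-Gibbs) law: std/mean approaches 1.
cv = money.std() / money.mean()
```

Total money is conserved exactly, mirroring the conservation of energy that underlies the Boltzmann-Gibbs analogy.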

  20. Heterogeneous Link Weight Promotes the Cooperation in Spatial Prisoner's Dilemma

    NASA Astrophysics Data System (ADS)

    Ma, Zhi-Qin; Xia, Cheng-Yi; Sun, Shi-Wen; Wang, Li; Wang, Huai-Bin; Wang, Juan

    The spatial structure has often been identified as a prominent mechanism that substantially promotes the cooperation level in the prisoner's dilemma game. In this paper we introduce a weighting mechanism into the spatial prisoner's dilemma game to explore cooperative behaviors on the square lattice. Three types of weight distributions are considered: exponential, power-law, and uniform, with the weight assigned to links between players. Through large-scale numerical simulations we find that, compared with the traditional spatial game, this mechanism can largely enhance the frequency of cooperators. For most ranges of b, the power-law distribution enables the highest promotion of cooperation and the uniform one leads to the lowest enhancement, whereas the exponential one often lies between them. The great improvement of cooperation can be attributed to the fact that the distributed link weights yield inhomogeneous interaction strengths among individuals, which can facilitate the formation of cooperative clusters that resist the defectors' invasion. In addition, the impact of the amplitude of the undulation of the weight distribution and the noise strength on cooperation is also investigated for the three kinds of weight distribution. This research can aid in the further understanding of evolutionary cooperation in biological and social sciences.

  1. A Random Variable Transformation Process.

    ERIC Educational Resources Information Center

    Scheuermann, Larry

    1989-01-01

    Provides a short BASIC program, RANVAR, which generates random variates for various theoretical probability distributions. The seven variates include: uniform, exponential, normal, binomial, Poisson, Pascal, and triangular. (MVL)
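Two of RANVAR's seven variates can be sketched by inverse-transform sampling (in Python rather than the article's BASIC): an exponential variate from -ln(1-U)/λ, and a Poisson variate by counting how many exponential inter-arrival times fit into one unit interval. Function names and parameters are ours.

```python
import math
import random

rng = random.Random(2024)

def exponential(lam: float) -> float:
    # Inverse transform: if U ~ Uniform(0,1), -ln(1-U)/lam ~ Exp(lam).
    return -math.log(1.0 - rng.random()) / lam

def poisson(lam: float) -> int:
    # Count exponential waiting times until they exceed one unit of time.
    count, total = 0, exponential(lam)
    while total < 1.0:
        count += 1
        total += exponential(lam)
    return count

exp_mean = sum(exponential(2.0) for _ in range(100_000)) / 100_000  # near 1/2
poi_mean = sum(poisson(3.0) for _ in range(100_000)) / 100_000      # near 3
```

The same inverse-cdf idea extends to the triangular variate; binomial, Pascal, and normal variates are usually generated by other standard transforms.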

  2. System Lifetimes, The Memoryless Property, Euler's Constant, and Pi

    ERIC Educational Resources Information Center

    Agarwal, Anurag; Marengo, James E.; Romero, Likin Simon

    2013-01-01

    A "k"-out-of-"n" system functions as long as at least "k" of its "n" components remain operational. Assuming that component failure times are independent and identically distributed exponential random variables, we find the distribution of system failure time. After some examples, we find the limiting…
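Under the i.i.d. exponential assumption the system fails at the (n-k+1)-th component failure, and the memoryless property gives the closed form E[T] = (1/λ) Σ_{i=k}^{n} 1/i. A quick Monte Carlo check (n, k, and λ chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(3)
n, k, lam, trials = 5, 3, 1.0, 200_000

# Each row is one system: n i.i.d. exponential component lifetimes.
lifetimes = rng.exponential(1.0 / lam, size=(trials, n))
lifetimes.sort(axis=1)
system_failure = lifetimes[:, n - k]        # (n-k+1)-th order statistic

# Memoryless spacings give E[T] = (1/lam) * sum_{i=k}^{n} 1/i.
exact = sum(1.0 / i for i in range(k, n + 1)) / lam
mc = system_failure.mean()
```

For n = 5, k = 3 the exact mean is 1/3 + 1/4 + 1/5 = 47/60, and the Monte Carlo estimate agrees to within sampling error.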

  3. The Spin-down of Swift J1822.3-1606: A New Galactic Magnetar

    NASA Astrophysics Data System (ADS)

    Livingstone, M. A.; Scholz, P.; Kaspi, V. M.; Ng, C.-Y.; Gavriil, Fotis P.

    2011-12-01

    On 2011 July 14, a new magnetar candidate, Swift J1822.3-1606, was identified via a rate trigger on the Swift/Burst Alert Telescope. Here we present an initial analysis of the X-ray properties of the source, using data from the Rossi X-ray Timing Explorer, Swift, and the Chandra X-ray Observatory, spanning 2011 July 16-October 8. We measure a precise spin period of P = 8.43771968(6) s and a spin-down rate of Pdot = 2.54(22) × 10^-13, at MJD 55761.0, corresponding to an inferred surface dipole magnetic field of B = 4.7(2) × 10^13 G, the second lowest thus far measured for a magnetar, though similar to those of 1E 2259+586 and several high-magnetic field radio pulsars. We show that the flux decay in the 1-10 keV band is best fit by a double exponential with timescales of 9 ± 1 and 55 ± 9 days. The pulsed count rate decay in the 2-10 keV band, by contrast, is better fit by a single exponential decay with timescale 15.9 ± 0.2 days. After increasing from ~35% for ~20 days after the onset of the outburst, the pulsed fraction in the 2-10 keV band remained constant at ~45%. We argue that these properties confirm this source to be a new member of the class of objects known as magnetars. We consider the distribution of magnetar periods and inferred dipole magnetic field strengths, showing that the former appears flat in the 2-12 s range, while the latter appears peaked in the 10^14-10^15 G range.

  4. Boundary curves of individual items in the distribution of total depressive symptom scores approximate an exponential pattern in a general population.

    PubMed

    Tomitaka, Shinichiro; Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A; Ono, Yutaka

    2016-01-01

    Previously, we proposed a model for ordinal scale scoring in which individual thresholds for each item constitute a distribution by each item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern.

  5. Boundary curves of individual items in the distribution of total depressive symptom scores approximate an exponential pattern in a general population

    PubMed Central

    Kawasaki, Yohei; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A.; Ono, Yutaka

    2016-01-01

    Background Previously, we proposed a model for ordinal scale scoring in which individual thresholds for each item constitute a distribution by each item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Methods Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. Results The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. 
Discussion The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern. PMID:27761346

  6. CUMPOIS- CUMULATIVE POISSON DISTRIBUTION PROGRAM

    NASA Technical Reports Server (NTRS)

    Bowerman, P. N.

    1994-01-01

    The Cumulative Poisson distribution program, CUMPOIS, is one of two programs which make calculations involving cumulative poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), can be used independently of one another. CUMPOIS determines the approximate cumulative binomial distribution, evaluates the cumulative distribution function (cdf) for gamma distributions with integer shape parameters, and evaluates the cdf for chi-square distributions with even degrees of freedom. It can be used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. CUMPOIS calculates the probability that n or fewer events (i.e., cumulative) will occur within any unit when the expected number of events is given as lambda. Normally, this probability is calculated by a direct summation, from i=0 to n, of terms involving the exponential function, lambda, and inverse factorials. This approach, however, eventually fails due to underflow for sufficiently large values of n. Additionally, when the exponential term is moved outside of the summation for simplification purposes, there is a risk that the terms remaining within the summation, and the summation itself, will overflow for certain values of i and lambda. CUMPOIS eliminates these possibilities by multiplying an additional exponential factor into the summation terms and the partial sum whenever overflow/underflow situations threaten. The reciprocal of this term is then multiplied into the completed sum giving the cumulative probability. The CUMPOIS program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly on most C compilers. The program format is interactive, accepting lambda and n as inputs. It has been implemented under DOS 3.2 and has a memory requirement of 26K. 
CUMPOIS was developed in 1988.
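The overflow/underflow guard described above amounts to summing in log space. A sketch of the same idea in Python (not the original C source): each term log(λ^i e^-λ / i!) is formed with lgamma, the maximum is factored out, and the rescaled terms are summed safely.

```python
import math

def cum_poisson(lam: float, n: int) -> float:
    """P(X <= n) for X ~ Poisson(lam), summed in log space so that
    neither exp(-lam) underflows nor lam**i / i! overflows."""
    logs = [i * math.log(lam) - lam - math.lgamma(i + 1) for i in range(n + 1)]
    m = max(logs)   # factor out the largest term, as CUMPOIS rescales
    return math.exp(m + math.log(math.fsum(math.exp(v - m) for v in logs)))

print(cum_poisson(2.0, 2))        # 5*exp(-2), about 0.6767
print(cum_poisson(1000.0, 1000))  # naive term-by-term evaluation would underflow
```

For λ = 1000 the naive product λ^i e^-λ / i! underflows long before i reaches 1000, while the log-space sum returns a probability just above one half, as expected near the Poisson median.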

  7. Calculating Formulas of Coefficient and Mean Neutron Exposure in the Exponential Expression of Neutron Exposure Distribution

    NASA Astrophysics Data System (ADS)

    Zhang, F. H.; Zhou, G. D.; Ma, K.; Ma, W. J.; Cui, W. Y.; Zhang, B.

    2015-11-01

    Present studies have shown that, in the main stages of the development and evolution of asymptotic giant branch (AGB) star s-process models, the distributions of neutron exposures in the nucleosynthesis regions can all be expressed by an exponential function ρ_AGB(τ) = (C/τ0) exp(-τ/τ0) in the effective range of values. However, the specific expressions of the proportionality coefficient C and the mean neutron exposure τ0 in this formula for different models are not completely determined in the related literature. Through dissecting the basic method of solving for the exponential distribution of neutron exposures, and systematically combing through the solution procedure of the exposure distribution for different stellar models, general calculating formulas, together with their auxiliary equations, are derived for C and τ0. Given the discrete distribution of neutron exposures P_k, i.e., the mass ratio (with respect to the material of the He intershell) of the material that has been exposed to neutrons k (k = 0, 1, 2, ...) times when the final distribution is reached, one obtains C = -P_1/ln R and τ0 = -Δτ/ln R. Here, R expresses the probability that the material successively experiences neutron irradiation twice in the He intershell. For the convective nucleosynthesis models (including the Ulrich model and the ^13C-pocket convective burning model), R is just the overlap factor r, namely the mass ratio of the material that undergoes two successive thermal pulses in the He intershell. For the ^13C-pocket radiative burning model, R = Σ_{k=1}^∞ P_k. This set of formulas practically gives the corresponding relationship between C or τ0 and the model parameters. The results of this study effectively solve the problem of analytically calculating the distribution of neutron exposures in the low-mass AGB star s-process nucleosynthesis model with ^13C-pocket radiative burning.
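The two closing relations, C = -P1/ln R and τ0 = -Δτ/ln R, transcribe directly into code. The numbers below are illustrative inputs only; for the convective models R equals the overlap factor r.

```python
import math

def mean_neutron_exposure(dtau: float, R: float) -> float:
    # tau0 = -dtau / ln R, with R the probability of two successive irradiations
    return -dtau / math.log(R)

def coefficient_C(P1: float, R: float) -> float:
    # C = -P1 / ln R
    return -P1 / math.log(R)

r, dtau = 0.6, 0.3   # hypothetical overlap factor and exposure per pulse
tau0 = mean_neutron_exposure(dtau, r)
```

Since 0 < R < 1, ln R is negative and both quantities come out positive, as a physical exposure scale must.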

  8. A fuzzy adaptive network approach to parameter estimation in cases where independent variables come from an exponential distribution

    NASA Astrophysics Data System (ADS)

    Dalkilic, Turkan Erbay; Apaydin, Aysen

    2009-11-01

    In a regression analysis, it is assumed that the observations come from a single class in a data cluster and that the simple functional relationship between the dependent and independent variables can be expressed using the general model Y = f(X) + ε. However, a data cluster may consist of a combination of observations that have different distributions derived from different clusters. When a regression model must be estimated for fuzzy inputs derived from different distributions, the model is termed the 'switching regression model'. Here l_i indicates the class number of each independent variable and p indicates the number of independent variables [J.R. Jang, ANFIS: Adaptive-network-based fuzzy inference system, IEEE Transactions on Systems, Man and Cybernetics 23 (3) (1993) 665-685; M. Michel, Fuzzy clustering and switching regression models using ambiguity and distance rejects, Fuzzy Sets and Systems 122 (2001) 363-399; E.Q. Richard, A new approach to estimating switching regressions, Journal of the American Statistical Association 67 (338) (1972) 306-310]. In this study, adaptive networks have been used to construct a model formed by gathering the obtained models. There are methods that suggest the class numbers of independent variables heuristically. Alternatively, a suggested validity criterion for fuzzy clustering is used to define the optimal class number of independent variables. In the case where the independent variables have an exponential distribution, an algorithm is suggested for defining the unknown parameters of the switching regression model and for obtaining the estimated values after obtaining an optimal membership function suitable for the exponential distribution.

  9. Development and growth of fruit bodies and crops of the button mushroom, Agaricus bisporus.

    PubMed

    Straatsma, Gerben; Sonnenberg, Anton S M; van Griensven, Leo J L D

    2013-10-01

We studied the appearance of fruit body primordia, the growth of individual fruit bodies and the development of the consecutive flushes of the crop. Relative growth, measured as cap expansion, was not constant. It started extremely rapidly and slowed down to an exponential rate with a diameter doubling time of 1.7 d until fruit bodies showed maturation by veil breaking. Initially many outgrowing primordia were arrested, indicating nutritional competition. After reaching 10 mm diameter, no growth arrest occurred; all growing individuals, whether relatively large or small, showed an exponential increase of both cap diameter and biomass, until veil breaking. Biomass doubled in 0.8 d. Exponential growth indicates the absence of competition. Apparently there exist differential nutritional requirements for early growth and for later, continuing growth. Flushing was studied by applying different picking sizes. An ordinary flushing pattern occurred at an immature picking size of 8 mm diameter (picking mushrooms once a day with a diameter above 8 mm). The smallest picking size yielded the highest number of mushrooms picked, confirming the competition and arrested growth of outgrowing primordia: competition seems less if outgrowing primordia are removed early. The flush duration (i.e. between the first and last picking moments) was not affected by picking size. At small picking size, the subsequent flushes were not fully separated in time but overlapped. Within 2 d after picking the first individuals of the first flush, primordia for the second flush started outgrowth. Our work supports the view that the acquisition of nutrients by the mycelium is demand rather than supply driven. For formation and early outgrowth of primordia, indications were found for an alternation of local and global control, at least in the casing layer.
All these data combined, we postulate that flushing is the consequence of the depletion of some unknown specific nutrient required by outgrowing primordia.

  10. On the origin of stretched exponential (Kohlrausch) relaxation kinetics in the room temperature luminescence decay of colloidal quantum dots.

    PubMed

    Bodunov, E N; Antonov, Yu A; Simões Gamboa, A L

    2017-03-21

    The non-exponential room temperature luminescence decay of colloidal quantum dots is often well described by a stretched exponential function. However, the physical meaning of the parameters of the function is not clear in the majority of cases reported in the literature. In this work, the room temperature stretched exponential luminescence decay of colloidal quantum dots is investigated theoretically in an attempt to identify the underlying physical mechanisms associated with the parameters of the function. Three classes of non-radiative transition processes between the excited and ground states of colloidal quantum dots are discussed: long-range resonance energy transfer, multiphonon relaxation, and contact quenching without diffusion. It is shown that multiphonon relaxation cannot explain a stretched exponential functional form of the luminescence decay while such dynamics of relaxation can be understood in terms of long-range resonance energy transfer to acceptors (molecules, quantum dots, or anharmonic molecular vibrations) in the environment of the quantum dots acting as energy-donors or by contact quenching by acceptors (surface traps or molecules) distributed statistically on the surface of the quantum dots. These non-radiative transition processes are assigned to different ranges of the stretching parameter β.

  11. Universal state-selective corrections to multireference coupled-cluster theories with single and double excitations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brabec, Jiri; van Dam, Hubertus JJ; Pittner, Jiri

    2012-03-28

The recently proposed Universal State-Selective (USS) corrections [K. Kowalski, J. Chem. Phys. 134, 194107 (2011)] to approximate Multi-Reference Coupled Cluster (MRCC) energies can be commonly applied to any type of MRCC theory based on the Jeziorski-Monkhorst [B. Jeziorski, H.J. Monkhorst, Phys. Rev. A 24, 1668 (1981)] exponential Ansatz. In this letter we report on the performance of a simple USS correction to the Brillouin-Wigner MRCC (BW-MRCC) formalism employing single and double excitations (BW-MRCCSD). It is shown that the resulting formalism (USS-BW-MRCCSD), which uses the manifold of single and double excitations to construct the correction, can be related to a posteriori corrections utilized in routine BW-MRCCSD calculations. In several benchmark calculations we compare the results of the USS-BW-MRCCSD method with results of the BW-MRCCSD approach employing a posteriori corrections and with results obtained with the Full Configuration Interaction (FCI) method.

  12. Application of a Short Intracellular pH Method to Flow Cytometry for Determining Saccharomyces cerevisiae Vitality

    PubMed Central

    Weigert, Claudia; Steffler, Fabian; Kurz, Tomas; Shellhammer, Thomas H.; Methner, Frank-Jürgen

    2009-01-01

    The measurement of yeast's intracellular pH (ICP) is a proven method for determining yeast vitality. Vitality describes the condition or health of viable cells as opposed to viability, which defines living versus dead cells. In contrast to fluorescence photometric measurements, which show only average ICP values of a population, flow cytometry allows the presentation of an ICP distribution. By examining six repeated propagations with three separate growth phases (lag, exponential, and stationary), the ICP method previously established for photometry was transferred successfully to flow cytometry by using the pH-dependent fluorescent probe 5,6-carboxyfluorescein. The correlation between the two methods was good (r2 = 0.898, n = 18). With both methods it is possible to track the course of growth phases. Although photometry did not yield significant differences between the exponential and stationary phases (P = 0.433), ICP via flow cytometry did (P = 0.012). Yeast in an exponential phase has a unimodal ICP distribution, reflective of a homogeneous population; however, yeast in a stationary phase displays a broader ICP distribution, and subpopulations could be defined by using the flow cytometry method. In conclusion, flow cytometry yielded specific evidence of the heterogeneity in vitality of a yeast population as measured via ICP. In contrast to photometry, flow cytometry increases information about the yeast population's vitality via a short measurement, which is suitable for routine analysis. PMID:19581482

  13. Theory for Transitions Between Exponential and Stationary Phases: Universal Laws for Lag Time

    NASA Astrophysics Data System (ADS)

    Himeoka, Yusuke; Kaneko, Kunihiko

    2017-04-01

    The quantitative characterization of bacterial growth has attracted substantial attention since Monod's pioneering study. Theoretical and experimental works have uncovered several laws for describing the exponential growth phase, in which the number of cells grows exponentially. However, microorganism growth also exhibits lag, stationary, and death phases under starvation conditions, in which cell growth is highly suppressed, for which quantitative laws or theories are markedly underdeveloped. In fact, the models commonly adopted for the exponential phase that consist of autocatalytic chemical components, including ribosomes, can only show exponential growth or decay in a population; thus, phases that halt growth are not realized. Here, we propose a simple, coarse-grained cell model that includes an extra class of macromolecular components in addition to the autocatalytic active components that facilitate cellular growth. These extra components form a complex with the active components to inhibit the catalytic process. Depending on the nutrient condition, the model exhibits typical transitions among the lag, exponential, stationary, and death phases. Furthermore, the lag time needed for growth recovery after starvation scales with the square root of the starvation time and is inversely related to the maximal growth rate. This is in agreement with experimental observations, in which the length of time of cell starvation is memorized in the slow accumulation of molecules. Moreover, the distribution of lag times among cells is skewed, with a long tail. If the starvation time is longer, an exponential tail appears, which is also consistent with experimental data. Our theory further predicts a strong dependence of lag time on the speed of substrate depletion, which can be tested experimentally. The present model and theoretical analysis provide universal growth laws beyond the exponential phase, offering insight into how cells halt growth without entering the death phase.

  14. Transient Properties of Probability Distribution for a Markov Process with Size-dependent Additive Noise

    NASA Astrophysics Data System (ADS)

    Yamada, Yuhei; Yamazaki, Yoshihiro

    2018-04-01

    This study considered a stochastic model for cluster growth in a Markov process with a cluster size dependent additive noise. According to this model, the probability distribution of the cluster size transiently becomes an exponential or a log-normal distribution depending on the initial condition of the growth. In this letter, a master equation is obtained for this model, and derivation of the distributions is discussed.

  15. Accumulated distribution of material gain at dislocation crystal growth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rakin, V. I., E-mail: rakin@geo.komisc.ru

    2016-05-15

    A model for slowing down the tangential growth rate of an elementary step at dislocation crystal growth is proposed based on the exponential law of impurity particle distribution over adsorption energy. It is established that the statistical distribution of material gain on structurally equivalent faces obeys the Erlang law. The Erlang distribution is proposed to be used to calculate the occurrence rates of morphological combinatorial types of polyhedra, presenting real simple crystallographic forms.
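As a quick numerical illustration of the Erlang law invoked above (not the paper's own code), an Erlang(k, λ) variate can be generated as the sum of k independent Exp(λ) variates, so the sample mean should approach k/λ. All names and parameter values below are illustrative.

```python
import random

random.seed(42)

def erlang_sample(k, lam, rng=random):
    """Erlang(k, lam) variate as a sum of k iid Exp(lam) variates."""
    return sum(rng.expovariate(lam) for _ in range(k))

# 100,000 draws from Erlang(k=3, lam=2.0); the theoretical mean is k/lam = 1.5
samples = [erlang_sample(3, 2.0) for _ in range(100_000)]
sample_mean = sum(samples) / len(samples)
```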

  16. The double power law in human collaboration behavior: The case of Wikipedia

    NASA Astrophysics Data System (ADS)

    Kwon, Okyu; Son, Woo-Sik; Jung, Woo-Sung

    2016-11-01

    We study human behavior in terms of the inter-event time distribution of revision behavior on Wikipedia, an online collaborative encyclopedia. We observe a double power law distribution for the inter-editing behavior at the population level and a single power law distribution at the individual level. Although interactions between users are indirect or moderate on Wikipedia, we determine that the synchronized editing behavior among users plays a key role in determining the slope of the tail of the double power law distribution.
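A standard way to estimate the slope of a power-law tail such as the inter-event time distributions above is the maximum-likelihood (Hill-type) estimator α̂ = n / Σ ln(x_i/x_min). The sketch below is a generic illustration on synthetic Pareto data, not the authors' analysis; the exponent 1.5 and x_min = 1 are made-up values.

```python
import math
import random

random.seed(1)

def pareto_sample(alpha, xmin, rng=random):
    """Inverse-transform sample from a Pareto tail: P(X > x) = (x/xmin)^(-alpha)."""
    return xmin * (1.0 - rng.random()) ** (-1.0 / alpha)

xmin, alpha_true = 1.0, 1.5
data = [pareto_sample(alpha_true, xmin) for _ in range(50_000)]

# Maximum-likelihood (Hill) estimate of the tail exponent
alpha_hat = len(data) / sum(math.log(x / xmin) for x in data)
```

With 50,000 samples the estimate should land within a few percent of the true exponent.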

  17. Superionic state in double-layer capacitors with nanoporous electrodes.

    PubMed

    Kondrat, S; Kornyshev, A

    2011-01-19

    In recent experiments (Chmiola et al 2006 Science 313 1760; Largeot et al 2008 J. Am. Chem. Soc. 130 2730) an anomalous increase of the capacitance with a decrease of the pore size of a carbon-based porous electric double-layer capacitor has been observed. We explain this effect by image forces which exponentially screen out the electrostatic interactions of ions in the interior of a pore. Packing of ions of the same sign becomes easier and is mainly limited by steric interactions. We call this state 'superionic' and suggest a simple model to describe it. The model reveals the possibility of a voltage-induced first order transition between a cation(anion)-deficient phase and a cation(anion)-rich phase which manifests itself in a jump of capacitance as a function of voltage.

  18. Nocturnal Dynamics of Sleep-Wake Transitions in Patients With Narcolepsy.

    PubMed

    Zhang, Xiaozhe; Kantelhardt, Jan W; Dong, Xiao Song; Krefting, Dagmar; Li, Jing; Yan, Han; Pillmann, Frank; Fietze, Ingo; Penzel, Thomas; Zhao, Long; Han, Fang

    2017-02-01

    We investigate how characteristics of sleep-wake dynamics in humans are modified by narcolepsy, a clinical condition that is supposed to destabilize sleep-wake regulation. Subjects with and without cataplexy are considered separately. Differences in sleep scoring habits as a possible confounder have been examined. Four groups of subjects are considered: narcolepsy patients from China with (n = 88) and without (n = 15) cataplexy, healthy controls from China (n = 110) and from Europe (n = 187, 2 nights each). After sleep-stage scoring and calculation of sleep characteristic parameters, the distributions of wake-episode durations and sleep-episode durations are determined for each group and fitted by power laws (exponent α) and by exponentials (decay time τ). We find that wake duration distributions are consistent with power laws for healthy subjects (China: α = 0.88, Europe: α = 1.02). Wake durations in all groups of narcolepsy patients, however, follow the exponential law (τ = 6.2-8.1 min). All sleep duration distributions are best fitted by exponentials on long time scales (τ = 34-82 min). We conclude that narcolepsy mainly alters the control of wake-episode durations but not sleep-episode durations, irrespective of cataplexy. Observed distributions of shortest wake and sleep durations suggest that differences in scoring habits regarding the scoring of short-term sleep stages may notably influence the fitting parameters but do not affect the main conclusion.
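The model comparison underlying such studies (an exponential versus a power-law fit to episode durations) can be sketched with maximum-likelihood fits and a log-likelihood comparison. The code below is a hedged illustration on synthetic data, not the study's pipeline; the 7-minute decay time and the 0.5-minute lower cutoff are made-up values (7 min sits inside the paper's reported 6.2-8.1 min range).

```python
import math
import random

random.seed(7)

xmin = 0.5        # shortest scorable episode in minutes (illustrative)
tau_true = 7.0    # exponential decay time in minutes (illustrative)
data = [xmin + random.expovariate(1.0 / tau_true) for _ in range(20_000)]

# Exponential model shifted to xmin: the MLE of tau is the mean excess over xmin
tau_hat = sum(x - xmin for x in data) / len(data)
ll_exp = sum(-math.log(tau_hat) - (x - xmin) / tau_hat for x in data)

# Power-law (Pareto) model with the same xmin: MLE of the exponent alpha
alpha_hat = len(data) / sum(math.log(x / xmin) for x in data)
ll_pow = sum(math.log(alpha_hat / xmin) - (alpha_hat + 1) * math.log(x / xmin)
             for x in data)

# The model with the higher log-likelihood wins; here the data are exponential
best = "exponential" if ll_exp > ll_pow else "power law"
```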

  19. Improved Reweighting of Accelerated Molecular Dynamics Simulations for Free Energy Calculation.

    PubMed

    Miao, Yinglong; Sinko, William; Pierce, Levi; Bucher, Denis; Walker, Ross C; McCammon, J Andrew

    2014-07-08

    Accelerated molecular dynamics (aMD) simulations greatly improve the efficiency of conventional molecular dynamics (cMD) for sampling biomolecular conformations, but they require proper reweighting for free energy calculation. In this work, we systematically compare the accuracy of different reweighting algorithms including the exponential average, Maclaurin series, and cumulant expansion on three model systems: alanine dipeptide, chignolin, and Trp-cage. Exponential average reweighting can recover the original free energy profiles easily only when the distribution of the boost potential is narrow (e.g., the range ≤20 kBT) as found in dihedral-boost aMD simulation of alanine dipeptide. In dual-boost aMD simulations of the studied systems, exponential average generally leads to high energetic fluctuations, largely due to the fact that the Boltzmann reweighting factors are dominated by a very few high boost potential frames. In comparison, reweighting based on Maclaurin series expansion (equivalent to cumulant expansion on the first order) greatly suppresses the energetic noise but often gives incorrect energy minimum positions and significant errors at the energy barriers (∼2-3 kBT). Finally, reweighting using cumulant expansion to the second order is able to recover the most accurate free energy profiles within statistical errors of ∼kBT, particularly when the distribution of the boost potential exhibits low anharmonicity (i.e., near-Gaussian distribution), and should be of wide applicability. A toolkit of Python scripts for aMD reweighting "PyReweighting" is distributed free of charge at http://mccammon.ucsd.edu/computing/amdReweighting/.
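The second-order cumulant expansion discussed above approximates ln⟨exp(βΔV)⟩ by β⟨ΔV⟩ + β²σ²(ΔV)/2, which is exact when the boost potential ΔV is Gaussian. A minimal sketch with a synthetic Gaussian boost potential and β = 1 (illustrative only; this is not the PyReweighting toolkit, and the mean and width below are made-up values):

```python
import math
import random
import statistics

random.seed(0)

beta = 1.0             # 1/kBT in reduced units
mu, sigma = 2.0, 0.5   # narrow, near-Gaussian boost potential (kBT units)
dV = [random.gauss(mu, sigma) for _ in range(100_000)]

# Direct exponential-average reweighting factor (noisy when sigma is large)
log_exp_avg = math.log(sum(math.exp(beta * v) for v in dV) / len(dV))

# Second-order cumulant expansion: beta*<dV> + beta^2 * var(dV) / 2
log_cumulant = beta * statistics.fmean(dV) + beta**2 * statistics.pvariance(dV) / 2
```

For this near-Gaussian case both estimates agree closely with the analytic value βμ + β²σ²/2 = 2.125; for broad, anharmonic boost distributions the exponential average becomes dominated by a few frames, as the abstract notes.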

  1. Quantifying patterns of research interest evolution

    NASA Astrophysics Data System (ADS)

    Jia, Tao; Wang, Dashun; Szymanski, Boleslaw

    Changing and shifting research interest is an integral part of a scientific career. Despite extensive investigations of various factors that influence a scientist's choice of research topics, quantitative assessments of mechanisms that give rise to macroscopic patterns characterizing research interest evolution of individual scientists remain limited. Here we perform a large-scale analysis of extensive publication records, finding that research interest change follows a reproducible pattern characterized by an exponential distribution. We identify three fundamental features responsible for the observed exponential distribution, which arise from a subtle interplay between exploitation and exploration in research interest evolution. We develop a random walk based model, which adequately reproduces our empirical observations. Our study presents one of the first quantitative analyses of macroscopic patterns governing research interest change, documenting a high degree of regularity underlying scientific research and individual careers.

  2. Exponential synchronization of neural networks with discrete and distributed delays under time-varying sampling.

    PubMed

    Wu, Zheng-Guang; Shi, Peng; Su, Hongye; Chu, Jian

    2012-09-01

    This paper investigates the problem of master-slave synchronization for neural networks with discrete and distributed delays under variable sampling with a known upper bound on the sampling intervals. An improved method is proposed, which captures the characteristic of sampled-data systems. Some delay-dependent criteria are derived to ensure the exponential stability of the error systems, and thus the master systems synchronize with the slave systems. The desired sampled-data controller can be achieved by solving a set of linear matrix inequalities, which depend upon the maximum sampling interval and the decay rate. The obtained conditions are not only less conservative but also involve fewer decision variables than existing results. Simulation results are given to show the effectiveness and benefits of the proposed methods.

  3. Generalized optimal design for two-arm, randomized phase II clinical trials with endpoints from the exponential dispersion family.

    PubMed

    Jiang, Wei; Mahnken, Jonathan D; He, Jianghua; Mayo, Matthew S

    2016-11-01

    For two-arm randomized phase II clinical trials, previous literature proposed an optimal design that minimizes the total sample sizes subject to multiple constraints on the standard errors of the estimated event rates and their difference. The original design is limited to trials with dichotomous endpoints. This paper extends the original approach to be applicable to phase II clinical trials with endpoints from the exponential dispersion family distributions. The proposed optimal design minimizes the total sample sizes needed to provide estimates of population means of both arms and their difference with pre-specified precision. Its applications on data from specific distribution families are discussed under multiple design considerations. Copyright © 2016 John Wiley & Sons, Ltd.

  4. Cell Division and Evolution of Biological Tissues

    NASA Astrophysics Data System (ADS)

    Rivier, Nicolas; Arcenegui-Siemens, Xavier; Schliecker, Gudrun

    A tissue is a geometrical, space-filling, random cellular network; it remains in this steady state while individual cells divide. Cell division (fragmentation) is a local, elementary topological transformation which establishes statistical equilibrium of the structure. Statistical equilibrium is characterized by observable relations (Lewis, Aboav) between cell shapes, sizes and those of their neighbours, obtained through maximum entropy and topological correlation extending to nearest neighbours only, i.e. maximal randomness. For a two-dimensional tissue (epithelium), the distribution of cell shapes and that of mother and daughter cells can be obtained from elementary geometrical and physical arguments, except for an exponential factor favouring division of larger cells, and exponential and combinatorial factors encouraging a most symmetric division. The resulting distributions are very narrow, and stationarity severely restricts the range of an adjustable structural parameter.

  5. Exponential Thurston maps and limits of quadratic differentials

    NASA Astrophysics Data System (ADS)

    Hubbard, John; Schleicher, Dierk; Shishikura, Mitsuhiro

    2009-01-01

    We give a topological characterization of postsingularly finite topological exponential maps, i.e., universal covers g: C → C∖{0} such that 0 has a finite orbit. Such a map either is Thurston equivalent to a unique holomorphic exponential map λe^z or it has a topological obstruction called a degenerate Levy cycle. This is the first analog of Thurston's topological characterization theorem of rational maps, as published by Douady and Hubbard, for the case of infinite degree. One main tool is a theorem about the distribution of mass of an integrable quadratic differential with a given number of poles, providing an almost compact space of models for the entire mass of quadratic differentials. This theorem is given for arbitrary Riemann surfaces of finite type in a uniform way.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diwaker, E-mail: diwakerphysics@gmail.com; Chakraborty, Aniruddha

    The Smoluchowski equation with a time-dependent sink term is solved exactly. In this method, knowing the probability distribution P(0, s) at the origin allows deriving the probability distribution P(x, s) at all positions. Exact solutions of the Smoluchowski equation are also provided in different cases where the sink term has linear, constant, inverse, and exponential variation in time.

  7. Regulation of Hemopoietic Stem Cell Turnover and Population Size in Neonatal Mice

    DTIC Science & Technology

    1975-04-01

    Following birth, the hematopoietic stem cell population of the liver, as measured by the in vivo spleen nodule assay (CFU), declines with a halving time...of about 48 hours. The stem cell population of the spleen grows exponentially with a doubling time of about 17 hours. In vitro incubation with high...single spleen colonies derived from neonatal liver and spleen CFU that both stem cell populations have a high self-renewal capacity. Thus, the decline in

  8. Changing Mindsets to Transform Security: Leader Development for an Unpredictable and Complex World

    DTIC Science & Technology

    2013-01-01

    fields of physical science, the amount of information is doubling every one to two years, meaning that more than half of what a college student has...beyond a review of current events or it being at an "informational" level. Naval War College Professor Mackubin Owens stated in 2006 that, The new...information technology in education and training underpinned by a stable and experienced academic community that can support the exponential growth

  9. Analytical solution for boundary heat fluxes from a radiating rectangular medium

    NASA Technical Reports Server (NTRS)

    Siegel, R.

    1991-01-01

    Reference is made to the work of Shah (1979) which demonstrated the possibility of partially integrating the radiative equations analytically to obtain an 'exact' solution. Shah's solution was given as a double integration of the modified Bessel function of order zero. Here, it is shown that the 'exact' solution for a rectangular region radiating to cold black walls can be conveniently derived, and expressed in simple form, by using an integral function, Sn, analogous to the exponential integral function appearing in plane-layer solutions.
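The exponential integral functions E_n(x) mentioned above (the plane-layer kernels to which the S_n functions are analogous) satisfy E_n(x) = ∫₀¹ u^(n-2) exp(−x/u) du after the substitution t = 1/u 
in the defining integral E_n(x) = ∫₁^∞ exp(−xt) t^(−n) dt. A rough midpoint-rule sketch (illustrative only; a production code would use a library routine such as an expn implementation):

```python
import math

def expint_En(n, x, steps=200_000):
    """Midpoint-rule approximation of E_n(x) = integral_0^1 u^(n-2) exp(-x/u) du."""
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        u = (i + 0.5) * h
        total += u ** (n - 2) * math.exp(-x / u)
    return total * h

e1 = expint_En(1, 1.0)   # known value: E_1(1) ≈ 0.219384
```

As a consistency check, the recurrence E_2(x) = exp(−x) − x·E_1(x) links consecutive orders.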

  10. Clinical proteomics in kidney disease as an exponential technology: heading towards the disruptive phase.

    PubMed

    Sanchez-Niño, Maria Dolores; Sanz, Ana B; Ramos, Adrian M; Fernandez-Fernandez, Beatriz; Ortiz, Alberto

    2017-04-01

    Exponential technologies double in power or processing speed every year, whereas their cost halves. Deception and disruption are two key stages in the development of exponential technologies. Deception occurs when, after initial introduction, technologies are dismissed as irrelevant, while they continue to progress, perhaps not as fast or with so many immediate practical applications as initially thought. Twenty years after the first publications, clinical proteomics is still not available in most hospitals and some clinicians have felt deception at unfulfilled promises. However, there are indications that clinical proteomics may be entering the disruptive phase, where, once refined, technologies disrupt established industries or procedures. In this regard, recent manuscripts in CKJ illustrate how proteomics is entering the clinical realm, with applications ranging from the identification of amyloid proteins in the pathology lab, to a new generation of urinary biomarkers for chronic kidney disease (CKD) assessment and outcome prediction. Indeed, one such panel of urinary peptidomics biomarkers, CKD273, recently received a Food and Drug Administration letter of support, the first ever in the CKD field. In addition, a must-read resource providing information on kidney disease-related proteomics and systems biology databases and how to access and use them in clinical decision-making was also recently published in CKJ.

  11. Decision Support System for hydrological extremes

    NASA Astrophysics Data System (ADS)

    Bobée, Bernard; El Adlouni, Salaheddine

    2014-05-01

    The study of the tail behaviour of extreme event distributions is important in several applied statistical fields such as hydrology, finance, and telecommunications. For example in hydrology, it is important to estimate extreme quantiles adequately in order to build and manage safe and effective hydraulic structures (dams, for example). Two main classes of distributions are used in hydrological frequency analysis: the class D of sub-exponential distributions (Gamma (G2), Gumbel, Halphen type A (HA), Halphen type B (HB)…) and the class C of regularly varying distributions (Fréchet, Log-Pearson, Halphen type IB…) with a heavier tail. A Decision Support System (DSS) based on the characterization of the right tail, corresponding to a low probability of exceedance p (i.e., a high return period T = 1/p in hydrology), has been developed. The DSS allows discriminating between classes C and D and, in its latest version, a new prior step is added in order to test Lognormality. Indeed, the right tail of the Lognormal distribution (LN) lies between the tails of distributions of the classes C and D; studies have indicated difficulty in discriminating between LN and distributions of the classes C and D. Other tools are useful to discriminate between distributions of the same class D (HA, HB and G2; see other communication). Numerical illustrations show that the DSS allows discriminating between Lognormal, regularly varying and sub-exponential distributions, and leads to coherent conclusions. Key words: Regularly varying distributions, sub-exponential distributions, Decision Support System, Heavy tailed distribution, Extreme value theory

  12. Transition from Exponential to Power Law Income Distributions in a Chaotic Market

    NASA Astrophysics Data System (ADS)

    Pellicer-Lostao, Carmen; Lopez-Ruiz, Ricardo

    Economy is demanding new models, able to understand and predict the evolution of markets. To this respect, Econophysics offers models of markets as complex systems, that try to comprehend macro-, system-wide states of the economy from the interaction of many agents at micro-level. One of these models is the gas-like model for trading markets. This tries to predict money distributions in closed economies and, quite simply, obtains the ones observed in real economies. However, it has difficulty explaining the power law distribution observed for individuals with high incomes. In this work, nonlinear dynamics is introduced in the gas-like model in an effort to overcome these flaws. A particular chaotic dynamics is used to break the pairing symmetry of agents (i, j) ⇔ (j, i). The results demonstrate that a "chaotic gas-like model" can reproduce the Exponential and Power law distributions observed in real economies. Moreover, it controls the transition between them. This may give some insight into the micro-level causes that originate unfair distributions of money in a global society. Ultimately, the chaotic model makes obvious the inherent instability of asymmetric scenarios, where sinks of wealth appear and doom the market to extreme inequality.
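The symmetric gas-like model referred to above can be sketched in a few lines: agents start with equal money and repeatedly split the pooled money of a random pair at random. The stationary money distribution is exponential (Boltzmann-Gibbs), so roughly a fraction 1 − e⁻¹ ≈ 0.63 of agents ends up below the mean. This is a generic kinetic-exchange sketch under those assumptions, not the authors' chaotic variant; all parameter values are illustrative.

```python
import random

random.seed(3)

n_agents = 2000
money = [1.0] * n_agents   # everyone starts with one unit of money

# Random pairwise exchanges with a random split of the pair's pooled money
for _ in range(200_000):
    i, j = random.randrange(n_agents), random.randrange(n_agents)
    if i == j:
        continue
    pool = money[i] + money[j]
    share = random.random()
    money[i], money[j] = share * pool, (1.0 - share) * pool

mean = sum(money) / n_agents   # total money is conserved, so the mean stays 1.0
frac_below_mean = sum(m < mean for m in money) / n_agents
```

Breaking the (i, j) ⇔ (j, i) symmetry of the exchange rule is what the abstract proposes in order to reach the power-law regime.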

  13. Resource acquisition, distribution and end-use efficiencies and the growth of industrial society

    NASA Astrophysics Data System (ADS)

    Jarvis, A. J.; Jarvis, S. J.; Hewitt, C. N.

    2015-10-01

    A key feature of the growth of industrial society is the acquisition of increasing quantities of resources from the environment and their distribution for end-use. With respect to energy, the growth of industrial society appears to have been near-exponential for the last 160 years. We provide evidence that indicates that the global distribution of resources that underpins this growth may be facilitated by the continual development and expansion of near-optimal directed networks (roads, railways, flight paths, pipelines, cables etc.). However, despite this continual striving for optimisation, the distribution efficiencies of these networks must decline over time as they expand due to path lengths becoming longer and more tortuous. Therefore, to maintain long-term exponential growth the physical limits placed on the distribution networks appear to be counteracted by innovations deployed elsewhere in the system, namely at the points of acquisition and end-use of resources. We postulate that the maintenance of the growth of industrial society, as measured by global energy use, at the observed rate of ~2.4% yr⁻¹ stems from an implicit desire to optimise patterns of energy use over human working lifetimes.

  14. A mathematical model for the occurrence of historical events

    NASA Astrophysics Data System (ADS)

    Ohnishi, Teruaki

    2017-12-01

    A mathematical model was proposed for the frequency distribution of the historical inter-event time τ. Its basic ingredient was constructed by assuming that the significance of a newly occurring historical event depends on the magnitude of the preceding event, that this significance decays through oblivion during successive events, and that events occur according to an independent Poisson process. The frequency distribution of τ was derived by integrating this ingredient over all social fields and all stakeholders. Depending on the values of the constants appearing in the ingredient, the distribution takes the form of an exponential, a power law, or an exponential with a tail. The validity of the model was examined by applying it to the two cases of modern China and the Northern Ireland Troubles, where the τ-distribution varies with the country interacting with China and with the stage of the history of the Troubles, respectively. This indicates that history consists of many components with such different types of τ-distribution, a situation similar to that of other general human activities.

  15. On stable Pareto laws in a hierarchical model of economy

    NASA Astrophysics Data System (ADS)

    Chebotarev, A. M.

    2007-01-01

    This study considers a model of the income distribution of agents whose pairwise interaction is asymmetric and price-invariant. Asymmetric transactions are typical for chain-trading groups who arrange their business such that commodities move from senior to junior partners and money moves in the opposite direction. The price-invariance of transactions means that the probability of a pairwise interaction is a function of the ratio of incomes, which is independent of the price scale or absolute income level. These two features characterize the hierarchical model. The income distribution in this class of models is a well-defined double-Pareto function, which possesses Pareto tails for the upper and lower incomes. For gross and net upper incomes, the model predicts definite values of the Pareto exponents, a_gross and a_net, which are stable with respect to quantitative variation of the pair-interaction. The Pareto exponents are also stable with respect to the choice of a demand function within two classes of status-dependent behavior of agents: linear demand (a_gross = 1, a_net = 2) and unlimited slowly varying demand (a_gross = a_net = 1). For the sigmoidal demand that describes limited returns, a_gross = a_net = 1 + α, with some α > 0 satisfying a transcendental equation. The low-income distribution may be singular or vanishing in the neighborhood of the minimal income; in any case, it is L1-integrable and its Pareto exponent is given explicitly. The theory used in the present study is based on a simple balance equation and new results from multiplicative Markov chains and exponential moments of random geometric progressions.

  16. Determine Neuronal Tuning Curves by Exploring Optimum Firing Rate Distribution for Information Efficiency

    PubMed Central

    Han, Fang; Wang, Zhijie; Fan, Hong

    2017-01-01

    This paper proposed a new method to determine the neuronal tuning curves for maximum information efficiency by computing the optimum firing rate distribution. Firstly, we proposed a general definition for the information efficiency, which is relevant to mutual information and neuronal energy consumption. The energy consumption is composed of two parts: neuronal basic energy consumption and neuronal spike emission energy consumption. A parameter to model the relative importance of energy consumption is introduced in the definition of the information efficiency. Then, we designed a combination of exponential functions to describe the optimum firing rate distribution based on the analysis of the dependency of the mutual information and the energy consumption on the shape of the functions of the firing rate distributions. Furthermore, we developed a rapid algorithm to search the parameter values of the optimum firing rate distribution function. Finally, we found with the rapid algorithm that a combination of two different exponential functions with two free parameters can describe the optimum firing rate distribution accurately. We also found that if the energy consumption is relatively unimportant (important) compared to the mutual information or the neuronal basic energy consumption is relatively large (small), the curve of the optimum firing rate distribution will be relatively flat (steep), and the corresponding optimum tuning curve exhibits a form of sigmoid if the stimuli distribution is normal. PMID:28270760

  17. Properties of single NMDA receptor channels in human dentate gyrus granule cells

    PubMed Central

    Lieberman, David N; Mody, Istvan

    1999-01-01

    Cell-attached single-channel recordings of NMDA channels were carried out in human dentate gyrus granule cells acutely dissociated from slices prepared from hippocampi surgically removed for the treatment of temporal lobe epilepsy (TLE). The channels were activated by l-aspartate (250–500 nm) in the presence of saturating glycine (8 μm). The main conductance was 51 ± 3 pS. In ten of thirty granule cells, clear subconductance states were observed with a mean conductance of 42 ± 3 pS, representing 8 ± 2% of the total openings. The mean open times varied from cell to cell, possibly owing to differences in the epileptogenicity of the tissue of origin. The mean open time was 2.70 ± 0.95 ms (range, 1.24–4.78 ms). In 87% of the cells, three exponential components were required to fit the apparent open time distributions. In the remaining neurons, as in control rat granule cells, two exponentials were sufficient. Shut time distributions were fitted by five exponential components. The average numbers of openings in bursts (1.74 ± 0.09) and clusters (3.06 ± 0.26) were similar to values obtained in rodents. The mean burst (6.66 ± 0.9 ms), cluster (20.1 ± 3.3 ms) and supercluster lengths (116.7 ± 17.5 ms) were longer than those in control rat granule cells, but approached the values previously reported for TLE (kindled) rats. As in rat NMDA channels, adjacent open and shut intervals appeared to be inversely related to each other, but it was only the relative areas of the three open time constants that changed with adjacent shut time intervals. The long openings of human TLE NMDA channels resembled those produced by calcineurin inhibitors in control rat granule cells. Yet the calcineurin inhibitor FK-506 (500 nm) did not prolong the openings of human channels, consistent with a decreased calcineurin activity in human TLE. Many properties of the human NMDA channels resemble those recorded in rat hippocampal neurons. 
Both have similar slope conductances, five exponential shut time distributions, complex groupings of openings, and a comparable number of openings per grouping. Other properties of human TLE NMDA channels correspond to those observed in kindling; the openings are considerably long, requiring an additional exponential component to fit their distributions, and inhibition of calcineurin is without effect in prolonging the openings. PMID:10373689

  18. Calcium Isotope Analysis with "Peak Cut" Method on Column Chemistry

    NASA Astrophysics Data System (ADS)

    Zhu, H.; Zhang, Z.; Liu, F.; Li, X.

    2017-12-01

    To eliminate isobaric interferences from elemental and molecular isobars (e.g., 40K+, 48Ti+, 88Sr2+, 24Mg16O+, 27Al16O+) on Ca isotopes during mass determination, samples should be purified through ion-exchange column chemistry before analysis. However, large Ca isotopic fractionation has been observed during column chemistry (Russell and Papanastassiou, 1978; Zhu et al., 2016). Full recovery during column chemistry is therefore essential, as poor recovery would otherwise introduce uncertainties (Zhu et al., 2016). At the same time, matrix effects can be enhanced by full recovery, because other elements may overlap with the Ca cut during column chemistry. Matrix effects and full recovery are difficult to balance, and both need to be considered for high-precision analysis of stable Ca isotopes. Here, we investigate the influence of poor recovery on δ44/40Ca using TIMS with the double spike technique. The δ44/40Ca values of IAPSO seawater, ML3B-G and BHVO-2 in different Ca subcuts (e.g., 0-20, 20-40, 40-60, 60-80, 80-100%) with 20% Ca recovery on column chemistry display limited variation after correction by the 42Ca-43Ca double spike technique with the exponential law. Notably, δ44/40Ca of each Ca subcut is consistent, within error, with δ44/40Ca of a Ca cut with full recovery. Our results indicate that the 42Ca-43Ca double spike technique can properly correct both the Ca isotopic fractionation that occurs during column chemistry and that which occurs during thermal ionization mass spectrometry (TIMS) determination, because both fractionations follow the exponential law well. Therefore, we propose the "peak cut" method for Ca column chemistry on samples with complex matrix effects. Briefly, for samples with low Ca contents, we can add the double spike before column chemistry, collect only the middle of the Ca eluate, and discard both sides of the Ca eluate that might overlap with other elements (e.g., K, Sr). This method would eliminate matrix effects and improve the efficiency of the column chemistry.
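
    The exponential mass-fractionation law invoked above can be illustrated in a few lines. This is a hedged toy calculation, not the full double-spike inversion: the atomic masses are standard table values, the ratios and the fractionation exponent β are illustrative assumptions, and the 42/40 pair simply plays the role of the spike-derived constraint.

```python
import numpy as np

# Atomic masses of 40Ca, 42Ca, 44Ca (standard table values).
m40, m42, m44 = 39.9625909, 41.9586180, 43.9554816
# Approximate natural 42Ca/40Ca and 44Ca/40Ca ratios (illustrative only).
R4240_true, R4440_true = 0.00667, 0.02152

beta = 1.5  # hypothetical fractionation exponent for this illustration
# Exponential law: R_meas = R_true * (m_i / m_j)**beta for each ratio.
R4240_meas = R4240_true * (m42 / m40) ** beta
R4440_meas = R4440_true * (m44 / m40) ** beta

# Solve for beta from the 42/40 pair, then correct the 44/40 ratio.
beta_hat = np.log(R4240_meas / R4240_true) / np.log(m42 / m40)
R4440_corr = R4440_meas / (m44 / m40) ** beta_hat
print(np.isclose(R4440_corr, R4440_true))  # exact recovery under the law
```

    The same algebra underlies why fractionation acquired at two different stages (column and TIMS) can be corrected in one step as long as both follow the exponential law.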

  19. Statistical Characteristics of the Gaussian-Noise Spikes Exceeding the Specified Threshold as Applied to Discharges in a Thundercloud

    NASA Astrophysics Data System (ADS)

    Klimenko, V. V.

    2017-12-01

    We obtain expressions for the probability of normal-noise spikes with a Gaussian correlation function and for the probability density of the inter-spike intervals. In contrast to delta-correlated noise, for which the intervals are exponentially distributed, for a finite noise-correlation time (frequency-bandwidth restriction) the probability of a subsequent spike depends on the previous spike, and the interval-distribution law deviates from the exponential one. This deviation is most pronounced for a low detection threshold. We observe a similarity between the behavior of the distributions of the inter-discharge intervals in a thundercloud and that of the noise spikes as the repetition rate of the discharges/spikes, determined by the ratio of the detection threshold to the root-mean-square value of the noise, is varied. The results of this work can be useful for the quantitative description of the statistical characteristics of noise spikes and for studying the role of fluctuations in discharge emergence in a thundercloud.

  20. Base stock system for patient vs impatient customers with varying demand distribution

    NASA Astrophysics Data System (ADS)

    Fathima, Dowlath; Uduman, P. Sheik

    2013-09-01

    An optimal base-stock inventory policy for patient and impatient customers is examined using finite-horizon models. The base-stock system for patient and impatient customers is a distinct type of inventory policy. In Model I, the base stock for the patient-customer case is evaluated using the truncated exponential distribution. Model II studies base-stock inventory policies for impatient customers. A study of these systems reveals that customers either wait until the arrival of the next order or leave the system, which leads to lost sales. In both models, demand during the period [0, t] is taken to be a random variable. In this paper, the truncated exponential distribution satisfies the base-stock policy for the patient customer as a continuous model. Previously, the base stock for impatient customers led to a discrete case; here we model this condition as a continuous case. We justify this approach both mathematically and numerically.
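
    Demand draws from a truncated exponential, as used for the patient-customer model, can be generated by inverting its CDF. The rate and truncation point below are arbitrary illustrative values, not parameters from the paper.

```python
import numpy as np

def sample_trunc_exp(lam, b, size, rng):
    """Inverse-CDF draw from an Exponential(lam) truncated to [0, b]:
    F(x) = (1 - exp(-lam*x)) / (1 - exp(-lam*b))."""
    u = rng.random(size)
    return -np.log(1.0 - u * (1.0 - np.exp(-lam * b))) / lam

rng = np.random.default_rng(1)
demand = sample_trunc_exp(lam=0.5, b=4.0, size=100_000, rng=rng)
print(demand.min() >= 0.0 and demand.max() <= 4.0)  # all draws lie in [0, b]
```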

  1. The topology of large Open Connectome networks for the human brain.

    PubMed

    Gastner, Michael T; Ódor, Géza

    2016-06-07

    The structural human connectome (i.e. the network of fiber connections in the brain) can be analyzed at ever finer spatial resolution thanks to advances in neuroimaging. Here we analyze several large data sets for the human brain network made available by the Open Connectome Project. We apply statistical model selection to characterize the degree distributions of graphs containing up to nodes and edges. A three-parameter generalized Weibull (also known as a stretched exponential) distribution is a good fit to most of the observed degree distributions. For almost all networks, simple power laws cannot fit the data, but in some cases there is statistical support for power laws with an exponential cutoff. We also calculate the topological (graph) dimension D and the small-world coefficient σ of these networks. While σ suggests a small-world topology, we found that D < 4, showing that long-distance connections provide only a small correction to the topology of the embedding three-dimensional space.

  2. The topology of large Open Connectome networks for the human brain

    NASA Astrophysics Data System (ADS)

    Gastner, Michael T.; Ódor, Géza

    2016-06-01

    The structural human connectome (i.e. the network of fiber connections in the brain) can be analyzed at ever finer spatial resolution thanks to advances in neuroimaging. Here we analyze several large data sets for the human brain network made available by the Open Connectome Project. We apply statistical model selection to characterize the degree distributions of graphs containing up to nodes and edges. A three-parameter generalized Weibull (also known as a stretched exponential) distribution is a good fit to most of the observed degree distributions. For almost all networks, simple power laws cannot fit the data, but in some cases there is statistical support for power laws with an exponential cutoff. We also calculate the topological (graph) dimension D and the small-world coefficient σ of these networks. While σ suggests a small-world topology, we found that D < 4, showing that long-distance connections provide only a small correction to the topology of the embedding three-dimensional space.
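
    A stretched-exponential (Weibull) degree law can be checked by linearizing its survival function, since ln(−ln S(k)) = c·ln k − c·ln λ is linear in ln k. The sketch below uses synthetic data with assumed parameters, not the Open Connectome degree sequences.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic "degrees" with survival law S(k) = exp(-(k/lam)**c), sampled
# by inverse transform; c and lam are assumed values for illustration.
c_true, lam = 0.6, 10.0
k = lam * (-np.log(rng.random(50_000))) ** (1.0 / c_true)

# Empirical CCDF, then a linear fit of ln(-ln S) against ln k.
ks = np.sort(k)
ccdf = 1.0 - np.arange(1, ks.size + 1) / (ks.size + 1.0)
mask = (ccdf > 1e-3) & (ccdf < 1.0 - 1e-3)   # trim both extreme tails
slope, intercept = np.polyfit(np.log(ks[mask]), np.log(-np.log(ccdf[mask])), 1)
print(round(slope, 2))  # should recover a stretching exponent close to 0.6
```

    On real degree data the same plot bending away from a straight line is one quick diagnostic that a pure power law or pure exponential is the wrong family.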

  3. The Superstatistical Nature and Interoccurrence Time of Atmospheric Mercury Concentration Fluctuations

    NASA Astrophysics Data System (ADS)

    Carbone, F.; Bruno, A. G.; Naccarato, A.; De Simone, F.; Gencarelli, C. N.; Sprovieri, F.; Hedgecock, I. M.; Landis, M. S.; Skov, H.; Pfaffhuber, K. A.; Read, K. A.; Martin, L.; Angot, H.; Dommergue, A.; Magand, O.; Pirrone, N.

    2018-01-01

    The probability density function (PDF) of the time intervals between subsequent extreme events in atmospheric Hg0 concentration data series from different latitudes has been investigated. The Hg0 dynamics possess a long-term-memory autocorrelation function. Above a fixed threshold Q in the data, the PDFs of the interoccurrence time of the Hg0 data are well described by a Tsallis q-exponential function. This PDF behavior has been explained in the framework of superstatistics, where the competition between multiple mesoscopic processes affects the macroscopic dynamics. An extensive parameter μ, encompassing all possible fluctuations related to mesoscopic phenomena, has been identified. It follows a χ2 distribution, indicative of the superstatistical nature of the overall process. Shuffling the data series destroys the long-term memory, the distributions become independent of Q, and the PDFs collapse onto the same exponential distribution. The possible central role of atmospheric turbulence on extreme events in the Hg0 data is highlighted.
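
    The superstatistical mechanism can be reproduced with a toy model: interoccurrence times that are locally exponential, but whose rate fluctuates with a Gamma (χ²-type) distribution, marginalize to a q-exponential (Lomax) law. All parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
# Fluctuating rate beta ~ Gamma(a, theta), locally exponential waiting
# times tau ~ Exp(beta); values are illustrative.
a, theta = 3.0, 0.5
beta = rng.gamma(a, theta, size=200_000)
tau = rng.exponential(1.0 / beta)

# The marginal is a Lomax (q-exponential) law: S(t) = (1 + theta*t)**(-a).
t = 5.0
emp = (tau > t).mean()
theory = (1.0 + theta * t) ** (-a)
print(abs(emp - theory) < 0.005)  # power-law tail, not exponential
```

    Shuffling in the paper plays the opposite role: it freezes the rate statistics seen by the tail and collapses the PDFs back to a single exponential.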

  4. Periodicity and global exponential stability of generalized Cohen-Grossberg neural networks with discontinuous activations and mixed delays.

    PubMed

    Wang, Dongshu; Huang, Lihong

    2014-03-01

    In this paper, we investigate the periodic dynamical behaviors of a class of general Cohen-Grossberg neural networks with discontinuous right-hand sides and time-varying and distributed delays. By means of retarded differential inclusion theory and the fixed point theorem of multi-valued maps, the existence of periodic solutions for the neural networks is obtained. We then derive some sufficient conditions for the global exponential stability and convergence of the neural networks, in terms of nonsmooth analysis theory with a generalized Lyapunov approach. Our results remain valid without assuming boundedness (or a growth condition) or monotonicity of the discontinuous neuron activation functions. Moreover, our results extend previous works not only on discrete time-varying and distributed delayed neural networks with continuous or even Lipschitz continuous activations, but also on discrete time-varying and distributed delayed neural networks with discontinuous activations. We give some numerical examples to show the applicability and effectiveness of our main results. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Probability Distributions for Random Quantum Operations

    NASA Astrophysics Data System (ADS)

    Schultz, Kevin

    Motivated by uncertainty quantification and inference of quantum information systems, in this work we draw connections between the notions of random quantum states and operations in quantum information with probability distributions commonly encountered in the field of orientation statistics. This approach identifies natural sample spaces and probability distributions upon these spaces that can be used in the analysis, simulation, and inference of quantum information systems. The theory of exponential families on Stiefel manifolds provides the appropriate generalization to the classical case. Furthermore, this viewpoint motivates a number of additional questions into the convex geometry of quantum operations relative to both the differential geometry of Stiefel manifolds as well as the information geometry of exponential families defined upon them. In particular, we draw on results from convex geometry to characterize which quantum operations can be represented as the average of a random quantum operation. This project was supported by the Intelligence Advanced Research Projects Activity via Department of Interior National Business Center Contract Number 2012-12050800010.
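
    One standard construction connected to this viewpoint is drawing the unitary part of a random quantum operation Haar-uniformly, via the QR decomposition of a complex Ginibre matrix. This is a generic textbook recipe, not the paper's Stiefel-manifold machinery.

```python
import numpy as np

def haar_unitary(n, rng):
    """Haar-distributed unitary from the QR decomposition of a complex
    Ginibre matrix, with the standard phase fix on the diagonal of R."""
    z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))   # rescale each column by a unit phase

rng = np.random.default_rng(4)
u = haar_unitary(4, rng)
print(np.allclose(u.conj().T @ u, np.eye(4)))  # unitarity check
```

    The phase fix matters: plain QR output is unitary but not Haar-distributed, because the signs/phases of the R diagonal bias the sample.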

  6. GISAXS modelling of helium-induced nano-bubble formation in tungsten and comparison with TEM

    NASA Astrophysics Data System (ADS)

    Thompson, Matt; Sakamoto, Ryuichi; Bernard, Elodie; Kirby, Nigel; Kluth, Patrick; Riley, Daniel; Corr, Cormac

    2016-05-01

    Grazing-incidence small angle x-ray scattering (GISAXS) is a powerful non-destructive technique for the measurement of nano-bubble formation in tungsten under helium plasma exposure. Here, we present a comparative study between transmission electron microscopy (TEM) and GISAXS measurements of nano-bubble formation in tungsten exposed to helium plasma in the Large Helical Device (LHD) fusion experiment. Both techniques are in excellent agreement, suggesting that nano-bubbles range from spheroidal to ellipsoidal, displaying exponential diameter distributions with mean diameters μ=0.68 ± 0.04 nm and μ=0.6 ± 0.1 nm measured by TEM and GISAXS respectively. Depth distributions were also computed, with calculated exponential depth distributions with mean depths of 8.4 ± 0.5 nm and 9.1 ± 0.4 nm for TEM and GISAXS. In GISAXS modelling, spheroidal particles were fitted with an aspect ratio ε=0.7 ± 0.1. The GISAXS model used is described in detail.

  7. Changes in speed distribution: Applying aggregated safety effect models to individual vehicle speeds.

    PubMed

    Vadeby, Anna; Forsman, Åsa

    2017-06-01

    This study investigated the effect of applying two aggregated models (the Power model and the Exponential model) to individual vehicle speeds instead of mean speeds. This is of particular interest when the measure introduced affects different parts of the speed distribution differently. The aim was to examine how the estimated overall risk was affected when assuming the models are valid on an individual vehicle level. Speed data from two applications of speed measurements were used in the study: an evaluation of movable speed cameras and a national evaluation of new speed limits in Sweden. The results showed that when applied on individual vehicle speed level compared with aggregated level, there was essentially no difference between these for the Power model in the case of injury accidents. However, for fatalities the difference was greater, especially for roads with new cameras where those driving fastest reduced their speed the most. For the case with new speed limits, the individual approach estimated a somewhat smaller effect, reflecting that changes in the 15th percentile (P15) were somewhat larger than changes in P85 in this case. For the Exponential model there was also a clear, although small, difference between applying the model to mean speed changes and individual vehicle speed changes when speed cameras were used. This applied both for injury accidents and fatalities. There were also larger effects for the Exponential model than for the Power model, especially for injury accidents. In conclusion, applying the Power or Exponential model to individual vehicle speeds is an alternative that provides reasonable results in relation to the original Power and Exponential models, but more research is needed to clarify the shape of the individual risk curve. It is not surprising that the impact on severe traffic crashes was larger in situations where those driving fastest reduced their speed the most. 
Further investigations on use of the Power and/or the Exponential model at individual vehicle level would require more data on the individual level from a range of international studies. Copyright © 2017 Elsevier Ltd. All rights reserved.
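
    The difference between applying an aggregated model to mean speeds versus individual speeds can be sketched directly. The Power model predicts that accident counts scale as (v_after/v_before)^p; the speed samples, the camera-style effect, and the exponent value below are all hypothetical illustrations, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical before/after speeds: a camera-style measure that slows the
# fastest drivers the most. p = 4 is a commonly used Power-model exponent
# for fatal accidents; all numbers here are illustrative.
p = 4.0
v0 = rng.normal(90.0, 10.0, size=100_000)        # speeds before (km/h)
v1 = np.where(v0 > 100.0, v0 - 8.0, v0 - 2.0)    # speeds after

mean_based = (v1.mean() / v0.mean()) ** p        # model on mean speeds
individual = np.mean((v1 / v0) ** p)             # model per vehicle
print(round(mean_based, 3), round(individual, 3))
```

    When the change is uneven across the speed distribution, the two estimates diverge, which is exactly the situation the study examines.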

  8. Momentum distributions for the quantum delta-kicked rotor with decoherence

    PubMed

    Vant; Ball; Christensen

    2000-05-01

    We report on the momentum distribution line shapes for the quantum delta-kicked rotor in the presence of environment induced decoherence. Experimental and numerical results are presented. In the experiment ultracold cesium atoms are subjected to a pulsed standing wave of near resonant light. Spontaneous scattering of photons destroys dynamical localization. For the scattering rates used in our experiment the momentum distribution shapes remain essentially exponential.

  9. Reliability Overhaul Model

    DTIC Science & Technology

    1989-08-01

    Random variables for the conditional exponential distribution are generated using the inverse transform method. (1) Generate U ~ U(0,1) (2) Set s = -λ ln... e^(-[(x+s-γ)/η]^β + [(x-γ)/η]^β) c. Random variables from the conditional Weibull distribution are generated using the inverse transform method. (1)...using a standard normal transformation and the inverse transform method. B-3 APPENDIX B DISTRIBUTIONS SUPPORTED BY THE MODEL (1) Generate Y ~ P(X ≤ ...
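
    The unconditional inverse-transform recipes behind the garbled listing can be restated cleanly. The report's conditional variants (sampling given survival to a current age, for overhaul modeling) add a shift that is omitted in this sketch; parameter values are arbitrary.

```python
import numpy as np

def exp_inverse_transform(lam, u):
    """Exponential(rate lam) via inverse transform: x = -ln(1 - U)/lam."""
    return -np.log(1.0 - u) / lam

def weibull_inverse_transform(beta, eta, u):
    """Weibull(shape beta, scale eta): x = eta * (-ln(1 - U))**(1/beta)."""
    return eta * (-np.log(1.0 - u)) ** (1.0 / beta)

rng = np.random.default_rng(6)
u = rng.random(200_000)
x = exp_inverse_transform(2.0, u)           # sample mean approaches 1/2
w = weibull_inverse_transform(1.5, 3.0, u)
print(round(x.mean(), 2))
```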

  10. Topics in the Sequential Design of Experiments

    DTIC Science & Technology

    1992-03-01

    Approved for public release; distribution unlimited. Subject terms: Design of Experiments, Renewal Theory, Sequential Testing, Limit Theory, Local... "...distributions for one parameter exponential families," by Michael Woodroofe. Statistica Sinica, 2 (1991), 91-112. [6] "A non linear renewal theory for a functional of...

  11. Statistical properties of effective drought index (EDI) for Seoul, Busan, Daegu, Mokpo in South Korea

    NASA Astrophysics Data System (ADS)

    Park, Jong-Hyeok; Kim, Ki-Beom; Chang, Heon-Young

    2014-08-01

    Time series of drought indices have so far been considered mostly in view of the temporal and spatial distributions of a drought index. Here we investigate the statistical properties of the daily Effective Drought Index (EDI) itself for Seoul, Busan, Daegu, and Mokpo over the 100-year period from 1913 to 2012. We have found that in both dry and wet seasons the distribution of EDI values follows a Gaussian function. In the dry season the Gaussian is characteristically broader than in the wet season. The total number of drought days during the analyzed period is related both to the mean value and, more importantly, to the standard deviation. We have also found that the number of occasions on which the EDI values of several consecutive days are all less than a threshold follows an exponential distribution. The slope of the best fit becomes steeper not only as the critical EDI value becomes more negative but also as the number of consecutive days increases. The slope of the exponential distribution also becomes steeper as the number of cities in which EDI is simultaneously less than a critical value in a row increases. Finally, we conclude by pointing out implications of our findings.
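
    The consecutive-day counting can be sketched as follows. White Gaussian noise stands in for the EDI series (the real EDI is autocorrelated) and the threshold is an arbitrary choice; the counts should fall off roughly exponentially with the run length, as the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(7)
edi = rng.normal(size=36_500)   # ~100 years of synthetic daily values

def count_runs(x, threshold, n_days):
    """Count maximal runs where x stays below `threshold` for >= n_days."""
    count, run = 0, 0
    for below in (x < threshold):
        run = run + 1 if below else 0
        if run == n_days:       # count each qualifying run once
            count += 1
    return count

for n in (3, 5, 7):
    print(n, count_runs(edi, -0.3, n))  # counts fall roughly exponentially
```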

  12. Survival distributions impact the power of randomized placebo-phase design and parallel groups randomized clinical trials.

    PubMed

    Abrahamyan, Lusine; Li, Chuan Silvia; Beyene, Joseph; Willan, Andrew R; Feldman, Brian M

    2011-03-01

    The study evaluated the power of the randomized placebo-phase design (RPPD), a new design of randomized clinical trials (RCTs), compared with the traditional parallel-groups design, assuming various response-time distributions. In the RPPD, at some point all subjects receive the experimental therapy, and the exposure to placebo is for only a short fixed period of time. For the study, an object-oriented simulation program was written in R. The power of the simulated trials was evaluated using six scenarios, where the treatment response times followed the exponential, Weibull, or lognormal distributions. The median response time was assumed to be 355 days for the placebo and 42 days for the experimental drug. Based on the simulation results, the sample size requirements to achieve the same level of power differed across response-time distributions. The scenario where the response times followed the exponential distribution had the highest sample size requirement. In most scenarios, the parallel-groups RCT had higher power than the RPPD. The sample size requirement varies depending on the underlying hazard distribution, and the RPPD requires more subjects to achieve power similar to that of the parallel-groups design. Copyright © 2011 Elsevier Inc. All rights reserved.
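
    A heavily simplified Monte Carlo power sketch in the spirit of the study (the authors' program was written in R and compared designs on response times; here responses are dichotomized at a fixed follow-up and tested with a normal approximation, and the trial sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(8)
med_p, med_d = 355.0, 42.0      # median response times from the abstract
follow_up, n_arm, z_crit = 90.0, 25, 1.96   # arbitrary trial parameters

def one_trial():
    tp = rng.exponential(med_p / np.log(2), size=n_arm)   # placebo arm
    td = rng.exponential(med_d / np.log(2), size=n_arm)   # drug arm
    p1, p2 = (tp <= follow_up).mean(), (td <= follow_up).mean()
    pool = 0.5 * (p1 + p2)
    se = np.sqrt(2.0 * pool * (1.0 - pool) / n_arm)
    return se > 0 and abs(p1 - p2) / se > z_crit          # two-sided z-test

power = np.mean([one_trial() for _ in range(2000)])
print(power > 0.8)  # large effect -> high power even with small arms
```

    Swapping the exponential for Weibull or lognormal response times with the same medians changes the response probability at follow-up, which is one mechanism behind the distribution-dependent sample sizes the study reports.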

  13. Constraining the double gluon distribution by the single gluon distribution

    DOE PAGES

    Golec-Biernat, Krzysztof; Lewandowska, Emilia; Serino, Mirko; ...

    2015-10-03

    We show how to consistently construct initial conditions for the QCD evolution equations for double parton distribution functions in the pure gluon case. We use the momentum sum rule for this purpose and a specific form of the known single gluon distribution function in the MSTW parameterization. The resulting double gluon distribution satisfies the momentum sum rule exactly and is parameter free. Furthermore, we study numerically its evolution with a hard scale and show the approximate factorization into a product of two single gluon distributions at small values of x, whereas at large values of x the factorization is always violated, in agreement with the sum rule.

  14. The Mass Distribution of Stellar-mass Black Holes

    NASA Astrophysics Data System (ADS)

    Farr, Will M.; Sravan, Niharika; Cantrell, Andrew; Kreidberg, Laura; Bailyn, Charles D.; Mandel, Ilya; Kalogera, Vicky

    2011-11-01

    We perform a Bayesian analysis of the mass distribution of stellar-mass black holes using the observed masses of 15 low-mass X-ray binary systems undergoing Roche lobe overflow and 5 high-mass, wind-fed X-ray binary systems. Using Markov Chain Monte Carlo calculations, we model the mass distribution both parametrically—as a power law, exponential, Gaussian, combination of two Gaussians, or log-normal distribution—and non-parametrically—as histograms with varying numbers of bins. We provide confidence bounds on the shape of the mass distribution in the context of each model and compare the models with each other by calculating their relative Bayesian evidence as supported by the measurements, taking into account the number of degrees of freedom of each model. The mass distribution of the low-mass systems is best fit by a power law, while the distribution of the combined sample is best fit by the exponential model. This difference indicates that the low-mass subsample is not consistent with being drawn from the distribution of the combined population. We examine the existence of a "gap" between the most massive neutron stars and the least massive black holes by considering the value, M1%, of the 1% quantile from each black hole mass distribution as the lower bound of black hole masses. Our analysis generates posterior distributions for M1%; the best model (the power law) fitted to the low-mass systems has a distribution of lower bounds with M1% > 4.3 Msun with 90% confidence, while the best model (the exponential) fitted to all 20 systems has M1% > 4.5 Msun with 90% confidence. We conclude that our sample of black hole masses provides strong evidence of a gap between the maximum neutron star mass and the lower bound on black hole masses. 
Our results on the low-mass sample are in qualitative agreement with those of Ozel et al., although our broad model selection analysis more reliably reveals the best-fit quantitative description of the underlying mass distribution. The results on the combined sample of low- and high-mass systems are in qualitative agreement with Fryer & Kalogera, although the presence of a mass gap remains theoretically unexplained.

  15. Coupled double-distribution-function lattice Boltzmann method for the compressible Navier-Stokes equations.

    PubMed

    Li, Q; He, Y L; Wang, Y; Tao, W Q

    2007-11-01

    A coupled double-distribution-function lattice Boltzmann method is developed for the compressible Navier-Stokes equations. Different from existing thermal lattice Boltzmann methods, this method can recover the compressible Navier-Stokes equations with a flexible specific-heat ratio and Prandtl number. In the method, a density distribution function based on a multispeed lattice is used to recover the compressible continuity and momentum equations, while the compressible energy equation is recovered by an energy distribution function. The energy distribution function is then coupled to the density distribution function via the thermal equation of state. In order to obtain an adjustable specific-heat ratio, a constant related to the specific-heat ratio is introduced into the equilibrium energy distribution function. Two different coupled double-distribution-function lattice Boltzmann models are also proposed in the paper. Numerical simulations are performed for the Riemann problem, the double-Mach-reflection problem, and the Couette flow with a range of specific-heat ratios and Prandtl numbers. The numerical results are found to be in excellent agreement with analytical and/or other solutions.

  16. Regulating Effect of Asymmetrical Impeller on the Flow Distributions of Double-sided Centrifugal Compressor

    NASA Astrophysics Data System (ADS)

    Yang, Ce; Liu, Yixiong; Yang, Dengfeng; Wang, Benjiang

    2017-11-01

    To achieve the rebalance of the flow distributions of double-sided impellers, a method of increasing the radius of the rear impeller is presented in this paper. It is found that the flow distributions of the front and rear impellers can be adjusted effectively by increasing the radius of the rear impeller, thus improving the balance of the flow distributions between the front and rear impellers. Meanwhile, the working mode conversion process of the double-sided centrifugal compressor is also changed. Further analysis shows that the flow rates of the blade channels in the front impeller are mainly influenced by the circumferential distribution of static pressure in the volute, whereas the flow rates of the rear-impeller blade channels are influenced by the outlet flow field of the bent duct in addition to the static pressure distribution in the volute. In the airflow interaction area downstream, the flow rate of the blade channel is noticeably smaller. By increasing the radius of the rear impeller, the work capacity of the rear impeller is enhanced, the conversion from the parallel working mode of the double-sided impellers to the single-impeller working mode is delayed, and the stable working range of the double-sided compressor is broadened.

  17. The effect of convective boundary condition on MHD mixed convection boundary layer flow over an exponentially stretching vertical sheet

    NASA Astrophysics Data System (ADS)

    Isa, Siti Suzilliana Putri Mohamed; Arifin, Norihan Md.; Nazar, Roslinda; Bachok, Norfifah; Ali, Fadzilah Md

    2017-12-01

    A theoretical study that describes the magnetohydrodynamic mixed convection boundary layer flow with heat transfer over an exponentially stretching sheet with an exponential temperature distribution has been presented herein. This study is conducted in the presence of convective heat exchange at the surface and its surroundings. The system is controlled by viscous dissipation and internal heat generation effects. The governing nonlinear partial differential equations are converted into ordinary differential equations by a similarity transformation. The converted equations are then solved numerically using the shooting method. The results related to skin friction coefficient, local Nusselt number, velocity and temperature profiles are presented for several sets of values of the parameters. The effects of the governing parameters on the features of the flow and heat transfer are examined in detail in this study.

  18. Recurrence time statistics for finite size intervals

    NASA Astrophysics Data System (ADS)

    Altmann, Eduardo G.; da Silva, Elton C.; Caldas, Iberê L.

    2004-12-01

    We investigate the statistics of recurrences to finite size intervals for chaotic dynamical systems. We find that the typical distribution presents an exponential decay for almost all recurrence times, except for a few short times affected by a kind of memory effect. We interpret this effect as being related to the unstable periodic orbits inside the interval. Although it is restricted to a few short times, it changes the whole distribution of recurrences. We show that for systems with strong mixing properties the exponential decay converges to Poissonian statistics as the width of the interval goes to zero. However, we caution that special attention to the size of the interval is required in order to guarantee that the short-time memory effect is negligible when one is interested in numerically or experimentally calculated Poincaré recurrence time statistics.
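    The decay described above is easy to reproduce numerically. Below is a minimal sketch (not the authors' code) that collects recurrence times to a small interval for the chaotic logistic map; the map parameter, interval, and run length are arbitrary choices for illustration:

    ```python
    import numpy as np

    def recurrence_times(n_steps=200_000, interval=(0.30, 0.32), r=3.9, x0=0.123):
        """Collect return times to a small interval for the chaotic logistic map
        x -> r*x*(1-x). For a strongly mixing system and a small interval the
        recurrence-time distribution should approach exponential (Poissonian)
        statistics, apart from the short-time memory effect discussed above."""
        x = x0
        times = []
        last_entry = None
        for t in range(n_steps):
            x = r * x * (1.0 - x)
            if interval[0] <= x < interval[1]:
                if last_entry is not None:
                    times.append(t - last_entry)
                last_entry = t
        return np.array(times)

    taus = recurrence_times()
    mean_tau = taus.mean()
    # For purely Poissonian recurrences the std/mean ratio tends to 1.
    print(f"mean recurrence time: {mean_tau:.1f}, cv: {taus.std() / mean_tau:.2f}")
    ```

    Histogramming `taus` and plotting on a log scale would show the exponential bulk plus the short-time deviations the abstract describes.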

  19. Fast self contained exponential random deviate algorithm

    NASA Astrophysics Data System (ADS)

    Fernández, Julio F.

    1997-03-01

    An algorithm that generates random numbers with an exponential distribution and is about ten times faster than other well-known algorithms has been reported before (J. F. Fernández and J. Rivero, Comput. Phys. 10, 83 (1996)). That algorithm requires input of uniform random deviates. We now report a new version of it that needs no input and is nearly as fast. The only limitation we predict thus far for the quality of the output is the amount of computer memory available. Performance results under various tests will be reported. The algorithm works in close analogy to the setup that is often used in statistical physics in order to obtain the Gibbs distribution. N numbers, stored in N registers, change with time according to the rules of the algorithm, keeping their sum constant. Further details will be given.
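    The register scheme in the abstract can be illustrated with a toy version (an assumption on our part, not the published algorithm): N registers with a fixed total repeatedly exchange value in uniform random splits, which drives each register's marginal distribution toward the exponential (Gibbs) form:

    ```python
    import random

    def exponential_pool(n=10_000, sweeps=50, seed=1):
        """Toy 'energy exchange' sketch: N registers hold positive numbers with
        a fixed total. Repeatedly pick two registers and split their combined
        value at a uniform random point. The sum stays constant, and each
        register's marginal distribution relaxes to the exponential (Gibbs)
        distribution with mean equal to the pool average."""
        rng = random.Random(seed)
        x = [1.0] * n               # total = n, so the mean is 1
        for _ in range(sweeps * n):
            i = rng.randrange(n)
            j = rng.randrange(n)
            if i == j:
                continue
            s = x[i] + x[j]
            u = rng.random()
            x[i], x[j] = u * s, (1.0 - u) * s
        return x

    pool = exponential_pool()
    mean = sum(pool) / len(pool)
    frac_below_mean = sum(1 for v in pool if v < mean) / len(pool)
    # For Exp(1), P(X < mean) = 1 - e**-1, roughly 0.632.
    print(mean, frac_below_mean)
    ```

    Reading successive register values then yields exponential deviates without consuming uniform deviates per output, which is the spirit of the self-contained variant described above.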

  20. Rainbow net analysis of VAXcluster system availability

    NASA Technical Reports Server (NTRS)

    Johnson, Allen M., Jr.; Schoenfelder, Michael A.

    1991-01-01

    A system modeling technique, Rainbow Nets, is used to evaluate the availability and mean-time-to-interrupt of the VAXcluster. These results are compared to the exact analytic results showing that reasonable accuracy is achieved through simulation. The complexity of the Rainbow Net does not increase as the number of processors increases, but remains constant, unlike a Markov model which expands exponentially. The constancy is achieved by using tokens with identity attributes (items) that can have additional attributes associated with them (features) which can exist in multiple states. The time to perform the simulation increases, but this is a polynomial increase rather than exponential. There is no restriction on distributions used for transition firing times, allowing real situations to be modeled more accurately by choosing the distribution which best fits the system performance and eliminating the need for simplifying assumptions.

  1. A closer look at the effect of preliminary goodness-of-fit testing for normality for the one-sample t-test.

    PubMed

    Rochon, Justine; Kieser, Meinhard

    2011-11-01

    Student's one-sample t-test is a commonly used method when inference about the population mean is made. As advocated in textbooks and articles, the assumption of normality is often checked by a preliminary goodness-of-fit (GOF) test. In a paper recently published by Schucany and Ng it was shown that, for the uniform distribution, screening of samples by a pretest for normality leads to a more conservative conditional Type I error rate than application of the one-sample t-test without preliminary GOF test. In contrast, for the exponential distribution, the conditional level is even more elevated than the Type I error rate of the t-test without pretest. We examine the reasons behind these characteristics. In a simulation study, samples drawn from the exponential, lognormal, uniform, Student's t-distribution with 2 degrees of freedom (t(2)) and the standard normal distribution that had passed normality screening, as well as the ingredients of the test statistics calculated from these samples, are investigated. For non-normal distributions, we found that preliminary testing for normality may change the distribution of means and standard deviations of the selected samples as well as the correlation between them (if the underlying distribution is non-symmetric), thus leading to altered distributions of the resulting test statistics. It is shown that for skewed distributions the excess in Type I error rate may be even more pronounced when testing one-sided hypotheses. ©2010 The British Psychological Society.
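    The conditional Type I error effect is straightforward to reproduce in simulation. The sketch below (illustrative parameters, not those of Schucany and Ng or the present paper) screens small exponential samples with the Shapiro-Wilk test and compares the t-test rejection rate with and without the pretest:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n, reps, alpha = 10, 3000, 0.05
    rej_all = rej_passed = passed = 0
    for _ in range(reps):
        x = rng.exponential(scale=1.0, size=n)          # true mean = 1, so H0 holds
        t_rej = stats.ttest_1samp(x, popmean=1.0).pvalue < alpha
        rej_all += t_rej
        if stats.shapiro(x).pvalue > alpha:             # sample "passes" the normality pretest
            passed += 1
            rej_passed += t_rej
    print(f"unconditional Type I error: {rej_all / reps:.3f}")
    print(f"conditional on passing pretest: {rej_passed / passed:.3f}")
    ```

    Comparing the two printed rates illustrates how conditioning on a passed GOF pretest changes the t-test's effective level for skewed parent distributions.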

  2. A Semi-Analytical Extraction Method for Interface and Bulk Density of States in Metal Oxide Thin-Film Transistors

    PubMed Central

    Chen, Weifeng; Wu, Weijing; Zhou, Lei; Xu, Miao; Wang, Lei; Peng, Junbiao

    2018-01-01

    A semi-analytical extraction method of interface and bulk density of states (DOS) is proposed by using the low-frequency capacitance–voltage characteristics and current–voltage characteristics of indium zinc oxide thin-film transistors (IZO TFTs). In this work, an exponential potential distribution along the depth direction of the active layer is assumed and confirmed by numerical solution of Poisson’s equation followed by device simulation. The interface DOS is obtained as a superposition of constant deep states and exponential tail states. Moreover, it is shown that the bulk DOS may be represented by the superposition of exponential deep states and exponential tail states. The extracted values of bulk DOS and interface DOS are further verified by comparing the measured transfer and output characteristics of IZO TFTs with the simulation results by a 2D device simulator ATLAS (Silvaco). As a result, the proposed extraction method may be useful for diagnosing and characterising metal oxide TFTs, since it extracts the interface and bulk DOS simultaneously and quickly. PMID:29534492

  3. A UNIVERSAL NEUTRAL GAS PROFILE FOR NEARBY DISK GALAXIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bigiel, F.; Blitz, L., E-mail: bigiel@uni-heidelberg.de

    2012-09-10

    Based on sensitive CO measurements from HERACLES and H I data from THINGS, we show that the azimuthally averaged radial distribution of the neutral gas surface density (Σ_HI + Σ_H2) in 33 nearby spiral galaxies exhibits a well-constrained universal exponential distribution beyond 0.2 × r_25 (inside of which the scatter is large) with less than a factor of two scatter out to two optical radii r_25. Scaling the radius to r_25 and the total gas surface density to the surface density at the transition radius, i.e., where Σ_HI and Σ_H2 are equal, as well as removing galaxies that are interacting with their environment, yields a tightly constrained exponential fit with average scale length 0.61 ± 0.06 r_25. In this case, the scatter reduces to less than 40% across the optical disks (and remains below a factor of two at larger radii). We show that the tight exponential distribution of neutral gas implies that the total neutral gas mass of nearby disk galaxies depends primarily on the size of the stellar disk (influenced to some degree by the great variability of Σ_H2 inside 0.2 × r_25). The derived prescription predicts the total gas mass in our sub-sample of 17 non-interacting disk galaxies to within a factor of two. Given the short timescale over which star formation depletes the H_2 content of these galaxies and the large range of r_25 in our sample, there appears to be some mechanism leading to these largely self-similar radial gas distributions in nearby disk galaxies.
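    For illustration, the exponential fit behind the quoted scale length amounts to linear regression in log space. The profile below is synthetic, generated with the paper's average scale length of 0.61 r_25 (the normalization is an arbitrary choice):

    ```python
    import numpy as np

    # Hypothetical azimuthally averaged gas surface densities, sampled at
    # radii in units of r_25, following an exponential with scale length
    # 0.61 r_25 (the paper's average value).
    radii = np.linspace(0.2, 2.0, 15)            # r / r_25
    sigma_gas = 12.0 * np.exp(-radii / 0.61)     # arbitrary normalization

    # Fit log(Sigma) = log(Sigma0) - r / l by linear least squares.
    slope, intercept = np.polyfit(radii, np.log(sigma_gas), 1)
    scale_length = -1.0 / slope
    print(scale_length)   # recovers 0.61 (in units of r_25)
    ```

    With real profiles the scatter about this line is what the quoted 40% figure quantifies.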

  4. Electrostatic screening in classical Coulomb fluids: exponential or power-law decay or both? An investigation into the effect of dispersion interactions

    NASA Astrophysics Data System (ADS)

    Kjellander, Roland

    2006-04-01

    It is shown that the nature of the non-electrostatic part of the pair interaction potential in classical Coulomb fluids can have a profound influence on the screening behaviour. Two cases are compared: (i) when the non-electrostatic part equals an arbitrary finite-ranged interaction and (ii) when a dispersion r-6 interaction potential is included. A formal analysis is done in exact statistical mechanics, including an investigation of the bridge function. It is found that the Coulombic r-1 and the dispersion r-6 potentials are coupled in a very intricate manner as regards the screening behaviour. The classical one-component plasma (OCP) is a particularly clear example due to its simplicity and is investigated in detail. When the dispersion r-6 potential is turned on, the screened electrostatic potential from a particle goes from a monotonic exponential decay, exp(-κr)/r, to a power-law decay, r-8, for large r. The pair distribution function acquires, at the same time, an r-10 decay for large r instead of the exponential one. There still remain exponentially decaying contributions to both functions, but these contributions turn oscillatory when the r-6 interaction is switched on. When the Coulomb interaction is turned off but the dispersion r-6 pair potential is kept, the decay of the pair distribution function for large r goes over from the r-10 to an r-6 behaviour, which is the normal one for fluids of electroneutral particles with dispersion interactions. Differences and similarities compared to binary electrolytes are pointed out.

  5. Taming the runaway problem of inflationary landscapes

    NASA Astrophysics Data System (ADS)

    Hall, Lawrence J.; Watari, Taizan; Yanagida, T. T.

    2006-05-01

    A wide variety of vacua, and their cosmological realization, may provide an explanation for the apparently anthropic choices of some parameters of particle physics and cosmology. If the probability on various parameters is weighted by volume, a flat potential for slow-roll inflation is also naturally understood, since the flatter the potential the larger the volume of the subuniverse. However, such inflationary landscapes have a serious problem, predicting an environment that makes it exponentially hard for observers to exist and giving an exponentially small probability for a moderate universe like ours. A general solution to this problem is proposed, and is illustrated in the context of inflaton decay and leptogenesis, leading to an upper bound on the reheating temperature in our subuniverse. In a particular scenario of chaotic inflation and nonthermal leptogenesis, predictions can be made for the size of CP violating phases, the rate of neutrinoless double beta decay and, in the case of theories with gauge-mediated weak-scale supersymmetry, for the fundamental scale of supersymmetry breaking.

  6. An improved cyan fluorescent protein variant useful for FRET.

    PubMed

    Rizzo, Mark A; Springer, Gerald H; Granada, Butch; Piston, David W

    2004-04-01

    Many genetically encoded biosensors use Förster resonance energy transfer (FRET) between fluorescent proteins to report biochemical phenomena in living cells. Most commonly, the enhanced cyan fluorescent protein (ECFP) is used as the donor fluorophore, coupled with one of several yellow fluorescent protein (YFP) variants as the acceptor. ECFP is used despite several spectroscopic disadvantages, namely a low quantum yield, a low extinction coefficient and a fluorescence lifetime that is best fit by a double exponential. To improve the characteristics of ECFP for FRET measurements, we used a site-directed mutagenesis approach to overcome these disadvantages. The resulting variant, which we named Cerulean (ECFP/S72A/Y145A/H148D), has a greatly improved quantum yield, a higher extinction coefficient and a fluorescence lifetime that is best fit by a single exponential. Cerulean is 2.5-fold brighter than ECFP and replacement of ECFP with Cerulean substantially improves the signal-to-noise ratio of a FRET-based sensor for glucokinase activation.
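    The single- versus double-exponential lifetime distinction can be illustrated by fitting both models to a decay curve. The sketch below uses an invented two-component decay (the amplitudes and lifetimes are illustrative, not measured ECFP values):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Synthetic noiseless decay resembling a two-component fluorophore.
    t = np.linspace(0.0, 12.0, 200)                       # time, ns
    decay = 0.7 * np.exp(-t / 1.3) + 0.3 * np.exp(-t / 3.8)

    def single_exp(t, a, tau):
        return a * np.exp(-t / tau)

    def double_exp(t, a1, tau1, a2, tau2):
        return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

    p1, _ = curve_fit(single_exp, t, decay, p0=(1.0, 2.0))
    p2, _ = curve_fit(double_exp, t, decay, p0=(0.5, 1.0, 0.5, 4.0))
    sse1 = np.sum((decay - single_exp(t, *p1)) ** 2)
    sse2 = np.sum((decay - double_exp(t, *p2)) ** 2)
    print(sse1, sse2)   # the double-exponential model fits far better
    ```

    For a fluorophore like Cerulean, whose decay is well fit by a single exponential, the two residuals would instead be comparable, which is the practical advantage for lifetime-based FRET.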

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rout, Dipak; Vijaya, R.; Centre for Lasers and Photonics, Indian Institute of Technology Kanpur, Kanpur 208016

    Well-ordered opaline photonic crystals are grown by the inward growing self-assembly method from Rhodamine B dye-doped polystyrene colloids. Subsequent to self-assembly, the crystals are infiltrated with gold nanoparticles of 40 nm diameter. Measurements of the stopband features and photoluminescence intensity from these crystals are supplemented by fluorescence decay time analysis. The fluorescence decay times from the dye-doped photonic crystals before and after the infiltration are dramatically different from each other. A lowered fluorescence decay time was observed for the gold-infiltrated crystal, along with an enhanced emission intensity. The double-exponential decay of the fluorescence from the dye-doped crystal gets converted into single-exponential decay upon the infiltration of gold nanoparticles, due to the resonant radiative process resulting from the overlap of the surface plasmon resonance with the emission spectrum. The influence of the localized surface plasmon due to gold nanoparticles on the increase in emission intensity and the decrease in decay time of the emitters is established.

  8. A study on industrial accident rate forecasting and program development of estimated zero accident time in Korea.

    PubMed

    Kim, Tae-gu; Kang, Young-sig; Lee, Hyung-won

    2011-01-01

    To begin a zero accident campaign for industry, the first thing is to estimate the industrial accident rate and the zero accident time systematically. This paper considers the social and technical change of the business environment after beginning the zero accident campaign through quantitative time series analysis methods. These methods include sum of squared errors (SSE), regression analysis method (RAM), exponential smoothing method (ESM), double exponential smoothing method (DESM), auto-regressive integrated moving average (ARIMA) model, and the proposed analytic function method (AFM). The program is developed to estimate the accident rate, zero accident time and achievement probability of an efficient industrial environment. In this paper, MFC (Microsoft Foundation Class) software of Visual Studio 2008 was used to develop a zero accident program. The results of this paper will provide major information for industrial accident prevention and be an important part of stimulating the zero accident campaign within all industrial environments.
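    Of the listed methods, Brown's double exponential smoothing (DESM) is compact enough to sketch directly. The series and smoothing constant below are hypothetical, not Korean accident-rate data:

    ```python
    def brown_des(series, alpha=0.3, horizon=1):
        """Brown's double exponential smoothing: smooth the series twice with
        the same constant alpha, then combine the two smoothed values into a
        level and a trend to forecast `horizon` steps ahead."""
        s1 = s2 = series[0]
        for y in series[1:]:
            s1 = alpha * y + (1 - alpha) * s1      # first smoothing
            s2 = alpha * s1 + (1 - alpha) * s2     # second smoothing
        level = 2 * s1 - s2
        trend = alpha / (1 - alpha) * (s1 - s2)
        return level + horizon * trend

    # Hypothetical declining accident rates per year (illustrative only).
    rates = [1.48, 1.42, 1.37, 1.31, 1.26, 1.22]
    print(brown_des(rates))   # extrapolates the downward trend
    ```

    In a zero-accident-time program, forecasts like this one are extrapolated until the modeled rate reaches the target level, with the remaining error terms (e.g., SSE) used to compare candidate models.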

  9. Location priority for non-formal early childhood education school based on promethee method and map visualization

    NASA Astrophysics Data System (ADS)

    Ayu Nurul Handayani, Hemas; Waspada, Indra

    2018-05-01

    Non-formal Early Childhood Education (non-formal ECE) is education provided for children under 4 years old. In the District of Banyumas, non-formal ECE is monitored by the district government with the help of Sanggar Kegiatan Belajar (SKB) Purwokerto, one of the organizers of non-formal education. The government has a program for extending ECE to all villages in Indonesia; however, the locations for constructing ECE schools in the years ahead have not yet been planned. Therefore, to support that program, a decision support system was built to recommend villages for constructing ECE buildings. The data are projected with Brown's Double Exponential Smoothing method, and the Preference Ranking Organization Method for Enrichment Evaluation (Promethee) is used to generate the priority order. The system presents its recommendations as a map visualization colored according to the priority level of each sub-district and village area. The system was tested with black box testing, Promethee testing, and usability testing. The results showed that the system functionality and the Promethee algorithm worked properly, and that users were satisfied.

  10. Bridging suture makes consistent and secure fixation in double-row rotator cuff repair.

    PubMed

    Fukuhara, Tetsutaro; Mihata, Teruhisa; Jun, Bong Jae; Neo, Masashi

    2017-09-01

    Inconsistent tension distribution may decrease the biomechanical properties of the rotator cuff tendon after double-row repair, resulting in repair failure. The purpose of this study was to compare the tension distribution along the repaired rotator cuff tendon among three double-row repair techniques. In each of 42 fresh-frozen porcine shoulders, a simulated infraspinatus tendon tear was repaired by using 1 of 3 double-row techniques: (1) conventional double-row repair (no bridging suture); (2) transosseous-equivalent repair (bridging suture alone); and (3) compression double-row repair (which combined conventional double-row and bridging sutures). Each specimen underwent cyclic testing at a simulated shoulder abduction angle of 0° or 40° on a material-testing machine. Gap formation and tendon strain were measured during the 1st and 30th cycles. To evaluate tension distribution after cuff repair, difference in gap and tendon strain between the superior and inferior fixations was compared among three double-row techniques. At an abduction angle of 0°, gap formation after either transosseous-equivalent or compression double-row repair was significantly less than that after conventional double-row repair (p < 0.01). During the 30th cycle, both transosseous-equivalent repair (p = 0.02) and compression double-row repair (p = 0.01) at 0° abduction had significantly less difference in gap formation between the superior and inferior fixations than did conventional double-row repair. After the 30th cycle, the difference in longitudinal strain between the superior and inferior fixations at 0° abduction was significantly less with compression double-row repair (2.7% ± 2.4%) than with conventional double-row repair (8.6% ± 5.5%, p = 0.03). 
Bridging sutures facilitate consistent and secure fixation in double-row rotator cuff repairs, suggesting that bridging sutures may be beneficial for distributing tension equally among all sutures during double-row repair of rotator cuff tears. Copyright © 2017 The Japanese Orthopaedic Association. Published by Elsevier B.V. All rights reserved.

  11. Double plasma resonance instability as a source of solar zebra emission

    NASA Astrophysics Data System (ADS)

    Benáček, J.; Karlický, M.

    2018-03-01

    Context. The double plasma resonance (DPR) instability plays a basic role in the generation of solar radio zebras. In a plasma consisting of a loss-cone distribution of hot electrons and a much denser and colder background plasma, this instability generates the upper-hybrid waves, which are then transformed into the electromagnetic waves and observed as radio zebras. Aims: In the present paper we numerically study the double plasma resonance instability from the point of view of the zebra interpretation. Methods: We use a 3-dimensional electromagnetic particle-in-cell (3D PIC) relativistic model. We use this model in two versions: (a) a spatially extended "multi-mode" model and (b) a spatially limited "specific-mode" model. While the multi-mode model is used for detailed computations and verifications of the results obtained by the specific-mode model, the specific-mode model is used for computations in a broad range of model parameters, which considerably saves computational time. For an analysis of the computational results, we developed software tools in Python. Results: First, using the multi-mode model, we study details of the double plasma resonance instability. We show how the distribution function of hot electrons changes during this instability. Then we show that there is a very good agreement between results obtained by the multi-mode and specific-mode models, which is caused by a dominance of the wave with the maximal growth rate. Therefore, for computations in a broad range of model parameters, we use the specific-mode model. We compute the maximal growth rates of the double plasma resonance instability as a function of the ratio between the upper-hybrid frequency ωUH and the electron-cyclotron frequency ωce. We vary temperatures of both the hot and background plasma components and study their effects on the resulting growth rates. The results are compared with the analytical ones.
We find a very good agreement between numerical and analytical growth rates. We also compute saturation energies of the upper-hybrid waves in a very broad range of parameters. We find that the saturation energies of the upper-hybrid waves show maxima and minima at almost the same values of ωUH/ωce as the growth rates, but with a higher contrast between them than the growth rate maxima and minima. The contrast between saturation energy maxima and minima increases when the temperature of hot electrons increases. Furthermore, we find that the saturation energy of the upper-hybrid waves is proportional to the density of hot electrons. The maximum saturated energy can be up to one percent of the kinetic energy of hot electrons. Finally we find that the saturation energy maxima in the interval of ωUH/ωce = 3-18 decrease according to the exponential function. All these findings can be used in the interpretation of solar radio zebras.

  12. How extreme was the October 2015 flood in the Carolinas? An assessment of flood frequency analysis and distribution tails

    NASA Astrophysics Data System (ADS)

    Phillips, R. C.; Samadi, S. Z.; Meadows, M. E.

    2018-07-01

    This paper examines the frequency, distribution tails, and peak-over-threshold (POT) behavior of extreme floods through analysis that centers on the October 2015 flooding in North Carolina (NC) and South Carolina (SC), United States (US). The most striking features of the October 2015 flooding were a short time to peak (Tp) and a multi-hour sustained flood peak, which caused intense and widespread damage to human lives, property, and infrastructure. The 2015 flooding was produced by a sequence of intense rainfall events which originated from category 4 hurricane Joaquin over a period of four days. Here, the probability distribution and distribution parameters (i.e., location, scale, and shape) of floods were investigated by comparing the upper part of the empirical distributions of the annual maximum flood (AMF) and POT series with light- to heavy-tailed theoretical distributions: Fréchet, Pareto, Gumbel, Weibull, Beta, and Exponential. Specifically, four sets of U.S. Geological Survey (USGS) gauging data from the central Carolinas with record lengths from approximately 65-125 years were used. Analysis suggests that heavier-tailed distributions are in better agreement with the POT data, and to some extent the AMF data, than the more commonly used exponential (light-tailed) probability distributions. Further, the threshold selection and record length affect the heaviness of the tail and fluctuations of the parent distributions. The shape parameter and its evolution in the period of record play a critical and poorly understood role in determining the scaling of flood response to intense rainfall.
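    A standard way to probe tail heaviness in a POT analysis is to fit a generalized Pareto distribution to the excesses over a high threshold; a positive fitted shape parameter indicates a heavy (Pareto-type) tail, while a shape near zero corresponds to the light exponential tail. The sketch below uses synthetic flows, not the USGS series:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    # Synthetic peak flows with a heavy (Pareto-type) upper tail, standing
    # in for the gauged series analyzed in the paper.
    flows = stats.genpareto.rvs(c=0.25, scale=100.0, size=5000, random_state=rng)

    threshold = np.quantile(flows, 0.90)
    excesses = flows[flows > threshold] - threshold
    shape, loc, scale = stats.genpareto.fit(excesses, floc=0.0)
    # shape > 0: heavy (Pareto) tail; shape ~ 0: exponential (light) tail.
    print(shape, scale)
    ```

    Repeating the fit for different thresholds shows the sensitivity to threshold selection that the paper highlights.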

  13. Reactor Statics Module, RS-9: Multigroup Diffusion Program Using an Exponential Acceleration Technique.

    ERIC Educational Resources Information Center

    Macek, Victor C.

    The nine Reactor Statics Modules are designed to introduce students to the use of numerical methods and digital computers for calculation of neutron flux distributions in space and energy which are needed to calculate criticality, power distribution, and fuel burnup for both slow neutron and fast neutron fission reactors. The last module, RS-9,…

  14. Distinguishing Response Conflict and Task Conflict in the Stroop Task: Evidence from Ex-Gaussian Distribution Analysis

    ERIC Educational Resources Information Center

    Steinhauser, Marco; Hubner, Ronald

    2009-01-01

    It has been suggested that performance in the Stroop task is influenced by response conflict as well as task conflict. The present study investigated the idea that both conflict types can be isolated by applying ex-Gaussian distribution analysis which decomposes response time into a Gaussian and an exponential component. Two experiments were…
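    An ex-Gaussian decomposition of response times can be sketched with scipy, whose `exponnorm` distribution is exactly the Gaussian-plus-exponential model (in its parameterization, tau = K * scale). The RT parameters below are illustrative, not from the experiments:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    # Simulated response times: Gaussian component (mu, sigma) plus an
    # exponential component (tau) -- the standard ex-Gaussian RT model.
    mu, sigma, tau = 0.45, 0.05, 0.15        # seconds (illustrative values)
    rts = rng.normal(mu, sigma, 5000) + rng.exponential(tau, 5000)

    # scipy parameterizes the ex-Gaussian as exponnorm(K, loc, scale),
    # with loc = mu, scale = sigma, and tau = K * scale.
    K, loc, scale = stats.exponnorm.fit(rts)
    print(loc, scale, K * scale)   # estimates of mu, sigma, tau
    ```

    In the Stroop analysis above, shifts in the fitted Gaussian (mu, sigma) versus the exponential (tau) component are what allow the two conflict types to be separated.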

  15. K-S Test for Goodness of Fit and Waiting Times for Fatal Plane Accidents

    ERIC Educational Resources Information Center

    Gwanyama, Philip Wagala

    2005-01-01

    The Kolmogorov–Smirnov (K-S) test for goodness of fit was developed by Kolmogorov in 1933 [1] and Smirnov in 1939 [2]. Its procedures are suitable for testing the goodness of fit of a data set for most probability distributions regardless of sample size [3-5]. These procedures, modified for the exponential distribution by Lilliefors [5] and…
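    A minimal version of the K-S goodness-of-fit check for exponential waiting times is sketched below; the data are synthetic stand-ins for inter-accident times. Note that when the scale is estimated from the same data, the plain K-S p-value is conservative, which is precisely what the Lilliefors modification corrects:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    # Hypothetical waiting times between fatal accidents (days), drawn here
    # from an exponential distribution, so the test should not reject.
    waits = rng.exponential(scale=365.0, size=80)

    # K-S test against an exponential with loc=0 and the sample mean as scale.
    # Because the scale is estimated from the data, this p-value is
    # conservative; Lilliefors-corrected critical values fix that.
    d_stat, p_value = stats.kstest(waits, "expon", args=(0, waits.mean()))
    print(d_stat, p_value)
    ```

    A small D statistic and large p-value here are consistent with exponentially distributed (i.e., Poisson-process) waiting times.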

  16. Individual and group dynamics in purchasing activity

    NASA Astrophysics Data System (ADS)

    Gao, Lei; Guo, Jin-Li; Fan, Chao; Liu, Xue-Jiao

    2013-01-01

    As a major part of daily operations in an enterprise, purchasing frequency changes constantly. Recent approaches to human dynamics can provide new insights into the economic behavior of companies in the supply chain. This paper captures the creation times of purchase orders to an individual vendor, as well as to all vendors, and investigates their dynamics by applying logarithmic binning to the construction of distribution plots. It is found that the former displays a power-law distribution with exponent approximately 2.0, while the latter is fitted by a mixture distribution with both power-law and exponential characteristics. Thus, the interval time distribution shows distinct characteristics from the perspectives of individual dynamics and group dynamics. This mixing feature can be attributed to fitting deviations: they are negligible for individual dynamics, but the deviations of different vendors accumulate and lead to an exponential factor in the group dynamics. To better describe the mechanism generating the heterogeneity of the purchase order assignment process from the focal company to all its vendors, a model driven by the product life cycle is introduced; the analytical distribution and the simulation results obtained from it are in good agreement with the empirical data.
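    The logarithmic binning step can be sketched as follows: equal-width bins on a log axis keep the sparse tail counts usable, and a straight-line fit in log-log space then recovers the power-law exponent. The Pareto sample below (tail exponent 2.0, matching the individual-vendor statistics) is a synthetic stand-in for the inter-order times:

    ```python
    import numpy as np

    def log_binned_pdf(samples, n_bins=20):
        """Estimate a probability density on logarithmically spaced bins:
        equal bin widths on a log axis, densities normalized by the linear
        bin width, geometric bin centers."""
        samples = np.asarray(samples, dtype=float)
        edges = np.logspace(np.log10(samples.min()),
                            np.log10(samples.max()), n_bins + 1)
        counts, _ = np.histogram(samples, bins=edges)
        widths = np.diff(edges)
        centers = np.sqrt(edges[:-1] * edges[1:])
        density = counts / (widths * len(samples))
        return centers, density

    rng = np.random.default_rng(5)
    # Pareto-distributed intervals with density ~ x**-2 (tail exponent 2.0).
    intervals = rng.pareto(1.0, 50_000) + 1.0
    centers, density = log_binned_pdf(intervals)
    mask = density > 0
    slope = np.polyfit(np.log10(centers[mask]), np.log10(density[mask]), 1)[0]
    print(slope)   # close to -2.0 for a power law with exponent 2.0
    ```

    On linear bins the same tail would be dominated by empty bins and single counts, which is why the paper adopts logarithmic binning.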

  17. In situ observations of snow particle size distributions over a cold frontal rainband within an extratropical cyclone

    NASA Astrophysics Data System (ADS)

    Yang, Jiefan; Lei, Hengchi

    2016-02-01

    Cloud microphysical properties of a mixed-phase cloud generated by a typical extratropical cyclone in the Tongliao area, Inner Mongolia, on 3 May 2014 are analyzed primarily using in situ flight observation data. This study focuses on ice crystal concentration, supercooled cloud water content, and the vertical distributions of fit parameters of snow particle size distributions (PSDs). The results showed several discrepancies between the microphysical properties obtained during two penetrations. During the penetration of precipitating cloud, the maximum ice particle concentration, liquid water content, and ice water content were higher by a factor of 2-3 than their counterparts obtained during the penetration of non-precipitating cloud. The heavily rimed and irregular ice crystals recorded by the 2D imagery probe, together with the vertical distributions of the fitting parameters within the precipitating cloud, show that the ice particles grow while falling via riming and aggregation, whereas the lightly rimed and pristine ice particles and the fitting parameters within the non-precipitating cloud indicate that sublimation dominates. During the two cloud penetrations, the PSDs were generally better represented by gamma distributions than by the exponential form in terms of the coefficient of determination (R²). The correlations between the parameters of the exponential/gamma forms within the two penetrations showed no obvious differences compared with previous studies.
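    The gamma-versus-exponential PSD comparison can be sketched with least-squares fits and the coefficient of determination. The size spectrum below is synthetic (a gamma shape with illustrative parameters), not 2D-probe data:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Synthetic snow PSD with a gamma shape (mu = 2), standing in for the
    # binned probe spectra analyzed in the study.
    D = np.linspace(0.1, 5.0, 40)                 # particle diameter, mm
    N = 1e3 * D**2.0 * np.exp(-2.0 * D)           # concentration per size bin

    def expon_psd(D, N0, lam):                    # N(D) = N0 * exp(-lam * D)
        return N0 * np.exp(-lam * D)

    def gamma_psd(D, N0, mu, lam):                # N(D) = N0 * D**mu * exp(-lam * D)
        return N0 * D**mu * np.exp(-lam * D)

    def r_squared(y, yhat):
        return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

    pe, _ = curve_fit(expon_psd, D, N, p0=(1e3, 1.0), maxfev=10000)
    pg, _ = curve_fit(gamma_psd, D, N, p0=(1e3, 1.0, 1.0), maxfev=10000)
    r2e = r_squared(N, expon_psd(D, *pe))
    r2g = r_squared(N, gamma_psd(D, *pg))
    print(r2e, r2g)   # the gamma form captures the curved spectrum better
    ```

    The extra shape parameter mu is what lets the gamma form follow the curvature of observed spectra, which is the behavior the R² comparison in the study quantifies.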

  18. Visibility graph analysis on quarterly macroeconomic series of China based on complex network theory

    NASA Astrophysics Data System (ADS)

    Wang, Na; Li, Dong; Wang, Qiwen

    2012-12-01

    The visibility graph approach and complex network theory provide new insight into time series analysis. This paper further explores what the visibility graph inherits from the original time series. We found that the degree distributions of visibility graphs extracted from pseudo Brownian motion series obtained by the frequency domain algorithm exhibit exponential behavior, in which the exponential exponent is a binomial function of the Hurst index inherited from the time series. Our simulations showed that the quantitative relations between the Hurst indexes and the exponents of the degree distribution function differ between series, and that the visibility graph inherits some important features of the original time series. Further, we convert several quarterly macroeconomic series, including the growth rates of value-added of three industry series and the growth rates of the Gross Domestic Product series of China, to graphs by the visibility algorithm and explore the topological properties of the networks associated with the four macroeconomic series, namely the degree distribution and correlations, the clustering coefficient, the average path length, and community structure. Based on complex network analysis, we find that the degree distributions of the networks associated with the growth rates of value-added of the three industry series are almost exponential, while the degree distributions of the networks associated with the growth rates of the GDP series are scale free. We also discuss the assortativity and disassortativity of the four associated networks as they relate to the evolution of the original macroeconomic series. All the constructed networks have “small-world” features. The community structures of the associated networks suggest dynamic changes in the original macroeconomic series. We also detected relationships among government policy changes, the community structures of the associated networks, and macroeconomic dynamics.
We find that government policies in China strongly influence the dynamics of GDP and the adjustment of the three industries. The work in this paper provides a new way to understand the dynamics of economic development.
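    The natural visibility construction used above can be sketched directly: nodes are time points, and two points are linked when the straight line between them passes above every intermediate point. This O(n²)-pairs toy version is for illustration only, on a Brownian-motion-like series:

    ```python
    import numpy as np

    def visibility_graph_degrees(series):
        """Natural visibility graph of a time series: link (i, y_i) and
        (j, y_j) if the straight line between them clears every intermediate
        point, i.e. y_k < y_j + (y_i - y_j) * (j - k) / (j - i) for i < k < j.
        Returns the degree of each node."""
        n = len(series)
        degree = np.zeros(n, dtype=int)
        for i in range(n):
            for j in range(i + 1, n):
                visible = all(
                    series[k] < series[j]
                    + (series[i] - series[j]) * (j - k) / (j - i)
                    for k in range(i + 1, j)
                )
                if visible:
                    degree[i] += 1
                    degree[j] += 1
        return degree

    rng = np.random.default_rng(11)
    series = np.cumsum(rng.standard_normal(300))   # Brownian-motion-like walk
    deg = visibility_graph_degrees(series)
    print(deg.mean(), deg.max())
    ```

    Histogramming `deg` then yields the degree distribution whose exponential or scale-free form the paper relates to the Hurst index of the underlying series.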

  19. The Lunar Rock Size Frequency Distribution from Diviner Infrared Measurements

    NASA Astrophysics Data System (ADS)

    Elder, C. M.; Hayne, P. O.; Piqueux, S.; Bandfield, J.; Williams, J. P.; Ghent, R. R.; Paige, D. A.

    2016-12-01

    Knowledge of the rock size frequency distribution on a planetary body is important for understanding its geologic history and for selecting landing sites. The rock size frequency distribution can be estimated by counting rocks in high-resolution images, but most bodies in the solar system have limited areas with adequate coverage. We propose an alternative method to derive and map rock size frequency distributions using multispectral thermal infrared data acquired at multiple times during the night. We demonstrate this new technique for the Moon using data from the Lunar Reconnaissance Orbiter (LRO) Diviner radiometer in conjunction with three-dimensional thermal modeling, leveraging the differential cooling rates of different rock sizes. We assume an exponential rock size frequency distribution, which has been shown to yield a good fit to rock populations in various locations on the Moon, Mars, and Earth [2, 3], and solve for the best radiance fits as a function of local time and wavelength. This method presents several advantages: 1) unlike other thermally derived rock abundance techniques, it is sensitive to rocks smaller than the diurnal skin depth; 2) it does not result in an apparent decrease in rock abundance at night; and 3) it can be validated using images taken at the lunar surface. This method yields both the fraction of the surface covered in rocks of all sizes and the exponential factor, which defines the rate of drop-off in the exponential function at large rock sizes. We will present maps of both these parameters for the Moon, and provide a geological interpretation. In particular, this method reveals rocks in the lunar highlands that are smaller than previous thermal methods could detect. [1] Bandfield J. L. et al. (2011) JGR, 116, E00H02. [2] Golombek and Rapp (1997) JGR, 102, E2, 4117-4129. [3] Cintala, M.J. and K.M. McBride (1995) NASA Technical Memorandum 104804.
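The exponential rock size frequency model cited from Golombek and Rapp (1997) [2] can be written down directly. In the sketch below, F is the total fractional rock coverage and q the exponential drop-off rate; the q(F) relation is their published Mars fit and is used here purely as an illustration:

```python
import math

def rock_coverage(D, F):
    """Cumulative fractional area covered by rocks of diameter >= D metres,
    for total rock coverage F, in the exponential model of Golombek and
    Rapp (1997) [2]. The q(F) fit is theirs; treat it as illustrative."""
    q = 1.79 + 0.152 / F      # exponential drop-off rate
    return F * math.exp(-q * D)
```

By construction, coverage equals F at D = 0 and drops off exponentially toward larger rocks, which is the "exponential factor" the abstract maps.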

  20. Hazard function analysis for flood planning under nonstationarity

    NASA Astrophysics Data System (ADS)

    Read, Laura K.; Vogel, Richard M.

    2016-05-01

    The field of hazard function analysis (HFA) involves a probabilistic assessment of the "time to failure" or "return period," T, of an event of interest. HFA is used in epidemiology, manufacturing, medicine, actuarial statistics, reliability engineering, economics, and elsewhere. For a stationary process, the probability distribution function (pdf) of the return period always follows an exponential distribution; the same is not true for nonstationary processes. When the process of interest, X, exhibits nonstationary behavior, HFA can provide a complementary approach to risk analysis with analytical tools particularly useful for hydrological applications. After a general introduction to HFA, we describe a new mathematical linkage between the magnitude of the flood event, X, and its return period, T, for nonstationary processes. We derive the probabilistic properties of T for a nonstationary one-parameter exponential model of X, and then use both Monte-Carlo simulation and HFA to generalize the behavior of T when X arises from a nonstationary two-parameter lognormal distribution. For this case, our findings suggest that a two-parameter Weibull distribution provides a reasonable approximation for the pdf of T. We document how HFA can provide an alternative approach to characterize the probabilistic properties of both nonstationary flood series and the resulting pdf of T.
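The stationary case mentioned above is easy to verify with a quick Monte Carlo sketch: for iid annual events with exceedance probability p, the waiting time between events is geometric (the discrete analogue of the exponential) with mean 1/p. The p = 0.1 "10-year flood" below is an illustrative choice, not from the paper:

```python
import random

random.seed(42)
p = 0.1             # annual exceedance probability; return period T = 1/p = 10
n_years = 200_000   # length of the synthetic record
waits, count = [], 0
for _ in range(n_years):
    count += 1
    if random.random() < p:   # a "flood" occurs this year
        waits.append(count)   # years since the previous flood
        count = 0

mean_wait = sum(waits) / len(waits)   # should be close to 1/p
```

The sample mean of the waits recovers the return period, and the survival fraction P(W > 10) matches the geometric value (1 - p)^10.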

  1. Growth and differentiation of human lens epithelial cells in vitro on matrix

    NASA Technical Reports Server (NTRS)

    Blakely, E. A.; Bjornstad, K. A.; Chang, P. Y.; McNamara, M. P.; Chang, E.; Aragon, G.; Lin, S. P.; Lui, G.; Polansky, J. R.

    2000-01-01

    PURPOSE: To characterize the growth and maturation of nonimmortalized human lens epithelial (HLE) cells grown in vitro. METHODS: HLE cells, established from 18-week prenatal lenses, were maintained on bovine corneal endothelial (BCE) extracellular matrix (ECM) in medium supplemented with basic fibroblast growth factor (FGF-2). The identity, growth, and differentiation of the cultures were characterized by karyotyping, cell morphology, and growth kinetics studies, reverse transcription-polymerase chain reaction (RT-PCR), immunofluorescence, and Western blot analysis. RESULTS: HLE cells had a male, human diploid (2N = 46) karyotype. The population-doubling time of exponentially growing cells was 24 hours. After 15 days in culture, cell morphology changed, and lentoid formation was evident. Reverse transcription-polymerase chain reaction (RT-PCR) indicated expression of alphaA- and betaB2-crystallin, fibroblast growth factor receptor 1 (FGFR1), and major intrinsic protein (MIP26) in exponential growth. Western analyses of protein extracts show positive expression of three immunologically distinct classes of crystallin proteins (alphaA-, alphaB-, and betaB2-crystallin) with time in culture. By Western blot analysis, expression of p57(KIP2), a known marker of terminally differentiated fiber cells, was detectable in exponential cultures, and levels increased after confluence. MIP26 and gamma-crystallin protein expression was detected in confluent cultures, by using immunofluorescence, but not in exponentially growing cells. CONCLUSIONS: HLE cells can be maintained for up to 4 months on ECM derived from BCE cells in medium containing FGF-2. With time in culture, the cells demonstrate morphologic characteristics of, and express protein markers for, lens fiber cell differentiation. This in vitro model will be useful for investigations of radiation-induced cataractogenesis and other studies of lens toxicity.

  2. The Universal Statistical Distributions of the Affinity, Equilibrium Constants, Kinetics and Specificity in Biomolecular Recognition

    PubMed Central

    Zheng, Xiliang; Wang, Jin

    2015-01-01

    We uncovered the universal statistical laws for the biomolecular recognition/binding process. We quantified the statistical energy landscapes for binding, from which we can characterize the distributions of the binding free energy (affinity), the equilibrium constants, the kinetics and the specificity by exploring the different ligands binding with a particular receptor. The results of the analytical studies are confirmed by the microscopic flexible docking simulations. The distribution of binding affinity is Gaussian around the mean and becomes exponential near the tail. The equilibrium constants of the binding follow a log-normal distribution around the mean and a power law distribution in the tail. The intrinsic specificity of biomolecular recognition measures the degree of discrimination of native versus non-native binding; optimizing it amounts to maximizing the ratio of the free energy gap between the native state and the average of the non-native states to the roughness of the free energy landscape, measured by its variance around the mean. The intrinsic specificity obeys a Gaussian distribution near the mean and an exponential distribution near the tail. Furthermore, the kinetics of binding follows a log-normal distribution near the mean and a power law distribution at the tail. Our study provides new insights into the statistical nature of thermodynamics, kinetics and function from different ligands binding with a specific receptor or equivalently specific ligand binding with different receptors. The elucidation of distributions of the kinetics and free energy has guiding roles in studying biomolecular recognition and function through small-molecule evolution and chemical genetics. PMID:25885453

  3. Simple, accurate formula for the average bit error probability of multiple-input multiple-output free-space optical links over negative exponential turbulence channels.

    PubMed

    Peppas, Kostas P; Lazarakis, Fotis; Alexandridis, Antonis; Dangakis, Kostas

    2012-08-01

    In this Letter we investigate the error performance of multiple-input multiple-output free-space optical communication systems employing intensity modulation/direct detection and operating over strong atmospheric turbulence channels. Atmospheric-induced strong turbulence fading is modeled using the negative exponential distribution. For the considered system, an approximate yet accurate analytical expression for the average bit error probability is derived and an efficient method for its numerical evaluation is proposed. Numerically evaluated and computer simulation results are further provided to demonstrate the validity of the proposed mathematical analysis.

  4. Markov Analysis of Sleep Dynamics

    NASA Astrophysics Data System (ADS)

    Kim, J. W.; Lee, J.-S.; Robinson, P. A.; Jeong, D.-U.

    2009-05-01

    A new approach, based on a Markov transition matrix, is proposed to explain frequent sleep and wake transitions during sleep. The matrix is determined by analyzing hypnograms of 113 obstructive sleep apnea patients. Our approach shows that the statistics of sleep can be constructed via a single Markov process and that the durations of all states have modified exponential distributions, in contrast to recent reports of a scale-free form for the wake stage and an exponential form for the sleep stage. Hypnograms of the same subjects, but treated with Continuous Positive Airway Pressure, are analyzed and compared quantitatively with the pretreatment ones, suggesting potential clinical applications.
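The connection between a Markov transition matrix and exponentially distributed state durations can be illustrated with a toy two-state wake/sleep chain (illustrative probabilities, not the fitted 113-patient matrix): dwell times in each state come out geometric, the discrete analogue of the exponential.

```python
import random

random.seed(0)
# Toy two-state hypnogram chain; P[s][t] = one-epoch transition probability
P = {"wake": {"wake": 0.80, "sleep": 0.20},
     "sleep": {"wake": 0.05, "sleep": 0.95}}

def step(state):
    r, acc = random.random(), 0.0
    for nxt, prob in P[state].items():
        acc += prob
        if r < acc:
            return nxt
    return state

state, run = "wake", 1
durations = {"wake": [], "sleep": []}
for _ in range(200_000):               # epochs
    nxt = step(state)
    if nxt == state:
        run += 1
    else:
        durations[state].append(run)   # completed dwell time in epochs
        state, run = nxt, 1

mean_wake = sum(durations["wake"]) / len(durations["wake"])     # expect 1/0.20 = 5
mean_sleep = sum(durations["sleep"]) / len(durations["sleep"])  # expect 1/0.05 = 20
```

The mean dwell time in each state is the reciprocal of its exit probability, which is the signature the paper tests against scale-free alternatives.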

  5. Demand forecasting of electricity in Indonesia with limited historical data

    NASA Astrophysics Data System (ADS)

    Dwi Kartikasari, Mujiati; Rohmad Prayogi, Arif

    2018-03-01

    Demand forecasting of electricity is an important activity for electricity providers, giving them a picture of future electricity demand. Electricity demand can be predicted using time series models. In this paper, the double moving average model, Holt’s exponential smoothing model, and the grey model GM(1,1) are used to predict electricity demand in Indonesia under the condition of limited historical data. The results show that the grey model GM(1,1) has the smallest values of MAE (mean absolute error), MSE (mean squared error), and MAPE (mean absolute percentage error).
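Of the three models compared, GM(1,1) is the least standard; a compact sketch under the usual grey-model conventions (accumulated series, background values, least-squares estimate of the development coefficient) might look like the following. The demand numbers are illustrative, not the Indonesian data:

```python
import math

def gm11(x0, horizon):
    """Grey model GM(1,1): fit dx1/dt + a*x1 = b on the accumulated
    series x1(k) = sum(x0[:k+1]) and forecast `horizon` further points."""
    n = len(x0)
    x1 = [sum(x0[:k + 1]) for k in range(n)]
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]   # background values
    y = x0[1:]
    m = n - 1
    sz, sy = sum(z), sum(y)
    szz = sum(v * v for v in z)
    szy = sum(v * w for v, w in zip(z, y))
    det = m * szz - sz * sz
    a = -(m * szy - sz * sy) / det       # development coefficient
    b = (szz * sy - sz * szy) / det      # grey input
    def x1_hat(k):                       # k is a 0-based time index
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    return [x1_hat(k) - x1_hat(k - 1) for k in range(n, n + horizon)]

# Usage on a short, exactly exponential demand series (illustrative numbers)
x0 = [100.0 * 1.05 ** k for k in range(8)]
preds = gm11(x0, 2)
```

On a geometric series the fitted exponential nearly reproduces the true growth, which is why GM(1,1) suits short, trending records.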

  6. Some properties of the Catalan-Qi function related to the Catalan numbers.

    PubMed

    Qi, Feng; Mahmoud, Mansour; Shi, Xiao-Ting; Liu, Fang-Fang

    2016-01-01

    In the paper, the authors find some properties of the Catalan numbers, the Catalan function, and the Catalan-Qi function which is a generalization of the Catalan numbers. Concretely speaking, the authors present a new expression, asymptotic expansions, integral representations, logarithmic convexity, complete monotonicity, minimality, logarithmically complete monotonicity, a generating function, and inequalities of the Catalan numbers, the Catalan function, and the Catalan-Qi function. As by-products, an exponential expansion and a double inequality for the ratio of two gamma functions are derived.

  7. Double Wigner distribution function of a first-order optical system with a hard-edge aperture.

    PubMed

    Pan, Weiqing

    2008-01-01

    The effect of an apertured optical system on the Wigner distribution can be expressed as a superposition integral of the input Wigner distribution function and the double Wigner distribution function of the apertured optical system. By expanding a hard aperture function into a finite sum of complex Gaussian functions, the double Wigner distribution functions of a first-order optical system with a hard aperture outside and inside it are derived. As an example of application, analytical expressions of the Wigner distribution for a Gaussian beam passing through a spatial filtering optical system with an internal hard aperture are obtained. The analytical results are also compared with numerical integral results, showing that the analytical approach is valid and advantageous.

  8. Is a matrix exponential specification suitable for the modeling of spatial correlation structures?

    PubMed Central

    Strauß, Magdalena E.; Mezzetti, Maura; Leorato, Samantha

    2018-01-01

    This paper investigates the adequacy of the matrix exponential spatial specifications (MESS) as an alternative to the widely used spatial autoregressive models (SAR). To provide as complete a picture as possible, we extend the analysis to all the main spatial models governed by matrix exponentials comparing them with their spatial autoregressive counterparts. We propose a new implementation of Bayesian parameter estimation for the MESS model with vague prior distributions, which is shown to be precise and computationally efficient. Our implementations also account for spatially lagged regressors. We further allow for location-specific heterogeneity, which we model by including spatial splines. We conclude by comparing the performances of the different model specifications in applications to a real data set and by running simulations. Both the applications and the simulations suggest that the spatial splines are a flexible and efficient way to account for spatial heterogeneities governed by unknown mechanisms. PMID:29492375

  9. Non-Markovian Infection Spread Dramatically Alters the Susceptible-Infected-Susceptible Epidemic Threshold in Networks

    NASA Astrophysics Data System (ADS)

    Van Mieghem, P.; van de Bovenkamp, R.

    2013-03-01

    Most studies on susceptible-infected-susceptible epidemics in networks implicitly assume Markovian behavior: the time to infect a direct neighbor is exponentially distributed. Much effort so far has been devoted to characterizing and precisely computing the epidemic threshold in susceptible-infected-susceptible Markovian epidemics on networks. Here, we report the rather dramatic effect of a nonexponential infection time (while still assuming an exponential curing time) on the epidemic threshold by considering Weibullean infection times with the same mean, but different power exponent α. For three basic classes of graphs, the Erdős-Rényi random graph, scale-free graphs and lattices, the average steady-state fraction of infected nodes is simulated, from which the epidemic threshold is deduced. For all graph classes, the epidemic threshold significantly increases with the power exponent α. Hence, real epidemics that violate the exponential or Markovian assumption can behave very differently than anticipated based on Markov theory.
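The Weibull-with-fixed-mean construction above is easy to reproduce: for shape α, the scale is chosen so that the mean E[T] = scale · Γ(1 + 1/α) is the same for every α. A short sketch with illustrative shapes (not the paper's simulation set-up):

```python
import math
import random

random.seed(2)
mean_T = 1.0        # common mean infection time (illustrative)
sample_means = {}
for shape in (0.5, 1.0, 3.0):    # Weibull power exponents; 1.0 = exponential
    # scale chosen so E[T] = scale * Gamma(1 + 1/shape) equals mean_T
    scale = mean_T / math.gamma(1.0 + 1.0 / shape)
    draws = [random.weibullvariate(scale, shape) for _ in range(100_000)]
    sample_means[shape] = sum(draws) / len(draws)
```

All three samples share the same mean while their tails differ sharply, which is exactly the comparison the epidemic simulations exploit.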

  10. Discrete sudden perturbation theory for inelastic scattering. I. Quantum and semiclassical treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cross, R.J.

    1985-12-01

    A double perturbation theory is constructed to treat rotationally and vibrationally inelastic scattering. It uses both the elastic scattering from the spherically averaged potential and the infinite-order sudden (IOS) approximation as the unperturbed solutions. First, a standard perturbation expansion is done to express the radial wave functions in terms of the elastic wave functions. The resulting coupled equations are transformed to the discrete-variable representation where the IOS equations are diagonal. Then, the IOS solutions are removed from the equations which are solved by an exponential perturbation approximation. The results for Ar+N/sub 2/ are very much more accurate than the IOS and somewhat more accurate than a straight first-order exponential perturbation theory. The theory is then converted into a semiclassical, time-dependent form by using the WKB approximation. The result is an integral of the potential times a slowly oscillating factor over the classical trajectory. A method of interpolating the result is given so that the calculation is done at the average velocity for a given transition. With this procedure, the semiclassical version of the theory is more accurate than the quantum version and very much faster. Calculations on Ar+N/sub 2/ show the theory to be much more accurate than the infinite-order sudden (IOS) approximation and the exponential time-dependent perturbation theory.

  11. Temporal and spatial binning of TCSPC data to improve signal-to-noise ratio and imaging speed

    NASA Astrophysics Data System (ADS)

    Walsh, Alex J.; Beier, Hope T.

    2016-03-01

    Time-correlated single photon counting (TCSPC) is the most robust method for fluorescence lifetime imaging using laser scanning microscopes. However, TCSPC is inherently slow, making it ineffective for capturing rapid events: because at most one photon is counted per laser pulse, acquisition times are long, and the fluorescence emission efficiency must be kept low to avoid biasing measurements towards short lifetimes. Furthermore, thousands of photons per pixel are required for traditional instrument response deconvolution and estimation of the fluorescence lifetime exponential decay. Instrument response deconvolution and fluorescence exponential decay estimation can be performed in several ways, including iterative least squares minimization and Laguerre deconvolution. This paper compares the limitations and accuracy of these fluorescence decay analysis techniques in estimating double exponential decays across many data characteristics, including various lifetime values, lifetime component weights, signal-to-noise ratios, and numbers of photons detected. Furthermore, techniques to improve data fitting, including binning data temporally and spatially, are evaluated as methods to improve decay fits and reduce image acquisition time. Simulation results demonstrate that binning temporally to 36 or 42 time bins improves the accuracy of fits for low-photon-count data. Such a technique reduces the number of photons required for accurate component estimation when the lifetime values are known, such as for commercial fluorescent dyes and FRET experiments, and can improve imaging speed 10-fold.
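As a toy illustration of double-exponential lifetime estimation (ignoring the instrument response and the least-squares/Laguerre machinery compared in the paper), an EM fit of a two-component exponential mixture to simulated photon delays might look like this. The ground-truth weight and lifetimes are illustrative:

```python
import math
import random

random.seed(3)
# Simulated photon arrival delays from a double-exponential decay
# (illustrative ground truth: weight 0.6, lifetimes 0.5 and 3.0)
w, tau1_true, tau2_true = 0.6, 0.5, 3.0
data = [random.expovariate(1 / tau1_true) if random.random() < w
        else random.expovariate(1 / tau2_true) for _ in range(10_000)]

# EM for a two-component exponential mixture
p, t1, t2 = 0.5, 0.3, 2.0            # crude initial guesses
for _ in range(150):
    # E-step: responsibility of component 1 for each delay
    r = []
    for x in data:
        f1 = p / t1 * math.exp(-x / t1)
        f2 = (1 - p) / t2 * math.exp(-x / t2)
        r.append(f1 / (f1 + f2))
    # M-step: weighted exponential-mean updates
    s = sum(r)
    t1 = sum(ri * x for ri, x in zip(r, data)) / s
    t2 = sum((1 - ri) * x for ri, x in zip(r, data)) / (len(data) - s)
    p = s / len(data)
```

With well-separated lifetimes and enough photons the estimates converge near the truth; as the paper notes, accuracy degrades sharply at low photon counts.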

  12. The distribution of interstellar dust in CALIFA edge-on galaxies via oligochromatic radiative transfer fitting

    NASA Astrophysics Data System (ADS)

    De Geyter, Gert; Baes, Maarten; Camps, Peter; Fritz, Jacopo; De Looze, Ilse; Hughes, Thomas M.; Viaene, Sébastien; Gentile, Gianfranco

    2014-06-01

    We investigate the amount and spatial distribution of interstellar dust in edge-on spiral galaxies, using detailed radiative transfer modelling of a homogeneous sample of 12 galaxies selected from the Calar Alto Legacy Integral Field Area survey. Our automated fitting routine, FITSKIRT, was first validated against artificial data. This is done by simultaneously reproducing the Sloan Digital Sky Survey g-, r-, i- and z-band observations of a toy model in order to combine the information present in the different bands. We show that this combined, oligochromatic fitting has clear advantages over standard monochromatic fitting especially regarding constraints on the dust properties. We model all galaxies in our sample using a three-component model, consisting of a double-exponential disc to describe the stellar and dust discs and using a Sérsic profile to describe the central bulge. The full model contains 19 free parameters, and we are able to constrain all these parameters to a satisfactory level of accuracy without human intervention or strong boundary conditions. Apart from two galaxies, the entire sample can be accurately reproduced by our model. We find that the dust disc is about 75 per cent more extended but only half as high as the stellar disc. The average face-on optical depth in the V band is 0.76 and the spread of 0.60 within our sample is quite substantial, which indicates that some spiral galaxies are relatively opaque even when seen face-on.

  13. Star Formation Rate Distribution in the Galaxy NGC 1232

    NASA Astrophysics Data System (ADS)

    Araújo de Souza, Alexandre; Martins, Lucimara P.; Rodríguez-Ardila, Alberto; Fraga, Luciano

    2018-06-01

    NGC 1232 is a face-on spiral galaxy and a great laboratory for the study of star formation due to its proximity. We obtained high spatial resolution Hα images of this galaxy, with adaptive optics, using the SAM instrument at the SOAR telescope, and used these images to study its H II regions. These observations allowed us to produce the most complete H II region catalog for it to date, with a total of 976 sources. This doubles the number of H II regions previously found for this object. We used these data to construct the H II luminosity function, and obtained a power-law index lower than the typical values found for Sc galaxies. This shallower slope is related to the presence of a significant number of high-luminosity H II regions (log L > 39 dex). We also constructed the size distribution function, verifying that, as for most galaxies, NGC 1232 follows an exponential law. We also used the Hα luminosity to calculate the star formation rate. An extremely interesting fact about this galaxy is that X-ray diffuse observations suggest that NGC 1232 recently suffered a collision with a dwarf galaxy. We found an absence of star formation around the region where the X-ray emission is more intense, which we interpret as a star formation quenching due to the collision. Along with that, we found an excess of star-forming regions in the northeast part of the galaxy, where the X-ray emission is less intense.

  14. Two solar proton fluence models based on ground level enhancement observations

    NASA Astrophysics Data System (ADS)

    Raukunen, Osku; Vainio, Rami; Tylka, Allan J.; Dietrich, William F.; Jiggens, Piers; Heynderickx, Daniel; Dierckxsens, Mark; Crosby, Norma; Ganse, Urs; Siipola, Robert

    2018-01-01

    Solar energetic particles (SEPs) constitute an important component of the radiation environment in interplanetary space. Accurate modeling of SEP events is crucial for the mitigation of radiation hazards in spacecraft design. In this study we present two new statistical models of high energy solar proton fluences based on ground level enhancement (GLE) observations during solar cycles 19-24. As the basis of our modeling, we utilize four-parameter double power law (Band function) fits to integral GLE fluence spectra in rigidity. In the first model, the integral and differential fluences for protons with energies between 10 MeV and 1 GeV are calculated using the fits, and the distributions of the fluences at certain energies are modeled with an exponentially cut-off power law function. In the second model, we use a more advanced methodology: by investigating the distributions and relationships of the spectral fit parameters we find that they can be modeled as two independent and two dependent variables. Therefore, instead of modeling the fluences separately at different energies, we can model the shape of the fluence spectrum. We present examples of modeling results and show that the two methodologies agree well except for a short mission duration (1 year) at low confidence level. We also show that there is a reasonable agreement between our models and three well-known solar proton models (JPL, ESP and SEPEM), despite the differences in both the modeling methodologies and the data used to construct the models.
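The Band function named above is a double power law joined so that the function and its logarithmic slope are both continuous at the break rigidity. A sketch of that standard parameterization with illustrative parameters (not the fitted GLE values):

```python
import math

def band(R, C, a, b, R0):
    """Band double power law in rigidity R: power law of index a with an
    exponential rollover below the break, pure power law of index b above.
    The break Rb = (a - b) * R0 and the constants are chosen so the
    function and its log-slope are continuous at Rb (a > b, both signed)."""
    Rb = (a - b) * R0
    if R <= Rb:
        return C * R ** a * math.exp(-R / R0)
    return C * R ** b * Rb ** (a - b) * math.exp(b - a)

# Illustrative spectrum: a = -1, b = -5, R0 = 1 GV, so the break is at 4 GV
spectrum = [band(R, 1.0, -1.0, -5.0, 1.0) for R in (0.5, 1.0, 2.0, 4.0, 8.0)]
```

Four free parameters (C, a, b, R0) therefore fix the whole spectrum, which is what makes the parameter-distribution modeling in the second method possible.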

  15. Simulation of flight maneuver-load distributions by utilizing stationary, non-Gaussian random load histories

    NASA Technical Reports Server (NTRS)

    Leybold, H. A.

    1971-01-01

    Random numbers were generated with the aid of a digital computer and transformed such that the probability density function of a discrete random load history composed of these random numbers had one of the following non-Gaussian distributions: Poisson, binomial, log-normal, Weibull, and exponential. The resulting random load histories were analyzed to determine their peak statistics and were compared with cumulative peak maneuver-load distributions for fighter and transport aircraft in flight.
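Transforming uniform random numbers into a prescribed non-Gaussian distribution is classically done with the inverse-CDF method; a sketch for two of the listed distributions (the mean of 2.0 is an illustrative choice):

```python
import math
import random

def exponential(u, scale):
    """Inverse CDF of the exponential distribution: -scale * ln(1 - u)."""
    return -scale * math.log(1.0 - u)

def weibull(u, shape, scale):
    """Inverse CDF of the Weibull distribution."""
    return scale * (-math.log(1.0 - u)) ** (1.0 / shape)

random.seed(7)
# A discrete "load history": exponential loads with mean 2 (illustrative)
loads = [exponential(random.random(), 2.0) for _ in range(100_000)]
mean_load = sum(loads) / len(loads)
```

With shape 1 the Weibull transform reduces to the exponential one, so the same machinery covers both of those distributions in the study.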

  16. The decline and fall of Type II error rates

    Treesearch

    Steve Verrill; Mark Durst

    2005-01-01

    For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.

  17. Probability distribution of financial returns in a model of multiplicative Brownian motion with stochastic diffusion coefficient

    NASA Astrophysics Data System (ADS)

    Silva, Antonio

    2005-03-01

    It is well-known that the mathematical theory of Brownian motion was first developed in the Ph.D. thesis of Louis Bachelier for the French stock market, before Einstein [1]. In Ref. [2] we studied the so-called Heston model, where the stock-price dynamics is governed by multiplicative Brownian motion with stochastic diffusion coefficient. We solved the corresponding Fokker-Planck equation exactly and found an analytic formula for the time-dependent probability distribution of stock price changes (returns). The formula interpolates between the exponential (tent-shaped) distribution for short time lags and the Gaussian (parabolic) distribution for long time lags. The theoretical formula agrees very well with the actual stock-market data ranging from the Dow-Jones index [2] to individual companies [3], such as Microsoft, Intel, etc. [1] Louis Bachelier, "Théorie de la spéculation," Annales Scientifiques de l'École Normale Supérieure, III-17:21-86 (1900). [2] A. A. Dragulescu and V. M. Yakovenko, "Probability distribution of returns in the Heston model with stochastic volatility," Quantitative Finance 2, 443-453 (2002); Erratum 3, C15 (2003). [cond-mat/0203046] [3] A. C. Silva, R. E. Prange, and V. M. Yakovenko, "Exponential distribution of financial returns at mesoscopic time lags: a new stylized fact," Physica A 344, 227-235 (2004). [cond-mat/0401225]

  18. The competitiveness versus the wealth of a country.

    PubMed

    Podobnik, Boris; Horvatić, Davor; Kenett, Dror Y; Stanley, H Eugene

    2012-01-01

    Politicians world-wide frequently promise a better life for their citizens. We find that the probability that a country will increase its per capita GDP (gdp) rank within a decade follows an exponential distribution with decay constant λ = 0.12. We use the Corruption Perceptions Index (CPI) and the Global Competitiveness Index (GCI) and find that the distribution of change in CPI (GCI) rank follows exponential functions with approximately the same exponent as λ, suggesting that the dynamics of gdp, CPI, and GCI may share the same origin. Using the GCI, we develop a new measure, which we call relative competitiveness, to evaluate an economy's competitiveness relative to its gdp. For all European and EU countries during the 2008-2011 economic downturn we find that the drop in gdp in more competitive countries relative to gdp was substantially smaller than in relatively less competitive countries, which is valuable information for policymakers.
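The exponential rank-change model can be fitted by maximum likelihood, where the rate estimate is simply the reciprocal of the sample mean. A sketch on synthetic data (only λ = 0.12 is taken from the abstract; the sample itself is simulated):

```python
import random

random.seed(5)
# Synthetic decade rank changes drawn from an exponential distribution
# with the abstract's decay constant lambda = 0.12
lam_true = 0.12
changes = [random.expovariate(lam_true) for _ in range(5_000)]
lam_hat = len(changes) / sum(changes)   # MLE of the rate: 1 / sample mean
```

The same one-line estimator applies to the CPI and GCI rank-change samples, which is how the "approximately the same exponent" comparison can be made.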

  19. Study of velocity and temperature distributions in boundary layer flow of fourth grade fluid over an exponential stretching sheet

    NASA Astrophysics Data System (ADS)

    Khan, Najeeb Alam; Saeed, Umair Bin; Sultan, Faqiha; Ullah, Saif; Rehman, Abdul

    2018-02-01

    This study deals with the investigation of boundary layer flow of a fourth grade fluid and heat transfer over an exponential stretching sheet. For analyzing two heating processes, namely, (i) prescribed surface temperature (PST), and (ii) prescribed heat flux (PHF), the temperature distribution in a fluid has been considered. The suitable transformations associated with the velocity components and temperature, have been employed for reducing the nonlinear model equation to a system of ordinary differential equations. The flow and temperature fields are revealed by solving these reduced nonlinear equations through an effective analytical method. The important findings in this analysis are to observe the effects of viscoelastic, cross-viscous, third grade fluid, and fourth grade fluid parameters on the constructed analytical expression for velocity profile. Likewise, the heat transfer properties are studied for Prandtl and Eckert numbers.

  20. The competitiveness versus the wealth of a country

    PubMed Central

    Podobnik, Boris; Horvatić, Davor; Kenett, Dror Y.; Stanley, H. Eugene

    2012-01-01

    Politicians world-wide frequently promise a better life for their citizens. We find that the probability that a country will increase its per capita GDP (gdp) rank within a decade follows an exponential distribution with decay constant λ = 0.12. We use the Corruption Perceptions Index (CPI) and the Global Competitiveness Index (GCI) and find that the distribution of change in CPI (GCI) rank follows exponential functions with approximately the same exponent as λ, suggesting that the dynamics of gdp, CPI, and GCI may share the same origin. Using the GCI, we develop a new measure, which we call relative competitiveness, to evaluate an economy's competitiveness relative to its gdp. For all European and EU countries during the 2008–2011 economic downturn we find that the drop in gdp in more competitive countries relative to gdp was substantially smaller than in relatively less competitive countries, which is valuable information for policymakers. PMID:22997552

  1. Inter-occurrence times and universal laws in finance, earthquakes and genomes

    NASA Astrophysics Data System (ADS)

    Tsallis, Constantino

    2016-07-01

    A plethora of natural, artificial and social systems exist which do not belong to the Boltzmann-Gibbs (BG) statistical-mechanical world, based on the standard additive entropy $S_{BG}$ and its associated exponential BG factor. Frequent behaviors in such complex systems have been shown to be closely related to $q$-statistics instead, based on the nonadditive entropy $S_q$ (with $S_1=S_{BG}$), and its associated $q$-exponential factor which generalizes the usual BG one. In fact, a wide range of phenomena of quite different nature exist which can be described and, in the simplest cases, understood through analytic (and explicit) functions and probability distributions which exhibit some universal features. Universality classes are concomitantly observed which can be characterized through indices such as $q$. We will exhibit here some such cases, namely concerning the distribution of inter-occurrence (or inter-event) times in the areas of finance, earthquakes and genomes.

  2. The competitiveness versus the wealth of a country

    NASA Astrophysics Data System (ADS)

    Podobnik, Boris; Horvatić, Davor; Kenett, Dror Y.; Stanley, H. Eugene

    2012-09-01

    Politicians world-wide frequently promise a better life for their citizens. We find that the probability that a country will increase its per capita GDP (gdp) rank within a decade follows an exponential distribution with decay constant λ = 0.12. We use the Corruption Perceptions Index (CPI) and the Global Competitiveness Index (GCI) and find that the distribution of change in CPI (GCI) rank follows exponential functions with approximately the same exponent as λ, suggesting that the dynamics of gdp, CPI, and GCI may share the same origin. Using the GCI, we develop a new measure, which we call relative competitiveness, to evaluate an economy's competitiveness relative to its gdp. For all European and EU countries during the 2008-2011 economic downturn we find that the drop in gdp in more competitive countries relative to gdp was substantially smaller than in relatively less competitive countries, which is valuable information for policymakers.

  3. Time scale defined by the fractal structure of the price fluctuations in foreign exchange markets

    NASA Astrophysics Data System (ADS)

    Kumagai, Yoshiaki

    2010-04-01

    In this contribution, a new time scale named C-fluctuation time is defined by price fluctuations observed at a given resolution. The intraday fractal structures and the relations among three time scales: real time (physical time), tick time, and C-fluctuation time, are analyzed for foreign exchange markets. The data set used consists of trading prices of foreign exchange rates: US dollar (USD)/Japanese yen (JPY), USD/Euro (EUR), and EUR/JPY. The accuracy of the data is one minute, and data within a minute are recorded in order of transaction. The series of instantaneous velocities of C-fluctuation time are exponentially distributed for small C when measured in real time and for tiny C when measured in tick time. When the market is volatile, the instantaneous velocities are exponentially distributed even for larger C.

  4. A Grobner Basis Solution for Lightning Ground Flash Fraction Retrieval

    NASA Technical Reports Server (NTRS)

    Solakiewicz, Richard; Attele, Rohan; Koshak, William

    2011-01-01

    A Bayesian inversion method was previously introduced for retrieving the fraction of ground flashes in a set of flashes observed from a (low earth orbiting or geostationary) satellite lightning imager. The method employed a constrained mixed exponential distribution model to describe the lightning optical measurements. To obtain the optimum model parameters, a scalar function was minimized by a numerical method. In order to improve this optimization, we introduce a Grobner basis solution to obtain analytic representations of the model parameters that serve as a refined initialization scheme to the numerical optimization. Using the Grobner basis, we show that there are exactly 2 solutions involving the first 3 moments of the (exponentially distributed) data. When the mean of the ground flash optical characteristic (e.g., such as the Maximum Group Area, MGA) is larger than that for cloud flashes, then a unique solution can be obtained.
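The moment-matching idea behind the Gröbner-basis retrieval can be sketched numerically: a two-component mixed exponential has raw moments M_k = k!·(p/a^k + (1-p)/b^k), so the weight and the two rates can be recovered from the first three moments (up to the component-swap symmetry, consistent with the two solutions noted above). The parameter values below are illustrative, not from the paper, and a generic root-finder stands in for the analytic Gröbner-basis solution.

```python
import math
import numpy as np
from scipy.optimize import fsolve

def moments(params):
    """First three raw moments of the mixture p*Exp(a) + (1-p)*Exp(b)."""
    p, a, b = params
    return np.array([math.factorial(k) * (p / a**k + (1 - p) / b**k)
                     for k in (1, 2, 3)])

true = (0.4, 2.0, 0.5)   # illustrative ground fraction and the two rates
target = moments(true)   # the "observed" first three moments

# recover (p, a, b) by matching the three moment equations
est = fsolve(lambda q: moments(q) - target, x0=(0.5, 1.5, 0.6))
```

Swapping the two components gives the second root of the same system, which is why a refined initialization (as the analytic solution provides) matters for the numerical optimization.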

  5. Photodissociation dynamics of Mo(CO)6 at 266 and 355 nm: CO photofragment kinetic-energy and internal-state distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buntin, S.A.; Cavanagh, R.R.; Richter, L.J.

    1991-06-15

    The internal-state and kinetic-energy distributions of the CO photofragments from the 266 and 355 nm photolysis of Mo(CO)6 have been measured under collision-free conditions using vacuum-ultraviolet laser-induced fluorescence. The rotational-state distributions for CO(v″=0) and (v″=1) are well represented by Boltzmann distributions with effective rotational "temperatures" of Tr(v″=0) = 950±70 K and Tr(v″=1) = 935±85 K for 266 nm, and Tr(v″=0) = 750±70 K and Tr(v″=1) = 1150±250 K for 355 nm photolysis. The CO(v″=1/v″=0) vibrational-state ratios for 266 and 355 nm photolysis are 0.19±0.03 and 0.09±0.02, respectively. The Doppler-broadened CO photofragment line shapes indicate that the translational energy distributions are isotropic and Maxwellian. There is no photolysis-laser wavelength or internal-state dependence to the extracted translational "temperatures." The observed energy partitioning and kinetic-energy distributions are inconsistent with an impulsive ejection of a single CO ligand. CO photofragment line shapes for 266 nm photolysis are not consistent with a mechanism involving the repulsive ejection of the first CO ligand, followed by the statistical decomposition of the Mo(CO)5 fragment. While phase-space theories do not predict the energy disposal quantitatively, the photodissociation mechanism appears to be dominated by statistical considerations. The results also suggest that the photodissociation of Mo(CO)6 at 266 and 355 nm involves a common "initial state" and that similar exit-channel effects are operative.

  6. Geomorphic effectiveness of long profile shape and role of inherent geological controls, Ganga River Basin, India

    NASA Astrophysics Data System (ADS)

    Sonam, Sonam; Jain, Vikrant

    2017-04-01

    River long profile is one of the fundamental geomorphic parameters and provides a platform to study the interaction of geological and geomorphic processes at different time scales. Long profile shape is governed by geological processes on the 10^5-10^6 year time scale, and it controls modern-day (10^0-10^1 year time scale) fluvial processes by setting the spatial variability of channel slope. Identification of an appropriate model for the river long profile may provide a tool to analyse the quantitative relationship between basin geology, profile shape and its geomorphic effectiveness. A systematic analysis of long profiles has been carried out for the Himalayan tributaries of the Ganga River basin. Long profile shape and the stream power distribution pattern are derived using SRTM DEM data (90 m spatial resolution). Peak discharge data from 34 stations are used for hydrological analysis. Lithological variability and major thrusts are marked along the river long profile. The best fit of the long profile is tested against power, logarithmic and exponential functions. A second-order exponential function provides the best representation of long profiles: Z = K1*exp(-β1*L) + K2*exp(-β2*L), where Z is the elevation of the channel long profile, L is the length, and K and β are coefficients of the exponential function. K1 and K2 are the proportions of elevation change of the long profile represented by the fast (β1) and slow (β2) decay coefficients. Different coefficient values express the variability in long profile shapes and are related to the litho-tectonic variability of the study area. Channel slope is estimated by taking the derivative of the exponential function, and the stream power distribution pattern along the long profile is estimated by superimposing the discharge on the long profile slope. A sensitivity analysis of the stream power distribution with respect to the decay coefficients of the second-order exponential equation is evaluated for a range of coefficient values. Our analysis suggests that the amplitude of the stream power peak depends on K1, the proportion of elevation change governed by the fast decay exponent, while the location of the stream power peak depends on the decay coefficient β1. Different long profile shapes, owing to litho-tectonic variability across the Himalayas, are responsible for the spatial variability of the stream power distribution pattern. Most of the stream power peaks lie in the Higher Himalaya. In general, eastern rivers have higher stream power in the hinterland and lower stream power in the alluvial plains. This is responsible for (1) higher erosion rates and sediment supply in the hinterland of eastern rivers, (2) the incised and stable nature of channels in the western alluvial plains, and (3) aggrading, dynamic channels in the eastern alluvial plains. Our study shows that the spatial variability of litho-units defines the coefficients of the long profile function, which in turn control the position and magnitude of the stream power maxima and hence the geomorphic variability in the fluvial system.
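Fitting the second-order exponential long profile can be sketched as follows (the coefficients here are illustrative placeholders, not values from the study): nonlinear least squares recovers K1, β1, K2, β2, and channel slope follows analytically from the fitted coefficients.

```python
import numpy as np
from scipy.optimize import curve_fit

def profile(L, K1, b1, K2, b2):
    """Second-order exponential long profile Z(L) = K1*exp(-b1*L) + K2*exp(-b2*L)."""
    return K1 * np.exp(-b1 * L) + K2 * np.exp(-b2 * L)

L = np.linspace(0.0, 1500.0, 200)            # downstream distance (illustrative)
Z = profile(L, 4000.0, 0.01, 800.0, 0.001)   # synthetic elevations

popt, _ = curve_fit(profile, L, Z, p0=(3000.0, 0.02, 1000.0, 0.002))
K1, b1, K2, b2 = popt

# channel slope -dZ/dL is the analytic derivative of the fitted profile
slope = K1 * b1 * np.exp(-b1 * L) + K2 * b2 * np.exp(-b2 * L)
```

The same derivative, multiplied by discharge, gives the along-profile stream power pattern discussed above.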

  7. Cell responses to single pheromone molecules may reflect the activation kinetics of olfactory receptor molecules.

    PubMed

    Minor, A V; Kaissling, K-E

    2003-03-01

    Olfactory receptor cells of the silkmoth Bombyx mori respond to single pheromone molecules with "elementary" electrical events that appear as discrete "bumps" a few milliseconds in duration, or as bursts of bumps. As revealed by simulation, one bump may result from a series of random openings of one or several ion channels, producing an average inward membrane current of 1.5 pA. The distributions of bump durations and of the gaps between bumps within a burst can be fitted by single exponentials with time constants of 10.2 ms and 40.5 ms, respectively. The distribution of burst durations is a sum of two exponentials, and the number of bumps per burst obeys a geometric distribution (mean 3.2 bumps per burst). Accordingly, the elementary events could reflect transitions among three states of the pheromone receptor molecule: the vacant receptor (state 1), the pheromone-receptor complex (state 2), and the activated complex (state 3). The calculated rate constants of the transitions between states are k21 = 7.7 s^-1, k23 = 16.8 s^-1, and k32 = 98 s^-1.
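The three-state scheme can be sketched as a small Monte Carlo simulation (an illustration of the kinetics, not the authors' code): a detected burst begins with a bump; from the complex (state 2) the receptor either re-activates with probability k23/(k21+k23) or loses the pheromone, and each visit to the activated state lasts an exponential time with rate k32, i.e. a mean bump duration of 1/k32 ≈ 10.2 ms.

```python
import random

k21, k23, k32 = 7.7, 16.8, 98.0   # s^-1, rate constants from the abstract

def burst(rng):
    """Simulate one burst; return (number of bumps, total activated time)."""
    n, t = 1, rng.expovariate(k32)            # a burst starts with a bump
    while rng.random() < k23 / (k21 + k23):   # re-activate rather than unbind
        n += 1
        t += rng.expovariate(k32)
    return n, t

rng = random.Random(1)
bursts = [burst(rng) for _ in range(200_000)]
mean_bumps = sum(n for n, _ in bursts) / len(bursts)
# geometric mean (k21 + k23) / k21 ~ 3.18, close to the observed 3.2
```

The simulated mean reproduces the geometric prediction (k21+k23)/k21, tying the fitted rate constants back to the observed 3.2 bumps per burst.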

  8. The stationary non-equilibrium plasma of cosmic-ray electrons and positrons

    NASA Astrophysics Data System (ADS)

    Tomaschitz, Roman

    2016-06-01

    The statistical properties of the two-component plasma of cosmic-ray electrons and positrons measured by the AMS-02 experiment on the International Space Station and the HESS array of imaging atmospheric Cherenkov telescopes are analyzed. Stationary non-equilibrium distributions defining the relativistic electron-positron plasma are derived semi-empirically by performing spectral fits to the flux data and reconstructing the spectral number densities of the electronic and positronic components in phase space. These distributions are relativistic power-law densities with exponential cutoff, admitting an extensive entropy variable and converging to the Maxwell-Boltzmann or Fermi-Dirac distributions in the non-relativistic limit. Cosmic-ray electrons and positrons constitute a classical (low-density high-temperature) plasma due to the low fugacity in the quantized partition function. The positron fraction is assembled from the flux densities inferred from least-squares fits to the electron and positron spectra and is subjected to test by comparing with the AMS-02 flux ratio measured in the GeV interval. The calculated positron fraction extends to TeV energies, predicting a broad spectral peak at about 1 TeV followed by exponential decay.
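As an illustration of the spectral shape described above (a power law with exponential cutoff), the following sketch uses placeholder values for the index and cutoff energy, not the fitted parameters of the paper:

```python
import math

def flux(E, A=1.0, gamma=3.0, E_cut=1000.0):
    """Power-law density with exponential cutoff, dN/dE; E in GeV (illustrative)."""
    return A * E**(-gamma) * math.exp(-E / E_cut)

# the cutoff suppresses the pure power law by a factor 1/e at E = E_cut
ratio = flux(1000.0) / (1.0 * 1000.0**-3.0)
print(round(ratio, 3))   # 0.368
```

Near 1 TeV the exponential factor takes over from the power law, which is the qualitative origin of the predicted spectral peak followed by exponential decay.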

  9. A new probability distribution model of turbulent irradiance based on Born perturbation theory

    NASA Astrophysics Data System (ADS)

    Wang, Hongxing; Liu, Min; Hu, Hao; Wang, Qian; Liu, Xiguo

    2010-10-01

    The subject of the PDF (Probability Density Function) of irradiance fluctuations in a turbulent atmosphere is still unsettled. Theory reliably describes the behavior in the weak turbulence regime, but theoretical descriptions in the strong and whole turbulence regimes remain controversial. Based on Born perturbation theory, the physical manifestations and correlations of three typical PDF models (Rice-Nakagami, exponential-Bessel and negative-exponential distributions) were theoretically analyzed. It is shown that these models can be derived by separately making circular-Gaussian, strong-turbulence and strong-turbulence-circular-Gaussian approximations in Born perturbation theory, which refutes the viewpoint that the Rice-Nakagami model is applicable only in the extremely weak turbulence regime and provides theoretical arguments for choosing rational models in practical applications. A common shortcoming of the three models is that they are all approximations. A new model, called the Maclaurin-spread distribution, is proposed without any approximation except for assuming the correlation coefficient to be zero; the new model is therefore considered to reflect Born perturbation theory exactly. Simulated results confirm the accuracy of this new model.

  10. Effects of topologies on signal propagation in feedforward networks

    NASA Astrophysics Data System (ADS)

    Zhao, Jia; Qin, Ying-Mei; Che, Yan-Qiu

    2018-01-01

    We systematically investigate the effects of topologies on signal propagation in feedforward networks (FFNs) based on the FitzHugh-Nagumo neuron model. FFNs with different topological structures are constructed with the same number of in-degrees and out-degrees in each layer and are given the same input signal. The propagation of firing patterns and firing rates is found to be affected by the distribution of neuron connections in the FFNs. Synchronous firing patterns emerge in the later layers of FFNs with identical, uniform, and exponential degree distributions, but the number of synchronous spike trains in the output layers of the three topologies obviously differs from one another. The firing rates in the output layers of the three FFNs can be ordered from high to low according to their topological structures as exponential, uniform, and identical distributions, respectively. Interestingly, the sequence of spiking regularity in the output layers of the three FFNs is consistent with the firing rates, but their firing synchronization is in the opposite order. In summary, the node degree is an important factor that can dramatically influence the neuronal network activity.

  12. A study of personal income distributions in Australia and Italy

    NASA Astrophysics Data System (ADS)

    Banerjee, Anand; Yakovenko, Victor

    2006-03-01

    The study of income distribution has a long history. A century ago, the Italian physicist and economist Pareto proposed that income distribution obeys a universal power law, valid for all times and countries. Subsequent studies showed that only the top 1-3% of the population follow a power law. For the USA, the remaining 97-99% of the population follow the exponential distribution [1]. We present the results of a similar study for Australia and Italy. [1] A. C. Silva and V. M. Yakovenko, Europhys. Lett. 69, 304 (2005).
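The exponential ("Boltzmann-Gibbs") income law referred to above has one free parameter, a "temperature" equal to the mean income; the sketch below uses an arbitrary illustrative T, not data from the study.

```python
import math

T = 40000.0   # mean income ("temperature"), arbitrary illustrative units

def frac_above(m, temp=T):
    """Fraction of the population with income above m: exp(-m / T)."""
    return math.exp(-m / temp)

# under this law a fixed fraction 1 - 1/e of the population earns less
# than the mean income, independent of T
below_mean = 1.0 - frac_above(T)
print(round(below_mean, 3))   # 0.632
```

That scale-free 63% below the mean is one of the signatures used to distinguish the exponential bulk from the Pareto power-law tail.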

  13. Very low threshold-current temperature sensitivity in constricted double-heterojunction AlGaAs lasers

    NASA Technical Reports Server (NTRS)

    Botez, D.; Connolly, J. C.; Gilbert, D. B.; Ettenberg, M.

    1981-01-01

    The temperature dependence of threshold currents in constricted double-heterojunction diode lasers with strong lateral mode confinement is found to be significantly milder than for other types of lasers. The threshold-current relative variations with ambient temperature are typically two to three times less than for other devices of CW-operation capability. Over the interval 10-70 C the threshold currents fit the empirical exponential law exp[(T2-T1)/T0] with T0 values in the 240-375 C range in pulsed operation, and in the 200-310 C range in CW operation. The external differential quantum efficiency and the mode far-field pattern near threshold are virtually invariant with temperature. The possible causes of high-T0 behavior are analyzed, and a new phenomenon - temperature-dependent current focusing - is presented to explain the results.
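The empirical law quoted above, I_th(T2) = I_th(T1)·exp[(T2-T1)/T0], can be evaluated directly; the T0 value used here is an illustrative pick from the reported pulsed-operation range.

```python
import math

def threshold_ratio(t1, t2, t0):
    """Relative threshold-current increase from t1 to t2 under the exp law."""
    return math.exp((t2 - t1) / t0)

ratio = threshold_ratio(10.0, 70.0, 300.0)   # 10 C -> 70 C with T0 = 300 C
print(round(ratio, 3))   # 1.221
```

A high T0 is what makes this mild: a 60 C ambient swing raises the threshold current by only about 22%.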

  14. Characteristic length of the knotting probability revisited

    NASA Astrophysics Data System (ADS)

    Uehara, Erica; Deguchi, Tetsuo

    2015-09-01

    We present a self-avoiding polygon (SAP) model for circular DNA in which the radius of impermeable cylindrical segments corresponds to the screening length of double-stranded DNA surrounded by counter ions. For the model we evaluate the probability for a generated SAP with N segments having a given knot K through simulation. We call it the knotting probability of a knot K with N segments for the SAP model. We show that when N is large the most significant factor in the knotting probability is given by the exponentially decaying part exp(-N/NK), where the estimates of parameter NK are consistent with the same value for all the different knots we investigated. We thus call it the characteristic length of the knotting probability. We give formulae expressing the characteristic length as a function of the cylindrical radius rex, i.e. the screening length of double-stranded DNA.

  15. Applying elliptic curve cryptography to a chaotic synchronisation system: neural-network-based approach

    NASA Astrophysics Data System (ADS)

    Hsiao, Feng-Hsiag

    2017-10-01

    In order to obtain double encryption via elliptic curve cryptography (ECC) and chaotic synchronisation, this study presents a design methodology for neural-network (NN)-based secure communications in multiple time-delay chaotic systems. ECC is an asymmetric encryption scheme whose strength rests on the difficulty of the elliptic curve discrete logarithm problem, a much harder problem than factoring integers; because it is harder, fewer bits suffice to provide the same level of security. To enhance the strength of the cryptosystem, we conduct double encryption that combines chaotic synchronisation with ECC. Using an improved genetic algorithm, a fuzzy controller is synthesised to realise exponential synchronisation and achieve optimal H∞ performance by minimising the disturbance attenuation level. Finally, a numerical example with simulations is given to demonstrate the effectiveness of the proposed approach.

  16. Transverse momentum in double parton scattering: factorisation, evolution and matching

    NASA Astrophysics Data System (ADS)

    Buffing, Maarten G. A.; Diehl, Markus; Kasemets, Tomas

    2018-01-01

    We give a description of double parton scattering with measured transverse momenta in the final state, extending the formalism for factorisation and resummation developed by Collins, Soper and Sterman for the production of colourless particles. After a detailed analysis of their colour structure, we derive and solve evolution equations in rapidity and renormalisation scale for the relevant soft factors and double parton distributions. We show how in the perturbative regime, transverse momentum dependent double parton distributions can be expressed in terms of simpler nonperturbative quantities and compute several of the corresponding perturbative kernels at one-loop accuracy. We then show how the coherent sum of single and double parton scattering can be simplified for perturbatively large transverse momenta, and we discuss to which order resummation can be performed with presently available results. As an auxiliary result, we derive a simple form for the square root factor in the Collins construction of transverse momentum dependent parton distributions.

  17. Characterizations of double pulsing in neutron multiplicity and coincidence counting systems

    DOE PAGES

    Koehler, Katrina E.; Henzl, Vladimir; Croft, Stephen; ...

    2016-06-29

    Passive neutron coincidence/multiplicity counters are subject to non-ideal behavior, such as double pulsing and dead time. It has been shown in the past that double pulsing exhibits a distinct signature in a Rossi-alpha distribution, which is not readily noticed using traditional Multiplicity Shift Register analysis. However, it has been assumed that the use of a pre-delay in shift register analysis removes any effects of double pulsing. Here, we use high-fidelity simulations accompanied by experimental measurements to study the effects of double pulsing on multiplicity rates. By exploiting the information from the double pulsing signature peak observable in the Rossi-alpha distribution, the double pulsing fraction can be determined. Algebraic correction factors for the multiplicity rates in terms of the double pulsing fraction have been developed. We also discuss the role of these corrections across a range of scenarios.

  18. Learning Search Control Knowledge for Deep Space Network Scheduling

    NASA Technical Reports Server (NTRS)

    Gratch, Jonathan; Chien, Steve; DeJong, Gerald

    1993-01-01

    While the general class of scheduling problems is NP-hard in worst-case complexity, in practice, for specific distributions of problems and constraints, domain-specific solutions have been shown to perform much better than exponential time.

  19. Inland empire logistics GIS mapping project.

    DOT National Transportation Integrated Search

    2009-01-01

    The Inland Empire has experienced exponential growth in the area of warehousing and distribution facilities within the last decade and it seems that it will continue way into the future. Where are these facilities located? How large are the facilitie...

  20. Anomalous Diffusion in a Trading Model

    NASA Astrophysics Data System (ADS)

    Khidzir, Sidiq Mohamad; Wan Abdullah, Wan Ahmad Tajuddin

    2009-07-01

    The trading model of Chakrabarti et al. [1] produces a wealth distribution with mixed exponential and power-law character. Motivated by studies of the dynamics behind the flow of money, similar to the work of Brockmann [2, 3], we track the flow of money in this trading model and observe anomalous diffusion in the form of long waiting times and Levy flights.

  1. Modeling of microporous silicon betaelectric converter with 63Ni plating in GEANT4 toolkit*

    NASA Astrophysics Data System (ADS)

    Zelenkov, P. V.; Sidorov, V. G.; Lelekov, E. T.; Khoroshko, A. Y.; Bogdanov, S. V.; Lelekov, A. T.

    2016-04-01

    A model of the electron-hole pair generation rate distribution in the semiconductor is needed to optimize the parameters of a microporous silicon betaelectric converter that uses 63Ni isotope radiation. Using Monte Carlo methods from the GEANT4 toolkit with ultra-low-energy electron physics models, this distribution in silicon was calculated and approximated with an exponential function. The optimal pore configuration was estimated.

  2. Investigation of Dielectric Breakdown Characteristics for Double-break Vacuum Interrupter and Dielectric Breakdown Probability Distribution in Vacuum Interrupter

    NASA Astrophysics Data System (ADS)

    Shioiri, Tetsu; Asari, Naoki; Sato, Junichi; Sasage, Kosuke; Yokokura, Kunio; Homma, Mitsutaka; Suzuki, Katsumi

    To investigate the reliability of vacuum insulation equipment, a study was carried out to clarify breakdown probability distributions in a vacuum gap. Further, a double-break vacuum circuit breaker was investigated for its breakdown probability distribution. The test results show that the breakdown probability distribution of the vacuum gap can be represented by a Weibull distribution using a location parameter, which gives the voltage that permits a zero breakdown probability. The location parameter obtained from the Weibull plot depends on the electrode area. The shape parameter obtained from the Weibull plot of the vacuum gap was 10∼14, and is constant irrespective of the non-uniform field factor. The breakdown probability distribution after no-load switching can also be represented by a Weibull distribution using a location parameter. The shape parameter after no-load switching was 6∼8.5, and is constant irrespective of gap length. This indicates that the scatter of the breakdown voltage was increased by no-load switching. If the vacuum circuit breaker uses a double break, the breakdown probability at low voltage becomes lower than the single-break probability. Although the potential distribution is a concern in the double-break vacuum circuit breaker, its insulation reliability is better than that of the single-break vacuum interrupter even if the bias of the vacuum interrupters' voltage sharing is taken into account.
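The three-parameter Weibull form used above can be sketched directly; the location parameter V0 is the voltage below which the breakdown probability is zero. V0, scale and shape below are illustrative placeholders (with the shape chosen inside the reported 10∼14 range for the vacuum gap), not measured values.

```python
import math

def breakdown_prob(v, v0=80.0, scale=40.0, shape=12.0):
    """Three-parameter Weibull CDF: F(v) = 1 - exp(-((v - v0)/scale)^shape)."""
    if v <= v0:
        return 0.0          # below the location parameter: no breakdown
    return 1.0 - math.exp(-((v - v0) / scale) ** shape)

print(breakdown_prob(80.0), round(breakdown_prob(120.0), 3))  # 0.0 0.632
```

A large shape parameter makes the CDF very steep around v0 + scale, which is why the fitted shape is read as a measure of the scatter in breakdown voltage.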

  3. A Comparison of the Pencil-of-Function Method with Prony’s Method, Wiener Filters and Other Identification Techniques,

    DTIC Science & Technology

    1977-12-01

    exponentials encountered are complex and they are approximately at harmonic frequencies. Moreover, the real parts of the complex exponentials are much... functions as a basis for expanding the current distribution on an antenna by the method of moments results in a regularized ill-posed problem with respect... to the current distribution on the antenna structure. However, the problem is not regularized with respect to charge because the charge distribution

  4. Droplet size and velocity distributions for spray modelling

    NASA Astrophysics Data System (ADS)

    Jones, D. P.; Watkins, A. P.

    2012-01-01

    Methods for constructing droplet size distributions and droplet velocity profiles are examined as a basis for the Eulerian spray model proposed in Beck and Watkins (2002, 2003) [5,6]. Within the spray model, both distributions must be calculated at every control volume at every time step where the spray is present, and valid distributions must be guaranteed. Results show that the Maximum Entropy formalism combined with the Gamma distribution satisfies these conditions for the droplet size distributions. Approximating the droplet velocity profile is shown to be considerably more difficult because it does not have compact support. An exponential model with a constrained exponent offers plausible profiles.
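A minimal sketch of the selected family: a Gamma droplet-size distribution parameterized by its mean and variance, which is one convenient way to pin down the two Gamma parameters from moment constraints. The spray values below are illustrative, not from the cited model.

```python
from scipy.stats import gamma

mean, var = 50.0, 400.0          # droplet mean size and variance (microns), illustrative
shape = mean**2 / var            # Gamma shape k = mean^2 / variance
scale = var / mean               # Gamma scale theta = variance / mean

dist = gamma(a=shape, scale=scale)
print(round(dist.mean(), 1), round(dist.var(), 1))   # 50.0 400.0
```

Because the Gamma density is positive and integrates to one for any valid shape/scale pair, a distribution built this way is guaranteed valid at every control volume and time step.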

  5. Improvements to Shortwave Absorption in the GFDL General Circulation Model Radiation Code

    NASA Astrophysics Data System (ADS)

    Freidenreich, S.

    2015-12-01

    The multiple-band shortwave radiation parameterization used in the GFDL general circulation models is being revised to better simulate the disposition of the solar flux in comparison with line-by-line + doubling-adding reference calculations based on the HITRAN 2012 catalog. For clear skies, a notable deficiency of the older formulation is an underestimate of atmospheric absorption. The two main reasons for this are the neglect of H2O absorption for wavenumbers < 2500 cm-1 and of the O2 continuum. Further contributions to this underestimate come from neglecting the effects of CH4, N2O and stratospheric H2O absorption. These issues are addressed in the revised formulation and result in the globally averaged shortwave absorption increasing from 74 to 78 Wm-2. The number of spectral bands remains the same (18), but the number of pseudo-monochromatic intervals (based mainly on the exponential-sum-fit technique) for the determination of H2O absorption is increased from 38 to 74, allowing for more accuracy in its simulation. Also, CO2 absorption is now determined by the exponential-sum-fit technique, replacing an algebraic absorptivity expression in the older parameterization; this improves the simulation of heating in the stratosphere. Improvements to the treatment of multiple scattering are currently being tested, replacing the current two-stream delta-Eddington algorithm with a four-stream algorithm. Initial results show that in most, but not all, cases the latter produces better agreement with the reference doubling-adding results.

  6. Alternative definition of excitation amplitudes in multi-reference state-specific coupled cluster

    NASA Astrophysics Data System (ADS)

    Garniron, Yann; Giner, Emmanuel; Malrieu, Jean-Paul; Scemama, Anthony

    2017-04-01

    A central difficulty of state-specific Multi-Reference Coupled Cluster (MR-CC) in the multi-exponential Jeziorski-Monkhorst formalism concerns the definition of the amplitudes of the single and double excitation operators appearing in the exponential wave operators. If the reference space is a complete active space (CAS), the number of these amplitudes is larger than the number of singly and doubly excited determinants on which one may project the eigenequation, and one must impose additional conditions. The present work first defines a state-specific reference-independent operator T̃^m which, acting on the CAS component |Ψ_0^m⟩ of the wave function, maximizes the overlap between (1 + T̃^m)|Ψ_0^m⟩ and the eigenvector |Ψ_CAS-SD^m⟩ of the CAS-SD (Singles and Doubles) Configuration Interaction (CI) matrix. This operator may be used to generate approximate coefficients of the triples and quadruples, and a dressing of the CAS-SD CI matrix, according to the intermediate Hamiltonian formalism. The process may be iterated to convergence. As a refinement towards a strict coupled cluster formalism, one may exploit the reference-independent amplitudes provided by (1 + T̃^m)|Ψ_0^m⟩ to define a reference-dependent operator T^m by fitting the eigenvector of the (dressed) CAS-SD CI matrix. The two variants, which are internally uncontracted, give rather similar results. The new MR-CC version has been tested on the ground state potential energy curves of 6 molecules (up to triple-bond breaking) and two excited states. The non-parallelism error with respect to the full-CI curves is of the order of 1 mEh.

  7. Complex Dynamic Development of Poliovirus Membranous Replication Complexes

    PubMed Central

    Nair, Vinod; Hansen, Bryan T.; Hoyt, Forrest H.; Fischer, Elizabeth R.; Ehrenfeld, Ellie

    2012-01-01

    Replication of all positive-strand RNA viruses is intimately associated with membranes. Here we utilize electron tomography and other methods to investigate the remodeling of membranes in poliovirus-infected cells. We found that the viral replication structures previously described as “vesicles” are in fact convoluted, branching chambers with complex and dynamic morphology. They are likely to originate from cis-Golgi membranes and are represented during the early stages of infection by single-walled connecting and branching tubular compartments. These early viral organelles gradually transform into double-membrane structures by extension of membranous walls and/or collapsing of the luminal cavity of the single-membrane structures. As the double-membrane regions develop, they enclose cytoplasmic material. At this stage, a continuous membranous structure may have double- and single-walled membrane morphology at adjacent cross-sections. In the late stages of the replication cycle, the structures are represented mostly by double-membrane vesicles. Viral replication proteins, double-stranded RNA species, and actively replicating RNA are associated with both double- and single-membrane structures. However, the exponential phase of viral RNA synthesis occurs when single-membrane formations are predominant in the cell. It has been shown previously that replication complexes of some other positive-strand RNA viruses form on membrane invaginations, which result from negative membrane curvature. Our data show that the remodeling of cellular membranes in poliovirus-infected cells produces structures with positive curvature of membranes. Thus, it is likely that there is a fundamental divergence in the requirements for the supporting cellular membrane-shaping machinery among different groups of positive-strand RNA viruses. PMID:22072780

  8. Interactions and Supramolecular Organization of Sulfonated Indigo and Thioindigo Dyes in Layered Hydroxide Hosts.

    PubMed

    Costa, Ana L; Gomes, Ana C; Pereira, Ricardo C; Pillinger, Martyn; Gonçalves, Isabel S; Pineiro, Marta; Seixas de Melo, J Sérgio

    2018-01-09

    Supramolecularly organized host-guest systems have been synthesized by intercalating water-soluble forms of indigo (indigo carmine, IC) and thioindigo (thioindigo-5,5'-disulfonate, TIS) in zinc-aluminum-layered double hydroxides (LDHs) and zinc-layered hydroxide salts (LHSs) by coprecipitation routes. The colors of the isolated powders were dark blue for hybrids containing only IC, purplish blue or dark lilac for cointercalated samples containing both dyes, and ruby/wine for hybrids containing only TIS. The as-synthesized and thermally treated materials were characterized by Fourier transform infrared, Fourier transform Raman, and nuclear magnetic resonance spectroscopies, powder X-ray diffraction, scanning electron microscopy, and elemental and thermogravimetric analyses. The basal spacings found for IC-LDH, TIS-LDH, IC-LHS, and TIS-LHS materials were 21.9, 21.05, 18.95, and 21.00 Å, respectively, with intermediate spacings being observed for the cointercalated samples that either decreased (LDHs) or increased (LHSs) with increasing TIS content. UV-visible and fluorescence spectroscopies (steady-state and time-resolved) were used to probe the molecular distribution of the immobilized dyes. The presence of aggregates together with the monomer units is suggested for IC-LDH, whereas for TIS-LDH, IC-LHS, and TIS-LHS, the dyes are closer to the isolated situation. Accordingly, while emission from the powder H2TIS is strongly quenched, an increment in the emission of about 1 order of magnitude was observed for the TIS-LDH/LHS hybrids. Double-exponential fluorescence decays were obtained and associated with two monomer species interacting differently with cointercalated water molecules. The incorporation of both TIS and IC in the LDH and LHS hosts leads to an almost complete quenching of the fluorescence, pointing to a very efficient energy transfer process from (fluorescent) TIS to (nonfluorescent) IC.
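
    A double-exponential decay of the kind attributed here to two monomer species can be resolved into its two lifetime components by least squares. The sketch below uses made-up lifetimes and amplitudes on noiseless synthetic data with a coarse grid search; it illustrates the model form only, not the study's fitting procedure.

```python
import math

# Hypothetical lifetimes (ns) and amplitude fraction; illustrative only.
TAU1, TAU2, A1 = 0.5, 4.0, 0.7

ts = [0.1 * k for k in range(1, 200)]
decay = [A1 * math.exp(-t / TAU1) + (1 - A1) * math.exp(-t / TAU2) for t in ts]

def sse(tau1, tau2, a1):
    """Sum of squared residuals of a two-component model against the data."""
    return sum((a1 * math.exp(-t / tau1) + (1 - a1) * math.exp(-t / tau2) - y) ** 2
               for t, y in zip(ts, decay))

# Coarse grid search over the two lifetimes and the amplitude fraction.
_, t1_fit, t2_fit, a_fit = min(
    (sse(0.25 * i, 1.0 * j, 0.1 * k), 0.25 * i, 1.0 * j, 0.1 * k)
    for i in range(1, 9) for j in range(1, 9) for k in range(1, 10))
print(t1_fit, t2_fit, a_fit)
```

    On real, noisy decays one would use a proper nonlinear least-squares routine and deconvolve the instrument response, but the grid search recovers the two components exactly here because the truth lies on the grid.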

  9. Double Tunneling Injection Quantum Dot Lasers for High Speed Operation

    DTIC Science & Technology

    2017-10-23

    Double Tunneling-Injection Quantum Dot Lasers for High-Speed Operation. The views, opinions and/or findings contained in this report are those of... ...State University. Report Term: 0-Other. Email: asryan@vt.edu

  10. Exponential stabilization of magnetoelastic waves in a Mindlin-Timoshenko plate by localized internal damping

    NASA Astrophysics Data System (ADS)

    Grobbelaar-Van Dalsen, Marié

    2015-08-01

    This article is a continuation of our earlier work in Grobbelaar-Van Dalsen (Z Angew Math Phys 63:1047-1065, 2012) on the polynomial stabilization of a linear model for the magnetoelastic interactions in a two-dimensional electrically conducting Mindlin-Timoshenko plate. We introduce nonlinear damping that is effective only in a small portion of the interior of the plate. It turns out that the model is uniformly exponentially stable when the function that represents the locally distributed damping behaves linearly near the origin. However, the use of Mindlin-Timoshenko plate theory in the model enforces a restriction on the region occupied by the plate.

  11. Monte Carlo calculations of PET coincidence timing: single and double-ended readout

    PubMed Central

    Derenzo, Stephen E; Choong, Woon-Seng; Moses, William W

    2016-01-01

    We present Monte Carlo computational methods for estimating the coincidence resolving time (CRT) of scintillator detector pairs in positron emission tomography (PET) and present results for Lu2SiO5 : Ce (LSO), LaBr3 : Ce, and a hypothetical ultra-fast scintillator with a 1 ns decay time. The calculations were applied to both single-ended and double-ended photodetector readout with constant-fraction triggering. They explicitly include (1) the intrinsic scintillator properties (luminosity, rise time, decay time, and index of refraction), (2) the exponentially distributed depths of interaction, (3) the optical photon transport efficiency, delay, and time dispersion, (4) the photodetector properties (fill factor, quantum efficiency, transit time jitter, and single electron response), and (5) the determination of the constant fraction trigger level that minimizes the CRT. The calculations for single-ended readout include the delayed photons from the opposite reflective surface. The calculations for double-ended readout include (1) the simple average of the two photodetector trigger times, (2) more accurate estimators of the annihilation photon entrance time using the pulse height ratio to estimate the depth of interaction and correct for annihilation photon, optical photon, and trigger delays, and (3) the statistical lower bound for interactions at the center of the crystal. For time-of-flight (TOF) PET we combine stopping power and TOF information in a figure of merit equal to the sensitivity gain relative to whole-body non-TOF PET using LSO. For LSO crystals 3 mm × 3 mm × 30 mm, a decay time of 37 ns, a total photoelectron count of 4000, and a photodetector with 0.2 ns full-width at half-maximum (fwhm) timing jitter, single-ended readout has a CRT of 0.16 ns fwhm and double-ended readout has a CRT of 0.111 ns fwhm. 
For LaBr3 : Ce crystals 3 mm × 3 mm × 30 mm, a rise time of 0.2 ns, a decay time of 18 ns, and a total of 7600 photoelectrons the CRT numbers are 0.14 ns and 0.072 ns fwhm, respectively. For a hypothetical ultra-fast scintillator 3 mm × 3 mm × 30 mm, a decay time of 1 ns, and a total of 4000 photoelectrons, the CRT numbers are 0.070 and 0.020 ns fwhm, respectively. Over a range of examples, values for double-ended readout are about 10% larger than the statistical lower bound. PMID:26350162
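
    The trigger-time statistics behind such CRT estimates can be sketched in miniature. The toy below keeps only the exponentially distributed scintillation photoelectron times (DECAY_NS and N_PE follow the LSO example above) and replaces the optimized constant-fraction trigger with triggering on the K-th photoelectron, an illustrative simplification; rise time, optical transport, depth of interaction, and photodetector jitter are all omitted, so its numbers will not match the reported CRTs.

```python
import heapq
import random
import statistics

random.seed(1)
DECAY_NS, N_PE, K = 37.0, 4000, 5   # K is an illustrative trigger choice

def trigger_time():
    # K-th earliest of N_PE exponentially distributed photoelectron times.
    return heapq.nsmallest(K, (random.expovariate(1.0 / DECAY_NS)
                               for _ in range(N_PE)))[-1]

# CRT: width of the distribution of trigger-time differences between two
# independent detectors (Gaussian FWHM = 2.355 * sigma approximation).
diffs = [trigger_time() - trigger_time() for _ in range(500)]
crt_fwhm_ns = 2.355 * statistics.stdev(diffs)
print(round(crt_fwhm_ns, 3))
```

    Even this stripped-down model shows the key scaling: the timing spread of an early order statistic shrinks roughly as decay time over photoelectron count, which is why luminosity and decay time dominate the CRT.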

  12. Parabolic replicator dynamics and the principle of minimum Tsallis information gain

    PubMed Central

    2013-01-01

    Background Non-linear, parabolic (sub-exponential) and hyperbolic (super-exponential) models of prebiological evolution of molecular replicators have been proposed and extensively studied. The parabolic models appear to be the most realistic approximations of real-life replicator systems due primarily to product inhibition. Unlike the more traditional exponential models, the distribution of individual frequencies in an evolving parabolic population is not described by the Maximum Entropy (MaxEnt) Principle in its traditional form, whereby the distribution with the maximum Shannon entropy is chosen among all the distributions that are possible under the given constraints. We sought to identify a more general form of the MaxEnt principle that would be applicable to parabolic growth. Results We consider a model of a population that reproduces according to the parabolic growth law and show that the frequencies of individuals in the population minimize the Tsallis relative entropy (non-additive information gain) at each time moment. Next, we consider a model of a parabolically growing population that maintains a constant total size and provide an “implicit” solution for this system. We show that in this case, the frequencies of the individuals in the population also minimize the Tsallis information gain at each moment of the “internal time” of the population. Conclusions The results of this analysis show that the general MaxEnt principle is the underlying law for the evolution of a broad class of replicator systems including not only exponential but also parabolic and hyperbolic systems. The choice of the appropriate entropy (information) function depends on the growth dynamics of a particular class of systems. The Tsallis entropy is non-additive for independent subsystems, i.e. the information on the subsystems is insufficient to describe the system as a whole.
    In the context of prebiotic evolution, this “non-reductionist” nature of parabolic replicator systems might reflect the importance of group selection and competition between ensembles of cooperating replicators. Reviewers This article was reviewed by Viswanadham Sridhara (nominated by Claus Wilke), Purushottam Dixit (nominated by Sergei Maslov), and Nick Grishin. For the complete reviews, see the Reviewers’ Reports section. PMID:23937956
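
    A quick numerical check makes the q -> 1 limit of the Tsallis relative entropy concrete: it recovers the ordinary KL divergence. The normalization convention below is one common choice and the example frequencies are invented; the paper's exact convention may differ in detail.

```python
import math

def tsallis_gain(p, u, q):
    """Tsallis relative entropy D_q(p||u); q -> 1 recovers KL divergence."""
    return (sum(pi ** q * ui ** (1.0 - q) for pi, ui in zip(p, u)) - 1.0) / (q - 1.0)

def kl(p, u):
    """Ordinary Kullback-Leibler divergence (Shannon information gain)."""
    return sum(pi * math.log(pi / ui) for pi, ui in zip(p, u))

p = [0.6, 0.3, 0.1]          # example replicator frequencies (made up)
u = [1 / 3, 1 / 3, 1 / 3]    # uniform reference distribution

print(tsallis_gain(p, u, 0.5), tsallis_gain(p, u, 1.0001), kl(p, u))
```
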

  13. Interference experiment with asymmetric double slit by using 1.2-MV field emission transmission electron microscope.

    PubMed

    Harada, Ken; Akashi, Tetsuya; Niitsu, Kodai; Shimada, Keiko; Ono, Yoshimasa A; Shindo, Daisuke; Shinada, Hiroyuki; Mori, Shigeo

    2018-01-17

    Advanced electron microscopy technologies have made it possible to perform precise double-slit interference experiments. We used a 1.2-MV field emission electron microscope providing coherent electron waves and a direct detection camera system enabling single-electron detections at a sub-second exposure time. We developed a method to perform the interference experiment by using an asymmetric double-slit fabricated by a focused ion beam instrument and by operating the microscope under a "pre-Fraunhofer" condition, different from the Fraunhofer condition of conventional double-slit experiments. Here, pre-Fraunhofer condition means that each single-slit observation was performed under the Fraunhofer condition, while the double-slit observations were performed under the Fresnel condition. The interference experiments with each single slit and with the asymmetric double slit were carried out under two different electron dose conditions: high-dose for calculation of electron probability distribution and low-dose for each single electron distribution. Finally, we exemplified the distribution of single electrons by color-coding according to the above three types of experiments as a composite image.

  14. Bimodal spatial distribution of macular pigment: evidence of a gender relationship

    NASA Astrophysics Data System (ADS)

    Delori, François C.; Goger, Douglas G.; Keilhauer, Claudia; Salvetti, Paola; Staurenghi, Giovanni

    2006-03-01

    The spatial distribution of the optical density of the human macular pigment measured by two-wavelength autofluorescence imaging exhibits in over half of the subjects an annulus of higher density superimposed on a central exponential-like distribution. This annulus is located at about 0.7° from the fovea. Women have broader distributions than men, and they are more likely to exhibit this bimodal distribution. Maxwell's spot reported by subjects matches the measured distribution of their pigment. Evidence that the shape of the foveal depression may be gender related leads us to hypothesize that differences in macular pigment distribution are related to anatomical differences in the shape of the foveal depression.

  15. Craig's XY distribution and the statistics of Lagrangian power in two-dimensional turbulence

    NASA Astrophysics Data System (ADS)

    Bandi, Mahesh M.; Connaughton, Colm

    2008-03-01

    We examine the probability distribution function (PDF) of the energy injection rate (power) in numerical simulations of stationary two-dimensional (2D) turbulence in the Lagrangian frame. The simulation is designed to mimic an electromagnetically driven fluid layer, a well-documented system for generating 2D turbulence in the laboratory. In our simulations, the forcing and velocity fields are close to Gaussian. On the other hand, the measured PDF of injected power is very sharply peaked at zero, suggestive of a singularity there, with tails which are exponential but asymmetric. Large positive fluctuations are more probable than large negative fluctuations. It is this asymmetry of the tails which leads to a net positive mean value for the energy input despite the most probable value being zero. The main features of the power distribution are well described by Craig’s XY distribution for the PDF of the product of two correlated normal variables. We show that the power distribution should exhibit a logarithmic singularity at zero and decay exponentially for large absolute values of the power. We calculate the asymptotic behavior and express the asymmetry of the tails in terms of the correlation coefficient of the force and velocity. We compare the measured PDFs with the theoretical calculations and briefly discuss how the power PDF might change with other forcing mechanisms.
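
    The product-of-correlated-normals structure is easy to reproduce by Monte Carlo: the sample below shows the net positive mean equal to the correlation coefficient and the asymmetry between the positive and negative tails. RHO is an assumed force-velocity correlation for illustration, not a value from the simulations.

```python
import random
import statistics

random.seed(2)
RHO, N = 0.4, 200_000   # assumed correlation coefficient; sample size

# Sample correlated standard normals (f, v) and form the "power" p = f * v.
power = []
for _ in range(N):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    f = z1
    v = RHO * z1 + (1 - RHO ** 2) ** 0.5 * z2   # Corr(f, v) = RHO
    power.append(f * v)

mean_power = statistics.fmean(power)            # should approach RHO
pos_tail = sum(1 for x in power if x > 3) / N   # large positive fluctuations
neg_tail = sum(1 for x in power if x < -3) / N  # large negative fluctuations
print(mean_power, pos_tail, neg_tail)
```

    The positive tail is markedly heavier than the negative one, which is exactly the mechanism described above: a sharply peaked, asymmetric distribution whose mean is positive even though its most probable value is zero.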

  16. Distribution of fixed beneficial mutations and the rate of adaptation in asexual populations

    PubMed Central

    Good, Benjamin H.; Rouzine, Igor M.; Balick, Daniel J.; Hallatschek, Oskar; Desai, Michael M.

    2012-01-01

    When large asexual populations adapt, competition between simultaneously segregating mutations slows the rate of adaptation and restricts the set of mutations that eventually fix. This phenomenon of interference arises from competition between mutations of different strengths as well as competition between mutations that arise on different fitness backgrounds. Previous work has explored each of these effects in isolation, but the way they combine to influence the dynamics of adaptation remains largely unknown. Here, we describe a theoretical model to treat both aspects of interference in large populations. We calculate the rate of adaptation and the distribution of fixed mutational effects accumulated by the population. We focus particular attention on the case when the effects of beneficial mutations are exponentially distributed, as well as on a more general class of exponential-like distributions. In both cases, we show that the rate of adaptation and the influence of genetic background on the fixation of new mutants is equivalent to an effective model with a single selection coefficient and rescaled mutation rate, and we explicitly calculate these effective parameters. We find that the effective selection coefficient exactly coincides with the most common fixed mutational effect. This equivalence leads to an intuitive picture of the relative importance of different types of interference effects, which can shift dramatically as a function of the population size, mutation rate, and the underlying distribution of fitness effects. PMID:22371564

  18. Anomalous NMR Relaxation in Cartilage Matrix Components and Native Cartilage: Fractional-Order Models

    PubMed Central

    Magin, Richard L.; Li, Weiguo; Velasco, M. Pilar; Trujillo, Juan; Reiter, David A.; Morgenstern, Ashley; Spencer, Richard G.

    2011-01-01

    We present a fractional-order extension of the Bloch equations to describe anomalous NMR relaxation phenomena (T1 and T2). The model has solutions in the form of Mittag-Leffler and stretched exponential functions that generalize conventional exponential relaxation. Such functions have been shown by others to be useful for describing dielectric and viscoelastic relaxation in complex, heterogeneous materials. Here, we apply these fractional-order T1 and T2 relaxation models to experiments performed at 9.4 and 11.7 Tesla on type I collagen gels, chondroitin sulfate mixtures, and to bovine nasal cartilage (BNC), a largely isotropic and homogeneous form of cartilage. The results show that the fractional-order analysis captures important features of NMR relaxation that are typically described by multi-exponential decay models. We find that the T2 relaxation of BNC can be described in a unique way by a single fractional-order parameter (α), in contrast to the lack of uniqueness of multi-exponential fits in the realistic setting of a finite signal-to-noise ratio. No anomalous behavior of T1 was observed in BNC. In the single-component gels, for T2 measurements, increasing the concentration of the largest components of cartilage matrix, collagen and chondroitin sulfate, results in a decrease in α, reflecting a more restricted aqueous environment. The quality of the curve fits obtained using Mittag-Leffler and stretched exponential functions are in some cases superior to those obtained using mono- and bi-exponential models. In both gels and BNC, α appears to account for microstructural complexity in the setting of an altered distribution of relaxation times. This work suggests the utility of fractional-order models to describe T2 NMR relaxation processes in biological tissues. PMID:21498095
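
    The point that a stretched exponential exp(-(t/tau)^alpha) can capture what otherwise needs multiple exponentials is easy to demonstrate. The "data" below is a synthetic bi-exponential decay with invented time constants standing in for a distribution of relaxation times, fit by coarse grid search; it is not the paper's data or fitting method.

```python
import math

# Synthetic "T2 decay" from two relaxation times (ms); illustrative values.
ts = [2.0 * k for k in range(1, 101)]
signal = [0.5 * math.exp(-t / 20.0) + 0.5 * math.exp(-t / 80.0) for t in ts]

def sse_mono(tau):
    return sum((math.exp(-t / tau) - y) ** 2 for t, y in zip(ts, signal))

def sse_stretched(tau, alpha):
    return sum((math.exp(-(t / tau) ** alpha) - y) ** 2
               for t, y in zip(ts, signal))

best_mono = min((sse_mono(tau), tau) for tau in range(5, 200, 5))
best_stretched = min((sse_stretched(tau, 0.01 * a), tau, 0.01 * a)
                     for tau in range(5, 200, 5) for a in range(30, 101, 5))
print(best_mono[0], best_stretched[0], best_stretched[2])
```

    The stretched-exponential fit (with its fitted alpha below 1) beats the mono-exponential with only one extra parameter, mirroring the role alpha plays above as a compact summary of an altered distribution of relaxation times.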

  20. Mathematical Aspects of Reliability-Centered Maintenance

    DTIC Science & Technology

    1977-01-01

    exponential distribution, whose parameter (hazard rate) can be realistically estimated. This distribution is also frequently...statistical methods to the study of physical reality was beset with philosophical problems arising from the irrefutable observation that there is but one...STATISTICS, 2nd ed. New York: John Wiley & Sons; 1954. 5. Kolmogorov, A. Interpolation und Extrapolation von stationären zufälligen Folgen. BULL. DE

  1. Context-Sensitive Detection of Local Community Structure

    DTIC Science & Technology

    2011-04-01

    characters in the Victor Hugo novel Les Miserables (lesmis) [77 vertices, 254 edges] [Knu93]. • The neural network of the nematode C. elegans (c.elegans...adjectives and nouns in the novel David Copperfield by Charles Dickens [112 vertices, 425 edges] [New06]. • Les Miserables. Co-appearance network of...exponential distribution. The degree distributions of the Network Science, Les Miserables, and Word Adjacencies networks display a similar heavy tail. By

  2. Complexity and Productivity Differentiation Models of Metallogenic Indicator Elements in Rocks and Supergene Media Around Daijiazhuang Pb-Zn Deposit in Dangchang County, Gansu Province

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Jin-zhong, E-mail: viewsino@163.com; Yao, Shu-zhen; Zhang, Zhong-ping

    2013-03-15

    With the help of complexity indices, we quantitatively studied multifractals, frequency distributions, and linear and nonlinear characteristics of geochemical data for exploration of the Daijiazhuang Pb-Zn deposit. Furthermore, we derived productivity differentiation models of elements from thermodynamics and self-organized criticality of metallogenic systems. With respect to frequency distributions and multifractals, only Zn in rocks and most elements except Sb in secondary media, which had been derived mainly from weathering and alluviation, exhibit nonlinear distributions. The relations of productivity to concentrations of metallogenic elements and paragenic elements in rocks and those of elements strongly leached in secondary media can be seen as linear addition of exponential functions with a characteristic weak chaos. The relations of associated elements such as Mo, Sb, and Hg in rocks and other elements in secondary media can be expressed as an exponential function, and the relations of one-phase self-organized geological or metallogenic processes can be represented by a power function, each representing secondary chaos or strong chaos. For secondary media, exploration data of most elements should be processed using nonlinear mathematical methods or should be transformed to linear distributions before processing using linear mathematical methods.

  3. Fluctuations in Wikipedia access-rate and edit-event data

    NASA Astrophysics Data System (ADS)

    Kämpf, Mirko; Tismer, Sebastian; Kantelhardt, Jan W.; Muchnik, Lev

    2012-12-01

    Internet-based social networks often reflect extreme events in nature and society by drastic increases in user activity. We study and compare the dynamics of the two major complex processes necessary for information spread via the online encyclopedia ‘Wikipedia’, i.e., article editing (information upload) and article access (information viewing) based on article edit-event time series and (hourly) user access-rate time series for all articles. Daily and weekly activity patterns occur in addition to fluctuations and bursting activity. The bursts (i.e., significant increases in activity for an extended period of time) are characterized by a power-law distribution of durations of increases and decreases. For describing the recurrence and clustering of bursts we investigate the statistics of the return intervals between them. We find stretched exponential distributions of return intervals in access-rate time series, while edit-event time series yield simple exponential distributions. To characterize the fluctuation behavior we apply detrended fluctuation analysis (DFA), finding that most article access-rate time series are characterized by strong long-term correlations with fluctuation exponents α≈0.9. The results indicate significant differences in the dynamics of information upload and access and help in understanding the complex process of collecting, processing, validating, and distributing information in self-organized social networks.
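
    Detrended fluctuation analysis of the kind applied above can be sketched in a few lines. Since the Wikipedia time series themselves are not reproduced here, the estimator is validated on white noise, for which the fluctuation exponent should come out near 0.5; strongly long-term-correlated series such as the access rates reported above would instead give alpha near 0.9.

```python
import math
import random

def dfa(x, scales):
    """First-order DFA: returns the fluctuation exponent alpha."""
    n = len(x)
    mean = sum(x) / n
    # Profile: cumulative sum of the mean-subtracted series.
    y, acc = [], 0.0
    for v in x:
        acc += v - mean
        y.append(acc)
    logs, log_f = [], []
    for scale in scales:
        rms_sum, count = 0.0, 0
        for start in range(0, n - scale + 1, scale):
            seg = y[start:start + scale]
            t = list(range(scale))
            tm, sm = (scale - 1) / 2.0, sum(seg) / scale
            # Linear least-squares detrend within the window.
            b = (sum((ti - tm) * (si - sm) for ti, si in zip(t, seg))
                 / sum((ti - tm) ** 2 for ti in t))
            a = sm - b * tm
            rms_sum += sum((si - (a + b * ti)) ** 2 for ti, si in zip(t, seg))
            count += scale
        logs.append(math.log(scale))
        log_f.append(0.5 * math.log(rms_sum / count))
    # alpha is the slope of log F(s) versus log s.
    lm, fm = sum(logs) / len(logs), sum(log_f) / len(log_f)
    return (sum((l - lm) * (f - fm) for l, f in zip(logs, log_f))
            / sum((l - lm) ** 2 for l in logs))

random.seed(3)
white = [random.gauss(0, 1) for _ in range(20000)]
alpha = dfa(white, [16, 32, 64, 128, 256])
print(round(alpha, 2))
```
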

  4. Beyond Word Frequency: Bursts, Lulls, and Scaling in the Temporal Distributions of Words

    PubMed Central

    Altmann, Eduardo G.; Pierrehumbert, Janet B.; Motter, Adilson E.

    2009-01-01

    Background Zipf's discovery that word frequency distributions obey a power law established parallels between biological and physical processes, and language, laying the groundwork for a complex systems perspective on human communication. More recent research has also identified scaling regularities in the dynamics underlying the successive occurrences of events, suggesting the possibility of similar findings for language as well. Methodology/Principal Findings By considering frequent words in USENET discussion groups and in disparate databases where the language has different levels of formality, here we show that the distributions of distances between successive occurrences of the same word display bursty deviations from a Poisson process and are well characterized by a stretched exponential (Weibull) scaling. The extent of this deviation depends strongly on semantic type – a measure of the logicality of each word – and less strongly on frequency. We develop a generative model of this behavior that fully determines the dynamics of word usage. Conclusions/Significance Recurrence patterns of words are well described by a stretched exponential distribution of recurrence times, an empirical scaling that cannot be anticipated from Zipf's law. Because the use of words provides a uniquely precise and powerful lens on human thought and activity, our findings also have implications for other overt manifestations of collective human dynamics. PMID:19907645
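
    Bursty deviations from a Poisson process can be illustrated with a two-state toy model of word usage: a high-rate "topical" state and a low-rate background state. The rates and switching probability are invented for the demonstration (this is not the paper's generative model); the diagnostic is the coefficient of variation of recurrence times, which equals 1 for a Poisson process and exceeds 1 for bursty usage.

```python
import random
import statistics

random.seed(6)

def gaps(n, bursty):
    """Recurrence times between occurrences; optionally rate-switching."""
    out, rate = [], 1.0
    for _ in range(n):
        if bursty and random.random() < 0.1:    # occasionally switch state
            rate = 10.0 if rate == 1.0 else 1.0
        out.append(random.expovariate(rate))
    return out

poisson_gaps = gaps(50_000, bursty=False)
bursty_gaps = gaps(50_000, bursty=True)

def cv(xs):
    """Coefficient of variation: 1 for a Poisson process, >1 if bursty."""
    return statistics.pstdev(xs) / statistics.fmean(xs)

print(round(cv(poisson_gaps), 2), round(cv(bursty_gaps), 2))
```
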

  5. Cluster-cluster aggregation with particle replication and chemotaxy: a simple model for the growth of animal cells in culture

    NASA Astrophysics Data System (ADS)

    Alves, S. G.; Martins, M. L.

    2010-09-01

    Aggregation of animal cells in culture comprises a series of motility, collision and adhesion processes of basic relevance for tissue engineering, bioseparations, oncology research and in vitro drug testing. In the present paper, a cluster-cluster aggregation model with stochastic particle replication and chemotactically driven motility is investigated as a model for the growth of animal cells in culture. The focus is on the scaling laws governing the aggregation kinetics. Our simulations reveal that in the absence of chemotaxy the mean cluster size and the total number of clusters scale in time as stretched exponentials dependent on the particle replication rate. Also, the dynamical cluster size distribution functions are represented by a scaling relation in which the scaling function involves a stretched exponential of the time. The introduction of chemoattraction among the particles leads to distribution functions decaying as power laws with exponents that decrease in time. The fractal dimensions and size distributions of the simulated clusters are qualitatively discussed in terms of those determined experimentally for several normal and tumoral cell lines growing in culture. It is shown that particle replication and chemotaxy account for the simplest cluster size distributions of cellular aggregates observed in culture.

  6. Exponential Arithmetic Based Self-Healing Group Key Distribution Scheme with Backward Secrecy under the Resource-Constrained Wireless Networks

    PubMed Central

    Guo, Hua; Zheng, Yandong; Zhang, Xiyong; Li, Zhoujun

    2016-01-01

    In resource-constrained wireless networks, resources such as storage space and communication bandwidth are limited. To guarantee secure communication in resource-constrained wireless networks, group keys should be distributed to users. The self-healing group key distribution (SGKD) scheme is a promising cryptographic tool, which can be used to distribute and update the group key for secure group communication over unreliable wireless networks. Among all known SGKD schemes, exponential arithmetic based SGKD (E-SGKD) schemes reduce the storage overhead to a constant, and are thus suitable for resource-constrained wireless networks. In this paper, we provide a new mechanism to achieve E-SGKD schemes with backward secrecy. We first propose a basic E-SGKD scheme based on a known polynomial-based SGKD, which has optimal storage overhead but no backward secrecy. To obtain backward secrecy and reduce the communication overhead, we introduce a novel approach for message broadcasting and self-healing. Compared with other E-SGKD schemes, our new E-SGKD scheme has optimal storage overhead, high communication efficiency and satisfactory security. The simulation results in Zigbee-based networks show that the proposed scheme is suitable for resource-constrained wireless networks. Finally, we show the application of our proposed scheme. PMID:27136550

  7. Bayesian analysis of the kinetics of quantal transmitter secretion at the neuromuscular junction.

    PubMed

    Saveliev, Anatoly; Khuzakhmetova, Venera; Samigullin, Dmitry; Skorinkin, Andrey; Kovyazina, Irina; Nikolsky, Eugeny; Bukharaeva, Ellya

    2015-10-01

    The timing of transmitter release from nerve endings is considered nowadays as one of the factors determining the plasticity and efficacy of synaptic transmission. In the neuromuscular junction, the moments of release of individual acetylcholine quanta are related to the synaptic delays of uniquantal endplate currents recorded under conditions of lowered extracellular calcium. Using Bayesian modelling, we performed a statistical analysis of synaptic delays in mouse neuromuscular junction with different patterns of rhythmic nerve stimulation and when the entry of calcium ions into the nerve terminal was modified. We have obtained a statistical model of the release timing which is represented as the summation of two independent statistical distributions. The first of these is the exponentially modified Gaussian distribution. The mixture of normal and exponential components in this distribution can be interpreted as a two-stage mechanism of early and late periods of phasic synchronous secretion. The parameters of this distribution depend on both the stimulation frequency of the motor nerve and the calcium ions' entry conditions. The second distribution was modelled as quasi-uniform, with parameters independent of nerve stimulation frequency and calcium entry. Two different probability density functions for the distribution of synaptic delays suggest at least two independent processes controlling the time course of secretion, one of them potentially involving two stages. The relative contribution of these processes to the total number of mediator quanta released depends differently on the motor nerve stimulation pattern and on calcium ion entry into nerve endings.
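
    The exponentially modified Gaussian that dominates this release-timing model is simply the sum of a normal delay (the "early" stage) and an independent exponential delay (the "late" stage). The parameter values below are illustrative stand-ins, not fitted values from the study; the sample moments match the closed-form ex-Gaussian moments.

```python
import random
import statistics

random.seed(4)
MU, SIGMA, TAU = 0.5, 0.1, 0.4   # ms; illustrative, not fitted values

# Ex-Gaussian sample: normal "early" component plus exponential "late" one.
delays = [random.gauss(MU, SIGMA) + random.expovariate(1.0 / TAU)
          for _ in range(100_000)]

mean_d = statistics.fmean(delays)       # theory: MU + TAU
var_d = statistics.pvariance(delays)    # theory: SIGMA**2 + TAU**2
skew = statistics.fmean([(d - mean_d) ** 3 for d in delays]) / var_d ** 1.5
print(mean_d, var_d, skew)
```

    The strictly positive skew is the signature of the exponential component; a purely Gaussian delay distribution would give zero skew.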

  8. Lévy flight with absorption: A model for diffusing diffusivity with long tails

    NASA Astrophysics Data System (ADS)

    Jain, Rohit; Sebastian, K. L.

    2017-03-01

    We consider diffusion of a particle in a rearranging environment, so that the diffusivity of the particle is a stochastic function of time. In our previous model of "diffusing diffusivity" [Jain and Sebastian, J. Phys. Chem. B 120, 3988 (2016), 10.1021/acs.jpcb.6b01527], it was shown that the mean square displacement of the particle remains Fickian, i.e., ∝ T at all times, but the probability distribution of particle displacement is not Gaussian at all times. It is exponential at short times and crosses over to become Gaussian only in the large-time limit in the case where the distribution of D in that model has a steady-state limit which is exponential, i.e., π_e(D) ∼ e^(−D/D_0). In the present study, we model the diffusivity of a particle as a Lévy flight process so that D has a power-law tailed distribution, viz., π_e(D) ∼ D^(−1−α) with 0 < α < 1. We find that in the short-time limit, the width of the displacement distribution is proportional to √T, implying that the diffusion is Fickian. But for long times, the width is proportional to T^(1/(2α)), which is a characteristic of anomalous diffusion. The distribution function for the displacement of the particle is found to be a symmetric stable distribution with a stability index 2α which preserves its shape at all times.
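
    The non-Gaussian displacement statistics produced by a power-law-tailed diffusivity can be sketched by a two-step Monte Carlo: draw D from a Pareto law π_e(D) ∼ D^(−1−α) by inverse-CDF sampling, then draw a Gaussian displacement of variance 2DT for that D. The parameter values are illustrative; the heavy tails show up as a sample kurtosis far above the Gaussian value of 3.

```python
import random
import statistics

random.seed(5)
ALPHA, D_MIN, T, N = 0.5, 1.0, 1.0, 50_000   # illustrative parameters

def displacement():
    # Inverse-CDF Pareto(ALPHA) sample for D (1 - U avoids U = 0 exactly),
    # then a Gaussian displacement conditional on that diffusivity.
    d = D_MIN * (1.0 - random.random()) ** (-1.0 / ALPHA)
    return random.gauss(0.0, (2.0 * d * T) ** 0.5)

xs = [displacement() for _ in range(N)]
m = statistics.fmean(xs)
sd = statistics.pstdev(xs)
kurtosis = statistics.fmean([((x - m) / sd) ** 4 for x in xs])
print(round(kurtosis, 1))   # a Gaussian sample would give about 3
```
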

  9. Numerical computation of gravitational field of general extended body and its application to rotation curve study of galaxies

    NASA Astrophysics Data System (ADS)

    Fukushima, Toshio

    2017-06-01

Reviewed are recently developed methods for the numerical integration of the gravitational field of general two- or three-dimensional bodies with arbitrary shape and mass density distribution: (i) an axisymmetric infinitely-thin disc (Fukushima 2016a, MNRAS, 456, 3702), (ii) a general infinitely-thin plate (Fukushima 2016b, MNRAS, 459, 3825), (iii) a plane-symmetric and axisymmetric ring-like object (Fukushima 2016c, AJ, 152, 35), (iv) an axisymmetric thick disc (Fukushima 2016d, MNRAS, 462, 2138), and (v) a general three-dimensional body (Fukushima 2016e, MNRAS, 463, 1500). The key techniques employed are (a) the split quadrature method using the double exponential rule (Takahashi and Mori, 1973, Numer. Math., 21, 206), (b) the precise and fast computation of complete elliptic integrals (Fukushima 2015, J. Comp. Appl. Math., 282, 71), (c) Ridders' algorithm of numerical differentiation (Ridders 1982, Adv. Eng. Softw., 4, 75), (d) the recursive computation of the zonal toroidal harmonics, and (e) the integration variable transformation to local spherical polar coordinates. These devices successfully regularize the Newton kernel in the integrands so as to provide accurate integral values. For example, the general 3D potential is regularly integrated as Φ(x) = −G ∫_0^∞ ( ∫_{-1}^1 ( ∫_0^{2π} ρ(x+q) dψ ) dγ ) q dq, where q = q(√(1−γ²) cos ψ, √(1−γ²) sin ψ, γ) is the relative position vector referred to x, the position vector at which the potential is evaluated. As a result, the new methods can compute the potential and acceleration vector very accurately. In fact, the axisymmetric integration reproduces the Miyamoto-Nagai potential with 14 correct digits. The developed methods are applied to the gravitational field study of galaxies and protoplanetary discs. Among them, the investigation of the rotation curve of M33 supports a disc-like structure of the dark matter with a double-power-law surface mass density distribution. Fortran 90 subroutines to execute these methods, together with their test programs and sample outputs, are available from the author's web site: https://www.researchgate.net/profile/Toshio_Fukushima/
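
Technique (a), the double exponential rule of Takahashi and Mori, is tanh-sinh quadrature: the substitution x = tanh((π/2) sinh t) makes the trapezoidal rule converge double-exponentially and tames integrable endpoint singularities. A minimal Python sketch of the idea, not the author's Fortran implementation:

```python
import math

def tanh_sinh(f, n=30, h=0.1):
    """Integrate f over (-1, 1) with the double exponential (tanh-sinh)
    substitution x = tanh((pi/2) sinh t) on the trapezoidal grid t = k*h."""
    total = 0.0
    for k in range(-n, n + 1):
        t = k * h
        x = math.tanh(0.5 * math.pi * math.sinh(t))
        if abs(x) == 1.0:
            continue  # node rounded onto the endpoint; its weight is negligible
        # dx/dt: the weight decays double exponentially towards the endpoints,
        # which is what regularizes singular integrands there.
        w = 0.5 * math.pi * math.cosh(t) / math.cosh(0.5 * math.pi * math.sinh(t)) ** 2
        total += f(x) * w
    return h * total

# Endpoint-singular test integrand: the integral of 1/sqrt(1-x^2) over (-1, 1) is pi.
approx = tanh_sinh(lambda x: 1.0 / math.sqrt(1.0 - x * x))
```

With only 61 nodes this already recovers π to several digits; halving h roughly doubles the number of correct digits, which is the "double exponential" convergence the rule is named for.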

  10. Obstructive sleep apnea alters sleep stage transition dynamics.

    PubMed

    Bianchi, Matt T; Cash, Sydney S; Mietus, Joseph; Peng, Chung-Kang; Thomas, Robert

    2010-06-28

Enhanced characterization of sleep architecture, compared with routine polysomnographic metrics such as stage percentages and sleep efficiency, may improve the predictive phenotyping of fragmented sleep. One approach involves using stage transition analysis to characterize sleep continuity. We analyzed hypnograms from Sleep Heart Health Study (SHHS) participants using the following stage designations: wake after sleep onset (WASO), non-rapid eye movement (NREM) sleep, and REM sleep. We show that individual patient hypnograms contain an insufficient number of bouts to adequately describe the transition kinetics, necessitating pooling of data. We compared a control group of individuals free of medications, obstructive sleep apnea (OSA), medical co-morbidities, or sleepiness (n = 374) with mild (n = 496) or severe OSA (n = 338) groups. WASO, REM sleep, and NREM sleep bout durations exhibited multi-exponential temporal dynamics. The presence of OSA accelerated the "decay" rate of NREM and REM sleep bouts, resulting in instability manifesting as shorter bouts and an increased number of stage transitions. For WASO bouts, previously attributed to a power law process, a multi-exponential decay described the data well. Simulations demonstrated that a multi-exponential process can mimic a power law distribution. OSA alters sleep architecture dynamics by decreasing the temporal stability of NREM and REM sleep bouts. Multi-exponential fitting is superior to routine mono-exponential fitting, and may thus provide improved predictive metrics of sleep continuity. However, because a single night of sleep contains insufficient transitions to characterize these dynamics, extended monitoring of sleep, probably at home, would be necessary for individualized clinical application.
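
The reported simulation result, that a multi-exponential process can mimic a power law, has a compact illustration: a mixture of exponential survival curves with geometrically spaced rates and rate-proportional weights tracks S(t) ∝ 1/t over the range bracketed by the rates. The values below are illustrative, not SHHS fits:

```python
import numpy as np

# Geometrically spaced decay rates spanning six decades.
rates = np.logspace(-3, 3, 25)
weights = rates / rates.sum()     # rate-proportional mixture weights

t = np.logspace(-1, 1, 50)        # evaluation times well inside the bracketed range
survival = (weights * np.exp(-np.outer(t, rates))).sum(axis=1)

# On log-log axes the mixture is nearly a straight line with slope -1,
# i.e. it mimics the power law S(t) ~ 1/t over this window.
slope = np.polyfit(np.log(t), np.log(survival), 1)[0]
```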

  11. Constant growth rate can be supported by decreasing energy flux and increasing aerobic glycolysis.

    PubMed

    Slavov, Nikolai; Budnik, Bogdan A; Schwab, David; Airoldi, Edoardo M; van Oudenaarden, Alexander

    2014-05-08

    Fermenting glucose in the presence of enough oxygen to support respiration, known as aerobic glycolysis, is believed to maximize growth rate. We observed increasing aerobic glycolysis during exponential growth, suggesting additional physiological roles for aerobic glycolysis. We investigated such roles in yeast batch cultures by quantifying O2 consumption, CO2 production, amino acids, mRNAs, proteins, posttranslational modifications, and stress sensitivity in the course of nine doublings at constant rate. During this course, the cells support a constant biomass-production rate with decreasing rates of respiration and ATP production but also decrease their stress resistance. As the respiration rate decreases, so do the levels of enzymes catalyzing rate-determining reactions of the tricarboxylic-acid cycle (providing NADH for respiration) and of mitochondrial folate-mediated NADPH production (required for oxidative defense). The findings demonstrate that exponential growth can represent not a single metabolic/physiological state but a continuum of changing states and that aerobic glycolysis can reduce the energy demands associated with respiratory metabolism and stress survival. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  12. Picosecond absorption anisotropy of polymethine and squarylium dyes in liquid and polymeric media

    NASA Astrophysics Data System (ADS)

    Przhonska, Olga V.; Hagan, David J.; Novikov, Evgueni; Lepkowicz, Richard; Van Stryland, Eric W.; Bondar, Mikhail V.; Slominsky, Yuriy L.; Kachkovski, Alexei D.

    2001-11-01

    Time-resolved excitation-probe polarization measurements are performed for polymethine and squarylium dyes in ethanol and an elastopolymer of polyurethane acrylate (PUA). These molecules exhibit strong excited-state absorption in the visible, which results in reverse saturable absorption (RSA). In pump-probe experiments, we observe a strong angular dependence of the RSA decay kinetics upon variation of the angle between pump and probe polarizations. The difference in absorption anisotropy kinetics in ethanol and PUA is detected and analyzed. Anisotropy decay curves in ethanol follow a single exponential decay leading to complete depolarization of the excited state. We also observe complete depolarization in PUA, in which case the anisotropy decay follows a double exponential behavior. Possible rotations in the PUA polymeric matrix are connected with the existence of local microcavities of free volume. We believe that the fast decay component is connected with the rotation of molecular fragments and the slower decay component is connected with the rotation of entire molecules in local microcavities, which is possible because of the elasticity of the polymeric material.
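
A double exponential decay of this kind separates into its fast and slow components with a standard nonlinear least-squares fit. A sketch on synthetic anisotropy data, assuming scipy is available (amplitudes and time constants are illustrative, not the measured values):

```python
import numpy as np
from scipy.optimize import curve_fit

def double_exp(t, a1, tau1, a2, tau2):
    """Anisotropy model r(t) = a1*exp(-t/tau1) + a2*exp(-t/tau2): a fast
    component (fragment rotation) plus a slow one (whole-molecule rotation)."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 400.0, 200)                  # ps, illustrative time grid
data = double_exp(t, 0.25, 15.0, 0.15, 120.0)     # synthetic anisotropy decay
data = data + rng.normal(0.0, 0.002, t.size)      # small measurement noise

popt, _ = curve_fit(double_exp, t, data, p0=(0.2, 10.0, 0.1, 100.0))
fast_tau, slow_tau = popt[1], popt[3]
```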

  13. Systematic strategies for the third industrial accident prevention plan in Korea.

    PubMed

    Kang, Young-sig; Yang, Sung-hwan; Kim, Tae-gu; Kim, Day-sung

    2012-01-01

To minimize industrial accidents, it is critical to evaluate a firm's priorities among prevention factors and strategies, since such evaluation provides decisive information for preventing industrial accidents and maintaining safety management. Therefore, this paper proposes the evaluation of priorities through statistical testing of prevention factors with a cause analysis in a cause and effect model. A priority matrix criterion is proposed to apply the ranking and to ensure the objectivity of the questionnaire results. This paper uses the regression analysis (RA), exponential smoothing method (ESM), double exponential smoothing method (DESM), autoregressive integrated moving average (ARIMA) model, and the proposed analytical function method (PAFM) to analyze trends in accident data, leading to an accurate prediction. This paper standardized the questionnaire results of workers and managers in manufacturing and construction companies with fewer than 300 employees, located in the central Korean metropolitan areas where fatal accidents have occurred. Finally, a strategy is provided to construct safety management for the third industrial accident prevention plan, together with a forecasting method for occupational accident rates and fatality rates per 10,000 workers.
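
Of the trend methods compared, the double exponential smoothing method (DESM) is compact enough to sketch directly; the following is a generic textbook Holt formulation, not the paper's calibrated model:

```python
def double_exponential_smoothing(series, alpha, beta):
    """Holt's double exponential smoothing: separate smoothing constants
    alpha (level) and beta (linear trend); returns the one-step forecast."""
    level = series[0]
    trend = series[1] - series[0]
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + trend

# A perfectly linear declining accident-rate series is forecast exactly,
# which is what the explicit trend term buys over single smoothing.
forecast = double_exponential_smoothing([10.0, 9.0, 8.0, 7.0, 6.0], 0.5, 0.5)
```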

  14. Stochastic processes in the social sciences: Markets, prices and wealth distributions

    NASA Astrophysics Data System (ADS)

    Romero, Natalia E.

The present work uses statistical mechanics tools to investigate the dynamics of markets, prices, trades and wealth distribution. We studied the evolution of market dynamics in different stages of historical development by analyzing commodity prices from two distinct periods: ancient Babylon, and medieval and early modern England. We find that the first-digit distributions of both Babylon and England commodity prices follow Benford's law, indicating that the data represent empirical observations typically arising from a free market. Further, we find that the normalized prices of both Babylon and England agricultural commodities are characterized by stretched exponential distributions, and exhibit persistent correlations of a power law type over long periods of up to several centuries, in contrast to contemporary markets. Our findings suggest that similar market interactions may underlie the dynamics of ancient agricultural commodity prices, and that these interactions may remain stable across centuries. To further investigate the dynamics of markets we present the analogy between transfers of money between individuals and the transfer of energy through particle collisions by means of the kinetic theory of gases. We introduce a theoretical framework for how the micro rules of trading lead to the emergence of income and wealth distributions. In particular, we study the effects of different types of distribution of savings/investments among individuals in a society and of different welfare/subsidy redistribution policies. Results show that, while models incorporating savings propensities approximate empirical wealth distributions quite well, the effect of redistribution better captures specific features of the distributions that earlier models failed to reproduce; moreover, the models still preserve the exponential decay observed in empirical income distributions reported by tax data and surveys.
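
The kinetic-theory analogy can be sketched in a few lines: in the basic exchange model without savings, two randomly chosen agents pool their money and split it at a random fraction; money is conserved trade by trade, and the stationary wealth distribution is exponential (Boltzmann-Gibbs-like). Agent and trade counts below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n_agents, n_trades = 1000, 200_000
wealth = np.ones(n_agents)          # everyone starts with one unit of money

for _ in range(n_trades):
    i, j = rng.integers(n_agents, size=2)
    if i == j:
        continue
    pool = wealth[i] + wealth[j]    # money is conserved within each trade
    eps = rng.random()              # random split, no savings propensity
    wealth[i], wealth[j] = eps * pool, (1.0 - eps) * pool

# Exponential (Boltzmann-Gibbs-like) signature of the stationary state:
# for an exponential law with mean m, P(w > m) = 1/e, about 0.368.
frac_above_mean = (wealth > wealth.mean()).mean()
```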

  15. Frequency Distribution of Seismic Intensity in Japan between 1950 and 2009

    NASA Astrophysics Data System (ADS)

    Kato, M.; Kohayakawa, Y.

    2012-12-01

JMA Seismic Intensity is an index of seismic ground motion that is frequently used and reported in the media. While it is always difficult to represent complex ground motion with one index, the fact that it is widely accepted in society makes the use of JMA Seismic Intensity preferable when seismologists communicate with the public and discuss hazard assessment and risk management. With the introduction of JMA Instrumental Intensity in 1996, the number of seismic intensity observation sites has substantially increased and the spatial coverage has improved vastly. Together with a long history of non-instrumental intensity records, the intensity data represent some aspects of seismic ground motion in Japan. We investigate characteristics of seismic ground motion between 1950 and 2009 utilizing the JMA Seismic Intensity Database. Specifically, we are interested in the frequency distribution of intensity recordings. Observations of large intensity are rare compared to those of small intensity, and previous studies such as Ikegami [1961] demonstrated that the frequency distribution of observed intensity obeys an exponential law, which is equivalent to the Ishimoto-Iida law [Ishimoto & Iida, 1939]. Such behavior could be used to empirically construct probabilistic seismic hazard maps [e.g., Kawasumi, 1951]. For the recent instrumental intensity data as well as the pre-instrumental data, we are able to confirm that the Ishimoto-Iida law explains the observations. The exponent of the Ishimoto-Iida law, i.e., the slope of the exponential law in a semi-log plot, is approximately 0.5. At stations with long recordings, there is no apparent difference between pre-instrumental and instrumental intensities when the Ishimoto-Iida law is used as a measure. The average number of intensity reports per year and the exponent of the frequency distribution curve vary regionally, and local seismicity is apparently the controlling factor. The observed number of large-intensity reports is slightly smaller than that extrapolated from small intensities assuming the exponential relation.
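
The exponential frequency law described here amounts to log10 N(I) = a − bI with b ≈ 0.5; a small sketch of recovering the exponent from an intensity-frequency table (the counts below are synthetic, not JMA data):

```python
import numpy as np

# Synthetic intensity-frequency table obeying N(I) ~ 10**(-0.5 * I).
intensity = np.arange(1, 7)               # intensity classes 1..6
counts = 10.0 ** (4.0 - 0.5 * intensity)  # counts drop tenfold per 2 classes

# The Ishimoto-Iida exponent is the (negated) semi-log slope.
b = -np.polyfit(intensity, np.log10(counts), 1)[0]
```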

  16. Doubly differential cross sections for galactic heavy-ion fragmentation

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Norbury, John W.; Khandelwal, Govind S.; Townsend, Lawrence W.

    1987-01-01

An abrasion-ablation T-matrix formulation is applied to the calculation of double-differential cross sections in projectile fragmentation of 2.1 GeV/nucleon O-16 on Be-9 and 86 MeV/nucleon C-12 on C-12 and Ag-108. An exponential parameterization of the ablation T-matrix is used, and the total width of the intermediate states is taken as a parameter. Values of the total width fitted to experimental results are used to predict the lifetime of the ablation stage and indicate a decay time on the order of 10^-19 s.

  17. Wigner functions for evanescent waves.

    PubMed

    Petruccelli, Jonathan C; Tian, Lei; Oh, Se Baek; Barbastathis, George

    2012-09-01

    We propose phase space distributions, based on an extension of the Wigner distribution function, to describe fields of any state of coherence that contain evanescent components emitted into a half-space. The evanescent components of the field are described in an optical phase space of spatial position and complex-valued angle. Behavior of these distributions upon propagation is also considered, where the rapid decay of the evanescent components is associated with the exponential decay of the associated phase space distributions. To demonstrate the structure and behavior of these distributions, we consider the fields generated from total internal reflection of a Gaussian Schell-model beam at a planar interface.

  18. Distributions of Autocorrelated First-Order Kinetic Outcomes: Illness Severity

    PubMed Central

    Englehardt, James D.

    2015-01-01

    Many complex systems produce outcomes having recurring, power law-like distributions over wide ranges. However, the form necessarily breaks down at extremes, whereas the Weibull distribution has been demonstrated over the full observed range. Here the Weibull distribution is derived as the asymptotic distribution of generalized first-order kinetic processes, with convergence driven by autocorrelation, and entropy maximization subject to finite positive mean, of the incremental compounding rates. Process increments represent multiplicative causes. In particular, illness severities are modeled as such, occurring in proportion to products of, e.g., chronic toxicant fractions passed by organs along a pathway, or rates of interacting oncogenic mutations. The Weibull form is also argued theoretically and by simulation to be robust to the onset of saturation kinetics. The Weibull exponential parameter is shown to indicate the number and widths of the first-order compounding increments, the extent of rate autocorrelation, and the degree to which process increments are distributed exponential. In contrast with the Gaussian result in linear independent systems, the form is driven not by independence and multiplicity of process increments, but by increment autocorrelation and entropy. In some physical systems the form may be attracting, due to multiplicative evolution of outcome magnitudes towards extreme values potentially much larger and smaller than control mechanisms can contain. The Weibull distribution is demonstrated in preference to the lognormal and Pareto I for illness severities versus (a) toxicokinetic models, (b) biologically-based network models, (c) scholastic and psychological test score data for children with prenatal mercury exposure, and (d) time-to-tumor data of the ED01 study. PMID:26061263
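
As a quick illustration of the Weibull form for severities, a parameter-recovery sketch with scipy (shape and scale are illustrative, not fitted values from the paper; a shape parameter below 1 produces the heavy, power-law-like body discussed above):

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(3)

shape, scale = 0.8, 2.0      # illustrative: shape < 1 gives a heavy body
severity = weibull_min.rvs(shape, scale=scale, size=20_000, random_state=rng)

# Maximum-likelihood refit with the location pinned at zero; fit returns
# (shape, loc, scale) for scipy's frozen-parameter convention.
shape_hat, _, scale_hat = weibull_min.fit(severity, floc=0)
```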

  19. Comment on Pisarenko et al., "Characterization of the Tail of the Distribution of Earthquake Magnitudes by Combining the GEV and GPD Descriptions of Extreme Value Theory"

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias

    2016-02-01

In this short note, I comment on the research of Pisarenko et al. (Pure Appl. Geophys 171:1599-1624, 2014) regarding the extreme value theory and statistics in the case of earthquake magnitudes. The link between the generalized extreme value distribution (GEVD) as an asymptotic model for the block maxima of a random variable and the generalized Pareto distribution (GPD) as a model for the peaks over threshold (POT) of the same random variable is presented more clearly. Inappropriately, Pisarenko et al. (Pure Appl. Geophys 171:1599-1624, 2014) have neglected to note that the approximations by GEVD and GPD work only asymptotically in most cases. This is particularly the case with the truncated exponential distribution (TED), a popular distribution model for earthquake magnitudes. I explain why the classical models and methods of the extreme value theory and statistics do not work well for truncated exponential distributions. Consequently, these classical methods should not be used for the estimation of the upper bound magnitude and corresponding parameters. Furthermore, I comment on various issues of statistical inference in Pisarenko et al. and propose alternatives. I argue why GPD and GEVD would work for various types of stochastic earthquake processes in time, and not only for the homogeneous (stationary) Poisson process as assumed by Pisarenko et al. (Pure Appl. Geophys 171:1599-1624, 2014). The crucial point of earthquake magnitudes is the poor convergence of their tail distribution to the GPD, and not the earthquake process over time.

  20. The discrete Laplace exponential family and estimation of Y-STR haplotype frequencies.

    PubMed

    Andersen, Mikkel Meyer; Eriksen, Poul Svante; Morling, Niels

    2013-07-21

    Estimating haplotype frequencies is important in e.g. forensic genetics, where the frequencies are needed to calculate the likelihood ratio for the evidential weight of a DNA profile found at a crime scene. Estimation is naturally based on a population model, motivating the investigation of the Fisher-Wright model of evolution for haploid lineage DNA markers. An exponential family (a class of probability distributions that is well understood in probability theory such that inference is easily made by using existing software) called the 'discrete Laplace distribution' is described. We illustrate how well the discrete Laplace distribution approximates a more complicated distribution that arises by investigating the well-known population genetic Fisher-Wright model of evolution by a single-step mutation process. It was shown how the discrete Laplace distribution can be used to estimate haplotype frequencies for haploid lineage DNA markers (such as Y-chromosomal short tandem repeats), which in turn can be used to assess the evidential weight of a DNA profile found at a crime scene. This was done by making inference in a mixture of multivariate, marginally independent, discrete Laplace distributions using the EM algorithm to estimate the probabilities of membership of a set of unobserved subpopulations. The discrete Laplace distribution can be used to estimate haplotype frequencies with lower prediction error than other existing estimators. Furthermore, the calculations could be performed on a normal computer. This method was implemented in the freely available open source software R that is supported on Linux, MacOS and MS Windows. Copyright © 2013 Elsevier Ltd. All rights reserved.
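
The 'discrete Laplace distribution' is the two-sided geometric law P(Y = y) = ((1 − p)/(1 + p)) p^|y| for integer y; a minimal sketch of the pmf with a normalization check (the value of p is illustrative, not an estimate from Y-STR data):

```python
def discrete_laplace_pmf(y, p):
    """P(Y = y) = ((1 - p) / (1 + p)) * p**abs(y), integer y: a two-sided
    geometric law, symmetric about the central (modal) repeat count."""
    return (1.0 - p) / (1.0 + p) * p ** abs(y)

p = 0.3  # illustrative dispersion parameter

# The pmf is symmetric and sums to 1 over the integers;
# truncating far from the centre is enough in double precision.
total = sum(discrete_laplace_pmf(y, p) for y in range(-50, 51))
```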

  1. Power laws in citation distributions: evidence from Scopus.

    PubMed

    Brzezinski, Michal

Modeling distributions of citations to scientific papers is crucial for understanding how science develops. However, there is considerable empirical controversy over which statistical model fits citation distributions best. This paper is concerned with rigorous empirical detection of power-law behaviour in the distribution of citations received by the most highly cited scientific papers. We have used a large, novel data set on citations to scientific papers published between 1998 and 2002 drawn from Scopus. The power-law model is compared with a number of alternative models using a likelihood ratio test. We have found that the power-law hypothesis is rejected for around half of the Scopus fields of science. For these fields of science, the Yule, power-law with exponential cut-off and log-normal distributions seem to fit the data better than the pure power-law model. On the other hand, when the power-law hypothesis is not rejected, it is usually empirically indistinguishable from most of the alternative models. The pure power-law model seems to be the best model only for the most highly cited papers in "Physics and Astronomy". Overall, our results seem to support theories implying that the most highly cited scientific papers follow the Yule, power-law with exponential cut-off or log-normal distribution. Our findings also suggest that power laws in citation distributions, when present, account only for a very small fraction of the published papers (less than 1% for most fields of science) and that the power-law scaling parameter (exponent) is substantially higher (from around 3.2 to around 4.7) than found in the older literature.
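
The likelihood machinery referenced here rests on the continuous maximum-likelihood estimator for the power-law exponent, α̂ = 1 + n / Σ ln(x_i/x_min), in the style of Clauset et al. A sketch on synthetic Pareto "citation counts" (α and x_min are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

def powerlaw_alpha_mle(x, x_min):
    """Continuous MLE for p(x) ~ x**(-alpha), x >= x_min:
    alpha_hat = 1 + n / sum(log(x_i / x_min))."""
    x = x[x >= x_min]
    return 1.0 + x.size / np.log(x / x_min).sum()

# Synthetic "citation counts": Pareto with density exponent alpha above x_min.
# numpy's rng.pareto(a) is Lomax, so (sample + 1) is Pareto with exponent a+1.
alpha, x_min = 3.5, 10.0
x = x_min * (rng.pareto(alpha - 1.0, size=50_000) + 1.0)

alpha_hat = powerlaw_alpha_mle(x, x_min)
```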

  2. Polynomial Similarity Transformation Theory: A smooth interpolation between coupled cluster doubles and projected BCS applied to the reduced BCS Hamiltonian

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Degroote, M.; Henderson, T. M.; Zhao, J.

We present a similarity transformation theory based on a polynomial form of a particle-hole pair excitation operator. In the weakly correlated limit, this polynomial becomes an exponential, leading to coupled cluster doubles. In the opposite, strongly correlated limit, the polynomial becomes an extended Bessel expansion and yields the projected BCS wavefunction. In between, we interpolate using a single parameter. The effective Hamiltonian is non-hermitian, and this Polynomial Similarity Transformation Theory follows the philosophy of traditional coupled cluster, left-projecting the transformed Hamiltonian onto subspaces of the Hilbert space in which the wave function variance is forced to be zero. Similarly, the interpolation parameter is obtained by minimizing the next residual in the projective hierarchy. We rationalize and demonstrate how and why coupled cluster doubles is ill suited to the strongly correlated limit, whereas the Bessel expansion remains well behaved. The model provides accurate wave functions with energy errors that, in its best variant, are smaller than 1% across all interaction strengths. The numerical cost is polynomial in system size and the theory can be straightforwardly applied to any realistic Hamiltonian.

  3. Laser-induced periodic surface structures on 6H-SiC single crystals using temporally delayed femtosecond laser double-pulse trains

    NASA Astrophysics Data System (ADS)

    Song, Juan; Tao, Wenjun; Song, Hui; Gong, Min; Ma, Guohong; Dai, Ye; Zhao, Quanzhong; Qiu, Jianrong

    2016-04-01

In this paper, a time-delay-adjustable double-pulse train with 800-nm wavelength, 200-fs pulse duration and a repetition rate of 1 kHz, produced by a collinear two-beam optical system resembling a Mach-Zehnder interferometer, was employed for irradiation of a 6H-SiC crystal. The dependence of the induced structures on the time delay of the double-pulse train in the parallel-polarization configuration was studied. The results show that as the time delay of the collinear parallel-polarization dual-pulse train increased, the induced near-subwavelength ripples (NSWRs) turn from an irregular rippled pattern to a regularly periodic pattern and have their grooves much deepened. The characteristic timescale for this transition is about 6.24 ps. Besides, the areas of the NSWRs were found to decay exponentially for time delays from 0 to 1.24 ps and then to increase slowly for time delays from 1.24 to 14.24 ps. Analysis shows that the multiphoton ionization effect, the grating-assisted surface plasmon coupling effect, and the timely intervention of the second pulse in a certain physical stage experienced by the 6H-SiC excited by the first pulse may contribute to the transition of the morphology details.

  4. Generating functions for weighted Hurwitz numbers

    NASA Astrophysics Data System (ADS)

    Guay-Paquet, Mathieu; Harnad, J.

    2017-08-01

    Double Hurwitz numbers enumerating weighted n-sheeted branched coverings of the Riemann sphere or, equivalently, weighted paths in the Cayley graph of Sn generated by transpositions are determined by an associated weight generating function. A uniquely determined 1-parameter family of 2D Toda τ -functions of hypergeometric type is shown to consist of generating functions for such weighted Hurwitz numbers. Four classical cases are detailed, in which the weighting is uniform: Okounkov's double Hurwitz numbers for which the ramification is simple at all but two specified branch points; the case of Belyi curves, with three branch points, two with specified profiles; the general case, with a specified number of branch points, two with fixed profiles, the rest constrained only by the genus; and the signed enumeration case, with sign determined by the parity of the number of branch points. Using the exponentiated quantum dilogarithm function as a weight generator, three new types of weighted enumerations are introduced. These determine quantum Hurwitz numbers depending on a deformation parameter q. By suitable interpretation of q, the statistical mechanics of quantum weighted branched covers may be related to that of Bosonic gases. The standard double Hurwitz numbers are recovered in the classical limit.

  5. CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.

    PubMed

    Shalizi, Cosma Rohilla; Rinaldo, Alessandro

    2013-04-01

    The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling , or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGM's expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.

  6. Numerical study of MHD nanofluid flow and heat transfer past a bidirectional exponentially stretching sheet

    NASA Astrophysics Data System (ADS)

    Ahmad, Rida; Mustafa, M.; Hayat, T.; Alsaedi, A.

    2016-06-01

Recent advancements in nanotechnology have led to the discovery of a new generation of coolants known as nanofluids. Nanofluids possess novel and unique characteristics which are fruitful in numerous cooling applications. The current work addresses heat transfer in MHD three-dimensional flow of a magnetic nanofluid (ferrofluid) over a bidirectional exponentially stretching sheet. The base fluid is water containing magnetite (Fe3O4) nanoparticles. An exponentially varying surface temperature distribution is accounted for. The problem formulation is presented through the Maxwell models for the effective electrical conductivity and effective thermal conductivity of the nanofluid. Similarity transformations give rise to a coupled non-linear differential system which is solved numerically. Appreciable growth in the convective heat transfer coefficient is observed when the nanoparticle volume fraction is augmented. The temperature exponent parameter serves to enhance the heat transfer from the surface. Moreover, the skin friction coefficient is directly proportional to both the magnetic field strength and the nanoparticle volume fraction.

  7. CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS

    PubMed Central

    Shalizi, Cosma Rohilla; Rinaldo, Alessandro

    2015-01-01

    The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling, or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGM’s expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses. PMID:26166910

  8. Income dynamics with a stationary double Pareto distribution.

    PubMed

    Toda, Alexis Akira

    2011-04-01

    Once controlled for the trend, the distribution of personal income appears to be double Pareto, a distribution that obeys the power law exactly in both the upper and the lower tails. I propose a model of income dynamics with a stationary distribution that is consistent with this fact. Using US male wage data for 1970-1993, I estimate the power law exponent in two ways--(i) from each cross section, assuming that the distribution has converged to the stationary distribution, and (ii) from a panel directly estimating the parameters of the income dynamics model--and obtain the same value of 8.4.
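
A double Pareto density joins two exact power laws at the mode: x^(β−1) below it and x^(−α−1) above it (mode normalized to 1 here). A sketch verifying normalization by quadrature, assuming scipy is available (the exponents are illustrative; the paper's estimated exponent is 8.4):

```python
from scipy.integrate import quad

def double_pareto_pdf(x, alpha=3.0, beta=1.5):
    """Double Pareto density with mode normalized to 1: exact power laws
    c*x**(beta-1) below the mode and c*x**(-alpha-1) above it, where
    c = alpha*beta/(alpha+beta) makes the two branches integrate to 1."""
    c = alpha * beta / (alpha + beta)
    return c * x ** (beta - 1.0) if x <= 1.0 else c * x ** (-alpha - 1.0)

lower, _ = quad(double_pareto_pdf, 0.0, 1.0)          # mass below the mode
upper, _ = quad(double_pareto_pdf, 1.0, float("inf"))  # mass above the mode
total = lower + upper
```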

  9. Single-channel activations and concentration jumps: comparison of recombinant NR1a/NR2A and NR1a/NR2D NMDA receptors

    PubMed Central

    Wyllie, David J A; Béhé, Philippe; Colquhoun, David

    1998-01-01

    We have expressed recombinant NR1a/NR2A and NR1a/NR2D N-methyl-D-aspartate (NMDA) receptor channels in Xenopus oocytes and made recordings of single-channel and macroscopic currents in outside-out membrane patches. For each receptor type we measured (a) the individual single-channel activations evoked by low glutamate concentrations in steady-state recordings, and (b) the macroscopic responses elicited by brief concentration jumps with high agonist concentrations, and we explore the relationship between these two sorts of observation. Low concentration (5–100 nM) steady-state recordings of NR1a/NR2A and NR1a/NR2D single-channel activity generated shut-time distributions that were best fitted with a mixture of five and six exponential components, respectively. Individual activations of either receptor type were resolved as bursts of openings, which we refer to as ‘super-clusters’. During a single activation, NR1a/NR2A receptors were open for 36 % of the time, but NR1a/NR2D receptors were open for only 4 % of the time. For both, distributions of super-cluster durations were best fitted with a mixture of six exponential components. Their overall mean durations were 35.8 and 1602 ms, respectively. Steady-state super-clusters were aligned on their first openings and averaged. The average was well fitted by a sum of exponentials with time constants taken from fits to super-cluster length distributions. It is shown that this is what would be expected for a channel that shows simple Markovian behaviour. The current through NR1a/NR2A channels following a concentration jump from zero to 1 mM glutamate for 1 ms was well fitted by three exponential components with time constants of 13 ms (rising phase), 70 ms and 350 ms (decaying phase). Similar concentration jumps on NR1a/NR2D channels were well fitted by two exponentials with means of 45 ms (rising phase) and 4408 ms (decaying phase) components. 
During prolonged exposure to glutamate, NR1a/NR2A channels desensitized with a time constant of 649 ms, while NR1a/NR2D channels exhibited no apparent desensitization. We show that under certain conditions, the time constants for the macroscopic jump response should be the same as those for the distribution of super-cluster lengths, though the resolution of the latter is so much greater that it cannot be expected that all the components will be resolvable in a macroscopic current. Good agreement was found for jumps on NR1a/NR2D receptors, and for some jump experiments on NR1a/NR2A. However, the latter were rather variable and some were slower than predicted. Slow decays were associated with patches that had large currents. PMID:9625862
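    The jump responses described above are sums of exponential components. A minimal sketch of such a model, using the NR1a/NR2A time constants quoted in the abstract but with purely illustrative amplitudes (not fitted values), might look like:

```python
import math

def macroscopic_current(t, components):
    """Current as a sum of exponential components:
    I(t) = sum_i a_i * exp(-t / tau_i).
    A negative amplitude contributes a rising phase;
    positive amplitudes contribute the decay."""
    return sum(a * math.exp(-t / tau) for a, tau in components)

# Time constants (ms) from the NR1a/NR2A concentration-jump fit;
# the amplitudes here are illustrative placeholders only.
components = [(-1.0, 13.0), (0.6, 70.0), (0.4, 350.0)]
```

    With these placeholder amplitudes the current starts at zero, rises on the 13 ms time scale, and relaxes back on the 70 ms and 350 ms scales.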

  10. Modified Back Contact Interface of CZTSe Thin Film Solar Cells: Elimination of Double Layer Distribution in Absorber Layer.

    PubMed

    Zhang, Zhaojing; Yao, Liyong; Zhang, Yi; Ao, Jianping; Bi, Jinlian; Gao, Shoushuai; Gao, Qing; Jeng, Ming-Jer; Sun, Guozhong; Zhou, Zhiqiang; He, Qing; Sun, Yun

    2018-02-01

    A double-layer distribution exists in Cu2SnZnSe4 (CZTSe) thin films prepared by selenizing metallic precursors, which degrades the back contact between the Mo substrate and the absorber layer and thus suppresses the performance of the solar cell. In this work, the double-layer distribution of the CZTSe film is eliminated entirely and the formation of a MoSe2 interfacial layer is successfully inhibited. The CZTSe film is prepared by selenizing an electrodeposited precursor under a mixed Se and SnSex atmosphere. It is found that the insufficient reaction between ZnSe and Cu-Sn-Se phases at the bottom of the film is the reason the double-layer distribution forms. By increasing the Sn content in the metallic precursor, which compensates for the Sn lost to CZTSe decomposition and facilitates the diffusion of liquid Cu2Se, the double-layer distribution is eliminated entirely. The formed thin film is densely crystallized, its grains span the entire film without voids, and no obvious MoSe2 layer forms between CZTSe and Mo. As a consequence, the series resistance of the solar cell is reduced significantly to 0.14 Ω cm2, and a CZTSe solar cell with an efficiency of 7.2% is fabricated.

  11. None of the Above

    ERIC Educational Resources Information Center

    Ray, Mark

    2013-01-01

    The exponential influx of digital content and mobile devices into schools begs for school librarians to engage in discussions and decision making about the selection, classification, management, and distribution of content ranging from e-books to open educational resources. As information professionals, school librarians should channel their inner…

  12. Inhomogeneous growth of fluctuations of concentration of inertial particles in channel turbulence

    NASA Astrophysics Data System (ADS)

    Fouxon, Itzhak; Schmidt, Lukas; Ditlevsen, Peter; van Reeuwijk, Maarten; Holzner, Markus

    2018-06-01

    We study the growth of concentration fluctuations of weakly inertial particles in turbulent channel flow, starting from a smooth initial distribution. The steady-state concentration is singular and multifractal, so the growth describes the increasingly rugged structure of the distribution. We demonstrate that inhomogeneity influences the growth of concentration fluctuations profoundly. For homogeneous turbulence the growth is exponential and is fully determined by Kolmogorov-scale eddies. We derive the lognormality of the statistics in this case. The growth exponents of the moments are proportional to the sum of Lyapunov exponents, which is quadratic in the small inertia of the particles. In contrast, for inhomogeneous turbulence the growth is linear in inertia. It involves correlations of inertial-range and viscous-scale eddies that turn the growth into a stretched exponential law with exponent three halves. We demonstrate using direct numerical simulations that the resulting growth rate can differ by orders of magnitude over the channel height. This strong variation might have relevance in the planetary boundary layer.
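    The contrast between the homogeneous (plain exponential) and inhomogeneous (stretched exponential, exponent 3/2) growth laws can be sketched numerically; the rate values used below are illustrative, not taken from the paper:

```python
import math

def homogeneous_growth(t, rate):
    """Homogeneous turbulence: fluctuations grow exponentially."""
    return math.exp(rate * t)

def inhomogeneous_growth(t, rate):
    """Inhomogeneous turbulence: stretched exponential with
    exponent 3/2 in time."""
    return math.exp(rate * t ** 1.5)
```

    For equal nominal rates the stretched law grows more slowly at early times (t < 1) but overtakes the plain exponential at late times.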

  13. Linearized radiative transfer models for retrieval of cloud parameters from EPIC/DSCOVR measurements

    NASA Astrophysics Data System (ADS)

    Molina García, Víctor; Sasi, Sruthy; Efremenko, Dmitry S.; Doicu, Adrian; Loyola, Diego

    2018-07-01

    In this paper, we describe several linearized radiative transfer models which can be used for the retrieval of cloud parameters from EPIC (Earth Polychromatic Imaging Camera) measurements. The approaches under examination are (1) the linearized forward approach, represented in this paper by the linearized discrete ordinate and matrix operator methods with matrix exponential, and (2) the forward-adjoint approach based on the discrete ordinate method with matrix exponential. To enhance the performance of the radiative transfer computations, the correlated k-distribution method and the Principal Component Analysis (PCA) technique are used. We provide a compact description of the proposed methods, as well as a numerical analysis of their accuracy and efficiency when simulating EPIC measurements in the oxygen A-band channel at 764 nm. We found that the computation time of the forward-adjoint approach using the correlated k-distribution method in conjunction with PCA is approximately 13 s for simultaneously computing the derivatives with respect to cloud optical thickness and cloud top height.

  14. The exponential rise of induced seismicity with increasing stress levels in the Groningen gas field and its implications for controlling seismic risk

    NASA Astrophysics Data System (ADS)

    Bourne, S. J.; Oates, S. J.; van Elk, J.

    2018-06-01

    Induced seismicity typically arises from the progressive activation of recently inactive geological faults by anthropogenic activity. Faults are mechanically and geometrically heterogeneous, so their extremes of stress and strength govern the initial evolution of induced seismicity. We derive a statistical model of Coulomb stress failures and associated aftershocks within the tail of the distribution of fault stress and strength variations to show that initial induced seismicity rates increase as an exponential function of induced stress. Our model provides operational forecasts consistent with the observed space-time-magnitude distribution of earthquakes induced by gas production from the Groningen field in the Netherlands. These probabilistic forecasts also match the observed changes in seismicity following a significant and sustained decrease in gas production rates designed to reduce seismic hazard and risk. This forecast capability allows reliable assessment of alternative control options to better inform future induced seismic risk management decisions.
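    An exponential rate-stress law of the kind described can be sketched as follows; the base rate and stress sensitivity below are illustrative placeholders, not calibrated Groningen parameters:

```python
import math

def seismicity_rate(stress, base_rate, sensitivity):
    """Event rate rising exponentially with induced Coulomb stress."""
    return base_rate * math.exp(sensitivity * stress)

# Illustrative parameters: each 1/sensitivity increase in stress
# multiplies the event rate by a factor of e.
r_low = seismicity_rate(0.1, 1.0, 10.0)
r_high = seismicity_rate(0.2, 1.0, 10.0)
```

    This multiplicative behaviour is what makes even modest reductions in the stressing rate effective for risk control: the rate responds exponentially, not linearly.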

  15. Controllable excitation of higher-order rogue waves in nonautonomous systems with both varying linear and harmonic external potentials

    NASA Astrophysics Data System (ADS)

    Jia, Heping; Yang, Rongcao; Tian, Jinping; Zhang, Wenmei

    2018-05-01

    The nonautonomous nonlinear Schrödinger (NLS) equation with both varying linear and harmonic external potentials is investigated, and the semirational rogue wave (RW) solution is presented by similarity transformation. Based on the solution, the interactions between the Peregrine soliton and breathers, and the controllability of the semirational RWs in periodic-distribution and exponentially decreasing nonautonomous systems with both linear and harmonic potentials, are studied. It is found that the harmonic potential influences only the constraint condition of the semirational solution, the linear potential is related to the trajectory of the semirational RWs, while the dispersion and nonlinearity determine the excitation position of the higher-order RWs. The higher-order RWs can be partly, completely, and biperiodically excited in the periodic-distribution system, and diverse excitation patterns can be generated for different parameter relations in the exponentially decreasing system. The results reveal that the excitation of the higher-order RWs can be controlled in the nonautonomous system by choosing the dispersion, nonlinearity, and external potentials.

  16. Comparison of Traditional and Open-Access Appointment Scheduling for Exponentially Distributed Service Time.

    PubMed

    Yan, Chongjun; Tang, Jiafu; Jiang, Bowen; Fung, Richard Y K

    2015-01-01

    This paper compares the performance measures of traditional appointment scheduling (AS) with those of an open-access appointment scheduling (OA-AS) system with exponentially distributed service times. A queueing model is formulated for the traditional AS system with a no-show probability. The OA-AS models assume that all patients who call before the session begins will show up for the appointment on time. Two types of OA-AS systems are considered: with a same-session policy and with a same-or-next-session policy. Numerical results indicate that the superiority of OA-AS systems is not as obvious as under deterministic scenarios. The same-session system has a threshold of relative waiting cost, beyond which the traditional system always has higher total costs, and the same-or-next-session system is always preferable, except when the no-show probability or the weight of patients' waiting is low. It is concluded that open-access policies can be viewed as alternative approaches to mitigate the negative effects of no-show patients.
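    A minimal discrete-event sketch of a session with exponentially distributed service times and independent no-shows illustrates the modelling ingredients (this is an illustration only, not the paper's queueing model; all parameter values are arbitrary):

```python
import random

def simulate_session(n_slots, slot_len, mean_service, no_show_prob, seed=0):
    """Simulate one clinic session with fixed appointment slots,
    exponentially distributed service times, and i.i.d. no-shows.
    Returns (total patient waiting time, clinician overtime)."""
    rng = random.Random(seed)
    clock = 0.0        # time at which the clinician becomes free
    total_wait = 0.0
    for i in range(n_slots):
        arrival = i * slot_len           # punctual scheduled arrival
        if rng.random() < no_show_prob:  # patient does not show up
            continue
        start = max(clock, arrival)
        total_wait += start - arrival
        clock = start + rng.expovariate(1.0 / mean_service)
    overtime = max(0.0, clock - n_slots * slot_len)
    return total_wait, overtime
```

    Sweeping `no_show_prob` and the waiting-cost weight over such a simulation is one way to reproduce the kind of threshold behaviour the abstract describes.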

  17. Traction forces during collective cell motion.

    PubMed

    Gov, N S

    2009-08-01

    Collective motion of cell cultures is a process of great interest, as it occurs during morphogenesis, wound healing, and tumor metastasis. During these processes cell cultures move due to the traction forces induced by the individual cells on the surrounding matrix. A recent study [Trepat et al. (2009), Nat. Phys. 5, 426-430] measured for the first time the traction forces driving collective cell migration and found that they arise throughout the cell culture. The leading 5-10 rows of cells do play a major role in directing the motion of the rest of the culture by having a distinct outwards traction. Fluctuations in the traction forces are an order of magnitude larger than the resultant directional traction at the culture edge and, furthermore, have an exponential distribution. Such exponential distributions are observed for the sizes of adhesion domains within cells, the traction forces produced by single cells, and even in nonbiological nonequilibrium systems, such as sheared granular materials. We discuss these observations and their implications for our understanding of cellular flows within a continuous culture.
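    A key property of the exponential distribution invoked here is that its standard deviation equals its mean, so fluctuations are of the same order as the typical force. A small sampling sketch (with an arbitrary illustrative mean, not a measured value) makes this concrete:

```python
import random
import statistics

def sample_exponential_tractions(mean_traction, n, seed=1):
    """Draw n traction-force magnitudes from an exponential
    distribution; the mean is an arbitrary illustrative scale."""
    rng = random.Random(seed)
    return [rng.expovariate(1.0 / mean_traction) for _ in range(n)]

forces = sample_exponential_tractions(50.0, 100_000)
# For an exponential distribution the standard deviation equals the
# mean, so the fluctuations are as large as the typical force.
```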

  18. Inclusive transverse momentum distributions of charged particles in diffractive and non-diffractive photoproduction at HERA

    NASA Astrophysics Data System (ADS)

    Derrick, M.; Krakauer, D.; Magill, S.; Mikunas, D.; Musgrave, B.; Repond, J.; Stanek, R.; Talaga, R. L.; Zhang, H.; Ayad, R.; Bari, G.; Basile, M.; Bellagamba, L.; Boscherini, D.; Bruni, A.; Bruni, G.; Bruni, P.; Romeo, G. Cara; Castellini, G.; Chiarini, M.; Cifarelli, L.; Cindolo, F.; Contin, A.; Corradi, M.; Gialas, I.; Giusti, P.; Iacobucci, G.; Laurenti, G.; Levi, G.; Margotti, A.; Massam, T.; Nania, R.; Nemoz, C.; Palmonari, F.; Polini, A.; Sartorelli, G.; Timellini, R.; Garcia, Y. Zamora; Zichichi, A.; Bargende, A.; Crittenden, J.; Desch, K.; Diekmann, B.; Doeker, T.; Eckert, M.; Feld, L.; Frey, A.; Geerts, M.; Geitz, G.; Grothe, M.; Haas, T.; Hartmann, H.; Haun, D.; Heinloth, K.; Hilger, E.; Jakob, H.-P.; Katz, U. F.; Mari, S. M.; Mass, A.; Mengel, S.; Mollen, J.; Paul, E.; Rembser, Ch.; Schattevoy, R.; Schramm, D.; Stamm, J.; Wedemeyer, R.; Campbell-Robson, S.; Cassidy, A.; Dyce, N.; Foster, B.; George, S.; Gilmore, R.; Heath, G. P.; Heath, H. F.; Llewellyn, T. J.; Morgado, C. J. S.; Norman, D. J. P.; O'Mara, J. A.; Tapper, R. J.; Wilson, S. S.; Yoshida, R.; Rau, R. R.; Arneodo, M.; Iannotti, L.; Schioppa, M.; Susinno, G.; Bernstein, A.; Caldwell, A.; Cartiglia, N.; Parsons, J. A.; Ritz, S.; Sciulli, F.; Straub, P. B.; Wai, L.; Yang, S.; Zhu, Q.; Borzemski, P.; Chwastowski, J.; Eskreys, A.; Piotrzkowski, K.; Zachara, M.; Zawiejski, L.; Adamczyk, L.; Bednarek, B.; Jeleń, K.; Kisielewska, D.; Kowalski, T.; Rulikowska-Zarębska, E.; Suszycki, L.; Zając, J.; Kotański, A.; Przybycień, M.; Bauerdick, L. A. T.; Behrens, U.; Beier, H.; Bienlein, J. K.; Coldewey, C.; Deppe, O.; Desler, K.; Drews, G.; Flasiński, M.; Gilkinson, D. J.; Glasman, C.; Göttlicher, P.; Große-Knetter, J.; Gutjahr, B.; Hain, W.; Hasell, D.; Heßling, H.; Iga, Y.; Joos, P.; Kasemann, M.; Klanner, R.; Koch, W.; Köpke, L.; Kötz, U.; Kowalski, H.; Labs, L.; Ladage, A.; Löhr, B.; Löwe, M.; Lüke, D.; Mańczak, O.; Monteiro, T.; Ng, J. S. 
T.; Nickel, S.; Notz, D.; Ohrenberg, K.; Roco, M.; Rohde, M.; Roldán, J.; Schneekloth, U.; Schulz, W.; Selonke, F.; Stiliaris, E.; Surrow, B.; Voß, T.; Westphal, D.; Wolf, G.; Youngman, C.; Zhou, J. F.; Grabosch, H. J.; Kharchilava, A.; Leich, A.; Mattingly, M. C. K.; Meyer, A.; Schlenstedt, S.; Wulff, N.; Barbagli, G.; Pelfer, P.; Anzivino, G.; Maccarrone, G.; de Pasquale, S.; Votano, L.; Bamberger, A.; Eisenhardt, S.; Freidhof, A.; Söldner-Rembold, S.; Schroeder, J.; Trefzger, T.; Brook, N. H.; Bussey, P. J.; Doyle, A. T.; Fleck, J. I.; Saxon, D. H.; Utley, M. L.; Wilson, A. S.; Dannemann, A.; Holm, U.; Horstmann, D.; Neumann, T.; Sinkus, R.; Wick, K.; Badura, E.; Burow, B. D.; Hagge, L.; Lohrmann, E.; Mainusch, J.; Milewski, J.; Nakahata, M.; Pavel, N.; Poelz, G.; Schott, W.; Zetsche, F.; Bacon, T. C.; Butterworth, I.; Gallo, E.; Harris, V. L.; Hung, B. Y. H.; Long, K. R.; Miller, D. B.; Morawitz, P. P. O.; Prinias, A.; Sedgbeer, J. K.; Whitfield, A. F.; Mallik, U.; McCliment, E.; Wang, M. Z.; Wang, S. M.; Wu, J. T.; Zhang, Y.; Cloth, P.; Filges, D.; An, S. H.; Hong, S. M.; Nam, S. W.; Park, S. K.; Suh, M. H.; Yon, S. H.; Imlay, R.; Kartik, S.; Kim, H.-J.; McNeil, R. R.; Metcalf, W.; Nadendla, V. K.; Barreiro, F.; Cases, G.; Graciani, R.; Hernández, J. M.; Hervás, L.; Labarga, L.; Del Peso, J.; Puga, J.; Terron, J.; de Trocóniz, J. F.; Smith, G. R.; Corriveau, F.; Hanna, D. S.; Hartmann, J.; Hung, L. W.; Lim, J. N.; Matthews, C. G.; Patel, P. M.; Sinclair, L. E.; Stairs, D. G.; St. Laurent, M.; Ullmann, R.; Zacek, G.; Bashkirov, V.; Dolgoshein, B. A.; Stifutkin, A.; Bashindzhagyan, G. L.; Ermolov, P. F.; Gladilin, L. K.; Golubkov, Y. A.; Kobrin, V. D.; Kuzmin, V. A.; Proskuryakov, A. S.; Savin, A. A.; Shcheglova, L. M.; Solomin, A. N.; Zotov, N. P.; Botje, M.; Chlebana, F.; Dake, A.; Engelen, J.; de Kamps, M.; Kooijman, P.; Kruse, A.; Tiecke, H.; Verkerke, W.; Vreeswijk, M.; Wiggers, L.; de Wolf, E.; van Woudenberg, R.; Acosta, D.; Bylsma, B.; Durkin, L. 
S.; Honscheid, K.; Li, C.; Ling, T. Y.; McLean, K. W.; Murray, W. N.; Park, I. H.; Romanowski, T. A.; Seidlein, R.; Bailey, D. S.; Blair, G. A.; Byrne, A.; Cashmore, R. J.; Cooper-Sarkar, A. M.; Daniels, D.; Devenish, R. C. E.; Harnew, N.; Lancaster, M.; Luffman, P. E.; Lindemann, L.; McFall, J. D.; Nath, C.; Noyes, V. A.; Quadt, A.; Uijterwaal, H.; Walczak, R.; Wilson, F. F.; Yip, T.; Abbiendi, G.; Bertolin, A.; Brugnera, R.; Carlin, R.; Dal Corso, F.; de Giorgi, M.; Dosselli, U.; Limentani, S.; Morandin, M.; Posocco, M.; Stanco, L.; Stroili, R.; Voci, C.; Bulmahn, J.; Butterworth, J. M.; Feild, R. G.; Oh, B. Y.; Whitmore, J. J.; D'Agostini, G.; Marini, G.; Nigro, A.; Tassi, E.; Hart, J. C.; McCubbin, N. A.; Prytz, K.; Shah, T. P.; Short, T. L.; Barberis, E.; Dubbs, T.; Heusch, C.; van Hook, M.; Hubbard, B.; Lockman, W.; Rahn, J. T.; Sadrozinski, H. F.-W.; Seiden, A.; Biltzinger, J.; Seifert, R. J.; Schwarzer, O.; Walenta, A. H.; Zech, G.; Abramowicz, H.; Briskin, G.; Dagan, S.; Levy, A.; Hasegawa, T.; Hazumi, M.; Ishii, T.; Kuze, M.; Mine, S.; Nagasawa, Y.; Nakao, M.; Suzuki, I.; Tokushuku, K.; Yamada, S.; Yamazaki, Y.; Chiba, M.; Hamatsu, R.; Hirose, T.; Homma, K.; Kitamura, S.; Nakamitsu, Y.; Yamauchi, K.; Cirio, R.; Costa, M.; Ferrero, M. I.; Lamberti, L.; Maselli, S.; Peroni, C.; Sacchi, R.; Solano, A.; Staiano, A.; Dardo, M.; Bailey, D. C.; Bandyopadhyay, D.; Benard, F.; Brkic, M.; Crombie, M. B.; Gingrich, D. M.; Hartner, G. F.; Joo, K. K.; Levman, G. M.; Martin, J. F.; Orr, R. S.; Sampson, C. R.; Teuscher, R. J.; Catterall, C. D.; Jones, T. W.; Kaziewicz, P. B.; Lane, J. B.; Saunders, R. L.; Shulman, J.; Blankenship, K.; Lu, B.; Mo, L. W.; Bogusz, W.; Charchula, K.; Ciborowski, J.; Gajewski, J.; Grzelak, G.; Kasprzak, M.; Krzyżanowski, M.; Muchorowski, K.; Nowak, R. J.; Pawlak, J. M.; Tymieniecka, T.; Wróblewski, A. K.; Zakrzewski, J. A.; Żarnecki, A. F.; Adamus, M.; Eisenberg, Y.; Karshon, U.; Revel, D.; Zer-Zion, D.; Ali, I.; Badgett, W. 
F.; Behrens, B.; Dasu, S.; Fordham, C.; Foudas, C.; Goussiou, A.; Loveless, R. J.; Reeder, D. D.; Silverstein, S.; Smith, W. H.; Vaiciulis, A.; Wodarczyk, M.; Tsurugai, T.; Bhadra, S.; Cardy, M. L.; Fagerstroem, C.-P.; Frisken, W. R.; Furutani, K. M.; Khakzad, M.; Schmidke, W. B.

    1995-06-01

    Inclusive transverse momentum spectra of charged particles in photoproduction events in the laboratory pseudorapidity range -1.2 < η < 1.4 have been measured up to p_T = 8 GeV using the ZEUS detector. Diffractive and non-diffractive reactions have been selected with an average γp centre-of-mass (c.m.) energy of ⟨W⟩ = 180 GeV. For diffractive reactions, the p_T spectra of the photon dissociation events have been measured in two intervals of the dissociated photon mass, with mean values ⟨M_X⟩ = 5 GeV and 10 GeV. The inclusive transverse momentum spectra fall exponentially in the low-p_T region. The non-diffractive data show a pronounced high-p_T tail departing from the exponential shape. The p_T distributions are compared to lower-energy photoproduction data and to hadron-hadron collisions at a similar c.m. energy. The data are also compared to the results of a next-to-leading-order QCD calculation.

  19. Work fluctuations for Bose particles in grand canonical initial states.

    PubMed

    Yi, Juyeon; Kim, Yong Woon; Talkner, Peter

    2012-05-01

    We consider bosons in a harmonic trap and investigate the fluctuations of the work performed by an adiabatic change of the trap curvature. Depending on the reservoir conditions such as temperature and chemical potential that provide the initial equilibrium state, the exponentiated work average (EWA) defined in the context of the Crooks relation and the Jarzynski equality may diverge if the trap becomes wider. We investigate how the probability distribution function (PDF) of the work signals this divergence. It is shown that at low temperatures the PDF is highly asymmetric with a steep fall-off at one side and an exponential tail at the other side. For high temperatures it is closer to a symmetric distribution approaching a Gaussian form. These properties of the work PDF are discussed in relation to the convergence of the EWA and to the existence of the hypothetical equilibrium state to which those thermodynamic potential changes refer that enter both the Crooks relation and the Jarzynski equality.
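    For a Gaussian work PDF the exponentiated work average (EWA) entering the Jarzynski equality has a closed form, which a well-behaved sampling estimate should reproduce; heavy-tailed PDFs, by contrast, can make the estimate diverge, as discussed above. The parameters below are illustrative:

```python
import math
import random

def exponentiated_work_average(works, beta):
    """Estimate <exp(-beta * W)> from sampled work values
    (the EWA entering the Jarzynski equality)."""
    return sum(math.exp(-beta * w) for w in works) / len(works)

# For a Gaussian work PDF with mean mu and variance sigma^2 the
# exact EWA is exp(-beta*mu + beta^2 * sigma^2 / 2).
rng = random.Random(0)
mu, sigma, beta = 1.0, 0.5, 1.0
works = [rng.gauss(mu, sigma) for _ in range(200_000)]
estimate = exponentiated_work_average(works, beta)
exact = math.exp(-beta * mu + beta ** 2 * sigma ** 2 / 2)
```

    The sample average converges here because exp(-beta*W) has finite variance under a Gaussian; a slowly decaying exponential tail on the negative-W side would break that convergence.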

  20. A computer program for thermal radiation from gaseous rocket exhaust plumes (GASRAD)

    NASA Technical Reports Server (NTRS)

    Reardon, J. E.; Lee, Y. C.

    1979-01-01

    A computer code is presented for predicting incident thermal radiation from defined plume gas properties in either axisymmetric or cylindrical coordinate systems. The radiation model is a statistical band model for an exponential line-strength distribution with Lorentz/Doppler line shapes for 5 gaseous species (H2O, CO2, CO, HCl and HF) and an approximate (non-scattering) treatment of carbon particles. The Curtis-Godson approximation is used for inhomogeneous gases, but a subroutine is available for using Young's intuitive derivative method for H2O with a Lorentz line shape and an exponentially-tailed inverse line-strength distribution. The geometry model provides integration over a hemisphere with up to 6 individually oriented identical axisymmetric plumes or a single 3-D plume. Shading surfaces may be used in any of 7 shapes, and a conical limit may be defined for the plume to set individual line-of-sight limits. Intermediate coordinate systems may be specified to simplify the input of plumes and shading surfaces.
